\section{Introduction} \label{sec:Introduction} The present-day space and velocity distributions of Galactic isolated old Neutron Stars (NSs) are the subject of many studies. Among the reasons for deriving the positional and kinematical properties of isolated old NSs was the suggestion that Gamma-Ray Bursts (GRBs) may originate from Galactic NSs (e.g., Mazets et al. 1980). This was refuted\footnote{At least in the sense that the majority of GRBs are extragalactic; see Kasliwal et al. (2008).} by the homogeneous sky distribution of GRBs (e.g., Meegan et al. 1992), and later on by the discovery of their cosmic origin (e.g., Metzger et al. 1997; van Paradijs et al. 1997). Another exciting possibility, which re-ignited these efforts, was to detect isolated old NSs in the soft X-ray radiation that may be emitted as they slowly accrete matter from the interstellar medium (ISM; Ostriker, Rees \& Silk 1970; Shvartsman 1971). Treves \& Colpi (1991) and Blaes \& Madau (1993) predicted that $\sim10^{3}$--$10^{4}$ isolated old NSs accreting from the ISM would be detected by the {\it ROSAT} Position Sensitive Proportional Counters (PSPC) all-sky survey (Voges et al. 1999). Although intensive searches for these objects were carried out (e.g., Motch et al. 1997; Maoz, Ofek, \& Shemi 1997; Haberl, Motch, \& Pietsch 1998; Rutledge et al. 2003; Ag{\"u}eros et al. 2006), only a handful of candidates were found. However, these are presumably young cooling NSs, rather than old NSs whose luminosity is dominated by accretion from the ISM (e.g., Neuh\"{a}user \& Tr\"{u}mper 1999; Popov et al. 2000; Treves et al. 2001). Apparently, the reason for the rareness of these objects in the {\it ROSAT} source catalog is that their typical velocities were underestimated by earlier studies (i.e., Narayan \& Ostriker 1990). It is also possible that their accretion rate is below the Bondi-Hoyle rate\footnote{Bondi \& Hoyle (1944).} (e.g., Colpi et al. 1998; Perna et al. 2003; Toropina et al. 2001, 2003, 2005; see however Arons \& Lea 1976, 1980), or that these NSs are in an ejection stage (e.g., Colpi et al. 1998; Livio et al. 1998; Popov \& Prokhorov 2000). During the last several years, new classes of radio transients were found (e.g., Bower et al. 2007; Matsumura et al. 2007; Niinuma et al. 2007; Kida et al. 2008; see also Levinson et al. 2002; Gal-Yam et al. 2006). Bower et al. (2007) found several examples of these transients in a single small field of view, and showed that they have time scales above 20\,min but below seven days, and that they lack any quiescent X-ray, visible-light, near-infrared (IR), and radio counterparts. Recently, Ofek et al. (2009) suggested that these transient events may be associated with Galactic isolated old NSs. However, testing this hypothesis requires knowledge of the NS space distribution. There are two approaches in the literature for calculating the theoretical space and velocity distributions of old NSs. Paczynski (1990), Hartmann, Woosley, \& Epstein (1990), Blaes \& Rajagopal (1991), Blaes \& Madau (1993), and Posselt et al. (2008) carried out Monte-Carlo simulations of NS orbits. In such simulations, the orbits of NSs are integrated from their positions and velocities at birth, assuming some non-evolving Galactic potential. A second approach is to use some sort of semi-analytic approximation in order to estimate the ``final'' NS space and velocity distributions. 
Frei, Huang, \& Paczynski (1992) calculated the final vertical density and velocity distributions of NSs, assuming that the gravitational potential is a function only of the height, $z$, above the Galactic plane. Using the epicyclic approximation, Blaes \& Rajagopal (1991) developed a technique that allows calculation of the full three-dimensional velocity distribution of NSs. However, they showed that this method is not adequate for fast-moving objects, which constitute the majority of the NS population. Blaes \& Madau (1993) used the thin-disk approximation to calculate the space and velocity distributions of NSs. In this prescription, the radial motion of NSs is controlled by the Galactic potential in the Galactic disk, regardless of the vertical height, $z$, of the NS above/below the Galactic plane. Finally, another solution was presented by Prokhorov \& Postnov (1994), who assume that the ergodic hypothesis is correct (see discussion in Binney \& Tremaine 1987). Madau \& Blaes (1994) noted that all these approaches neglect dynamical heating of NSs due to encounters with giant molecular clouds, spiral arms, and stellar ``collisions'' (e.g., Kamahori \& Fujimoto 1986, 1987; Barbanis \& Woltjer 1967; Carlberg \& Sellwood 1985; Jenkins \& Binney 1990). They crudely estimated the order of magnitude of this effect by applying the force-free diffusion equation to the vertical height, $z$, and NS speed distributions. They found that, in the solar neighborhood, dynamical heating may decrease the local density of NSs by a factor of $\sim1.5$ relative to a non-heated population. In this paper I present the results of Monte-Carlo orbital simulations of Galactic NSs. These simulations improve upon past efforts (e.g., Paczynski 1990; Hartmann, Epstein, \& Woosley 1990; Blaes \& Rajagopal 1991; Blaes \& Madau 1993; Popov et al. 2005; Posselt et al. 2008) in several aspects. First, I use up-to-date space and velocity distributions of NSs at birth from Arzoumanian et al. (2002) and Faucher-Gigu\`{e}re \& Kaspi (2006). Second, I assume that NSs were born throughout the Galaxy's lifetime, instead of assuming that all the NSs were born about 10~Gyr ago. Third, I generate a large sample of simulated NSs, about one to two orders of magnitude larger than those presented by previous efforts. The structure of this paper is as follows: In \S\ref{sec:ModelIng} I present the model ingredients for the Monte-Carlo simulations. In \S\ref{sec:DynHeat} I discuss dynamical heating and show that it can be neglected. The Monte-Carlo simulations are described in \S\ref{sec:MC}, and the catalogues of simulated NSs are presented in \S\ref{sec:Cat}. The results of these simulations are presented in \S\ref{sec:NSres}, and in \S\ref{sec:Disc} I summarize the results and compare them with some of the previous efforts. \section{Model ingredients} \label{sec:ModelIng} In the following subsections, I present the ingredients of the Monte-Carlo simulations. These are: the NS birth rate (\S\ref{sec:NSbirthrate}); the space and velocity distributions of NSs at birth (\S\ref{sec:NSbirthdist}); and the Galactic potential (\S\ref{sec:GalPot}). \subsection{NSs birth rate} \label{sec:NSbirthrate} The stellar population in the Milky Way is composed of at least two major components: a bulge and a disk. I therefore simulate two populations of NSs: a Galactic disk-born population and a Galactic bulge-born population. For the Galactic-disk NSs, I assume a continuous constant formation rate over the past 12\,Gyr. 
This assumption is motivated by the analysis of Rocha-Pinto et al. (2000), which did~not find any major trends in the star formation rate in the disk of our Galaxy. I assume that the Galactic-bulge NSs were born in a single burst 12~Gyr ago (Ferreras, Wyse, \& Silk 2003; Ballero et al. 2007; Minniti \& Zoccali 2008). As I discuss in this section, the simulations presented here consist of $40\%$ disk-born NSs and $60\%$ bulge-born NSs. The actual ratio of disk to bulge NSs is unknown and may be significantly different from the one assumed here. Therefore, in the resulting catalogue (\S\ref{sec:Cat}) I specify the origin of each simulated NS and also present the results for each population separately. The total number of NSs in the Milky Way is constrained by the chemical composition of the Galaxy. Specifically, the iron content of the Galaxy implies that the total number of core-collapse supernovae (SNe; and therefore NSs) that exploded in the Milky Way is about $10^{9}$ (Arnett, Schramm, \& Truran 1989). However, the star formation rate in the Galactic disk was approximately constant during the last $\sim12$\,Gyr (Rocha-Pinto et al. 2000; see also Noh \& Scalo 1990). Therefore, with the current NS birth rate of up to one per 30~years (e.g., Diehl et al. 2006), one expects that there are $\ltorder4\times10^{8}$ disk-born NSs in our Galaxy. The predictions based on the SN rate ($\ltorder4\times10^{8}$) and the chemical evolution ($\sim10^{9}$) are therefore inconsistent. Ways around this problem include the uncertainty in the SN rate (e.g., Arnett et al. 1989) and the difficulty in estimating the star formation history of our Galaxy (Keane \& Kramer 2008). Another viable resolution of this discrepancy is that the Galactic bulge had a higher star formation rate at earlier times. The Galactic bulge contains a considerable number of stars that were born, apparently, in a $\sim1$\,Gyr-long burst about 12\,Gyr ago (Ferreras, Wyse, \& Silk 2003; Ballero et al. 2007; Minniti \& Zoccali 2008). I therefore assume here that in addition to the disk-born NSs there are up to $6\times10^{8}$ NSs that were formed in the Galactic bulge about 12\,Gyr ago. This number was selected such that the total number of NSs in the Galaxy is $10^{9}$. I note that the Galactic bulge contains only about 20\% of the Galactic stars (e.g., Klypin et al. 2002). However, Ballero et al. (2007) argued that the initial mass function, $dN/dM$, of the bulge is skewed toward high masses, with $x$, the power-law index in the stellar mass function ($dN/dM\propto M^{-(1+x)}$; e.g., Salpeter 1955), being $<0.95$. Assuming that stars with masses exceeding 8\,M$_{\odot}$ produce NSs within a relatively short time after their birth, this suggests that the number of NSs born in the Galactic bulge is a few times larger than the number of NSs born in the Galactic disk, per unit stellar mass. Therefore, the assumption that the majority of NSs were born in the Galactic bulge is in rough agreement with the expected fraction of high-mass ($\gtorder8$\,M$_{\odot}$) stars in the bulge, relative to the disk. Nevertheless, as stated before, the ratio between the numbers of bulge-born and disk-born NSs is uncertain. The simulations presented here cover a wide range of possibilities regarding birth-place and birth-time scenarios. 
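For concreteness, the birth-time prescription described above can be sketched in a few lines of Python. The snippet below is illustrative only (it is not the code used for the simulations); the $40\%/60\%$ split and the 12\,Gyr limits are simply the values adopted in this section, and the function and variable names are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def draw_birth_population(n, f_disk=0.4, t_max=12.0):
    """Population flags and ages (Gyr): disk-born NSs form at a constant
    rate over the past t_max Gyr; bulge-born NSs all formed t_max Gyr ago."""
    is_disk = rng.random(n) < f_disk          # True = disk-born, False = bulge-born
    age = np.full(n, t_max)                   # bulge: single burst 12 Gyr ago
    age[is_disk] = rng.uniform(0.0, t_max, is_disk.sum())   # disk: constant rate
    return age, is_disk.astype(int)
\end{verbatim}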
\subsection{NSs space and velocity distributions at birth} \label{sec:NSbirthdist} Following Paczynski (1990), I adopt a radial birth probability distribution, $p(R)$, which follows the exponential stellar disk: \begin{equation} dp(R)=a_{R}\frac{R}{R_{d}^{2}}\exp{\Big(-\frac{R}{R_{d}}\Big)}dR, \label{ExpDisk} \end{equation} where $R$ is the distance from the Galactic center projected on the Galactic plane, $R_{d}=4.5$\,kpc is the exponential disk scale length, and $a_{R}$ is the normalization given by: \begin{equation} a_{R}=[1-e^{-R_{max}/R_{d}} (1 + R_{max}/R_{d})]^{-1}, \label{ExpDiskNorm} \end{equation} where $R_{max}=15$\,kpc is the disk truncation radius. Yusifov \& K\"{u}\c{c}\"{u}k (2004) estimated the radial distribution (measured from the Galactic center) of Galactic pulsars, and found it to be more concentrated than predicted by the Paczynski (1990) distribution. However, I note that the Galactic radial distribution of supernova remnants is in good agreement with the Paczynski (1990) model. For the NS bulge population I assume that the space distribution at birth has the same functional form and parameters as the NS disk population, but with $R_{d}=1$\,kpc in Eqs.~\ref{ExpDisk} and \ref{ExpDiskNorm}. For the initial velocity probability distribution, I use two different models. Both models are consistent with the observed velocity distribution of young pulsars. One is a bimodal velocity distribution composed of two Gaussians found by Arzoumanian et al. (2002), and the second is based on a double-sided exponential unimodal velocity distribution (Faucher-Gigu\`{e}re \& Kaspi 2006). {\bf Bimodal velocity distribution:} The first initial velocity distribution I use is from Arzoumanian et al. (2002). It consists of two components. The three-dimensional speed, $v$, probability density of this model is: \begin{equation} p(v) = 4\pi v^{2} \sum_{j=1,2}{ \frac{f_{j}}{(2\pi\sigma_{v,j}^{2})^{3/2}}\exp{(-v^{2}/[2\sigma_{v,j}^{2}])}}, \label{BirthVelDist2} \end{equation} where $j\in \{1,2\}$, $f_{1}=0.40$ and $f_{2}=0.60$ are the fractions of the two NS velocity components, $\sigma_{v,1}=90$\,km\,s$^{-1}$, and $\sigma_{v,2}=500$\,km\,s$^{-1}$. {\bf Unimodal velocity distribution:} The second model I use for the velocity distribution of NSs at birth was advocated by Faucher-Gigu\`{e}re \& Kaspi (2006). In their model, each one-dimensional (projected) velocity component, $v_{i}$, consists of a double-sided exponential probability distribution: \begin{equation} p(v_{i}) = \frac{1}{2\langle v_{i}\rangle} \exp{\Big( -\frac{|v_{i}|}{\langle v_{i}\rangle} \Big)}, \label{BirthVelDist1} \end{equation} where $\langle v_{i}\rangle=180$\,km\,s$^{-1}$ is the mean velocity in each direction, $i$. I note that drawing the velocity components of simulated NSs directly from Eq.~\ref{BirthVelDist1} will result in an aspherical distribution. In order to avoid this problem, the corresponding three-dimensional distribution is required. Faucher-Gigu\`{e}re \& Kaspi (2006) noted that this distribution is difficult to derive analytically, and did~not provide its functional form. However, it is possible to show that the spherically symmetric three-dimensional distribution corresponding to Eq.~\ref{BirthVelDist1} is given by \begin{equation} p(v) = \frac{v}{\langle v_{i} \rangle^{2}} \exp{\Big( -\frac{|v|}{\langle v_{i}\rangle} \Big)}. 
\label{BirthVelDist1_3} \end{equation} (Projecting this isotropic distribution onto a single axis, $p(v_{i})=\frac{1}{2}\int_{|v_{i}|}^{\infty}p(v)\,v^{-1}\,dv$, indeed recovers Eq.~\ref{BirthVelDist1}.) In Figure~\ref{fig:InitVelDist_Compare} I show the one-dimensional ({\it dashed} lines) and three-dimensional ({\it solid} lines) initial velocity probability distributions based on the Faucher-Gigu\`{e}re \& Kaspi (2006) velocity distribution ({\it thin black} lines) and the Arzoumanian et al. (2002) distribution ({\it thick gray} lines). Several other fits for the pulsar velocity distribution at birth are available (e.g., Hansen \& Phinney 1997; Cordes \& Chernoff 1998; Hobbs et al. 2005; Zou et al. 2005). However, the Arzoumanian et al. (2002) and Faucher-Gigu\`{e}re \& Kaspi (2006) velocity distributions cover a wide range of possibilities that reflects the current uncertainties in the pulsar velocity distribution at birth. The two models are considerably different from each other, mainly in the low- and high-velocity tails. Relative to the Arzoumanian et al. (2002) velocity distribution, the Faucher-Gigu\`{e}re \& Kaspi (2006) distribution has an excess of NSs at low velocities. Since the slowest NSs are the most efficient accretors from the ISM (e.g., Ostriker et al. 1970), these differences may have a significant impact on the detectability of isolated old NSs in X-ray wavebands (e.g., Blaes \& Madau 1993). \begin{figure} \centerline{\includegraphics[width=8.5cm]{f1.eps}} \caption{The initial velocity distribution of NSs at birth relative to a rotating galaxy. The {\it dashed} lines represent the one-dimensional (1D) velocity distributions, while the {\it solid} lines show the three-dimensional (3D) speed distributions. The {\it black thin} lines are for the Faucher-Gigu\`{e}re \& Kaspi (2006) unimodal double-sided exponential velocity distribution, while the {\it gray thick} lines show the Arzoumanian et al. (2002) bimodal Gaussian velocity distribution. \label{fig:InitVelDist_Compare}} \end{figure} The velocity distributions in Eqs.~\ref{BirthVelDist2} and \ref{BirthVelDist1_3} are given relative to a rotating disk. Therefore, I added the disk rotation velocity to these birth velocities: \begin{equation} v'_{x} = v_{x} - V_{circ}(R)\sin(\theta), \label{VreldiskX} \end{equation} \begin{equation} v'_{y} = v_{y} + V_{circ}(R)\cos(\theta), \label{VreldiskY} \end{equation} \begin{equation} v'_{z} = v_{z}, \label{VreldiskZ} \end{equation} where $v_{i}$ is the velocity vector relative to the rotating disk (i.e., the one obtained from Eqs.~\ref{BirthVelDist2} or \ref{BirthVelDist1_3}), $v'_{i}$ is the velocity vector in a non-rotating (inertial) reference frame, $R = (x^{2}+y^{2})^{1/2}$, $\theta$ is the azimuthal angle on the Galactic disk ($={\rm arctan2}[y,x]$), which was selected from a uniform random distribution, and $x$, $y$, and $z$ are the position in a Cartesian, non-rotating coordinate system, whose origin is at the Galactic center and whose $z$ axis is perpendicular to the Galactic plane. The circular velocity $V_{circ}$ was obtained from \begin{equation} V_{circ}(R) = \sqrt{-R \nabla_{R}{\Phi}}, \label{Vcirc} \end{equation} where $\Phi$ is the Galactic potential (see \S\ref{sec:GalPot}), at the point of interest. Finally, I assume that the vertical height above the Galactic mid-plane of NSs at birth, $z$, is drawn from a Gaussian distribution ($\propto \exp[-z^{2}/(2\sigma_{b}^{2})]$), with $\sigma_{b}=0.16$\,kpc and $0.05$\,kpc, for the Arzoumanian et al. (2002) and Faucher-Gigu\`{e}re \& Kaspi (2006) initial velocity distributions, respectively. 
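The birth positions and velocities described in this subsection can be drawn with a short Monte-Carlo routine. The following Python sketch is illustrative only (it is not the code used in this work): \texttt{v\_circ} is taken here as a constant placeholder, whereas in the simulations it is evaluated from Eq.~\ref{Vcirc} and the potential of \S\ref{sec:GalPot} at each birth radius, and the function names are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def draw_birth_radius(n, r_d=4.5, r_max=15.0):
    """Radii (kpc) from p(R) ~ (R/R_d^2) exp(-R/R_d), truncated at R_max:
    a Gamma(2, R_d) variate with rejection of R >= R_max."""
    out = np.empty(0)
    while out.size < n:
        r = rng.gamma(2.0, r_d, size=n)
        out = np.concatenate([out, r[r < r_max]])
    return out[:n]

def draw_speed_bimodal(n, f1=0.4, sigma1=90.0, sigma2=500.0):
    """3D speeds (km/s) from the two-component Maxwellian (Arzoumanian et al. 2002)."""
    sig = np.where(rng.random(n) < f1, sigma1, sigma2)
    return np.linalg.norm(rng.normal(size=(n, 3)) * sig[:, None], axis=1)

def draw_speed_unimodal(n, v_mean=180.0):
    """3D speeds (km/s) from p(v) ~ v exp(-v/<v_i>), i.e. a Gamma(2, <v_i>)
    variate (Faucher-Giguere & Kaspi 2006)."""
    return rng.gamma(2.0, v_mean, size=n)

def birth_state(n, sigma_b=0.16, bimodal=True, v_circ=220.0):
    """Birth positions (kpc) and inertial-frame velocities (km/s).
    v_circ is a constant placeholder for V_circ(R)."""
    R = draw_birth_radius(n)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)      # azimuthal angle on the disk
    x, y = R * np.cos(theta), R * np.sin(theta)
    z = rng.normal(0.0, sigma_b, n)               # Gaussian vertical height at birth
    v = draw_speed_bimodal(n) if bimodal else draw_speed_unimodal(n)
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1)[:, None]       # isotropic unit vectors
    vx, vy, vz = (v[:, None] * u).T
    # add the disk rotation to transform to the non-rotating frame
    return x, y, z, vx - v_circ * np.sin(theta), vy + v_circ * np.cos(theta), vz
\end{verbatim}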
\subsection{The Galactic potential} \label{sec:GalPot} Following Paczynski (1990), I assume that the Galactic potential is composed of a disk component, a spheroidal component, and a halo component. I also added a central black hole component. The disk and spheroid components are described by the following potential proposed by Miyamoto \& Nagai (1975), \begin{equation} \Phi_{k}(x,y,z)=-\frac{GM_{k}}{ \Big\{x^{2}+y^{2}+[a_{k}+(z^{2}+b_{k}^{2})^{1/2} ]^{2} \Big\}^{1/2}}, \label{potential_disk_sphere} \end{equation} where $a_{k}$, $b_{k}$, and $M_{k}$ are given in Equations~\ref{eq:Potential_d_par} and \ref{eq:Potential_s_par}, $k=d$ corresponds to the disk component, $k=s$ corresponds to the spheroid component, and $G$ is the gravitational constant. Next, the halo potential is given by \begin{equation} \Phi_{h}(r)=-\frac{GM_{h}}{r_{h}} \Big[ \frac{1}{2} \ln{ \Big(1+\frac{r^{2}}{r_{h}^{2}} \Big)} + \frac{r_{h}}{r} \arctan{ \Big(\frac{r}{r_{h}} \Big) } \Big], \label{potential_halo} \end{equation} where $r^{2}=x^{2}+y^{2}+z^{2}$, and $M_{h}$ and $r_{h}$ are listed in Eq.~\ref{Potential_h_par}. Furthermore, I added a component representing the Galactic central massive black hole (e.g., Ghez et al. 1998): \begin{equation} \Phi_{bh}(r)=-\frac{GM_{bh}}{r}, \label{potential_bh} \end{equation} where $M_{bh}$ is given in Eq.~\ref{Potential_bh_par} (Eisenhauer et al. 2005). The choice of parameters listed here reproduces the observed Galactic rotation, local density, and local column density (see Paczynski 1990 for details): \begin{eqnarray} a_{d}=3.7~{\rm kpc},~ & b_{d}=0.20~{\rm kpc},~ & M_{d}=8.07\times10^{10}~{\rm M_{\odot}}, \label{eq:Potential_d_par} \end{eqnarray} \begin{eqnarray} a_{s}=0.0~{\rm kpc},~ & b_{s}=0.277~{\rm kpc},~ & M_{s}=1.12\times10^{10}~{\rm M_{\odot}}, \label{eq:Potential_s_par} \end{eqnarray} \begin{eqnarray} r_{h}=6.0~{\rm kpc},~ & M_{h}=5.0\times10^{10}~{\rm M_{\odot}}, \label{Potential_h_par} \end{eqnarray} \begin{equation} M_{bh}=3.6\times10^{6}~{\rm M_{\odot}}. \label{Potential_bh_par} \end{equation} I note that the black hole contributes about $4\%$ ($0.6\%$) of the gravitational potential at a distance of 1\,pc (10\,pc) from the black hole. Therefore, its influence on the Galactic potential is negligible and it does~not change the fitted parameters in Eqs.~\ref{eq:Potential_d_par}-\ref{Potential_bh_par}. However, it may heat NSs passing nearby. Finally, the Galactic potential is the sum of these four components: \begin{equation} \Phi = \Phi_{d} + \Phi_{s} + \Phi_{h} + \Phi_{bh}. \label{TotalPotential} \end{equation} \section{Dynamical Heating} \label{sec:DynHeat} Dynamical heating, presumably by giant molecular clouds (e.g., Kamahori \& Fujimoto 1986, 1987), spiral structure (e.g., Barbanis \& Woltjer 1967; Carlberg \& Sellwood 1985; Jenkins \& Binney 1990), and stars, tends to broaden the velocity and spatial distributions of Galactic stars (e.g., Wielen 1977; Nordstrom et al. 2004). In order to roughly estimate the effect of dynamical heating on NSs, Madau \& Blaes (1994) applied the force-free diffusion equation to the velocity and vertical distance distributions of NSs. They adopted the total diffusion coefficient, $C$ ($=C_{U}+C_{V}+C_{W}$), measured by Wielen (1977; $C=600$\,km$^{2}$\,s$^{-2}$\,Gyr$^{-1}$). Here, $U$ corresponds to the radial direction, and is positive in the direction of the Galactic center; $V$ points in the direction of circular rotation; and $W$ is directed towards the North Galactic pole. 
Applying heating, they found that the local density of NSs is smaller by about 30\% relative to the case of no heating. In the following I discuss this approach in light of newly available observational data, and estimate the diffusion coefficient, $C$, using modern measurements. Nordstrom et al. (2004) estimated the dynamical heating of Galactic disk stars and showed that it does~not saturate after some time, as suggested by measurements based on smaller samples (e.g., Quillen \& Garnett 2001; see also Aumer \& Binney 2009). Approximating dynamical heating by a random-walk process, I fit the Nordstrom et al. (2004) measurements with the function (e.g., Wielen 1977) \begin{equation} \sigma_{g} = (\sigma_{g,0}^{2} + C_{g}\tau)^{1/2}, \label{vel_diffus} \end{equation} where $\sigma_{g}$ is the velocity dispersion component at time $\tau$ since birth, $\sigma_{g,0}$ is the initial velocity dispersion component, and $g\in\{U,V,W\}$. The data and the best-fit curves are shown in Figure~\ref{Fig:SigmaT}, and the best-fit parameters are listed in Table~\ref{Tab:DiffusPar}. \begin{figure} \centerline{\includegraphics[width=8.5cm]{f2.eps}} \caption{Stellar velocity dispersion ($\sigma$) along the $U$ (radial; {\it circles}), $V$ (tangential; {\it triangles}) and $W$ (vertical; {\it squares}) Galactic directions as a function of age since birth, $\tau$, as measured by Nordstrom et al. (2004). The lines represent the best fit of Equation~\ref{vel_diffus} to the data. The best-fit parameters are listed in Table~\ref{Tab:DiffusPar}. \label{Fig:SigmaT}} \end{figure} \begin{deluxetable}{lccc} \tablecolumns{4} \tablewidth{0pt} \tablecaption{The Galactic disk diffusion coefficients} \tablehead{ \colhead{Velocity component} & \colhead{$\sigma_{g,0}$} & \colhead{$C$} & \colhead{$\chi^{2}/dof$} \\ \colhead{} & \colhead{km~s$^{-1}$} & \colhead{km$^{2}$~s$^{-2}$~Gyr$^{-1}$} & \colhead{} } \startdata U & $18.7\pm1.2$ & $185\pm19 $ & $3.1/8$ \\ V & $9.7\pm0.9 $ & $81.7\pm7.5$ & $9.3/8$ \\ W & $5.5\pm1.0 $ & $61.3\pm5.1$ & $11.0/8$ \enddata \tablecomments{The diffusion coefficients of Galactic stars as obtained by fitting Eq.~\ref{vel_diffus} to the measurements of Nordstrom et al. (2004).} \label{Tab:DiffusPar} \end{deluxetable} The total diffusion coefficient that I find, $C_{U}+C_{V}+C_{W}=328\pm21$\,km$^{2}$\,s$^{-2}$\,Gyr$^{-1}$, is about half the value found by Wielen (1977) and used by Madau \& Blaes (1994). Next, the initial velocities estimated by Narayan \& Ostriker (1990), which were used by Madau \& Blaes (1994), are considerably lower than the more recent estimates (e.g., Arzoumanian et al. 2002). As suggested by Eq.~\ref{vel_diffus}, diffusion affects mostly low-velocity objects. Therefore, dynamical heating is less important than estimated by Madau \& Blaes (1994). Furthermore, the approach taken by Madau \& Blaes (1994) assumes that NSs are being affected by diffusion at all times. However, the scatterers are restricted to the Galactic plane. NSs are born with high velocities and spend $80\%$ to $90\%$ of the time at distances larger than 100\,pc from the Galactic plane (based on the results in \S\ref{sec:NSres}). Therefore, they are less susceptible to dynamical heating than disk stars. Based on these three arguments, I conclude that the importance of dynamical heating was overestimated by Madau \& Blaes (1994) by at least an order of magnitude. Nevertheless, dynamical heating may affect some of the slow-moving objects, but they are a minority among the Galactic NSs. 
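The fit of Eq.~\ref{vel_diffus} can be reproduced with a standard least-squares routine, as in the Python sketch below. The sketch is illustrative only: the data arrays are synthetic stand-ins generated from the best-fit $W$-component parameters of Table~\ref{Tab:DiffusPar} and must be replaced by the actual age--velocity-dispersion measurements of Nordstrom et al. (2004).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def sigma_model(tau, sigma0, c_g):
    """Random-walk heating: sigma_g(tau) = sqrt(sigma_{g,0}^2 + C_g * tau)."""
    return np.sqrt(sigma0**2 + c_g * tau)

rng = np.random.default_rng(2)
tau = np.linspace(0.5, 10.0, 10)                    # stellar ages (Gyr)
# Synthetic stand-in for the measured W-component dispersions (km/s);
# replace with the Nordstrom et al. (2004) data points.
sigma_w = sigma_model(tau, 5.5, 61.3) + rng.normal(0.0, 0.5, tau.size)

popt, pcov = curve_fit(sigma_model, tau, sigma_w, p0=(5.0, 50.0))
sigma0_fit, c_fit = popt                            # km/s, km^2 s^-2 Gyr^-1
\end{verbatim}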
Eq.~\ref{vel_diffus} roughly suggests that dynamical heating is important for NSs with speeds smaller than about $(C\tau)^{1/2}$. Even for $\tau=10$\,Gyr this gives 60\,km\,s$^{-1}$. However, only $2.8\%$ and $4.5\%$ of the NSs in the Arzoumanian et al. (2002) and Faucher-Gigu\`{e}re \& Kaspi (2006) initial velocity distributions, respectively, have speeds smaller than 60\,km\,s$^{-1}$ at birth (relative to their local standard of rest). \section{Monte-Carlo simulations} \label{sec:MC} To solve for the NS orbits, I integrate the equations of motion \begin{equation} \frac{d^{2}\vec{x}}{dt^{2}} = -\vec{\nabla} \Phi, \label{eq:EqMotion} \end{equation} using a Livermore ordinary differential equation solver\footnote{http://www.netlib.org/odepack/} (Hindmarsh 1983). The integration is performed in a non-rotating Cartesian coordinate system whose origin is at the Galactic center and whose $z$ axis is perpendicular to the Galactic plane. For each initial velocity distribution (i.e., bimodal or unimodal; see \S\ref{sec:NSbirthdist}), I simulated $3\times10^{6}$ NS orbits. In each simulation I randomly drew the NS birth times, positions, and velocities from the probability distributions described in \S\ref{sec:NSbirthrate} and \S\ref{sec:NSbirthdist}. As explained before, $40\%$ of these NSs are disk-born, and the rest are bulge-born. At the end of each simulation I checked whether the integration conserved the total energy. In cases in which the energy was not conserved to within $0.1\%$, I reran the integration using the same initial conditions, with a refined integration tolerance. In the second (and final) iteration, the energy was conserved to better than $2.8\%$ in all cases. \section{Catalogue} \label{sec:Cat} The catalogues of initial (i.e., at birth) and final (i.e., current epoch) space and velocity components of the simulated NSs are listed in Table~\ref{tab:MC_v2g} for the bimodal initial velocity distribution of Arzoumanian et al. (2002), and in Table~\ref{tab:MC_v1e} for the unimodal initial velocity distribution of Faucher-Gigu\`{e}re \& Kaspi (2006). The first column in each table indicates whether the simulated NS belongs to the bulge population (code 0) or to the disk population (code 1). 
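For reference, the orbit integration of Eq.~\ref{eq:EqMotion} in the potential of \S\ref{sec:GalPot} can be reproduced with any standard ODE solver. The Python sketch below is illustrative only (it is not the ODEPACK-based code used here): it integrates, as an example, the initial conditions of the first NS listed in Table~\ref{tab:MC_v2g}, and the halo acceleration is written through the enclosed mass $M(<r)=M_{h}[r/r_{h}-\arctan(r/r_{h})]$ that corresponds to this halo model.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Units: kpc, Gyr, solar masses; G ~ 4.5e-6 kpc^3 Msun^-1 Gyr^-2,
# and 1 kpc/Gyr = (1/1.0227) km/s, as quoted in the table notes.
G = 4.50e-6
A_D, B_D, M_D = 3.7, 0.20, 8.07e10     # Miyamoto & Nagai disk
A_S, B_S, M_S = 0.0, 0.277, 1.12e10    # spheroid
R_H, M_H = 6.0, 5.0e10                 # halo
M_BH = 3.6e6                           # central black hole

def acceleration(x, y, z):
    """-grad(Phi) for the four-component Galactic potential."""
    ax = ay = az = 0.0
    for a, b, m in ((A_D, B_D, M_D), (A_S, B_S, M_S)):   # disk + spheroid
        s = np.sqrt(z**2 + b**2)
        d3 = (x**2 + y**2 + (a + s)**2) ** 1.5
        ax -= G * m * x / d3
        ay -= G * m * y / d3
        az -= G * m * z * (a + s) / (s * d3)
    r = np.sqrt(x**2 + y**2 + z**2)
    # halo (through its enclosed mass) plus the central black hole
    m_enc = M_H * (r / R_H - np.arctan(r / R_H)) + M_BH
    ax -= G * m_enc * x / r**3
    ay -= G * m_enc * y / r**3
    az -= G * m_enc * z / r**3
    return ax, ay, az

def rhs(t, w):
    """Time derivative of the phase-space vector w = (x, y, z, vx, vy, vz)."""
    x, y, z, vx, vy, vz = w
    return [vx, vy, vz, *acceleration(x, y, z)]

# Example: initial conditions of the first (bulge-born) NS in the bimodal
# catalogue, integrated for 12 Gyr (positions in kpc, velocities in kpc/Gyr).
w0 = [-2.04, 1.16, -0.07, -77.7, -194.6, 8.8]
sol = solve_ivp(rhs, (0.0, 12.0), w0, rtol=1e-10, atol=1e-10)
x_final, y_final, z_final = sol.y[:3, -1]
\end{verbatim}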
\begin{deluxetable*}{crrrrrrrr|rrrrrr} \tablecolumns{15} \tablewidth{0pt} \tablecaption{NS Monte-Carlo orbital simulations using the bimodal initial velocity distribution} \tablehead{ \multicolumn{9}{c}{Initial} & \multicolumn{6}{c}{Final} \\ \colhead{P} & \colhead{Age} & \colhead{$X$} & \colhead{$Y$} & \colhead{$Z$} & \colhead{$\dot{X}$} & \colhead{$\dot{Y}$} & \colhead{$\dot{Z}$} & \colhead{$V_{circ}$} & \colhead{$X$} & \colhead{$Y$} & \colhead{$Z$} & \colhead{$\dot{X}$} & \colhead{$\dot{Y}$} & \colhead{$\dot{Z}$} \\ \colhead{} & \colhead{Gyr} & \colhead{kpc} & \colhead{kpc} & \colhead{kpc} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc} & \colhead{kpc} & \colhead{kpc} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc\,Gyr$^{-1}$} } \startdata 0& 12.0&$-2.04$& $ 1.16$ & $-0.07$ & $ -77.7$ & $-194.6$ &$ 8.8$&$208.7$&$ -0.71$&$ 2.41$&$ -0.07$&$-181.0$&$ -70.7$&$ 9.1$ \\ 0& 12.0&$-0.58$& $ 1.57$ & $-0.27$ & $-199.1$ & $ 0.1$ &$-376.7$&$206.7$&$ -1.40$&$ -8.23$&$ -5.08$&$ 42.4$&$ 25.4$&$ -70.3$ \\ 0& 12.0&$ 0.60$& $ 1.57$ & $ 0.02$ & $ 531.9$ & $ 507.1$ &$ -60.9$&$206.6$&$477.31$&$ 404.37$&$ -5.35$&$-154.9$&$ -132.3$&$ 2.3$ \\ 0& 12.0&$-1.27$& $ 2.33$ & $ 0.05$ & $-140.6$ & $ -96.1$ &$ -53.5$&$211.6$&$ -0.20$&$ 2.64$&$ 0.06$&$-167.7$&$ -32.5$&$ 52.4$ \\ 0& 12.0&$-1.54$& $-2.95$ & $-0.18$ & $-137.7$ & $-209.2$ &$-425.3$&$218.2$&$ -1.31$&$ -1.29$&$ -19.70$&$ -10.4$&$ 53.7$&$ 251.2$ \enddata \tablecomments{Catalog of initial and final space positions and velocity components for $3\times10^{6}$ simulated NSs. The NSs were simulated using the bimodal initial velocity distribution of Arzoumanian et al. (2002). The velocity components are given in kpc\,Gyr$^{-1}$, relative to the non-rotating Galaxy. In order to convert kpc\,Gyr$^{-1}$ to km\,s$^{-1}$, divide it by $1.0227$. P is the population type: 0 for bulge-born NS; 1 for disk-born NS. $V_{circ}$ refers to the Galaxy rotation speed at the projected location, on the Galactic disk, in which the NS was born. The initial (and final) velocity components are given relative to a non-rotating galaxy (inertial reference frame). The numbers in this table are rounded in order to fit into the page. This table is published in its entirety in the electronic edition of this paper. 
A portion of the full table is shown here for guidance regarding its form and content.} \label{tab:MC_v2g} \end{deluxetable*} \begin{deluxetable*}{crrrrrrrr|rrrrrr} \tablecolumns{15} \tablewidth{0pt} \tablecaption{NS Monte-Carlo orbital simulations using the unimodal initial velocity distribution} \tablehead{ \multicolumn{9}{c}{Initial} & \multicolumn{6}{c}{Final} \\ \colhead{P} & \colhead{Age} & \colhead{$X$} & \colhead{$Y$} & \colhead{$Z$} & \colhead{$\dot{X}$} & \colhead{$\dot{Y}$} & \colhead{$\dot{Z}$} & \colhead{$V_{circ}$} & \colhead{$X$} & \colhead{$Y$} & \colhead{$Z$} & \colhead{$\dot{X}$} & \colhead{$\dot{Y}$} & \colhead{$\dot{Z}$} \\ \colhead{} & \colhead{Gyr} & \colhead{kpc} & \colhead{kpc} & \colhead{kpc} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc} & \colhead{kpc} & \colhead{kpc} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc\,Gyr$^{-1}$} & \colhead{kpc\,Gyr$^{-1}$} } \startdata 1& 2.3&$-7.05$&$-6.69$&$ 0.07$&$-258.0$&$-232.1$&$ -55.2$&$220.2$&$ -39.41$&$ -3.21$&$ -5.51$&$ 24.9$&$ 4.3$&$ 8.8$ \\ 0& 12.0&$ 0.38$&$ 0.88$&$-0.01$&$-124.3$&$-176.5$&$ 134.8$&$226.7$&$ 0.35$&$ 0.10$&$ 0.60$&$ 213.6$&$ 178.4$&$ 79.1$ \\ 1& 2.9&$-0.69$&$ 5.40$&$ 0.07$&$-124.5$&$ 122.9$&$ 294.4$&$227.1$&$ -6.97$&$ -5.47$&$ -11.00$&$ 34.2$&$ -57.3$&$ 105.9$ \\ 1& 0.6&$-2.02$&$-1.49$&$-0.02$&$-510.5$&$-374.6$&$ 5.6$&$210.2$&$-140.81$&$-103.14$&$ -0.48$&$ -139.0$&$ -101.7$&$ -0.8$ \\ 0& 12.0&$-2.34$&$-2.96$&$-0.02$&$ 178.0$&$-374.0$&$-108.7$&$221.6$&$ -7.62$&$ -17.92$&$ 2.54$&$ 115.5$&$ 87.9$&$ -21.8$ \enddata \tablecomments{Like Table~\ref{tab:MC_v2g}, but for the unimodal initial velocity distribution of Faucher-Gigu\`{e}re \& Kaspi (2006).} \label{tab:MC_v1e} \end{deluxetable*} In addition, catalogue of simulated NS positions, distances, radial velocities, and proper motions for an observer located at the ``solar circle'', moving around the Galactic center with a velocity of 220\,km\,s$^{-1}$ is given in tables~\ref{tab:SunCentric2} and \ref{tab:SunCentric1}. The solar circle is defined to be on the Galactic plane (i.e., $z=0$\,kpc) at the distance of the Sun from the Galactic center ($R_{\odot}=8.0$\,kpc; Ghez et al. 2008). The catalogue in Tables~\ref{tab:SunCentric2} and \ref{tab:SunCentric1} are based on the initial velocity distribution of Arzoumanian et al. (2002) and the Faucher-Gigu\`{e}re \& Kaspi (2006), respectively. I note that the radial velocity and proper motions are calculated for a static observer with respect to the Local Standard of Rest (LSR; i.e., the solar motion with respect to the LSR is neglected). In order to reduce Poisson errors in the local properties of NSs (i.e., density; sky surface density), tables~\ref{tab:SunCentric2} and \ref{tab:SunCentric1} were produced by calculating the positions of the $3\times10^{6}$ simulated NSs in tables~\ref{tab:MC_v2g} and \ref{tab:MC_v1e}, respectively, from 100 random locations on the solar circle. Hence, tables~\ref{tab:SunCentric2} and \ref{tab:SunCentric1} list $3\times10^{8}$ simulated NSs. 
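The projection onto an observer at the solar circle can be illustrated with the following Python sketch. It is illustrative only (the actual conversion used the inverse of Eq. 3.23-3 in Seidelmann 1992): the observer is placed on the solar circle at azimuth $\phi$ and moves with the 220\,km\,s$^{-1}$ circular velocity, the solar motion relative to the LSR is neglected, positions are in kpc and velocities in kpc\,Gyr$^{-1}$, and the longitude proper motion returned below includes the $\cos b$ factor.
\begin{verbatim}
import numpy as np

R_SUN = 8.0                      # kpc (Ghez et al. 2008)
V_LSR = 220.0 * 1.0227           # 220 km/s expressed in kpc/Gyr
TO_ASEC_YR = 206265.0 / 1.0e9    # rad/Gyr -> arcsec/yr

def observe(pos, vel, phi):
    """Galactic l, b (deg), distance (kpc), proper motions (arcsec/yr) and
    radial velocity (km/s) of NSs (arrays of shape (N, 3)) for an observer
    on the solar circle at azimuth phi."""
    obs_pos = R_SUN * np.array([np.cos(phi), np.sin(phi), 0.0])
    obs_vel = V_LSR * np.array([-np.sin(phi), np.cos(phi), 0.0])
    # Rotate into the observer's Galactic frame: x' toward the Galactic center,
    # y' along the rotation direction, z' toward the North Galactic pole.
    rot = np.array([[-np.cos(phi), -np.sin(phi), 0.0],
                    [-np.sin(phi),  np.cos(phi), 0.0],
                    [0.0,           0.0,         1.0]])
    d = (pos - obs_pos) @ rot.T
    dv = (vel - obs_vel) @ rot.T
    dist = np.linalg.norm(d, axis=1)
    l = np.arctan2(d[:, 1], d[:, 0])
    b = np.arcsin(d[:, 2] / dist)
    e_r = d / dist[:, None]                                   # line of sight
    e_l = np.stack([-np.sin(l), np.cos(l), np.zeros_like(l)], axis=1)
    e_b = np.stack([-np.sin(b) * np.cos(l),
                    -np.sin(b) * np.sin(l), np.cos(b)], axis=1)
    rv = np.sum(dv * e_r, axis=1) / 1.0227                    # km/s
    mu_l = np.sum(dv * e_l, axis=1) / dist * TO_ASEC_YR       # includes cos(b)
    mu_b = np.sum(dv * e_b, axis=1) / dist * TO_ASEC_YR
    return np.degrees(l) % 360.0, np.degrees(b), dist, mu_l, mu_b, rv
\end{verbatim}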
\begin{deluxetable*}{crrrrrrr} \tablecolumns{8} \tablewidth{0pt} \tablecaption{Catalog of simulated NSs as observed from the LSR - based on the bimodal initial velocity distribution} \tablehead{ \colhead{P} & \colhead{Age} & \colhead{$l$} & \colhead{$b$} & \colhead{dist} & \colhead{$\mu_{l}$} & \colhead{$\mu_{b}$} & \colhead{RV} \\ \colhead{} & \colhead{Gyr} & \colhead{deg} & \colhead{deg} & \colhead{kpc} & \colhead{$''$\,yr$^{-1}$} & \colhead{$''$\,yr$^{-1}$} & \colhead{km\,s$^{-1}$} } \startdata 0 & 12.0000 &$ 354.72497$&$ -0.39865$&$ 10.0003$&$ -0.0064166$&$ 0.0001654$&$ -149.7$ \\ 0 & 12.0000 &$ 72.14501$&$ -38.10227$&$ 8.2258$&$ -0.0025465$&$ -0.0041258$&$ -93.8$ \\ 0 & 12.0000 &$ 273.07904$&$ -0.49003$&$ 625.9548$&$ -0.0000573$&$ 0.0000017$&$ 340.7$ \\ 0 & 12.0000 &$ 351.53160$&$ 0.34229$&$ 9.9375$&$ -0.0057996$&$ 0.0011040$&$ -124.8$ \\ 0 & 12.0000 &$ 13.81616$&$ -68.84354$&$ 21.1283$&$ -0.0015999$&$ 0.0004210$&$ -247.1$ \enddata \tablecomments{Catalog of Galactic longitudes ($l$), latitudes ($b$), distances, proper motions in Galactic longitude ($\mu_{l}$) and latitude ($\mu_{b}$), and radial velocities (RV) from a point on the solar circle (e.g., the LSR) of $3\times10^{8}$ simulated NSs. The velocities and proper motions do~not include the motion of the Sun relative to the LSR. The proper motions are given in the Galactic coordinate system. The catalog was generated by calculating the positions of the $3\times10^{6}$ NSs in Table~\ref{tab:MC_v2g} (i.e., assuming the Arzoumanian et al. [2002] initial velocity distribution) as observed from 100 random points on the solar circle. The conversion of space velocity to proper motion and radial velocity was carried out using the inverse of Eq. 3.23-3 in Seidelmann (1992, p. 121). This table is published in its entirety in the electronic edition of this paper. A portion of the full table is shown here for guidance regarding its form and content. } \label{tab:SunCentric2} \end{deluxetable*} \begin{deluxetable*}{crrrrrrr} \tablecolumns{8} \tablewidth{0pt} \tablecaption{Catalog of simulated NSs as observed from the LSR - based on the unimodal initial velocity distribution} \tablehead{ \colhead{P} & \colhead{Age} & \colhead{$l$} & \colhead{$b$} & \colhead{dist} & \colhead{$\mu_{l}$} & \colhead{$\mu_{b}$} & \colhead{RV} \\ \colhead{} & \colhead{Gyr} & \colhead{deg} & \colhead{deg} & \colhead{kpc} & \colhead{$''$\,yr$^{-1}$} & \colhead{$''$\,yr$^{-1}$} & \colhead{km\,s$^{-1}$} } \startdata 1 & 2.2753 &$ 237.98450$&$ -8.95638$&$ 35.3986$&$ 0.0008045$&$ 0.0002084$&$ 166.7$ \\ 0 & 12.0000 &$ 2.28569$&$ 4.42334$&$ 7.8361$&$ -0.0014499$&$ 0.0016472$&$ 212.2$ \\ 1 & 2.8584 &$ 306.10398$&$ -45.37962$&$ 15.4574$&$ -0.0018509$&$ 0.0033513$&$ 96.8$ \\ 1 & 0.6194 &$ 262.62769$&$ -0.15823$&$ 173.4066$&$ -0.0001141$&$ 0.0000002$&$ 334.3$ \\ 0 & 12.0000 &$ 308.68736$&$ 6.21614$&$ 23.4368$&$ 0.0000392$&$ -0.0003617$&$ 171.9$ \enddata \tablecomments{Like Table~\ref{tab:SunCentric1}, but for the unimodal initial velocity distribution of Faucher-Gigu\`{e}re \& Kaspi (2006). } \label{tab:SunCentric1} \end{deluxetable*} \section{Statistical properties} \label{sec:NSres} In this section I present the space and velocity distributions of simulated NSs, at the current epoch. In \S\ref{sec:GenProp} I discuss the overall distribution of NSs in the Galaxy, while in \S\ref{sec:ObsProp} I discuss their statistical properties as observed from the LSR. Additional specific statistical properties of these objects are discussed in Ofek et al. 
(2009) in the context of the long-duration radio transients (Bower et al. 2007; Kida et al. 2008). The results presented in this section are scaled by $N_{9}=N/10^{9}$, where $N$ is the total number of NSs in the Galaxy, of which $40\%$ were born in the disk and $60\%$ in the bulge. \subsection{Overall properties} \label{sec:GenProp} NSs are born with large space velocities, which are typically of the order of the escape velocity from the Galaxy. These are presumably the result of kick velocities due to asymmetric supernova explosions (e.g., Blaauw 1961; Lai et al. 2006). Therefore, it is expected that a large fraction of the Galactic NSs will be unbound from the Milky Way's gravitational potential, and some may be found at very large distances. In Figure~\ref{fig:NS_EnergyDist}, I show the distribution of the total (kinetic $+$ potential) energy of the simulated NSs: $M_{NS} ( v^{2}/2 + \Phi )$, where the NS mass was set to $M_{NS}=1.4$\,M$_{\odot}$. Panel (a) shows the energy distribution based on the simulations using the unimodal initial velocity distribution of Faucher-Gigu\`{e}re \& Kaspi (2006), while panel (b) is for the bimodal distribution of Arzoumanian et al. (2002). In each panel, the {\it thick black solid} line represents the entire NS population, the {\it thin solid} line shows the disk-born NSs, and the {\it dashed} line is for bulge-born NSs. \begin{figure} \centerline{\includegraphics[width=8.5cm]{f3.eps}} \caption{The total energy distribution of Galactic-born NSs. The energy does~not include external potentials (i.e., other galaxies), and assumes all NSs have a mass of 1.4\,M$_{\odot}$. The different lines show the distribution of all the NSs ({\it solid thick black} lines); only disk-born NSs ({\it solid thin} lines); and only bulge-born NSs ({\it dashed} lines). \label{fig:NS_EnergyDist}} \end{figure} Using the approximation that all the NSs with negative energies are gravitationally bound to the Galaxy (neglecting heating), in Table~\ref{Tab:unboundedNS} I give the fractions of NSs bound to the Galactic gravitational potential. \begin{deluxetable}{lccc} \tablecolumns{4} \tablewidth{0pt} \tablecaption{Fraction of NSs gravitationally bound to the Galaxy} \tablehead{ \colhead{Initial velocity\tablenotemark{a}} & \colhead{All} & \colhead{disk} & \colhead{bulge} } \startdata A2002 & 0.38 & 0.16 & 0.52 \\ FK2006 & 0.30 & 0.13 & 0.41 \enddata \tablenotetext{a}{Initial velocity distribution used in the simulations, where A2002 corresponds to Arzoumanian et al. (2002), and FK2006 to Faucher-Gigu\`{e}re \& Kaspi (2006).} \label{Tab:unboundedNS} \end{deluxetable} Given the large fraction of gravitationally unbound NSs, it is expected that some NSs may be found at very large distances, $r$, from the Galactic center. I find that about $12\%$ and $35\%$ of the NSs born in the Galaxy are currently at distances larger than 1\,Mpc from the Galactic center, for the unimodal and bimodal initial velocity distributions, respectively. For $1$\,Mpc$<r<10$\,Mpc, I find that the density of NSs as a function of $r$ is about $1.9\times10^{-5}(r/1 {\rm kpc})^{-2.4}N_{9}$\,pc$^{-3}$ and $3.6\times10^{-6}(r/1 {\rm kpc})^{-2.1}N_{9}$\,pc$^{-3}$, for the bimodal and unimodal initial velocity distributions, respectively. Finally, I find that some Milky Way-born NSs may be at distances as large as 30 to 40\,Mpc from the Galaxy. I note that the local density, in our Galaxy, of NSs born in other galaxies is of the order of $10^{-11}N_{9}$\,pc$^{-3}$. 
This was estimated by calculating the density, in the Milky Way, of NSs born in each galaxy found within 10\,Mpc. For this, I used a version of the Tully (1988) nearby galaxy catalog (Ofek 2007), in which the total number of NSs in each galaxy was normalized by its total $B$-band magnitude, relative to the Milky Way. In Figures~\ref{fig:NS_RadialDensity} and \ref{fig:NS_RadialSurfaceDensity} I show the density of NSs in the Galactic plane, and the surface density of NSs projected on the Galactic plane, respectively, as a function of distance from the Galactic center. The notations are the same as in Figure~\ref{fig:NS_EnergyDist}. In addition, in Fig.~\ref{fig:NS_RadialSurfaceDensity}, the {\it gray solid} line represents the initial surface distribution of all NSs. Furthermore, the vertical {\it dashed} lines mark the distance of the Sun from the Galactic center, $R_{\odot}=8.0$\,kpc (Ghez et al. 2008). As noted before, I refer to the circle at this distance from the Galactic center, in the Galactic plane, as the solar circle. The densities at the solar circle and at the Galactic center are listed in the figure captions. I note that the fluctuations seen at small and large radii are due to Poisson noise. \begin{figure} \centerline{\includegraphics[width=8.5cm]{f4.eps}} \caption{Density of NSs in the Galactic plane, as measured at the current epoch. See Fig.~\ref{fig:NS_EnergyDist} for details regarding line types. The density is calculated by counting the NSs within 100\,pc of the Galactic plane, dividing by the appropriate volume in each bin, and assuming that the total number of NSs in the Galaxy is $10^{9}$. For the initial velocity distribution of Faucher-Gigu\`{e}re \& Kaspi (2006; panel a), the NS density at the solar circle (a distance of 8.0\,kpc from the Galactic center; Ghez et al. 2008) is $4.0\times10^{-4}N_{9}$\,pc$^{-3}$, of which $60\%$ are disk-born and $40\%$ are bulge-born. The NS density at the Galactic center is $3\times10^{-1}N_{9}$\,pc$^{-3}$, of which $5\%$ are disk-born and $95\%$ are bulge-born. For the initial velocity distribution of Arzoumanian et al. (2002; panel b), the density at the solar circle is $2.4\times10^{-4}N_{9}$\,pc$^{-3}$, of which $66\%$ are disk-born and $34\%$ are bulge-born. The density at the Galactic center is $2\times10^{-1}N_{9}$\,pc$^{-3}$, of which $7\%$ are disk-born and $93\%$ are bulge-born. I note that the fluctuations seen at small and large radii are due to Poisson noise. \label{fig:NS_RadialDensity}} \end{figure} \begin{figure} \centerline{\includegraphics[width=8.5cm]{f5.eps}} \caption{The surface density of NSs projected on the Galactic plane, as measured at the current epoch. See Fig.~\ref{fig:NS_EnergyDist} for details. In addition, the {\it gray} lines show the initial surface density distributions. For the initial velocity distribution of Faucher-Gigu\`{e}re \& Kaspi (2006; panel a), the NS surface density at the solar circle is $0.6N_{9}$\,pc$^{-2}$, of which $54\%$ are disk-born NSs and $46\%$ are bulge-born NSs. The surface density at the Galactic center is $\sim90N_{9}$\,pc$^{-2}$, of which about $9\%$ are disk-born NSs and the rest are bulge-born NSs. For the initial velocity distribution of Arzoumanian et al. (2002; panel b), the NS surface density at the solar circle is $0.4N_{9}$\,pc$^{-2}$, of which $61\%$ are disk-born NSs and $39\%$ are bulge-born NSs. The corresponding surface density at the Galactic center is $\sim50N_{9}$\,pc$^{-2}$, of which $\sim6\%$ are disk-born NSs and the rest are bulge-born NSs. 
As before, the plotted densities assume the total number of NSs is $10^{9}$. \label{fig:NS_RadialSurfaceDensity}} \end{figure} In Figure~\ref{fig:NS_Gal_zDistSolarCirc} I show the NS density as a function of height above or below the Galactic plane, as measured at the solar circle. This was calculated by counting the number of simulated NSs whose distance from the Galactic center, projected on the Galactic plane, is $8.0\pm0.5$\,kpc. The line scheme is the same as in Figure~\ref{fig:NS_EnergyDist}. At the solar circle the scale height\footnote{Scale height is defined as the height at which the density drops to $1/e$ of its value at the Galactic plane.} of NSs is about $0.6$\,kpc and $0.3$\,kpc for the Arzoumanian et al. (2002) and Faucher-Gigu\`{e}re \& Kaspi (2006) initial velocity distributions, respectively. \begin{figure} \centerline{\includegraphics[width=8.5cm]{f6.eps}} \caption{The density of NSs as a function of height, $z$, above/below the Galactic plane as measured on the solar circle at the current epoch. Line types are like those in Fig.~\ref{fig:NS_EnergyDist}. For the initial velocity distribution of Faucher-Gigu\`{e}re \& Kaspi (2006; panel a), $50\%$ ($90\%$) of the NSs are found within 0.9\,kpc (5.6\,kpc) of the Galactic plane. For the initial velocity distribution of Arzoumanian et al. (2002; panel b), $50\%$ ($90\%$) of the NSs are found within 0.7\,kpc (4.4\,kpc) of the Galactic plane. \label{fig:NS_Gal_zDistSolarCirc}} \end{figure} Shown in Fig.~\ref{fig:NS_RadialDensityZ} are the space densities of NSs as a function of projected (on the Galactic plane) distance from the Galactic center, for several different Galactic heights: 0\,kpc, 1\,kpc, 3\,kpc, and 10\,kpc. The densities are calculated in slices, parallel to the Galactic plane, with a semi-width of 0.1\,kpc. \begin{figure} \centerline{\includegraphics[width=8.5cm]{f7.eps}} \caption{The density of NSs at Galactic heights of 0\,kpc, 1\,kpc, 3\,kpc, and 10\,kpc above/below the Galactic plane, as a function of the projected (on the Galactic plane) distance from the Galactic center, $R$, and as measured at the current epoch. Panel (a) shows the densities calculated using the initial velocity distribution of Faucher-Gigu\`{e}re \& Kaspi (2006). For this model, at the solar circle, the densities are about $4\times10^{-4}N_{9}$\,pc$^{-3}$, $6\times10^{-5}N_{9}$\,pc$^{-3}$, $2\times10^{-5}N_{9}$\,pc$^{-3}$, and $2\times10^{-6}N_{9}$\,pc$^{-3}$, at Galactic heights of 0\,kpc, 1\,kpc, 3\,kpc, and 10\,kpc, respectively. Panel (b) shows the densities calculated using the initial velocity distribution of Arzoumanian et al. (2002). For this model, at the solar circle, the densities are about $2\times10^{-4}N_{9}$\,pc$^{-3}$, $5\times10^{-5}N_{9}$\,pc$^{-3}$, $1\times10^{-5}N_{9}$\,pc$^{-3}$, and $1\times10^{-6}N_{9}$\,pc$^{-3}$, at Galactic heights of 0\,kpc, 1\,kpc, 3\,kpc, and 10\,kpc, respectively. \label{fig:NS_RadialDensityZ}} \end{figure} Finally, in Figure~\ref{fig:NS_SpeedDist} I show the initial and final speed distributions as measured relative to an inertial reference frame (in contrast to Fig.~\ref{fig:InitVelDist_Compare}, which shows the initial speed distribution relative to a rotating reference frame). The probabilities in this figure are shown per 1\,km\,s$^{-1}$ bin. The line scheme is again like the one used in Fig.~\ref{fig:NS_RadialSurfaceDensity}. As expected, the typical speeds of NSs decrease with time as they, on average, increase their distances from the Galaxy and lose kinetic energy. 
For the unimodal initial velocity distribution, at the current epoch I estimate that about $17\%$, $53\%$, and $99\%$ of the NSs have speeds below 100, 200, and 1000\,km\,s$^{-1}$, respectively. For the bimodal initial velocity distribution, at the current epoch I estimate that about $11\%$, $39\%$, and $94\%$ of the NSs have speeds below 100, 200, and 1000\,km\,s$^{-1}$, respectively. \begin{figure} \centerline{\includegraphics[width=8.5cm]{f8.eps}} \caption{The initial and final speed probability distributions of the simulated NSs, relative to an inertial reference frame. Line types are like those in Fig.~\ref{fig:NS_RadialSurfaceDensity}. The probabilities are calculated for 1\,km\,s$^{-1}$ bins. Panel (a) shows the results for the unimodal initial velocity distribution, while panel (b) shows the results for the bimodal initial velocity distribution. \label{fig:NS_SpeedDist}} \end{figure} \subsection{Properties observed from the LSR} \label{sec:ObsProp} In this subsection I discuss: (i) the expected distance of the nearest NS from the Sun; (ii) the number of young NSs in the solar neighborhood; (iii) the proper motion distribution of nearby NSs; and (iv) the all-sky distribution of Galactic NSs. The probability of finding an NS within a distance $d$ of the Sun is given by \begin{equation} P_{<d} = 1-\exp(-\frac{4}{3}\pi \rho d^{3}). \label{Pd} \end{equation} Given the local density of NSs, $\rho$, that I have found in \S\ref{sec:GenProp}, this implies that the expected distance of the nearest NS is about $8.8N_{9}^{-1/3}$\,pc and $7.5N_{9}^{-1/3}$\,pc for the Arzoumanian et al. (2002) and Faucher-Gigu\`{e}re \& Kaspi (2006) distributions, respectively. For the bimodal initial velocity distribution (Table~\ref{tab:SunCentric2}), within 1\,kpc of the Sun, 63\% of the NSs are disk-born, and there are about $220 N_{9}$ ($900 N_{9}$) NSs younger than 1\,Myr (10\,Myr). On the other hand, for the unimodal initial velocity distribution (Table~\ref{tab:SunCentric1}), within 1\,kpc of the Sun, 57\% of the NSs are disk-born, and there are about $190 N_{9}$ ($930 N_{9}$) NSs younger than 1\,Myr (10\,Myr). In Figure~\ref{fig:NS_PM_Dist}, I show the median total proper motion ({\it solid} lines) of simulated NSs as a function of their distance from an observer located on the solar circle. The {\it dotted} lines show the lower and upper $95$-percentiles of the proper motion distributions. The {\it black} lines represent the unimodal initial velocity distribution, and the {\it gray} lines are for the bimodal initial velocity distribution. In addition, the {\it dots} show the observed proper motions of known pulsars\footnote{Pulsar distances and proper motions were obtained from http://www.atnf.csiro.au/research/pulsar/psrcat/.} (Manchester et al. 2005). \begin{figure} \centerline{\includegraphics[width=8.5cm]{f9.eps}} \caption{The median total proper motion ({\it solid} lines) of simulated NSs as a function of their distance from an observer located on the solar circle. The {\it dotted} lines show the lower and upper $95$-percentiles of the proper motion distributions. The {\it black} lines represent the unimodal initial velocity distribution, and the {\it gray} lines are for the bimodal initial velocity distribution. In addition, the {\it dots} show the observed proper motions of known pulsars. \label{fig:NS_PM_Dist}} \end{figure} \begin{figure*} \centerline{\includegraphics[width=16cm]{f10.eps}} \caption{Sky surface density distribution of NSs, at the current epoch, for an observer located on the solar circle. 
The maps are presented using the Aitoff equal-area projection. The grids represent Galactic coordinates with $15^{\circ}$ spacing. The panels in the left column are for the unimodal initial velocity distribution of Faucher-Gigu\`{e}re \& Kaspi (2006), while the right column is for the bimodal initial velocity distribution of Arzoumanian et al. (2002). The upper row assumes $10^{9}$ NSs, of which $60\%$ are bulge-born and $40\%$ are disk-born. The middle row is for the disk-born NSs, assuming there are $4\times10^{8}$ of them in the Galaxy, while the bottom row is for the bulge-born NSs, assuming there are $6\times10^{8}$ of them in the Galaxy. \label{fig:NS_SkyDistribution}} \end{figure*} I note that the largest total proper motion in Table~\ref{tab:SunCentric2} (i.e., bimodal initial velocity distribution) is $4.6''$\,yr$^{-1}$, and that $\sim170 N_{9}$ NSs are expected to have proper motions in excess of $1''$\,yr$^{-1}$. In Table~\ref{tab:SunCentric1}, the largest proper motion is $3.3''$\,yr$^{-1}$, and about $240 N_{9}$ NSs are expected to have proper motions in excess of $1''$\,yr$^{-1}$. In Fig.~\ref{fig:NS_SkyDistribution}a--f I show the sky surface density of NSs at the current epoch, for an observer located at the solar circle. Panels (a)--(c) are for the unimodal initial velocity distribution, while panels (d)--(f) are for the bimodal initial velocity distribution. Panels (a) and (d) show the distribution for all the NSs (i.e., disk- and bulge-born populations), panels (b) and (e) for the disk-born population, and panels (c) and (f) for the bulge-born population. The maps are shown in the Aitoff equal-area projection and Galactic coordinate system, where the Galactic center is at the center of each map. The surface densities are normalized assuming that there are $4\times10^{8}$ disk-born NSs and $6\times10^{8}$ bulge-born NSs. At the positions with the lowest surface density of NSs, the Poisson errors due to the limited statistics are smaller than about $10\%$. I find that the minimum surface density is attained in the direction of the Galactic poles, and is about $3900N_{9}$\,deg$^{-2}$ and $8100N_{9}$\,deg$^{-2}$ for the unimodal and bimodal initial velocity distributions, respectively. The maximum surface density is in the direction of the Galactic center, and it is about $1.4\times10^{6}N_{9}$\,deg$^{-2}$ and $1.1\times10^{6}N_{9}$\,deg$^{-2}$ for the unimodal and bimodal initial velocity distributions, respectively. I note that the differences between the sky surface densities resulting from the two initial velocity distributions are as large as about $65\%$. \section{Summary} \label{sec:Disc} The Milky Way's NS content, and in particular the NS space and velocity distributions, are important for searches for nearby isolated old NSs, and for studying any ongoing activity from such objects. As a tool for such studies, I present mock catalogues of the spatial positions and velocities of simulated isolated old NSs at the current epoch. The catalogues were constructed by integrating the equations of motion of simulated NSs in the Galactic potential. The simulations include two populations of NSs: one in which the NSs were born in the Galactic bulge about 12\,Gyr ago, and a second in which NSs are born in the Galactic disk at a constant rate, starting 12\,Gyr ago. The combined NS population assumes that $60\%$ of the NSs originated in the bulge and the rest in the disk. 
Although the exact number of Galactic NSs and their position-dependent birth rate are not known, these two populations provide a wide range of initial conditions. I generated two catalogues of simulated NSs. Each catalogue contains $3\times10^{6}$ objects. The two catalogues utilize different initial velocity distributions of NSs. One catalogue (Table~\ref{tab:MC_v2g}) uses the initial velocity distribution of Arzoumanian et al. (2002), while the other (Table~\ref{tab:MC_v1e}) uses the initial velocity distribution of Faucher-Gigu\`{e}re \& Kaspi (2006). Also derived are catalogues of simulated NS positions and proper motions with respect to an observer located at the solar circle (Tables~\ref{tab:SunCentric2} and \ref{tab:SunCentric1}). The space distributions at the current epoch obtained from the two initial velocity distributions implemented here are somewhat different. For example, I find that the resulting sky surface density, based on the two different initial velocity distributions, differs by up to $65\%$. The main differences between the two initial velocity distributions are in their low- and high-velocity tails (see Fig.~\ref{fig:InitVelDist_Compare}). \begin{deluxetable}{lccl} \tablecolumns{4} \tablewidth{0pt} \tablecaption{Comparison with previous works} \tablehead{ \colhead{$\rho/N_{9}$} & \colhead{$Z_{1/2}$} & \colhead{mode$(p[v])$\tablenotemark{a}} & \colhead{Reference} \\ \colhead{pc$^{-3}$} & \colhead{kpc} & \colhead{km\,s$^{-1}$} & \colhead{} } \startdata $2.4\times10^{-4}$ &0.42 & 240 & This Work (disk$+$bulge) A2002\tablenotemark{b} \\ $4.0\times10^{-4}$ &0.20 & 200 & This Work (disk$+$bulge) FK2006\tablenotemark{c} \\ $1.4\times10^{-3}$ &0.20 & & Paczynski (1990) \\ $7.5\times10^{-4}$ &0.27 & 43 & Blaes \& Madau (1993) \\ $5.3\times10^{-4}$ &0.50 & 69 & Madau \& Blaes (1994) \\ $4\times10^{-4}$ & & 140 & Perna et al. (2003) \\ $5\times10^{-4}$ &0.4 & & Popov et al. (2005) \enddata \tablenotetext{a}{Mode is the most probable value of the distribution, and the speeds are measured relative to an inertial reference frame.} \tablenotetext{b}{Using the bimodal initial velocity distribution of Arzoumanian et al. (2002).} \tablenotetext{c}{Using the unimodal double-sided exponential initial velocity distribution of Faucher-Gigu\`{e}re \& Kaspi (2006).} \tablecomments{The density, $\rho$, in the solar neighborhood is calculated assuming there are $10^{9}$ NSs in the Galaxy. $Z_{1/2}$ is the height above the Galactic plane at which the NS density drops to $1/2$ of its value on the Galactic plane, calculated at the solar circle. The density from Madau \& Blaes (1994) is calculated assuming that diffusion operates for 5\,Gyr (see \S\ref{sec:DynHeat}). The initial velocity distribution used by Paczynski (1990), Blaes \& Madau (1993), and Madau \& Blaes (1994) is from Narayan \& Ostriker (1990). The initial velocity distribution in Perna et al. (2003) is from Cordes \& Chernoff (1998). } \label{tab:Comp} \end{deluxetable} The space and velocity distributions of Galactic NSs were estimated by several past works. In Table~\ref{tab:Comp}, I compare some of the basic results of the simulations presented here (e.g., the local density of NSs and the scale height) with the ones obtained by previous efforts. \acknowledgments I thank Re'em Sari, Ehud Nakar, Orly Gnat, Avishay Gal-Yam, Sterl Phinney, and Mansi Kasliwal for valuable discussions. I also thank an anonymous referee for valuable comments. 
Support for program number HST-GO-11104.01-A was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
\section{Introduction} \par\noindent The cores of Active Galactic Nuclei (AGN), identified as quasars, emit a huge amount of power at visible and ultraviolet frequencies \cite{bmp}. They obtain their power from the gravitational potential energy of a massive black hole residing at their centers \cite{kori}. The radiation is emitted by the accretion disk surrounding the black hole. In this paper we compute the luminosity of the invisible axions \cite{PQ1,PQ2,Weinberg,Wilczek,McKay,Kim1,Dine,Zhitnitsky,McKay2,Kim} from AGNs. We also consider a hypothetical light pseudoscalar whose couplings and mass are not related to one another. Our motivation for this study is twofold. The pseudoscalar flux from AGNs may be used to impose limits on its mass and couplings. If the pseudoscalar flux is sufficiently large then it might also provide an explanation for the observed large scale alignment of visible polarizations from quasars. Large scale alignment, on distance scales of a Gpc, has been observed in many regions of the sky \cite{huts98,huts01,huts02,JNS04}. A statistically significant signal of alignment with the local supercluster has also been observed \cite{huts98,huts01,huts02,JNS04,JP04}. This effect may be explained in terms of the conversion of photons to pseudoscalars in the local supercluster magnetic field. However this explanation is not consistent with data. The problem arises due to the observed difference in the distribution of polarizations among the Radio Quiet (RQ) and optically selected (O) quasars and the Broad Absorption Line (BAL) quasars. The polarization distribution of the RQ and O quasars peaks at very low values. The magnitude of the mixing required to explain the alignment effect is sufficiently large so as to completely wash out this difference. In Ref. \cite{JPS02} it was suggested that if the pseudoscalar flux from quasars is sufficiently large at visible frequencies, then conversion in the supercluster magnetic field may consistently explain the alignment with the supercluster. In this case the alignment is explained in terms of the conversion of pseudoscalars to photons. We first study the emission of pseudoscalars from the accretion disk via the Compton, Bremsstrahlung and Primakoff channels. In this calculation we assume the pseudoscalar to be the standard axion. Besides emission from the accretion disk, pseudoscalars may also be produced in the AGN atmospheres due to the conversion of photons to pseudoscalars in the background magnetic field. The probability for this conversion is negligible for the standard axion but can be large if the pseudoscalar mass is very small. \section{Axion Luminosity from the Accretion Disk} The emission rates of the axion via the Compton, Bremsstrahlung and Primakoff channels are given, respectively, by the formulas \cite{kolb,fukugita,kmw,Dicus1}, \begin{equation} {\dot{\epsilon_a}(C) = \frac{40N_A\alpha g_{aee}^2 \zeta (6)T^6}{\mu _e \pi^2 m_e^4}} \end{equation} \begin{equation} {\dot{\epsilon_a}(B) = \frac{64n_en_Z g_{aee}^2 Z^2 \alpha ^2T^{5/2}}{15\rho (2\pi)^{3/2} m_e^{7/2}}} \end{equation} \begin{equation} {\dot{\epsilon_a}(P) = \frac{2n_Z Z^2 \alpha g_{a \gamma \gamma}^2 T^4 \left(6\zeta (4)\left[\ln(2)-0.5-\ln(\frac{\omega_p}{T})\right]+7.74\right)}{\rho \pi}}\, . 
\end{equation} Here ${N_A}$ is Avogadro's number, $\alpha$ is the fine structure constant, ${\zeta(n)}$ is the Riemann zeta function, ${T}$ is the temperature in the accretion disk, ${\mu_e}$ is the mean molecular weight of the electron, ${m_e}$ is the electron mass, ${\rho}$ is the density, $n_{e}$ $(n_Z)$ is the number density of electrons (nucleons), ${Z}$ is the atomic number and $\omega_p$ is the plasma frequency. The axion-electron coupling $g_{aee}$ and the axion-photon coupling, $g_{a\gamma\gamma}$, are related to the Peccei-Quinn (PQ) spontaneous symmetry breaking scale, $f_{PQ}$, by the standard formulas \cite{kolb}. We assume the DFSZ \cite{Dine,Zhitnitsky} axion for which $f_{PQ}$ is constrained by observations to be greater than $ 10^{8}$ GeV \cite{PDG,ggrflt}. For this limiting value the couplings are found to be $g_{aee} = 5.0 \times 10^{-12}$ and $g_{a \gamma \gamma} = 8.4 \times 10^{-12}\ {\rm GeV}^{-1}$. In these estimates we have set the color anomaly factor to be unity. Although the values of these parameters are model dependent, here we use these values for our estimates. We next calculate the luminosity of axions from the accretion disk by integrating the emission rates over the disk mass. We assume the thin disk model of the accretion disk. Let $\rho$ denote the density of the disk. It can be replaced by $ {\frac{\Sigma}{H}}$, where ${\Sigma}$ is the surface density of the disk and $H = {R^{3/2}C_s\over\sqrt{GM}}$. Here $C_s=10^6$ cm/s is the speed of sound in the accretion disk medium, $G$ is the Gravitation constant, and $M$ is the mass of the central black hole, roughly equal to $10^{41} {\rm gm}$ \cite{king}. \subsection{Luminosity due to Compton Scattering} We first compute the luminosity due to Compton scattering. From the formula for the emission rate we get $\dot{\epsilon_a}(C) = 1.268\times 10^{-11}T^{6}~{\rm GeV}$. The luminosity is given by, \begin{equation} {L_{comp} = \int\dot{\epsilon_a}(C)\,d M = 1.72 \times 10^{34} \int \int \int \rho T^6 R\,d R \,d \phi\,d z ~~{\rm erg-s^{-1}}} \end{equation} The `$z$' integration is straightforward, $\int dz=H$, where $H$ is the scale height of the disk. We obtain \begin{equation} {L_{comp} = 1.72 \times 10^{34} \times 2\pi\int_{R_{\ast}}^{10^{3}R_{\ast}} \Sigma T^6 R \,d R ~~{\rm erg-s^{-1}}} \end{equation} where ${\Sigma}$ and $T$ are given by \cite{king}, \begin{equation} {\Sigma = 3.57 \times 10^{33} \left[ \frac{1}{R^{3/2}} - \frac{\sqrt R_{\ast}}{R^2} \right]~~{\rm erg-cm^{-2}}}\,, \label{eqn:surfden} \end{equation} \begin{equation} {T = 1.1 \times 10^{16} \left[ \frac{1}{R^3} - \frac{\sqrt R_{\ast}}{R^{7/2}} \right]^{1/4}~~{\rm K}}\,, \label{eqn:temp} \end{equation} respectively. Using these values, we get $ L_{comp} = 9.7 \times 10^{29} ~~{\rm erg-s^{-1}}$. \subsection{Bremsstrahlung} The axion emission rate for the bremsstrahlung process is found to be \begin{equation} {\dot{\epsilon_a}(B) = 1.461\times10^{-16} \rho T^{5/2}~~{\rm GeV}} \end{equation} where both $\rho$ and $T$ are in GeV units. Using Eqs. (\ref{eqn:surfden}) and (\ref{eqn:temp}) we find, \begin{equation} {\dot{\epsilon_a}(B) = 1.28\times10^{-22} \frac{\Sigma}{R^{3/2}} T^{5/2}~~{\rm GeV}}\,. \end{equation} The luminosity due to this process is found to be \begin{equation} {L_{brem}} = \int\dot{\epsilon_a}(B)\,d M = 5.7 \times 10^{36} ~~{\rm erg-s^{-1}}\,. 
\end{equation} \subsection{Primakoff} In this case, the emission rate is given by, \begin{equation} {\dot{\epsilon_a}(P) = 3.4567\times 10^{-25} \left[8.9943 T^4 - 6.4939T^4 \ln \left({\omega_p \over T}\right)\right] ~~{\rm GeV}} \end{equation} where the plasma frequency, \begin{equation} {\omega_p = 2.452\times 10^{13} {1 \over R^{3/4}} \left [{1 \over R^{3/2}} - {\sqrt{R_{\ast}}\over R^2}\right]^{1/2} ~~{\rm GeV}}\,. \end{equation} Therefore, we find that, \begin{equation} {L_{Prim} = 2.84 \times 10^{32} + 7.1 \times 10^{31} \approx 3.6 \times 10^{32} ~~~{ \rm erg - s^{-1}}} \end{equation} We find that the axion emission rate is relatively small for all three processes. It is negligible compared to the AGN's photon luminosity. The dominant contribution comes from the bremsstrahlung process. The calculation in this section depends only on the coupling of the axion to fermions and photons and does not depend on its mass. Hence the result is also valid for a hypothetical pseudoscalar with similar couplings but whose mass may not be related to the PQ symmetry breaking scale, as long as the mass is much smaller than the accretion disk temperature. We note that in our study we have made several approximations. For example, we have used the thin disk approximation, which may not hold in reality. Our use of the thin disk model is forced upon us, since no other viable stable model is available. Furthermore, we have not investigated the pseudoscalar emission rate from the interior of the AGN or from other components, such as jets. \section{Conversion In AGN Surroundings} In section 2 we have found that the total axion luminosity of the accretion disk is negligible compared to the total photon luminosity. Here, we determine the contribution to the pseudoscalar luminosity from photon to pseudoscalar conversion in the background magnetic field of the AGN surroundings. By AGN surroundings, we mean the atmosphere outside the accretion disk. This includes the dust tori, broad line region and the narrow line region. In order to perform this calculation, we require parameters such as the plasma density and magnetic field in this region, which are unknown. Hence, we instead use the parameters corresponding to the radio lobes. We shall take the representative values for Cygnus A to estimate the pseudoscalar luminosity. The actual parameters may vary and hence our estimates may only be qualitatively reliable. We point out that we are not considering the standard Peccei-Quinn axion in this section. Rather, we look at a generic pseudoscalar whose mass and couplings to visible matter are unrelated to each other. The present bound on the axion mass is relatively large. For such a large mass, the conversion probability of axions into photons is found to be negligible. Instead, here we assume that $m_{\phi} \lesssim \omega_p$, where $\omega_p$ is the typical plasma frequency in the radio lobes. In this case the conversion probability may be significant. The pseudoscalar-photon mixing phenomenon in a background magnetic field has been analyzed in great detail in the literature \cite{Sikivie83,Sikivie85,Maiani,RS88,Bradley,CG94,sudeep,Ganguly,Ganguly09,Csaki02,sroy,MirizziCMB,MirizziTEV,Song,Gnedin,Agarwal}. This phenomenon has also been used to impose stringent limits on the pseudoscalar-photon coupling \cite{Rosenberg,Vysotsky,Dicus2,Dearborn86,Raffelt87,Raffelt88,Turner,Mohanty,Brockway,Grifols,CAST1,CAST2,jaeckel,Robilliard,zavattini,Rubbia}. 
A beam of photons, passing through a background magnetic field, would convert partially into pseudoscalars due to this mixing phenomenon. The mixing probability increases with frequency. At very high frequencies the mixing probability is very large and hence the flux of pseudoscalars produced may be comparable to the incident photon flux. As the pseudoscalar flux becomes sizeable, we expect significant pseudoscalar to photon conversion. Eventually we expect a beam containing roughly equal numbers of photons and pseudoscalars. This is true as long as the extinction of photons is negligible in the medium. However if extinction is significant, we may obtain a larger pseudoscalar luminosity, even if the incident beam consists entirely of photons. The extinction coefficient for AGN atmospheres is not known. Here we shall assume that the extinction is of the same order of magnitude as that observed in the host galaxies of high redshift supernovas \cite{Riess}. Here the visual extinction coefficient is extracted from the observed light curve for these supernovas. In the present case, in order to compute the pseudoscalar flux at visible frequencies, we need to extrapolate the extinction coefficient to ultraviolet frequencies. This is because a source at high redshift must emit UV light so that the radiation received by us is in the visible band. The extinction depends approximately linearly on the frequency of the photons. Hence, we obtain the extinction at UV frequencies by suitably rescaling the extinction observed in the visible region in supernovae data \cite{Riess} at large redshifts. We next briefly review the formalism for pseudoscalar-photon mixing in a uniform background, extended to the case where the medium causes extinction of photons. An earlier discussion of this phenomenon may be found in \cite{Csaki}. Using the notation of \cite{sudeep}, we write the differential equation of mixing with extinction, ignoring the longitudinal component of the photons and its mixing with the transverse components \cite{sudeep}, as follows, \begin{equation} \left(\omega^2 + \partial^2_z\right) \left[\begin{array}{cc} A_{||}(z)\\ \phi(z) \end{array}\right] = M \left[\begin{array}{cc} A_{||}(z)\\ \phi(z) \end{array}\right]\,. \label{eq:mstart} \end{equation} This equation describes the mixing of the parallel component of the electromagnetic field with the pseudoscalar $\phi$. The perpendicular component $A_{\perp}$ does not mix with $\phi$. The ``mass matrix" or the ``mixing matrix" in Eq. (\ref{eq:mstart}) can be written as, \begin{equation} M = \left[ \begin{array}{cc} \omega_p^2 + i\Gamma(\omega) & -g_{\phi}\mathcal{B_T}\omega \\ -g_{\phi}\mathcal{B_T}\omega & m_{\phi}^2 \\ \end{array} \right] \end{equation} \par\noindent where $\mathcal{B_T}$ is the transverse component of the background magnetic field and $\Gamma(\omega)$ describes the attenuation of photons due to their extinction in the medium. We note that the extinction of light is modelled with a parameter called the optical depth, $\tau_{\nu}$, such that the intensity $I_\nu(z) = I_\nu(0) e^{-\tau_\nu}$, where $z$ is the thickness of the medium. In general the optical depth increases linearly with frequency. Hence we may assume $\tau_\nu = K\omega$, where $K$ is a constant. In our formulation, the exponential decay parameter is $\frac{\Gamma z}{2\omega}$, at leading order. Equating this to $\tau_\nu$ we find $\Gamma = {2\omega^2 K \over z}$. 
As discussed above, we shall fix the value of $K$ by assuming that at visual frequencies the extinction is similar to that observed for high redshift supernovas \cite{Riess}. This leads to $\Gamma \approx 2.8\times 10^{-28}\omega^2(\tau/0.5)$, where $\omega$ is expressed in Hz and the visual extinction $A_V = (2.5\log_{10}e)\tau$. We can solve the equations, Eq. \ref{eq:mstart}, by diagonalizing the matrix $M$. The eigenvalues of this matrix, $\lambda_+$ and $\lambda_-$, may be expressed as, \begin{equation} \lambda_\pm = {1\over 2}\left[\Omega_p^2 + m_\phi^2 \pm\sqrt{(\Omega_p^2 + m_\phi^2)^2 - 4(\Omega_p^2 m_\phi^2 - g_\phi^2 \mathcal{B}_T^2\omega^2) } \right] \end{equation} where $\Omega_p^2 = \omega_p^2+i\Gamma$. We assume the boundary condition, $\phi(0) = 0$, and find the final result \begin{eqnarray} \label{last} A_{||}(z) &=& {1\over ad-bc}\left[ad~e^{(iz\sqrt{\omega^2 - \lambda_+})} - bc~e^{(iz\sqrt{\omega^2 - \lambda_-})}\right] A_{||}(0) \nonumber \\ \phi(z) &=& {bd\over ad-bc}\left[ e^{(iz\sqrt{\omega^2 - \lambda_+})} - e^{(iz\sqrt{\omega^2 - \lambda_-})}\right]A_{||}(0)\,, \end{eqnarray} where $a = (\lambda_+-m_\phi^2)/\sqrt{N_+}$, $b = -g_\phi B_T\omega/\sqrt{N_+}$, $c = g_\phi B_T\omega/\sqrt{N_-}$, $d = (\Omega_p^2 - \lambda_-)/\sqrt{N_-}$. Here $N_+$ and $N_-$ are normalization factors which cancel out in the final expressions. The perpendicular component of the electromagnetic wave is given by, \begin{equation} A_\perp(z) = A_\perp(0)e^{iz\sqrt{\omega^2-\Omega_p^2}} \label{eq:Aperp} \end{equation} Using Eq. \ref{last} and Eq. \ref{eq:Aperp} we can compute the photon and pseudoscalar flux emerging out of the AGN atmosphere. In making this calculation we assume parameters corresponding to the Cygnus A radio lobe. Hence we set the plasma density $n_e = 10^{-4}$ cm$^{-3}$ and the magnetic field $B_T = 4\times 10^{-4}$ G. We assume a pseudoscalar-photon coupling $g_{\phi\gamma\gamma} = 10^{-10}$ GeV$^{-1}$, and the pseudoscalar mass is set to zero. In Fig. \ref{fig:omega} we show the pseudoscalar and photon intensity as a function of the frequency, setting the distance equal to 10 kpc. Here we have set the extinction parameter $\tau=0.1$, and $\omega$ represents the frequency at the source. In this plot we have normalized the intensity such that the photon intensity is unity before entering the AGN atmosphere. In Fig. \ref{fig:ratio} we show the ratio of the pseudoscalar to photon intensity as a function of frequency. We find that the pseudoscalar intensity is significantly larger than the photon intensity at higher frequencies. For the parameters chosen, the pseudoscalar intensity is a factor of two or three larger than the photon intensity for $\omega=5\times 10^{16}$ to $10^{17}$ Hz. For larger frequencies the pseudoscalar flux may be an order of magnitude higher in comparison to the photon flux. However, here the overall flux may be very small. In our calculations we have set the pseudoscalar mass to zero. If the pseudoscalar mass is comparable to the plasma frequency of the medium then there is also the possibility of resonant mixing of pseudoscalars with photons \cite{sudeep}. In this case the photon to pseudoscalar conversion is considerably enhanced and hence the pseudoscalar flux from AGNs may be significantly higher. 
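As an illustration of how Eqs. \ref{last} and \ref{eq:Aperp} can be evaluated numerically, a minimal Python sketch (using NumPy) is given below for the Cygnus A-like parameters quoted above ($n_e = 10^{-4}$ cm$^{-3}$, $B_T = 4\times10^{-4}$ G, $g_{\phi\gamma\gamma}=10^{-10}$ GeV$^{-1}$, $m_\phi=0$, a 10 kpc path, $\tau\approx0.1$ at visual frequencies). The unit conversions, the way the extinction normalization $K$ is fixed, the assumption of an unpolarized incident beam split equally between $A_{||}$ and $A_\perp$, and the choice of the sign of the $i\Gamma$ term so that photons are attenuated (as implied by the decay parameter $\Gamma z/2\omega$) are all assumptions of this sketch; it is meant only to illustrate the algebra and does not reproduce the exact normalization used for Figs. \ref{fig:omega} and \ref{fig:ratio}.
\begin{verbatim}
import numpy as np

# Unit conversions (hbar = c = 1, energies in eV); approximate values.
HZ_TO_EV     = 4.136e-15   # photon energy in eV per Hz of frequency
GAUSS_TO_EV2 = 1.95e-2     # 1 Gauss in eV^2
CM_TO_INV_EV = 5.068e4     # 1 cm in eV^-1

# Parameters quoted in the text (Cygnus A-like radio lobe).
n_e     = 1.0e-4                            # electron density, cm^-3
omega_p = 3.7e-11 * np.sqrt(n_e)            # plasma frequency, eV
B_T     = 4.0e-4 * GAUSS_TO_EV2             # transverse field, eV^2
g_phi   = 1.0e-10 * 1.0e-9                  # coupling, GeV^-1 -> eV^-1
m_phi   = 0.0                               # pseudoscalar mass, eV
z       = 1.0e4 * 3.086e18 * CM_TO_INV_EV   # path length 10 kpc, eV^-1
tau_V   = 0.1                               # assumed visual optical depth
K       = tau_V / (5.5e14 * HZ_TO_EV)       # tau_nu = K*omega, K in eV^-1

def intensities(nu_hz):
    """Pseudoscalar and photon intensities after a path z, normalised
    to a unit, unpolarised incident photon beam (sketch assumption)."""
    w    = nu_hz * HZ_TO_EV                 # photon energy, eV
    Gam  = 2.0 * K * w**2 / z               # extinction term, eV^2
    # Imaginary part chosen so photons decay as exp(-Gam*z/(2w)).
    Op2  = omega_p**2 - 1j * Gam
    disc = np.sqrt((Op2 + m_phi**2)**2
                   - 4.0 * (Op2 * m_phi**2 - (g_phi * B_T * w)**2))
    lp, lm = 0.5 * (Op2 + m_phi**2 + disc), 0.5 * (Op2 + m_phi**2 - disc)
    # a, b, c, d as defined in the text; N_+ and N_- cancel and are dropped.
    a, b = lp - m_phi**2, -g_phi * B_T * w
    c, d = g_phi * B_T * w, Op2 - lm
    # sqrt(w^2 - lambda) ~ w - lambda/(2w); the common exp(i*w*z) phase
    # drops out of the intensities, so only the residual phases are kept.
    ep, em = np.exp(-0.5j * z * lp / w), np.exp(-0.5j * z * lm / w)
    A0     = 1.0 / np.sqrt(2.0)
    A_par  = (a * d * ep - b * c * em) / (a * d - b * c) * A0
    phi    = b * d * (ep - em) / (a * d - b * c) * A0
    A_perp = A0 * np.exp(-0.5j * z * Op2 / w)
    return abs(phi)**2, abs(A_par)**2 + abs(A_perp)**2

for nu in (1e15, 1e16, 5e16, 1e17):
    I_phi, I_gam = intensities(nu)
    print(f"nu = {nu:.1e} Hz   I_phi = {I_phi:.4f}   I_gamma = {I_gam:.4f}")
\end{verbatim}
The expansion $\sqrt{\omega^2-\lambda_\pm}\simeq\omega-\lambda_\pm/2\omega$ is used here purely to avoid a loss of numerical precision when the two nearly equal phase factors are differenced; it is an excellent approximation since $|\lambda_\pm|\ll\omega^2$ for these parameters.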
\begin{figure}[t] \centering \includegraphics[]{omega_dep.eps} \caption{The pseudoscalar and photon intensity as a function of the frequency.} \label{fig:omega} \end{figure} \begin{figure}[t] \centering \includegraphics[]{ratio.eps} \caption{The ratio of pseudoscalar to photon intensity as a function of the frequency.} \label{fig:ratio} \end{figure} \section{Alignment of Quasar Polarizations} We next briefly address the issue of alignment of optical polarization from quasars in the direction of the Virgo supercluster. This region is labelled as A1 in \cite{huts98,huts01}. Here we limit ourselves to a qualitative explanation. A detailed quantitative analysis is postponed for future research. The alignment effect may in principle be explained by the conversion of photons into pseudoscalars in the Virgo supercluster magnetic field \cite{huts98,huts01,JPS02}. However, as mentioned in the introduction, this does not consistently explain the data due to the observed distribution of polarization of the RQ and O quasars \cite{huts98,huts01}. The polarization distribution of these quasars is found to peak at very low values. In contrast the distribution of BAL quasars is observed to be much broader. The difference between these two classes of quasars is seen in all directions including the A1 region. The magnitude of the systematic effect required to explain alignment is sufficiently large that it would completely distort the distribution of RQ and O quasars. We may alternatively consider the possibility that quasars emit a significant amount of pseudoscalar flux, as found in the previous section. The pseudoscalar flux is assumed to be larger than the photon flux. In this case the alignment may be explained in terms of the conversion of pseudoscalars into photons in the Virgo supercluster. This will lead to a linear polarization aligned along the transverse component of the background magnetic field. Furthermore we assume that the ratio of the pseudoscalar to photon flux is much larger for the BAL quasars in comparison to the RQ and O quasars. Hence the systematic polarization effect due to pseudoscalar-photon mixing would be smaller for RQ and O quasars in comparison to BAL quasars. Since the RQ and O quasars in general have smaller degree of polarization, this may be sufficient to explain their alignment. In contrast the BAL quasars would get a larger contribution due to pseudoscalar-photon mixing, which is required due to their larger intrinsic polarization. Hence we find that the observed alignment may be consistently explained if the quasars emit pseudoscalars. \section{Conclusions} \par\noindent In this paper, we have found that the luminosity of pseudoscalars from the AGN accretion disk due to Compton, Bremsstrahlung and Primakoff channels is very small in comparison to the photon luminosity. However, the photons in visible and ultraviolet frequencies may convert to pseudoscalars outside the accretion disk due to pseudoscalar-photon mixing in the background magnetic field. Taking extinction of photons into account and using the current limit on the pseudoscalar-photon coupling, we find that the pseudoscalar flux produced by this process is relatively large. For ultraviolet frequencies, which would be observed in the visible range on earth, this flux may dominate the photon flux. A large pseudoscalar flux may provide a consistent explanation for the large scale coherent orientation of the visible polarizations from quasars. 
\medskip \\ \section{Acknowledgement} We thank Suman Bhattacharya for collaboration during the initial stages of this work.
\section{Introduction}\label{SectionIntroduction} The evolution of neutron stars and the potential relationships between some of their observed classes remain outstanding problems in astrophysics. Proper motion studies of neutron stars can provide independent age estimates with which to shed light on these questions. In particular, the well defined geometry of bow shock pulsar wind nebulae (PWNe; \citeauthor{gaensler:3} \citeyear{gaensler:3}), where the relativistic wind from a high-velocity pulsar is confined by ram pressure, can be used as a probe to aid in the understanding of both neutron star evolution and the properties of the local medium through which these stars travel. The ``Mouse'' (PWN~G359.23$-$0.82), a non-thermal radio nebula, was discovered as part of a radio continuum survey of the Galactic center region \citep{yusef}, and was suggested to be powered by a young pulsar following X-ray detection \citep{predehl}. It is now recognized as a bow shock PWN moving supersonically through the interstellar medium (ISM; \citeauthor{gaensler:5} \citeyear{gaensler:5}). Its axially symmetric morphology, shown in Figure \ref{fig:yusef20cm}, consists of a compact ``head'', a fainter ``body'' extending for $\sim$10$^\prime$$^\prime$, and a long ``tail'' that extends westward behind the Mouse for $\sim$40$^\prime$$^\prime$ and $\sim$12$^\prime$ at X-ray and radio wavelengths respectively \citep{gaensler:5,mori}. The cometary tail appears to indicate motion away from a nearby supernova remnant (SNR), G359.1$-$0.5 \citep{yusef}. \begin{figure*}[t] \setlength{\abovecaptionskip}{-7pt} \begin{center} \includegraphics[trim = 0mm 0mm 0mm 0mm, clip, angle=-90, width=13cm]{f1.ps} \end{center} \caption{VLA image of the Mouse (PWN~G359.23$-$0.82) at 1.4~GHz with a resolution of 12\farcs8$\times$8\farcs4 (reproduced from \citeauthor{gaensler:5} \citeyear{gaensler:5}). The brightness scale is logarithmic, ranging between $-$2.0 and $+$87.6~mJy~beam$^{-1}$ as indicated by the scale bar to the right of the image. The eastern rim of SNR~G359.1$-$0.5 is faintly visible west of $\sim$RA~17$^{\mbox{\scriptsize{h}}}$46$^{\mbox{\scriptsize{m}}}$25$^{\mbox{\scriptsize{s}}}$.} \label{fig:yusef20cm} \end{figure*} A radio pulsar, J1747$-$2958, has been discovered within the ``head'' of the Mouse \citep{camilo:1}. PSR~J1747$-$2958 has a spin period $P=98.8$ ms and period derivative $\dot{P}=6.1\times10^{-14}$, implying a spin-down luminosity $\dot{E}=2.5 \times 10^{36}$~ergs~s$^{-1}$, surface dipole magnetic field strength $B=2.5\times10^{12}$~G, and characteristic age $\tau_{c} \equiv P/ 2\dot{P}=25$~kyr (\citeauthor{camilo:1} \citeyear{camilo:1}; see also updated timing data from \citeauthor{gaensler:5} \citeyear{gaensler:5}). The distance to the pulsar is {\footnotesize $\gtrsim$}4~kpc from X-ray absorption \citep{gaensler:5}, and {\footnotesize $\lesssim$}5.5 kpc~from HI absorption \citep{uchida}. Here we assume that the system lies at a distance of $d=5d_{5}$~kpc, where $d_{5}=1\pm0.2$ ($1\sigma$). Given such a small characteristic age, it is natural to ask where PSR~J1747$-$2958 was born and to try and find an associated SNR. While it is possible that no shell-type SNR is visible, such as with the Crab pulsar \citep{sankrit} and other young pulsars \citep{braun}, an association with the adjacent SNR~G359.1$-$0.5 appears plausible. This remnant was initially suggested to be an unrelated background object near the Galactic center \citep{uchida}. 
However, it is now believed that the two may be located at roughly the same distance (\citeauthor{yusef3} \citeyear{yusef3}, and references therein). By determining a proper motion for PSR~J1747$-$2958, this association can be subjected to further scrutiny (for example, see analysis of PSR~B1757$-$24, PWN~G5.27$-$0.90 and SNR~G5.4$-$1.2; \citeauthor{blazek} \citeyear{blazek}; \citeauthor{zeiger} \citeyear{zeiger}). As PSR~J1747$-$2958 is a very faint radio source, it is difficult to measure its proper motion interferometrically. It is also difficult to use pulsar timing to measure its proper motion due to timing noise and its location near the ecliptic plane \citep{camilo:1}. To circumvent these issues, in this paper we investigate dual-epoch high-resolution radio observations of the Mouse nebula, spanning 12 years from 1993 to 2005, with the intention of indirectly inferring the motion of PSR~J1747$-$2958 through the motion of its bow shock PWN. In \S~\ref{SectionObservations} we present these observations. In \S~\ref{SectionAnalysis} we present our analysis and measurement of proper motion using derivative images of PWN~G359.23$-$0.82. In \S~\ref{SectionDiscussion} we use our measurement to determine an in situ hydrogen number density for the local ISM, to resolve the question of a possible association with SNR~G359.1$-$0.5, and to investigate the age and possible future evolution of PSR~J1747$-$2958. We summarize our conclusions in \S~\ref{SectionConclusions}. \section{Observations}\label{SectionObservations} PWN~G359.23$-$0.82 was observed with the Very Large Array (VLA) on 1993 February 2 (Program AF245) and again\footnote{An observation of PWN~G359.23$-$0.82 was also carried out on 1999 October 8 (Program AG571). However, the target was observed mainly at low elevation and the point spread function and spatial frequency coverage were both poor as a result, thus making that observation unsuitable for astrometric comparison.} on 2005 January 22 (Program AG671). Each of these observations was carried out in the hybrid BnA configuration at a frequency near 8.5 GHz. The 1993 and 2005 epochs used on-source observation times of 3.12 and 2.72 hours respectively. The 1993 observation only measured $RR$ and $LL$ circular polarization products, while the 2005 observation measured the cross terms $RL$ and $LR$ as well. Both observations used the same pointing center, located at $\mbox{RA}$~$=$~$17^{\mbox{\scriptsize{h}}}$47$^{\mbox{\scriptsize{m}}}$15\farcsec764, $\mbox{Dec}$~$=$~$-29$\degree58$^\prime$1\farcs12 (J2000), as well as the same primary flux calibrator, 3C286. Both were phase-referenced to the extragalactic source TXS~1741$-$312, located at $\mbox{RA}$~$=$~$17^{\mbox{\scriptsize{h}}}$44$^{\mbox{\scriptsize{m}}}$23\farcsec59, $\mbox{Dec}$~$=$~$-31$\degree16$^\prime$35\farcs97 (J2000), which is separated by $1\hbox{$.\!\!^\circ$}4$ from the pointing center. Data reduction was carried out in a near-identical fashion for both epochs using the MIRIAD package \citep{sault:2}, taking into consideration the slightly different correlator mode used in the 1993 data. This process involved editing, calibrating, and imaging the data using multi-frequency synthesis and square pixels of size 50~$\times$~50 milli-arcseconds. These images were then deconvolved using a maximum entropy algorithm and smoothed to a common resolution with a circular Gaussian of full width at half-maximum (FWHM) 0\farcs81. The resulting images are shown in the left column of Figure \ref{fig:allImages}. 
\begin{figure*}[t] \setlength{\abovecaptionskip}{-7pt} \begin{center} \includegraphics[trim = 0mm 0mm 0mm 0mm, clip, angle=-90, width=12cm]{f2.ps} \end{center} \caption{{\it Left column:} VLA observations of the Mouse at 8.5~GHz over two epochs separated by 12 yr. Each image covers a 14\farcs5$\times$8\farcs5 field at a resolution of 0\farcs81, as indicated by the circle at the bottom right of each panel. The brightness scale is linear, ranging between $-$0.23 and $+$3.5~mJy~beam$^{-1}$. {\it Right column:} Spatial x-derivative images for 1993 (top) and 2005 (bottom), covering the same regions as images in the left column. The images are shown in absolute value to increase visual contrast using a linear brightness scale spanning zero and the largest magnitude derivative from either image. The box between $\sim$RA~17$^{\mbox{\scriptsize{h}}}$47$^{\mbox{\scriptsize{m}}}$15\farcsec8$-$16\farcsec5 indicates the eastern region extracted for cross-correlation. {\it All panels:} Contours are shown over each image at 30\%, 50\%, 75\% and 90\% of the peak flux within their respective column group. The dashed vertical line in each panel has been arbitrarily placed at a right ascension near the brightness peak in the top-right panel in order to determine if motion can be seen by eye between epochs.} \label{fig:allImages} \end{figure*} The peak flux densities of the 1993 and 2005 images are 3.24 and 3.25~mJy~beam$^{-1}$, respectively; the noise in these two images are 51 and 35~$\mu$Jy~beam$^{-1}$, respectively. The pulsar J1747$-$2958 is located at $\mbox{RA}$~$=$~$17^{\mbox{\scriptsize{h}}}$47$^{\mbox{\scriptsize{m}}}$15\farcsec882, $\mbox{Dec}$~$=$~$-29$\degree58$^\prime$1\farcs0 (J2000), within the region of intense synchrotron emission seen in each image (see \S~3.5 of \citeauthor{gaensler:5} \citeyear{gaensler:5}). Qualitatively comparing each epoch from the left column of Figure \ref{fig:allImages}, it appears that the head of PWN~G359.23$-$0.82 has the same overall shape in both images, with a quasi-parabolic eastern face, approximate axial symmetry along a horizontal axis through the centre of the nebula (although the position of peak intensity seems to shift slightly in declination), and a small extension to the west. By eye the PWN seems to be moving from west to east over time, in agreement with expectation from the cometary morphology seen in Figure \ref{fig:yusef20cm}. Beyond any minor morphological changes seen between the images in the left column of Figure \ref{fig:allImages}, the Mouse nebula seems to have expanded slightly over time. \section{Analysis}\label{SectionAnalysis} To quantify any motion between epochs, an algorithm was developed to evaluate the cross-correlation coefficient over a range of pixel shifts between images, essentially by producing a map of these coefficients. This algorithm made use of the Karma visualization package \citep{gooch} to impose accurate non-integer pixel shifts. We applied our algorithm to the image pair from the left column of Figure \ref{fig:allImages} to determine an image offset measurement. To check that this offset measurement would be robust against any possible nebular morphological change between epochs, we also applied our algorithm to the same image pair when both images were equally clipped at various flux density upper-level cutoffs. We found that the offset measurement was strongly dependent on the choice of flux density cutoff. 
Clearly such variation in the measured shift between epochs was not desirable, as selection of a final solution would have required an arbitrary assumption about the appropriate level of flux density clipping to use. There was also no strong indication that the region of peak flux density in each image coincided with the exact location of PSR~J1747$-$2958. In order to isolate the motion of the pulsar from as much nebular morphological change as possible, we focused on a different method involving the cross-correlation of spatial derivatives of the images from the left column of Figure \ref{fig:allImages}. As there is not enough information to solve for an independent Dec-shift, we will only focus on an RA-shift, and will assume that any Dec changes are primarily due to morphological evolution of the Mouse nebula. To justify this assumption, we note simplistically that the cometary tail of the Mouse in Figure \ref{fig:yusef20cm} is oriented at {\footnotesize $\lesssim$}5\ensuremath{^\circ}~from the RA-axis, and thus estimate that any Dec motion contributes less than 10\% to the total proper motion of the nebular system. The small angle also justifies our decision not to calculate derivatives along orthogonal axes rotated relative to the RA-Dec coordinate system. For each epoch, an image of the first spatial derivative of intensity in the x (RA) direction was created by shifting the original image by 0.1 pixels along the RA-axis, subtracting the original from this shifted version, and then dividing by the value of the shift. These x-derivative images are shown in the right column of Figure \ref{fig:allImages}, where the brighter pixels represent regions of larger x-derivative from the corresponding left column images (note that these derivative images are shown in absolute value so as to increase their visual contrast; this operation was not applied to the analyzed data). The x-derivative images have signal-to-noise ratios $\sim$13, since progression to higher derivatives degrades sensitivity. As seen in the right column of Figure \ref{fig:allImages}, the x-derivative images of the Mouse are divided into two isolated regions: an eastern forward region and a western rear region. Derivatives in the eastern region are greater in magnitude than those in the western region. As we will justify in \S~\ref{SectionDiscussion}, we propose that the eastern region of the x-derivative images tracks the forward termination shock of the Mouse nebula, which in turn acts as a proxy for an upper limit on the motion of PSR~J1747$-$2958. The eastern region provides a natural localized feature at each epoch with which to generate cross-correlation maps in order to track the motion of PSR~J1747$-$2958. \subsection{Calculation of Nebular Proper Motion}\label{Subsection:FinalCalc} To prepare the x-derivative images for cross-correlation, their eastern regions, extending between Right Ascensions (J2000) of 17$^{\mbox{\scriptsize{h}}}$47$^{\mbox{\scriptsize{m}}}$15\farcsec8 and 17$^{\mbox{\scriptsize{h}}}$47$^{\mbox{\scriptsize{m}}}$16\farcsec5 (see Figure \ref{fig:allImages}), were extracted and padded along each side with 50 pixels (2500~mas) of value zero. These cropped and padded x-derivative images for the 1993 and 2005 epochs were then cross-correlated with each other over a range of non-integer pixel shifts between $-$2500 and $+$2500~mas in both RA and Dec (a schematic version of this procedure is given below). 
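In outline, the derivative-and-cross-correlation procedure can be illustrated with the short Python (NumPy) sketch below. It is only schematic: the 0.1-pixel shift is approximated here by linear interpolation and only integer trial shifts are evaluated, whereas the actual analysis imposed accurate non-integer shifts with the Karma package; the function names and the synthetic usage example are ours, not taken from any published code.
\begin{verbatim}
import numpy as np

def x_derivative(image, dx=0.1):
    """Spatial derivative along the RA (x) axis: shift the image by dx
    pixels, subtract the original, divide by the shift.  The sub-pixel
    shift is approximated by linear interpolation in this sketch."""
    shifted = (1.0 - dx) * image + dx * np.roll(image, -1, axis=1)
    return (shifted - image) / dx

def correlation_map(deriv_1993, deriv_2005, max_shift=50):
    """Cross-correlate the (cropped, zero-padded) eastern regions of the
    two x-derivative images over integer pixel shifts in RA and Dec."""
    ny, nx = deriv_1993.shape
    padded = np.zeros((ny + 2 * max_shift, nx + 2 * max_shift))
    padded[max_shift:max_shift + ny, max_shift:max_shift + nx] = deriv_2005
    cc = np.zeros((2 * max_shift + 1, 2 * max_shift + 1))
    for iy in range(-max_shift, max_shift + 1):
        for ix in range(-max_shift, max_shift + 1):
            window = padded[max_shift + iy:max_shift + iy + ny,
                            max_shift + ix:max_shift + ix + nx]
            cc[iy + max_shift, ix + max_shift] = np.sum(deriv_1993 * window)
    return cc  # the peak position gives the 2005-1993 offset in pixels

# Usage on images with 50 mas pixels (max_shift=50 spans +/- 2500 mas):
#   cc = correlation_map(x_derivative(img_1993), x_derivative(img_2005))
#   iy, ix = np.unravel_index(np.argmax(cc), cc.shape)
#   ra_shift_mas = (ix - 50) * 50.0
\end{verbatim}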
The resultant 2005$-$1993 cross-correlation map, which indicates the shift required to make the 1993 epoch colocate with the 2005 epoch, is shown in the left of Figure \ref{fig:finalResults}. \begin{figure*}[t] \setlength{\abovecaptionskip}{-3pt} \begin{center} \begin{minipage}[c]{0.45\textwidth} \begin{center} \vspace{-4mm} \includegraphics[trim = 0mm 0mm 5mm 0mm, width=5.3cm, angle=-90]{f3leftg.ps} \end{center} \end{minipage} \hspace{1mm} \begin{minipage}[c]{0.45\textwidth} \begin{center} \vspace{-7mm} \includegraphics[trim = -3mm 0mm 6mm 0mm, width=4.7cm, angle=-90]{f3right.ps} \end{center} \end{minipage} \end{center} \caption{{\it Left}: Cross-correlation map for cropped and padded x-derivative images between 1993 and 2005. Shifts range from $-$2500 to $+$2500~mas in both RA and Dec. The pixels are scaled from zero (white) to the peak value of the map (black), and contours are at 50\%, 68.3\%, 99.5\%, and 99.7\% of this peak value. {\it Right}: Profile along a line parallel to the RA-axis through the peak value of the cross-correlation map (solid curve). The caption quantifies the RA-shift for the fitted peak value (also indicated by the vertical dashed line), FWHM (also indicated by the dotted vertical lines), and signal-to-noise ratio.} \label{fig:finalResults} \end{figure*} Note that the map in Figure \ref{fig:finalResults} incorporates trial shifts large enough to probe regions where the cross-correlation falls to zero (corresponding to cross-correlation between signal and a source-free region, as opposed to only probing trial shifts close to the maxima of each x-derivative map). In this way, the contours presented in Figure \ref{fig:finalResults} represent percentages of the peak cross-correlation value. To quantify the shift between epochs, a profile along a line parallel to the RA-axis was taken through the peak of the cross-correlation map, as shown in the right of Figure \ref{fig:finalResults}. We assume that morphological changes are negligible in the eastern region of the x-derivative images; therefore, by taking a profile through the peak we tolerate small Dec shifts between the two epochs. The RA shift between the 2005 and 1993 epochs was determined by fitting a Gaussian to the central 660~mas of the cross-correlation profile. The resultant shift is 154~mas with a statistical uncertainty of 7~mas, where the latter is equal to the FWHM divided by twice the signal-to-noise ratio. This calculation reflects the angular resolution and noise in the images, but to completely quantify the error on the shift between the two epochs, systematic errors also need to be incorporated. To estimate the positional error in the image plane, corresponding to phase error in the spatial frequency plane, the phases of the complex visibility data for each epoch were self-calibrated\footnote{Note that self-calibration was not used in the general reduction process because it would have caused a degradation in relative positional information between the final images, as the phases would no longer be tied to a secondary calibrator of known position.}. By dividing the standard error of the mean of the phase variation in the gain solutions by 180\ensuremath{^\circ}, the fraction of a diffraction-limited beam by which positions may have been in error in the image plane was calculated. 
By multiplying this fraction by the diffraction-limited beamwidth for the 1993 and 2005 epochs, the two-dimensional relative positional uncertainty of these two reference frames was estimated to be 22 and 20 mas respectively. The systematic error in our measurement, which describes the relative positional error between the two epochs, was then determined by reducing the self-calibrated positional uncertainties for each epoch by $\sqrt{2}$ (we are only looking at random positional uncertainties projected along the RA-axis) and adding them in quadrature. This error was found to be 21~mas, which totally dominates the much smaller statistical error of 7~mas. By calculating the total error as the quadrature sum of the statistical and systematic errors, the RA-shift of the PWN tip between the 1993 and 2005 epochs was found to be $154\pm22$~mas. When divided by the 4372 days elapsed between these two epochs, the measured shift corresponds to a proper motion of $\mu=12.9\pm1.8$~mas~yr$^{-1}$ in an eastward direction. We therefore detect motion at the $7\sigma$ level. Note that, at an assumed distance of $\sim$5~kpc along a line of sight toward the Galactic center, this motion is unlikely to be contaminated significantly by Galactic rotation \citep{olling}. If we simplistically compare the eastward component of the proper motion with the angle of the Mouse's cometary tail, as described earlier in \S~\ref{SectionAnalysis}, we obtain a crude estimate of $\sim$1~mas~yr$^{-1}$ for the northerly component of the nebula's proper motion. As this value is well within the error for the eastward motion, which is dominated by systematic effects, we feel that our earlier assumption of pure eastward motion in the presence of relative positional uncertainty between the 1993 and 2005 reference frames is justified. \section{Discussion}\label{SectionDiscussion} Bow shock PWNe have a double-shock structure consisting of an outer bow shock where the ambient ISM is collisionally excited, an inner termination shock at which the pulsar's relativistic wind is decelerated, and a contact discontinuity between these two shocks which marks the boundary between shocked ISM and shocked pulsar wind material \citep{gaensler:3}. The outer bow shock may emit in H$\alpha$, though for systems such as PWN~G359.23$-$0.82 with high levels of extinction along their line of sight, the detection of such emission would not be expected. The inner termination shock, which encloses the pulsar's relativistic wind, may emit synchrotron radiation detectable at radio/X-ray wavelengths. It is expected that any synchrotron emission beyond the termination shock would be sharply bounded by the contact discontinuity \citep{gaensler:5}. As mentioned in \S~\ref{SectionAnalysis}, we suggest that the eastern regions of the x-derivative images from Figure \ref{fig:allImages} provide the best opportunity to track the motion of PSR~J1747$-$2958, relatively independent of any morphological changes occurring in PWN~G359.23$-$0.82. Physically, these regions of greatest spatial derivative (along the RA-axis) might correspond to the vicinity of the termination shock apex, or possibly the contact discontinuity between the two forward shocks, where motion of the pulsar is causing confinement of its wind and where rapid changes in flux might be expected to occur over relatively small angular scales. 
This is consistent with hydrodynamic simulations which predict that the apex of the bow shock will be located just outside a region of intense synchrotron emission in which the pulsar lies \citep{bucc,van:1}. The assumption that the eastern region of each x-derivative image can be used as a proxy to track the motion of PSR~J1747$-$2958 is therefore plausible, but difficult to completely justify. To show that motion calculated in this way provides an upper limit to the true motion of PSR~J1747$-$2958 we recall the overall morphological change described at the end of \S~\ref{SectionObservations}, namely that the Mouse nebula has expanded with time between the 1993 and 2005 epochs. This expansion suggests that the ISM density may be dropping, causing the termination shock to move further away from the pulsar, so that any motion calculated using the nebula may in fact overestimate the motion of the pulsar (a similar argument was used by \citeauthor{blazek} \citeyear{blazek} in placing an upper limit on the motion of the PWN associated with PSR~B1757$-$24). Such changes in density are to be expected as the nebula moves through interstellar space, where, like the spectacular Guitar nebula \citep{chatterjee:1}, motion may reveal small-scale inhomogeneities in the density of the ISM. We therefore assume that our measurement of proper motion from \S~\ref{Subsection:FinalCalc} corresponds to an upper limit on the true proper motion of PSR~J1747$-$2958. \subsection{Space Velocity and Environment of PSR~J1747$-$2958}\label{Subsec:SpaceVel} Using our proper motion result from \S~\ref{Subsection:FinalCalc} and the arguments for interpreting this motion as an upper limit from \S~\ref{SectionDiscussion}, the projected eastward velocity of PSR~J1747$-$2958 is inferred to be $V_{\mbox{\tiny{PSR,$\perp$}}}$~{\footnotesize $\leq$}~$\left(306\pm43\right)d_{5}$~km~s$^{-1}$. Given that no motion along the line of sight or in Dec could be measured, we will assume that our estimate of $V_{\mbox{\tiny{PSR,$\perp$}}}$ approximates the 3-dimensional space velocity $V_{\mbox{\tiny{PSR}}}$. In a bow shock PWN, the pulsar's relativistic wind will be confined and balanced by ram pressure. Using our proper motion upper limit (assuming $V_{\mbox{\tiny{PSR}}}$~$\approx$~$V_{\mbox{\tiny{PSR,$\perp$}}}$), the pressure balance relationship\footnote{This relationship assumes a uniform density $\rho$ with typical cosmic abundances, expressed as $\rho=1.37n_{0}m_{H}$, where $m_{H}$ is the mass of a hydrogen atom and $n_{0}$ is the number density of the ambient ISM.} $V_{\mbox{\tiny{PSR}}}=305n_{0}^{-1/2}d_{5}^{-1}$~km~s$^{-1}$ from \S~4.4 of \citet{gaensler:5}, and Monte Carlo simulation, we find an in situ hydrogen number density $n_{0}$~$\approx$~$\left(1.0_{-0.2}^{+0.4}\right)d_{5}^{-4}$~cm$^{-3}$ at 68\% confidence, or $n_{0,95\%}$~$\approx$~$\left(1.0_{-0.4}^{+1.1}\right)d_{5}^{-4}$~cm$^{-3}$ at 95\% confidence (a simplified version of this Monte Carlo estimate is sketched below). Our calculated density $n_{0}$ implies a local sound speed of $\sim$5~km~s$^{-1}$, corresponding to motion through the warm phase of the ISM. Our space velocity for PSR~J1747$-$2958 is comparable with those of other pulsars that have observed bow shocks \citep{chatterjee:2}, and is consistent with the overall projected velocity distribution of the young pulsar population \citep{hobbs, faucher}. We note that \citet{gaensler:5} estimated a proper motion and space velocity of $\approx$25~mas~yr$^{-1}$ and $\approx$600~km~s$^{-1}$, respectively, which are a factor of two larger than the values determined in this paper. 
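The following minimal Python sketch (using NumPy) illustrates how an estimate of this kind can be obtained: the proper motion is drawn from a Gaussian with the measured mean and uncertainty, converted to a projected velocity through $V_{\mbox{\tiny{PSR,$\perp$}}} = 4.74\,\mu\,d$ (with $\mu$ in mas~yr$^{-1}$ and $d=5d_{5}$~kpc), and propagated through the pressure balance relationship quoted above. The Gaussian error model and the fixed random seed are assumptions of the sketch, and it should not be read as a reproduction of the published calculation; note that in the combination $n_{0}d_{5}^{4}$ the explicit distance dependence cancels, so only the proper-motion uncertainty enters here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)            # fixed seed, an arbitrary choice
mu = rng.normal(12.9, 1.8, 100_000)       # proper motion draws, mas/yr

# V_perp = 4.74 * mu[mas/yr] * d[kpc] km/s, with d = 5 d5 kpc:
v_per_d5 = 4.74 * mu * 5.0                # projected velocity per d5, km/s
# Pressure balance V = 305 n0^(-1/2) d5^(-1) km/s  =>
# n0 * d5^4 = (305 / v_per_d5)^2; the distance cancels in this combination.
n0_d54 = (305.0 / v_per_d5)**2

lo, med, hi = np.percentile(n0_d54, [15.87, 50.0, 84.13])
print(f"V_perp = ({v_per_d5.mean():.0f} +/- {v_per_d5.std():.0f}) d5 km/s")
print(f"n0 = {med:.2f} (+{hi - med:.2f}/-{med - lo:.2f}) d5^-4 cm^-3 (68%)")
\end{verbatim}
The numbers produced in this way come out close to the velocity and density quoted above, but the sketch ignores any additional uncertainties treated in the full calculation.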
However, by halving their assumed sound speed of 10~km~s$^{-1}$, their estimates of motion correspondingly halve. We now use our proper motion and hydrogen number density results to resolve the question of association between PSR~J1747$-$2958 and SNR~G359.1$-$0.5, and to investigate the age and possible future evolution of this pulsar. \subsection{Association with SNR~G359.1$-$0.5?}\label{Subsection:Association} If PSR~J1747$-$2958 and the adjacent SNR~G359.1$-$0.5 are associated and have a common progenitor, then an age estimate for the system that is independent of both distance and inclination effects is simply the time taken for the pulsar to traverse the eastward angular separation between the explosion site inferred from the SNR morphology and its current location, at its inferred eastward proper motion {\footnotesize $\lesssim$}~$\mu$. Assuming pulsar birth at the center of the SNR, the eastward angular separation between the center of SNR~G359.1$-$0.5 from \citet{uchida} and the location of PSR~J1747$-$2958 from the timing solution by \citet{gaensler:5} is found to be $\theta \sim 23'$, which would imply a system age of $\theta / \mu$~{\footnotesize $\gtrsim$}~110~kyr. Given such a large age, and the unremarkable interstellar hydrogen number density at the (currently assumed) nearby Mouse (from \S~\ref{Subsec:SpaceVel}), it would be difficult to argue why SNR~G359.1$-$0.5 has not dissipated and faded from view. Instead, SNR~G359.1$-$0.5 appears to be a middle-aged remnant $\sim$18~kyr old which continues to emit thermal X-rays \citep{bamba}. We conclude, independent of distance estimates to either the pulsar or the SNR, that PSR~J1747$-$2958 is moving too slowly to be physically associated with the relatively young SNR~G359.1$-$0.5. \subsection{Age Estimate for PSR~J1747$-$2958}\label{Subsection:Age} Given that an association between the Mouse and SNR~G359.1$-$0.5 is unlikely, as outlined in \S~\ref{Subsection:Association}, we now estimate the age of PSR~J1747$-$2958 assuming that it is unrelated to this SNR. As seen in Figure \ref{fig:yusef20cm}, there is a cometary tail of emission extending around 12$^{\prime}$ ($\sim$17$d_{5}$~pc) westward of the Mouse, containing shocked pulsar wind material flowing back from the termination shock about PSR~J1747$-$2958 \citep{gaensler:5}. We begin by simplistically assuming that this pulsar was born at the tail's western tip. By dividing the tail length by the proper motion $\mu$, we estimate an age of $t_{\mbox{\tiny{tail}}}$~$\approx$~$56$~kyr. Note that this age is independent of any distance estimates to the Mouse or of inclination effects. However, given that the tail appears to simply fade away rather than terminate suddenly, it is possible that the tail could be much longer, and thus that the system could be much older (by considering the upper limit arguments from \S~\ref{SectionDiscussion}, this system age may be even greater still). As discussed in \S~4.7 of \citet{gaensler:5}, it is unlikely that the Mouse or its entire tail could still be located inside an unseen associated SNR, given that the tail is smooth and uninterrupted. In addition, the lack of a rejuvenated SNR shell anywhere along the length of the tail (or indeed beyond it), such as that seen, for example, in the interaction between PSR~B1951$+$32 and SNR~CTB80 \citep{van:2}, supports the conclusion that the Mouse's tail is located entirely in the ISM. 
Therefore, the rim of the Mouse's unseen associated SNR must be located at a minimum angular separation of $\sim$12$^{\prime}$ west of the Mouse's current location, implying that $t_{\mbox{\tiny{tail}}}$ is a lower limit on the time elapsed since PSR~J1747$-$2958 was in the vicinity of this rim. To estimate the total age of PSR~J1747$-$2958 we thus need to incorporate the time taken for this pulsar to escape its associated SNR and reach its current location, taking into account the continued expansion of the SNR following pulsar escape (which will sweep up, and therefore shorten, part of the Mouse's tail initially located in the ISM). Using Monte Carlo simulation and following \citet{van:1}, we find that the time taken for a pulsar, traveling with velocity $V_{\mbox{\tiny{PSR,$\perp$}}}$~{\footnotesize $\leq$}~$\left(306\pm43\right)d_{5}$~km~s$^{-1}$ through a typical interstellar environment with constant hydrogen number density $n_{0}$ from \S~\ref{Subsec:SpaceVel}, to escape from its SNR (while in the pressure-driven snowplow phase) of typical explosion energy $\sim$10$^{51}$~ergs, and leave behind a 12$^{\prime}$ tail in the ISM which remains ahead of the expanding SNR, is $163_{-20}^{+28}$~kyr at 68\% confidence, or $163_{-39}^{+60}$~kyr at 95\% confidence. Note that the errors quoted for this total time incorporate the error in $\mu$ and are only weakly dependent on the uncertainty in distance $d_{5}$ (for comparison, when the distance to PSR~J1747$-$2958 is fixed at 5~kpc, the 68\% and 95\% confidence intervals are reduced to $164_{-17}^{+22}$~kyr and $164_{-31}^{+52}$~kyr, respectively). Assuming that PSR~J1747$-$2958 was created in such a supernova explosion, and noting that the pulsar's travel time in the ISM $t_{\mbox{\tiny{tail}}}$ is a lower limit (even without taking into account the upper limit associated with $\mu$, as described earlier in this section), we can thus establish a lower limit on the age of the pulsar of $t_{\mbox{\tiny{total}}}$~{\footnotesize $\geq$}~$163_{-20}^{+28}$~kyr (68\% confidence). This lower limit is greater than 6 times the characteristic age $\tau_{c}=$~25.5~kyr of PSR~J1747$-$2958, which was derived from its measured pulse $P$ and $\dot{P}$ \citep{camilo:1}, suggesting that, within the context of the characteristic age approximation, the spin-down properties of this pulsar deviate significantly from magnetic dipole braking (see \S~\ref{Subsection:Evolution}). Our result is similar to the age discrepancy previously claimed for PSR~B1757$-$24 \citep{gaensler:2}; however, ambiguity regarding association with SNR G5.4$-$1.2 presents difficulties with this claim \citep{thorsett,zeiger}. In comparison, given the relatively simple assumptions made in this paper, PSR~J1747$-$2958 arguably provides the most robust evidence to date that some pulsars may be much older than their characteristic age. We now discuss the potential implications of this age discrepancy with regard to the future evolution of PSR~J1747$-$2958. \subsection{Possible Future Evolution of PSR~J1747$-$2958}\label{Subsection:Evolution} Pulsars are assumed to slow down in their rotation according to the spin-down relationship $\dot{\Omega}=-K\Omega^{n}$, where $\Omega$ is the pulsar's angular rotation frequency, $\dot{\Omega}$ is the angular frequency derivative, $n$ is the braking index, and $K$ is a positive constant that depends on the pulsar's moment of inertia and magnetic moment. 
By taking temporal derivatives of this spin-down relationship, the rate at which the characteristic age $\tau_{c}$ changes with time can be expressed as $d\tau_{c}/dt$=$(n-1)/2$ (e.g., \citeauthor{lyne5} \citeyear{lyne5}). Evaluating $d\tau_{c}/dt$ as the ratio between $\tau_{c}=$~25.5~kyr and the lower age limit $t_{\mbox{\tiny{total}}}$, we estimate a braking index of $n$~{\footnotesize $\lesssim$}~$1.3$ for PSR~J1747$-$2958 (incorporating the error limits from $t_{\mbox{\tiny{total}}}$ does not significantly affect this value). Given that magnetic dipole braking corresponds to $n=3$ (the value assumed when calculating $\tau_{c}$), the smaller braking index calculated here indicates that either some form of non-standard pulsar braking is taking place, or that standard magnetic dipole braking is occurring in the presence of evolution of the magnetic field or of the moment of inertia (e.g., \citeauthor{blandford} \citeyear{blandford}; \citeauthor{camilo:0} \citeyear{camilo:0}). If we adopt a constant moment of inertia and assume standard electromagnetic dipole braking, then by performing a similar derivation from the spin-down relationship for the surface magnetic field, the magnetic field growth timescale can be expressed as $B/\left(dB/dt\right)$=$\tau_{c}/\left(3-n\right)$ (e.g., \citeauthor{lyne5} \citeyear{lyne5}). Evaluating this with our braking index, the magnetic field growth timescale\footnote{As noted by \citet{lyne5} the magnetic field growth is not linear over time, as can be appreciated by taking an example of a pulsar with braking index $n$=1.} is estimated to be $\approx$15~kyr. It is interesting to note that the braking index inferred here for PSR~J1747$-$2958 is comparable to the value obtained from an estimate of $\ddot{P}$ made for the Vela pulsar B0833$-$45, which was found to have $n = 1.4 \pm 0.2$ \citep{lyne4}. To investigate the possible future evolution of PSR~J1747$-$2958, we plot its implied trajectory (along with that of the Vela pulsar) across the $P-\dot{P}$ diagram, as shown in Figure \ref{fig:ppdot}. Note that the magnitude of each plotted vector indicates motion over a timescale of 50 kyr assuming that $\ddot{P}$ is constant, which is not true for constant $n$; however, the trend is apparent. The longer vector for the Vela pulsar simply indicates that it is braking more rapidly than PSR~J1747$-$2958. \begin{figure}[h] \setlength{\abovecaptionskip}{-7pt} \begin{center} \includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=8.5cm, angle=-90]{f4onlineCOLOR.ps} \end{center} \caption{The $P-\dot{P}$ diagram for rotating neutron stars, where points indicate the period and period derivative for over 1600 rotating stars where such measurements were available (data obtained from the ATNF Pulsar Catalogue, version 1.34; \citeauthor{manchester} \citeyear{manchester}). The periods and period derivatives of three neutron stars, B0833$-$45 (the Vela pulsar), J1747$-$2958 (the Mouse pulsar), and J0537$-$6910 (a 16-ms pulsar in the Large Magellanic Cloud) are indicated with symbols (see discussion at the end of \S~\ref{Subsection:Evolution}), as well as those of the known magnetars. Dashed lines indicate contours of constant $\tau_{c}$, while solid lines indicate contours of constant surface magnetic field $B$. Points toward the bottom left are millisecond (recycled) pulsars, those in the central concentration are middle$-$old age pulsars, and those to the top left are young pulsars. 
The middle$-$old age pulsars are presumed to move down to the right along lines of constant magnetic field, corresponding to magnetic dipole braking, while some young pulsars such as the Vela pulsar have been found to be moving away from the main body of pulsars, up and to the right toward the region of the magnetars. The plotted vectors for PSR~B0833$-$45 \citep{lyne5} and for PSR~J1747$-$2958 (this paper) represent inferred trajectories across the diagram over 50~kyr, assuming a constant braking index over this time period; the magnitude of each vector is proportional to the magnetic field growth timescale of the respective neutron star.} \label{fig:ppdot} \end{figure} The plotted vectors for the Vela and Mouse pulsars both seem to point in the direction of the magnetars (high-energy neutron stars for which $B${\footnotesize $\gtrsim$}10$^{14}$~G; for a review of magnetars, see \citeauthor{woods} \citeyear{woods}). By extrapolating the trajectories in Figure \ref{fig:ppdot}, and assuming negligible magnetic field decay over time, it can be suggested that young, energetic, rapidly spinning pulsars such as PSR~J0537$-$6910 \citep{marshall}, whose location in the $P-\dot{P}$ plane is shown with a star symbol, may evolve into objects like the Vela or Mouse pulsars, which, as proposed by \citet{lyne5}, may in turn continue to undergo magnetic field growth until arriving in the parameter space of the magnetars. \section{Conclusions}\label{SectionConclusions} We have investigated two epochs of interferometric data from the VLA spanning 12 years to indirectly infer a proper motion for the radio pulsar J1747$-$2958 through observation of its bow shock PWN~G359.23$-$0.82. Derivative images were used to highlight regions of rapid spatial variation in flux density within the original images, corresponding to the vicinity of the forward termination shock, thereby acting as a proxy for the motion of the pulsar. We measure an eastward proper motion for PWN~G359.23$-$0.82 of $\mu=12.9\pm1.8$~mas~yr$^{-1}$, and interpret this value as an upper limit on the motion of PSR~J1747$-$2958. At this angular velocity, we argue that PSR~J1747$-$2958 is moving too slowly to be physically associated with the relatively young adjacent SNR~G359.1$-$0.5, independent of distance estimates to either object or of inclination effects. At a distance $d=5d_{5}$~kpc, the proper motion corresponds to a projected velocity of $V_{\mbox{\tiny{PSR,$\perp$}}}$~{\footnotesize $\leq$}~$\left(306\pm43\right)d_{5}$~km~s$^{-1}$, which is consistent with the projected velocity distribution for young pulsars. Combining the time taken for PSR~J1747$-$2958 to traverse its smooth $\sim$12$^\prime$ radio tail with the time to escape a typical SNR, we calculate a lower age limit for PSR~J1747$-$2958 of $t_{\mbox{\tiny{total}}}$~{\footnotesize $\geq$}~$163_{-20}^{+28}$~kyr (68\% confidence). The lower age limit $t_{\mbox{\tiny{total}}}$ exceeds the characteristic age of PSR~J1747$-$2958 by more than a factor of 6, arguably providing the most robust evidence to date that some pulsars may be much older than their characteristic age. This age discrepancy for PSR~J1747$-$2958 suggests that the pulsar's spin rate is slowing with an estimated braking index $n$~{\footnotesize $\lesssim$}~$1.3$ and that its magnetic field is growing on a timescale $\approx$15~kyr. 
Such potential for magnetic field growth in PSR~J1747$-$2958, in combination with other neutron stars that transcend their archetypal categories such as PSR~J1718$-$3718, a radio pulsar with a magnetar-strength magnetic field that does not exhibit magnetar-like emission \citep{kaspi}, PSR~J1846$-$0258, a rotation-powered pulsar that exhibits magnetar-like behaviour \citep{gavril,archibald}, and magnetars such as 1E 1547.0$-$5408 that exhibit radio emission (\citeauthor{camilo:2} \citeyear{camilo:2}, and references therein), supports the notion that there may be evolutionary links between the rotation-powered and magnetar classes of neutron stars. However, such a conclusion may be difficult to reconcile with evidence suggesting that magnetars are derived from more massive progenitors than normal pulsars (e.g., \citeauthor{gaens:8} \citeyear{gaens:8}; \citeauthor{muno} \citeyear{muno}). If the massive progenitor hypothesis is correct, then this raises further questioning of whether, like the magnetars, there is anything special about the progenitor properties of neutron stars such as PSR~J1747$-$2958, or whether all rotation-powered pulsars exhibit similar magnetic field growth or even magnetar-like phases in their lifetimes. To constrain the motion of PSR~J1747$-$2958 further, future observational epochs are desirable. It may be possible to better constrain the motion and distance to this pulsar by interferometric astrometry with the next generation of sensitive radio telescopes (e.g., \citeauthor{2004NewAR..48.1413C} \citeyear{2004NewAR..48.1413C}). High time resolution X-ray observations may also be useful to detect any magnetar-like behaviour from this rotation-powered radio pulsar. In general, more neutron star discoveries, as well as measured or inferred braking indices, may allow for a better understanding of possible neutron star evolution. \acknowledgments We thank the anonymous referee for their helpful comments. C.~A.~H. acknowledges the support of an Australian Postgraduate Award and a CSIRO OCE Scholarship. B.~M.~G. acknowledges the support of a Federation Fellowship from the Australian Research Council through grant FF0561298. S.~C. acknowledges support from the University of Sydney Postdoctoral Fellowship program. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. {\it Facilities:} \facility{VLA}.
\section{Introduction} Graphene is a material with superior optical, electronic and mechanical properties~\cite{geim2010rise,chen2012optical,fei2012gate,fang2013gated,gullans2013single}. It can also replace noble metals (mainly Au and Ag) in applications operating at near- to far-infrared (IR) wavelengths~\cite{GarciadeAbajo2014}. Graphene possesses an advantage over noble metals because it has smaller material losses \cite{Khurgin2015a} and its optical properties are tunable~\cite{ju2011graphene}, thus allowing the design of multipurpose applications~\cite{Novoselov2004,Grigorenko2012,Low2014a}. In recent years, graphene has also been recognized as a promising active material for super-capacitors. Studies show that a large surface area is essential for such applications~\citep{liu2010graphene,stoller2008graphene}. In that respect, a spherical geometry~(graphene nano-ball) has been suggested for increasing the surface area, and it was shown that a graphene mesoporous structure with an average pore diameter of $4.27$ nm can be fabricated via the chemical vapor deposition technique~\cite{lee2013chemical}. Additionally, self-crystallized graphene and graphite nano-balls have recently been demonstrated via Ni vapor-assisted growth~\cite{yen2014direct}. Such growth techniques or in-liquid synthesis methods~\cite{tan2018assembly} can be employed to construct nanoparticle-graphene composite structures which operate in the strong-coupling regime. In many studies of such nanoscale composites, attention has focused mainly on electrical properties. It is also intriguing to study the optical applications of graphene spherical shell structures~\cite{Daneshfar2019}. In addition, the electromagnetic response of spherical shells has also been studied in terms of their plasmonic response~\cite{Christensen2015,bian2016optical}. Graphene plasmons~(GPs) can be tuned continuously by applying a voltage or can be adjusted by electrostatic doping~\cite{chu2013active}, besides trapping the incident light into small volumes~\cite{Koppens2011,toscano2013nonlocal}. This tunability provides great potential in a vast range of applications, such as sensing~\cite{li2014graphene}, switching~\cite{ju2011graphene}, and meta-materials~\cite{balci2018electrically}. Placing a quantum emitter~(QE), such as a quantum dot~(QD), in close proximity to a graphene nano-structure can yield strong interaction~\cite{Koppens2011} and modulations in optical properties. Usually, the interaction between a QE and the nano-structured environment in which it is placed is described by investigating the QE's lifetime, i.e., by calculating the Purcell factor \cite{Koppens2011}. In such simulations, the QE--nano-structure interaction is described in terms of non-Hermitian quantum electrodynamics, and the QE is treated as a point dipole source. The interaction between a QE and an infinite graphene layer has been investigated experimentally by measuring the relaxation rate while varying the distance between them \cite{Gaudreau2013} and while varying the chemical potential of the graphene layer~\cite{Tielrooij2015}. The QEs used were erbium ions with a transition energy close to the telecommunication wavelength, where graphene nano-structures can have a plasmonic response for specific chemical potential values. Moreover, there is a variety of molecules and quantum emitters operating at infrared wavelengths~\cite{Treadway2001,Pietryga2004}. 
In this paper, we demonstrate strong coupling between a 5~nm-radius quantum dot~(QD) and a graphene spherical shell of the same size coating the QD, as shown in Fig.~\ref{fig:sktch}(a). We show that, in this way, strong coupling between the QD and the graphene shell can be achieved even at the single quantum emitter level. A splitting of about 80 meV between the two hybrid modes can be obtained due to the strong coupling, interestingly, even when the spectrum of the QD does not overlap with that of the graphene shell (i.e., when the two are off-resonant with each other). When the coupling between the two is off-resonant, one of the (tunable) hybrid modes is very narrow, around three times narrower than the linewidth of the bare graphene shell. The spectral positions of these hybrid modes can be controlled by tuning the chemical potential of the graphene shell, which can be done by tailoring the Fermi energy level of graphene with an applied bias voltage~\cite{balci2016dynamic}, as shown in Fig.~\ref{fig:sktch}(b). Beyond demonstrating these effects via exact solutions of the 3D Maxwell equations, i.e., taking retardation effects into account, we also show that the same effects are already predicted by a simple analytical model, and we use this model to explain the physics of why such a sharp hybrid mode appears. Achieving such tunable, narrow-linewidth plasmonic (plexcitonic) modes is invaluable for sensing applications, because the sharper hybrid mode carries potential for enhanced Figure of Merit (FOM) sensing and, for instance, graphene display technologies~\cite{cartamil2018graphene} with sharper color tuning. We remark that the spectral position of a QD (in general, a QE) is also tunable via an applied voltage. The coupling of light to a bare QD, however, is an order of magnitude lower than for a graphene shell--QD hybrid, which also provides a more intense hot-spot. \begin{figure} \includegraphics[width=8cm]{qdson8.png} \caption{(a) The hybrid structure, a QD coated with a graphene spherical shell. (b) In the proposed experimental setup, the hybrid structure is placed between the substrate and the AFM tip, held at zero and finite electrical potential respectively, to tailor the Fermi energy level of graphene by the applied bias voltage~\cite{balci2016dynamic}. \label{fig:sktch}} \end{figure} The paper is organized as follows. In Sec.~\ref{Sec:Simulations} we present exact solutions of the 3D Maxwell equations, specifically the absorption spectra of the graphene spherical shell and the semi-conducting sphere individually, and of the combination of a QE with a graphene spherical shell (the full case). Next, in Sec.~\ref{Sec:model} we describe the theoretical model and derive an effective Hamiltonian for a two-level system (QE) coupled to GPs, where we derive the equations of motion for the suggested structure and obtain a \textit{single equation} for the steady-state plasmon amplitude. A summary appears in Sec.~\ref{Sec:Conclusion}. \section{Electromagnetic simulations of the absorption of a graphene coated semi-conducting sphere} \label{Sec:Simulations} When the absorption peak of the QE matches the GP resonance, we observe a splitting in the absorption band due to the interaction between the exciton polariton mode of the semi-conducting sphere and the localized surface GP mode supported by the graphene spherical shell. 
To prove this, we perform electromagnetic simulations using the MNPBEM package~\cite{hohenester2012mnpbem}, which solves the Maxwell equations in three dimensions. This splitting is connected to the energy exchange between the two modes. Due to the large splitting the system enters the strong coupling regime, where a splitting of $80$\,meV between the hybrid modes is observed~\cite{Baranov2018}. This type of collective mode has also been named a plexciton~\cite{Manjavacas2011}. We stress that a QE coated with a graphene spherical shell has been experimentally demonstrated~\cite{Wu2013}. In this section, we start by presenting the mathematical framework and the expressions that give the dielectric permittivity of the graphene spherical shell and of the semi-conducting QE. Next, we present results regarding the absorption spectrum of the graphene spherical shell, the QE, and the full case of a QE with a graphene spherical shell coating. \begin{figure} \includegraphics[width=8cm]{Fig01.png} \caption{Absorption spectrum of the graphene spherical shell as a function of the excitation wavelength. The chemical potential of the graphene spherical shell is kept fixed at $\mu=1\,$eV, for different values of its radius, $R=2.5\,$nm, $5\,$nm, $10\,$nm and $15\,$nm.\label{fig:Fig01}} \end{figure} The optical response of graphene is given by its in-plane surface conductivity, $\sigma$, in the random phase approximation~\cite{Jablan2009,Falkovsky2008}. This quantity is mainly determined by electron-hole pair excitations, which can be divided into intraband and interband transitions, $ \sigma=\sigma_{\text{intra}}+\sigma_{\text{inter}} $. It depends on the chemical potential~($\mu$), the temperature~($T$), and the scattering energy~($E_{S}$)~\cite{Wunsch2006}. The intraband term $\sigma_{\text{intra}}$ describes a Drude-like response, corrected for scattering by impurities through a term containing $\tau$, the relaxation time. The relaxation time, $\tau$, causes the plasmons to acquire a finite lifetime and is influenced by several factors, such as collisions with impurities, coupling to optical phonons, and finite-size effects. In this paper, we assume that $T=300\,\text{K}$ and $\tau=1\,\text{ps}$. In addition, we vary the value of the chemical potential~\cite{Novoselov2004}, $\mu$, for active tuning of the GPs. \begin{figure} \includegraphics[width=8cm]{Fig02.png} \caption{Extinction/absorption spectrum of the graphene spherical shell as a function of the excitation wavelength. The radius of the graphene spherical shell is kept fixed at $R=5\,$nm, for different values of the chemical potential, $\mu=0.2\,$eV, $0.4\,$eV, $0.6\,$eV, $0.8\,$eV and $1.0\,$eV.\label{fig:Fig02}} \end{figure} In Fig.~\ref{fig:Fig01} and Fig.~\ref{fig:Fig02}, we present the extinction spectrum of the graphene spherical shell under plane-wave illumination. In both figures we observe a peak in the extinction spectrum; this peak is due to the excitation of the localized surface plasmon (LSP) mode supported by the graphene spherical shell. 
In particular, the LSP resonance frequency is given as a solution of the equation~\cite{Christensen2015}: \begin{equation} \frac{i\epsilon\omega_{l}}{2\pi\sigma\left(\omega_{l}\right)}=\left(1+\frac{1}{2l+1}\right)\frac{l}{R},\label{eq:06} \end{equation} where $R$ is the radius of the graphene spherical shell, $\epsilon$ is the dielectric permittivity of the surrounding medium and of the space inside the graphene spherical shell, and $l$ is the resonance eigenvalue, which is connected with the expansion order. Here, we focus on graphene spherical shell radii such that $R\ll\lambda$, where $\lambda$ is the excitation wavelength, and thus on the dipole mode $l=1$. Since $R\ll\lambda$, the extinction and the absorption have essentially the same value (we disregard the scattering). Moreover, the LSP resonance depends on the intraband contribution to the surface conductivity, which, in the limit $\mu/\hbar\omega\gg1$ and ignoring the plasmon lifetime, reduces to $\sigma(\omega)=4ia\mu/\hbar\omega$. Then, the LSP resonance wavelength ($\lambda_{1}$) takes the value: \begin{equation} \lambda_{1}=2\pi c\sqrt{\frac{\hbar\varepsilon}{\pi a\mu}\frac{1}{12}R}.\label{eq:07} \end{equation} In the boundary element simulations, using MNPBEM~\cite{hohenester2012mnpbem}, the graphene spherical shell is modeled as a thin layer of thickness $d=0.5\,\text{nm}$, with a dielectric permittivity~\cite{Vakil2011} \begin{equation} \epsilon(\omega)=1+\frac{4\pi\sigma(\omega)}{\omega d},\label{eq:03} \end{equation} where $\sigma(\omega)$ is the graphene surface conductivity introduced above~\cite{Novoselov2004}. \begin{figure} \includegraphics[width=8cm]{Fig03.png} \caption{Absorption spectra of the QE coated with the graphene spherical shell with respect to the excitation wavelength. Different values of the chemical potential are employed while the QE transition energy is kept constant at $\lambda_{eg}=1550\,$nm. More details on simulation parameters are given in the inset.\label{fig:Fig03}} \end{figure} In Fig.~\ref{fig:Fig01}, we present the absorption spectrum of the graphene spherical shell in the near-IR region, considering different values of its radius, $R=2.5\,$nm, $5\,$nm, $10\,$nm and $15\,$nm. We consider a fixed value for the chemical potential, $\mu=1\,$eV, and observe that by increasing the radius of the graphene spherical shell the surface plasmon resonance is red-shifted, as predicted by Eq.~\ref{eq:07}. The dipole surface plasmon resonance from Fig.~\ref{fig:Fig01} for $R=10\,$nm is $2190\,$nm, and from numerically solving Eq.~\ref{eq:06} it is $2120\,$nm, validating our approach. Moreover, as the graphene spherical shell radius increases, the absorption strength gets higher. In Fig.~\ref{fig:Fig02}, we present the extinction spectrum of the graphene spherical shell, for fixed radius $R=5\,$nm, for different values of the chemical potential, $\mu=0.2\,$eV, $0.4\,$eV, $0.6\,$eV, $0.8\,$eV and $1.0\,$eV. As the value of the chemical potential increases, the GP resonance is shifted to lower wavelengths, as expected from Eq.~\ref{eq:07}. The physical explanation for this behavior is that the optical gap increases as the chemical potential increases, and thus the surface plasmon resonance blue-shifts. To explore the effect of coupling, we place a QD (QE) inside the graphene spherical shell. The optical properties of the QE are also described through its absorption spectrum. We stress that we do not take into account the emission of the QE itself. 
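Returning briefly to the bare shell, the trends in Figs.~\ref{fig:Fig01} and \ref{fig:Fig02} follow directly from the $\sqrt{R/\mu}$ scaling implied by Eq.~\ref{eq:07}. The short sketch below illustrates this scaling; to avoid committing to a specific value of the prefactor $a$, it is anchored to the $R=10$~nm, $\mu=1$~eV resonance quoted above (an assumption made purely for illustration), so it should be read as an order-of-magnitude guide only.
\begin{verbatim}
import math

# sqrt(R/mu) scaling of the dipole LSP wavelength, Eq. (7).  The absolute
# scale is anchored (illustrative assumption) to the value quoted in the
# text for R = 10 nm, mu = 1 eV, ~2120 nm from solving Eq. (6) numerically.
lam_ref, R_ref, mu_ref = 2120.0, 10.0, 1.0   # nm, nm, eV

def lsp_wavelength(R_nm, mu_eV):
    """Dipole LSP wavelength (nm) from the sqrt(R/mu) scaling of Eq. (7)."""
    return lam_ref * math.sqrt((R_nm / R_ref) * (mu_ref / mu_eV))

for R in (2.5, 5.0, 10.0, 15.0):          # radii used in Fig. 1
    lam1 = lsp_wavelength(R, 1.0)
    print(f"R = {R:4.1f} nm, mu = 1.0 eV -> lambda_1 ~ {lam1:5.0f} nm")
for mu in (0.2, 0.4, 0.6, 0.8, 1.0):      # chemical potentials of Fig. 2
    lam1 = lsp_wavelength(5.0, mu)
    print(f"R =  5.0 nm, mu = {mu:.1f} eV -> lambda_1 ~ {lam1:5.0f} nm")
\end{verbatim}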
The response of a QD or QE can be safely modeled by a Lorentzian dielectric function~\cite{wu2010quantum,postaci2018silent}, \begin{equation} \epsilon_{eg}(\omega)=\epsilon_{\infty}-f\frac{\omega_{eg}^{2}}{\omega^{2}-\omega_{eg}^{2}+i\gamma_{_{eg}}\omega},\label{eq:04} \end{equation} where $\epsilon_{\infty}$ is the bulk dielectric permittivity at high frequencies, $f$ is the oscillator strength~\cite{thomas2018plexcitons,leistikow2009size} and $\gamma_{eg}$ is the transition line-width, which is connected to the quality of the QE. $\omega_{eg}$ corresponds to the transition energy from the excited to the ground state of the QE. As the sphere is composed of a semi-conducting material, it supports localized exciton polariton modes. The sphere sizes considered in this paper are much smaller than the excitation wavelength, and only the dipole exciton resonance is excited. In the electrostatic limit, the condition for exciting the dipole localized exciton resonance is ${\rm Re}\left(\epsilon_{eg}(\omega)\right)=-2\epsilon$, where $\epsilon$ is the dielectric permittivity of the surrounding medium; in this paper we consider $\epsilon=1$ (a short numerical illustration of this condition is given at the end of this section). From this resonance condition it becomes apparent that changing the radius of the semi-conducting sphere does not influence its resonance wavelength, as long as $R\ll\lambda$. On the other hand, as the level spacing of the QE changes, the position of the dipole localized exciton resonance shifts accordingly. \begin{figure} \vskip-0.4truecm \includegraphics[width=8cm]{Fig04.png} \vskip-0.2truecm \caption{Absorption spectra of the QE coated with the graphene spherical shell with respect to the excitation wavelength. The chemical potential is fixed at $\mu=1.2\,$eV, while different values of the transition energy of the QE are considered. More details in the inset.\label{fig:Fig04}} \end{figure} In Fig.~\ref{fig:Fig03}, we consider the full case in which the QE is coated by the graphene spherical shell, and we simulate the absorption of the combined system in the same spectral region. We start in Fig.~\ref{fig:Fig03} by considering the effect of the value of the chemical potential, $\mu$, on the absorption of the combined system, where the transition energy of the QE is fixed at $\lambda_{eg}=1550\,$nm. For the chemical potential $\mu=1\,$eV the splitting in the absorption spectrum is $\hbar\Omega=84$\,meV, even though the localized exciton mode is clearly off-resonant with the surface plasmon mode. This means that the interaction between the GP and exciton modes is still in the strong coupling regime. In addition, the splitting blue-shifts as the value of the chemical potential $\mu$ increases. In Fig.~\ref{fig:Fig04}, we present the absorption of the QE coated with the graphene spherical shell, for $\mu=1.2\,$eV and a sphere radius of $R=5\,$nm. We consider different values of the transition energy of the QE, $\lambda_{eg}$. We observe that by increasing the value of $\lambda_{eg}$ the resonance of the exciton polariton mode redshifts, and similarly the splitting in the extinction/absorption of the combined QE core--graphene spherical shell nanosystem also redshifts. Even for $\lambda_{eg}=1400\,$nm, where the exciton polariton and the GP modes are highly off-resonant, we still observe the plexcitonic modes, which can be read as evidence of strong coupling between the nanostructures. In the following section, we explain this in more detail with a simple analytical model. 
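As a point of reference before turning to the analytical model, the bare exciton resonance implied by Eq.~\ref{eq:04} can be located numerically from the electrostatic condition ${\rm Re}\left(\epsilon_{eg}(\omega)\right)=-2\epsilon$, as anticipated above. The sketch below uses illustrative placeholder values for $\epsilon_{\infty}$, $f$ and $\gamma_{eg}$ (these are assumptions on our part, not the values used in the simulations, which are listed in the insets of Figs.~\ref{fig:Fig03} and \ref{fig:Fig04}).
\begin{verbatim}
import numpy as np

# Lorentzian permittivity of the QE, Eq. (4), and the electrostatic dipole
# condition Re[eps_eg(omega)] = -2*eps with eps = 1.  The values of eps_inf,
# f and gamma_eg are illustrative placeholders only.
eps_inf, f, eps_medium = 3.0, 0.5, 1.0
lam_eg = 1550.0                         # nm, bare QE transition wavelength
c_nm = 2.99792458e17                    # speed of light in nm/s
w_eg = 2 * np.pi * c_nm / lam_eg
gamma = 1e-5 * w_eg                     # transition line-width

w = np.linspace(0.9 * w_eg, 1.1 * w_eg, 200001)
eps = eps_inf - f * w_eg**2 / (w**2 - w_eg**2 + 1j * gamma * w)

# wavelength where Re(eps_eg) is closest to -2*eps (dipole exciton resonance)
idx = np.argmin(np.abs(eps.real + 2 * eps_medium))
lam_res = 2 * np.pi * c_nm / w[idx]
print(f"dipole exciton resonance near {lam_res:.0f} nm "
      f"(bare transition at {lam_eg:.0f} nm)")
\end{verbatim}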
\begin{figure*} \centering \includegraphics[width=16cm]{Fig05.png} \vskip-0.0truecm \caption{The scaled absorption intensity of the GP~($ |\alpha_{_{GP}}|^2 $) as a function of the excitation wavelength~$\lambda$, obtained from Eqs.~(\ref{EOMa})-(\ref{EOMc}). (a) In the absence~(black-solid) and in the presence of the QE, with resonance at~$\lambda_{eg}$ = 1535 nm~(dark gray-dotted), $\lambda_{eg}$ = 1500 nm~(blue-dashed) and $\lambda_{eg}$ = 1400 nm~(red dashed-dotted), for a fixed coupling strength,~$\Omega_R$ = 0.05 $\omega_{_{GP}} $. Variation of the resonance intensity of the GP with excitation wavelength~$\lambda$ and coupling strength~$\Omega_R$ for (b) $\lambda_{eg}$ = 1535 nm and (c) $\lambda_{eg}$ = 1400 nm. Here we use $ \gamma_{_{GP}}=0.005$ $\omega_{_{GP}} $ and $ \gamma_{eg}=10^{-5}$ $\omega_{_{GP}} $. \label{fig:Fig05}} \end{figure*} \section{The analytical model} \label{Sec:model} Here, we write the effective Hamiltonian for the GPs coupled to a QE and derive the equations of motion. We consider the QE as a two-level system~\cite{wu2010quantum} with level spacing~$\omega_{eg}=2\pi c/ \lambda_{eg}$. In the steady state, we obtain a single equation, and we show that by using this equation one can gain a better understanding of the parameters of the combined system. We consider the dynamics of the total system as follows. The incident light ($\varepsilon_{_L}$) with optical frequency~$ \omega=2\pi c/ \lambda$ excites a GP ($ \hat{a}_{_{GP}}$), which is coupled to a QE. The Hamiltonian of the system can be written as the sum of the energy of the QE and GP~($\omega_{_{GP}}=2\pi c/ \lambda_{_{GP}}$) oscillations~($\hat{H}_0 $) and the energy transferred by the pump source~($\hat{H}_{L}$), \begin{eqnarray} \hat{H}_0&=&\hbar \omega_{_{GP}} \hat{a}_{_{GP}}^\dagger \hat{a}_{_{GP}}+\hbar \omega_{eg} |e \rangle \langle e|\\ \hat{H}_{L}&=&i\hbar (\varepsilon_{_L} \hat{a}_{_{GP}}^\dagger e^{-i\omega t} -h.c) \label{Ham0} \end{eqnarray} plus the interaction of the QE with the GP mode ($ \hat{H}_{int}$), \begin{eqnarray} \hat{H}_{int}&=&\hbar \{\Omega_R^\ast \hat{a}_{_{GP}}^\dagger |g\rangle \langle e|+ \Omega_R |e\rangle \langle g| \hat{a}_{_{GP}} \}, \end{eqnarray} where the parameter $ \Omega_R $, in units of frequency, is the coupling strength between the GP and the QE, and $|g\rangle$~($|e \rangle$) is the ground~(excited) state of the QE. In the strong coupling limit, one in principle needs to consider counter-rotating terms in the interaction Hamiltonian~\cite{scully1999quantum}, for which there is still no analytically exact solution~\cite{gan2010dynamics}. Instead of pursuing such a full treatment, which we leave for future work, we employ the rotating wave approximation (RWA) here, which gives consistent results for the structure considered in this work. 
Moreover, we are interested in intensities but not in the correlations, so we replace the operators $\hat{a}_i$ and $ \hat{\rho}_{ij}= |i\rangle \langle j|$ with the complex numbers ${\alpha}_i$ and $ {\rho}_{ij} $~\cite{Premaratne2017}, respectively, and the equations of motion can be obtained as \begin{subequations} \begin{align} \dot{{\alpha}}_{_{GP}}&=-(i\omega_{_{GP}}+\gamma_{_{GP}}) {\alpha}_{_{GP}}-i \Omega_R^\ast{{\rho}}_{ge}+\varepsilon_{_L} e^{-i\omega t} \label{EOMa},\\ \dot{{\rho}}_{ge} &= -(i \omega_{eg}+\gamma_{eg}) {\rho}_{ge}+i \Omega_R {\alpha}_{_{GP}}({\rho}_{ee}-{{\rho}}_{gg}) \label{EOMb},\\ \dot{{{\rho}}}_{ee} &= -\gamma_{ee} {{\rho}}_{ee}+i \bigl \{\Omega_R^\ast {\alpha}^\ast_{_{GP}} {{\rho}}_{ge}- \textit{c.c} \bigr \} \label{EOMc}, \end{align} \end{subequations} where $ \gamma_{_{GP}} $ and $ \gamma_{eg} $ are the damping rates of the GP mode and of the off-diagonal density matrix elements of the QE, respectively. The values of the damping rates are taken to be the same as in the previous section. The conservation of probability ${\rho}_{ee}+{\rho}_{gg}=1$, with the diagonal decay rate of the QE~$ \gamma_{ee} = 2 \gamma_{eg} $, accompanies Eqs.(\ref{EOMa}-\ref{EOMc}). In the steady state, one can define the amplitudes as \begin{eqnarray} {\alpha}_{_{GP}}(t)&=&\tilde{\alpha}_{_{GP}} e^{-i\omega t}, \qquad {\rho}_{ge}(t)= \tilde{\rho}_{ge} e^{-i\omega t}, \label{stat} \end{eqnarray} where $\tilde{\alpha}_{_{GP}}$ and $ \tilde{\rho}_{ge} $ are constant in time. By inserting Eq.(\ref{stat}) into Eqs.(\ref{EOMa}-\ref{EOMc}), the steady-state solution for the GP mode can be obtained as \begin{eqnarray} \tilde{\alpha}_{_{GP}}=\frac{\varepsilon_{_L} [i(\omega_{eg}-\omega)+\gamma_{eg}]}{(\omega-\Omega_+)(\omega-\Omega_-)+i\Gamma(\omega)}, \label{Eq-Stat2} \end{eqnarray} where $ \Omega_{\pm}=\delta_+ \pm \sqrt{\delta_-^2-|\Omega_R|^2y+\gamma_{eg}\gamma_{_{GP}}}$ defines the hybrid mode resonances~\cite{vasa2017strong} and $\Gamma(\omega)=[\gamma_{eg}(\omega_{_{GP}}-\omega)+\gamma_{_{GP}}(\omega_{eg}-\omega)]$, with $ \delta_\pm=(\omega_{_{GP}} \pm \omega_{eg})/2 $ and the population inversion $ y=\rho_{ee}-\rho_{gg} $. It is important to note that the results presented in Fig.~\ref{fig:Fig05} and Fig.~\ref{fig:Fig06} are the exact solutions of Eqs.(\ref{EOMa}-\ref{EOMc}). We study the steady state in Eq.(\ref{Eq-Stat2}) to gain a better understanding of the parameters and to avoid time-consuming 3D electromagnetic simulations of the combined system. Moreover, we hereafter calculate the intensity of the GP mode in Eq.~(\ref{EOMa}), which is related to the absorption from the nanostructure~\cite{Wu2013}, to compare the results with the 3D electromagnetic simulations. To find the modulation of the intensities of the hybrid modes in the presence of the QE, we use different resonance values of the QE, $ \lambda_{eg}=2\pi c/\omega_{eg} $ = 1535 nm, 1500 nm and 1400 nm, in Fig.~\ref{fig:Fig05}a. Results quantitatively comparable with the numerical simulations in Fig.~\ref{fig:Fig04}, which take retardation effects into account, are obtained. We also show the evolution of the hybrid modes with varying interaction strength $ |\Omega_R| $ for zero detuning~($ \delta_-=0 $) in Fig.~\ref{fig:Fig05}b, and for the highly off-resonant case in Fig.~\ref{fig:Fig05}c. The strong coupling regime is reached if $ \Omega_R^2 > (\gamma_{_{GP}}^2+\gamma_{eg}^2)/2 $~\cite{torma2014strong}, that is, when the coupling strength exceeds the sum of the dephasing rates. 
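The behavior of Fig.~\ref{fig:Fig05} can also be reproduced by evaluating the steady-state amplitude on a wavelength grid. The sketch below does this in the weak-excitation limit ($\rho_{ee}-\rho_{gg}\simeq-1$), with the scaled damping rates of Fig.~\ref{fig:Fig05}; the bare GP resonance is taken at $\lambda_{_{GP}}=1535$~nm (an assumption suggested by the zero-detuning case of Fig.~\ref{fig:Fig05}b), and the sketch is a minimal illustration rather than the solver used for the figures, which integrates Eqs.~(\ref{EOMa})-(\ref{EOMc}) exactly.
\begin{verbatim}
import numpy as np

# Steady-state GP amplitude in the weak-excitation limit (rho_ee-rho_gg ~ -1),
# written in units of the GP frequency (w_GP = 1).  Parameters follow Fig. 5:
# gamma_GP = 0.005, gamma_eg = 1e-5, Omega_R = 0.05 (all in units of w_GP).
lam_GP = 1535.0                        # nm, assumed bare GP resonance
w_GP = 1.0
gam_GP, gam_eg, Om_R, eps_L = 0.005, 1e-5, 0.05, 1.0

def gp_intensity(lam_nm, lam_eg_nm):
    w = lam_GP / lam_nm                # scaled drive frequency
    w_eg = lam_GP / lam_eg_nm          # scaled QE transition frequency
    denom = (1j * (w_GP - w) + gam_GP) \
            + Om_R**2 / (1j * (w_eg - w) + gam_eg)
    return np.abs(eps_L / denom) ** 2

lam = np.linspace(1300.0, 1700.0, 4001)
for lam_eg in (1535.0, 1500.0, 1400.0):    # QE resonances used in Fig. 5(a)
    spec = gp_intensity(lam, lam_eg)
    is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
    peaks = lam[1:-1][is_peak]
    print(f"lambda_eg = {lam_eg:.0f} nm -> hybrid peaks near "
          f"{np.round(peaks).astype(int)} nm")
\end{verbatim}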
When the QE and the GP are resonant~[see Fig.~\ref{fig:Fig05}b], a dip starts to appear around $ |\Omega_R| \approx \gamma_{_{GP}} $. This can also be read from Eq.~(\ref{Eq-Stat2}): when $\omega_{eg}=\omega_{_{GP}}=\omega $, Eq.~(\ref{Eq-Stat2}) becomes $\tilde{\alpha}_{_{GP}}\propto \gamma_{eg}/(|\Omega_R|^2y+\gamma_{_{GP}}\gamma_{eg}) $. Since $ \gamma_{eg} $ is much smaller than the other rates, $ \tilde{\alpha}_{_{GP}} $ becomes smaller with increasing $ |\Omega_R| $ compared to the case without the QE. Beyond the point where the transparency window appears~\cite{wu2010quantum}, two distinct peaks emerge, centered at the frequencies $ \Omega_\pm $, and their separation becomes larger as $ \Omega_R $ increases. This argument is not valid when the GP and the level spacing of the QE are highly off-resonant. In this case, for the second peak to become significant, the interaction strength has to be much larger than $ \gamma_{_{GP}} $~[see Fig.~\ref{fig:Fig05}c]. The dip appears at $ \omega_{eg} $, which is outside the GP resonance window, and it may not be useful for practical applications. However, the sharp peak that arises from the strong coupling between the off-resonant particles can be very useful for sensing applications, because it has a smaller line-width and can be tuned by changing the chemical potential. To show this, in Fig.~\ref{fig:Fig06} we plot the evolution of the field intensity of the GP~($ |\alpha_{_{GP}}|^2 $) as a function of the excitation wavelength~$\lambda$ and the GP resonance~$\lambda_{_{GP}}$, when the graphene shell is alone [Fig.~\ref{fig:Fig06}a] and with the QE [Fig.~\ref{fig:Fig06}b]. It can be seen from Fig.~\ref{fig:Fig06}b that it is possible to control the positions and line-widths of the hybrid resonances by adjusting $\mu$. Similar behavior is also obtained in the MNPBEM simulations~[see Fig.~\ref{fig:Fig03}]. \begin{figure} \centering \includegraphics[width=9.cm]{Fig06.png} \vskip-0.truecm \caption{The scaled field intensity of the GP~($ |\alpha_{_{GP}}|^2 $) as a function of the excitation wavelength~$\lambda$ and the GP resonance~$\lambda_{_{GP}}$, when the graphene spherical shell is alone (a) and with the QE (b). We scale the GP intensity with its maximum value, and the parameters are:~$\Omega_R$ = 0.1 $\omega_{eg} $, $ \gamma_{_{GP}}=0.01$ $\omega_{eg} $ and $ \gamma_{eg}=10^{-5}$ $\omega_{eg} $.} \label{fig:Fig06} \end{figure} \section{Summary} \label{Sec:Conclusion} In summary, we investigate the optical response of GPs in the spherical shell geometry in the presence and absence of a QE. We show that the optical response of the graphene spherical shell can be tuned by changing the value of the chemical potential and its radius. For the combined system (the QE covered with a graphene layer) we observe a splitting in the absorption band. This is due to the strong coupling regime, where splittings of up to $80$\,meV are observed in the single-QE limit. We also discuss the case when the QE and the GP are off-resonant, and observe that the system can still sustain strong coupling. The results of the theoretical model we present here support the exact solutions of the 3D Maxwell equations obtained from the MNPBEM simulations. Our results show that the chemical potential and the coupling strength can be used as parameters for tuning the extinction spectrum of the nanocomposite very effectively. Tuning of the chemical potential can be induced by use of an electrolytic cell or, alternatively, an electrical nanocontact through a scanning probe microscope tip, as shown in Fig.~\ref{fig:sktch}. 
The same tip can also be used to mechanically distort the graphene spherical shell and thereby alter the graphene spherical shell--QE layout, modifying the coupling strength of the two counterparts and hence providing another tuning mechanism. We expect our results to contribute to the control of light-matter interactions at the nanometer scale, with potential applications ranging from all-optical nonlinear switching devices to sensing, within current experimental fabrication capabilities. Extreme field confinement, device tunability and low losses make such structures even more attractive for future studies. \section*{Acknowledgments} $^\ddagger$ Contributed equally. This research was supported by The Scientific and Technological Research Council of Turkey (TUBITAK) Grant No. 117F118. MET, AB and RS acknowledge support from TUBITAK 1001-119F101.
\section[#2]{#2% \sectionmark{#1}} \sectionmark{#1}} \newcommand{\markedsubsection}[2]{\subsection[#2]{#2% \subsectionmark{#1}} \subsectionmark{#1}} \begin{document} \onehalfspacing \begin{titlepage} \title{\bf The Regression Discontinuity Design\footnote{We thank Rich Nielsen for his comments and suggestions on a previous version of this chapter.}} \author{ Matias D. Cattaneo\thanks{Department of Operations Research and Financial Engineering, Princeton University.} \and Roc\'{i}o Titiunik\thanks{Department of Politics, Princeton University.} \and Gonzalo Vazquez-Bare\thanks{Department of Economics, University of California at Santa Barbara.} } \date{\large{\vspace{0.5in} \today}} \end{titlepage} \maketitle \thispagestyle{empty} \begin{center} Handbook chapter published in\medskip\\ \textit{Handbook of Research Methods in Political Science and International Relations}\medskip\\ Sage Publications, Ch. 44, pp. 835-857, June 2020. \bigskip\\ Published version:\medskip\\ \url{http://dx.doi.org/10.4135/9781526486387.n47} \end{center} \thispagestyle{empty} \newpage\pagenumbering{roman}\setcounter{page}{1} {\singlespacing\tableofcontents} \newpage \newpage\pagenumbering{arabic}\setcounter{page}{1} \section{Introduction}\label{sec:intro} The Regression Discontinuity (RD) design has emerged in the last decades as one of the most credible non-experimental research strategies to study causal treatment effects. The distinctive feature behind the RD design is that all units receive a score, and a treatment is offered to all units whose score exceeds a known cutoff, and withheld from all the units whose score is below the cutoff. Under the assumption that the units' characteristics do not change abruptly at the cutoff, the change in treatment status induced by the discontinuous treatment assignment rule can be used to study different causal treatment effects on outcomes of interest. The RD design was originally proposed by \citet*{Thistlethwaite-Campbell_1960_JEP} in the context of an education policy, where an honorary certificate was given to students with test scores above a threshold. Over time, the design has become common in areas beyond education, and is now routinely used by scholars and policy-makers across the social, behavioral, and biomedical sciences. In particular, the RD design is now part of the standard quantitative toolkit of political science research, and has been used to study the effect of many different interventions including party incumbency, foreign aid, and campaign persuasion. In this chapter, we provide an overview of the basic RD framework, discussing the main assumptions required for identification, estimation, and inference. We first discuss the most common approach for RD analysis, the \textit{continuity-based} framework, which relies on assumptions of continuity of the conditional expectations of potential outcomes given the score, and defines the basic parameter of interest as an average treatment effect at the cutoff. We discuss how to estimate this effect using local polynomials, devoting special attention to the role of the bandwidth, which determines the neighborhood around the cutoff where the analysis is implemented. We consider the bias-variance trade-off inherent in the most common bandwidth selection method (which is based on mean-squared-error minimization), and how to make valid inferences with this bandwidth choice. 
We also discuss the local nature of the RD parameter, including recent developments in extrapolation methods that may enhance the external validity of RD-based results. In the second part of this chapter, we overview an alternative framework for RD analysis that, instead of relying on continuity of the potential outcome regression functions, makes the assumption that the treatment is as-if randomly assigned in a neighborhood around the cutoff. This interpretation was the intuition provided by \cite{Thistlethwaite-Campbell_1960_JEP} in their original contribution, though it now has become less common due to the stronger nature of the assumptions it requires. We discuss situations in which this \textit{local randomization} framework for RD analysis may be relevant, focusing on cases where the running variable has mass points, which occurs very frequently in applications. To conclude, we discuss a battery of data-driven falsification tests that can provide empirical evidence about the validity of the design and the plausibility of its key identifying assumptions. These falsification tests are intuitive and easy to implement, and thus should be included as part of any RD analysis in order to enhance its credibility and replicability. Due to space limitations, we do not discuss variations and extensions of the canonical (sharp) RD designs, such as fuzzy, kink, geographic, multi-cutoff or multi-score RD designs. A practical introduction to those topics can be found in \citet*{Cattaneo-Idrobo-Titiunik_2019_Book1,Cattaneo-Idrobo-Titiunik_2019_Book2}, in the recent edited volume \citet*{Cattaneo-Escanciano_2017_AIE}, and the references therein. For a recent review on program evaluation methods see \citet*{Abadie-Cattaneo_2018_ARE}. \section{General Setup}\label{sec:setup} We start by introducing the basic notation and framework. We consider a study where there are multiple units from a population of interest (such as politicians, parties, students, households or firms), and each unit $i$ has a \textit{score} or \textit{running variable}, denoted by $X_i$. This running variable could be, for example, a party's vote share in a congressional district, a student's score from a standardized test, a household's poverty index, or a firm's total revenues in a certain period of time. This running variable may be continuous, in which case no two units will have the same value of $X_i$, or not, in which case the same value of $X_i$ might be shared by multiple units. The latter case is usually called ``discrete'', but in many empirical applications the score variable is actually both. In the simplest RD design, each unit receives a binary treatment $D_i$ when their score exceeds some fixed threshold $c$, and does not receive the treatment otherwise. This type of RD design is commonly known as the \textit{sharp RD design}, where the word \textit{sharp} refers to the fact that the assignment of treatment coincides with the actual treatment taken---that is, compliance with treatment assignment is perfect. When treatment compliance is imperfect, the RD design becomes a \textit{fuzzy RD design} and its analysis requires additional methods beyond the scope of this chapter (see the Introduction for references). The methods described here for analyzing sharp RD designs can be applied directly in the context of fuzzy RD designs when the parameter of interest is the intention-to-treat effect. 
The sharp RD treatment assignment rule can be formally written as \begin{equation}\label{eq:sharp} D_i=\mathbbm{1}(X_i\ge c)= \begin{cases} 1 & \quad \text{if }X_i\ge c \\ 0 & \quad \text{if }X_i< c \end{cases}, \end{equation} where $\mathbbm{1}(\cdot)$ is the indicator function. For example, $D_i$ could be a scholarship for college students that is assigned to those with a score of 7 or higher in an entry exam on a scale from 0 to 10. In this example, $X_i$ is the exam score, $c=7$ is the cutoff used for treatment assignment, and $D_i=\mathbbm{1}(X_i\ge 7)$ is the binary variable that indicates receipt of the scholarship. Our goal is to assess the effect of the binary treatment $D_i$ on a certain outcome variable. For instance, in the previous scholarship example, we may be interested in analyzing whether the scholarship increases academic performance during college or the probability of graduating. This problem can be formalized within the potential outcomes framework \citep{Imbens-Rubin_2015_Book}. In this framework, each unit $i$ from the population of interest has two potential outcomes, denoted $Y_i(1)$ and $Y_i(0)$, which measure the outcome that would be observed for unit $i$ with and without treatment, respectively. For example, for a certain college student $i$, $Y_i(1)$ could be the student's GPA at a certain stage had the student received the scholarship, and $Y_i(0)$ the student's GPA had she not received the scholarship. The individual-level treatment ``effect'' for unit $i$ is defined as the difference between the potential outcomes under treatment and control status, $\tau_i=Y_i(1)-Y_i(0)$. Because the same unit can never be observed under both treated and control status (a student can either receive or not receive the scholarship, but not both), one of the potential outcomes is always unobservable. The observed outcome, denoted $Y_i$, equals $Y_i(1)$ when $i$ is treated and $Y_i(0)$ if $i$ is untreated, that is, \begin{equation}\label{eq:obsout} Y_i=Y_i(1)\cdot D_i+Y_i(0)\cdot (1-D_i)=\begin{cases} Y_i(1) &\quad \text{if }D_i=1 \\ Y_i(0) &\quad \text{if }D_i=0 \end{cases}. \end{equation} The observed outcome can never provide information on both potential outcomes. Hence, for each unit in the population, one of the potential outcomes is observed, and the other one is a \textit{counterfactual}. This problem is known as the \textit{fundamental problem of causal inference} \citep{Holland_1986}. The RD design provides a way to address this problem by comparing treated units that are ``slightly above'' the cutoff to control units that are ``slightly below'' it. The rationale behind this comparison is that, under appropriate assumptions that will be made more precise in the upcoming sections, treated and control units in a small neighborhood or \textit{window} around the cutoff are comparable in the sense of having similar observed and unobserved characteristics (with the only exception being treatment status). Thus, observing the outcomes of units just below the cutoff provides a valid measure of the average outcome that treated units just above the cutoff would have had if they had not received the treatment. In the remainder of this chapter, we describe two alternative approaches for analyzing RD designs. The first one, which we call the \textit{continuity-based framework}, assumes that the observed sample is a random draw from an infinite population of interest, and invokes assumptions of continuity. 
In this framework, identification of the parameter of interest, defined precisely in the next section, relies on assuming that the average potential outcomes given the score are continuous as a function of the score. This assumption implies that the researcher can compare units marginally above the cutoff to units marginally below to identify (and estimate) the average treatment effect at the cutoff. The second approach for RD analysis, which we call the \textit{local randomization framework}, assumes that the treatment of interest is as-if randomly assigned in a small region around the cutoff. This approach formalizes the interpretation of RD designs as local experiments, and allows researchers to use the standard tools from the classical analysis of experiments. In addition, if the researcher is willing to assume that potential outcomes are fixed (non-random) and that the $n$ units that are observed in the sample constitute the finite population of interest, this approach also allows the researcher to use finite-sample exact randomization inference tools, which are especially appealing in applications where the number of observations near the cutoff is small. For both frameworks, we discuss the parameters of interest, estimation, inference, and bandwidth or window selection methods. We then compare the two approaches and provide a series of falsification methods that are commonly employed to assess the validity of the RD design. See also \citet*{Cattaneo-Titiunik-VazquezBare_2017_JPAM} for an overview and practical comparisons between these RD approaches. \section{The Continuity-Based Framework}\label{sec:continuity} Under the continuity-based framework, the observed data $\{Y_i(1),Y_i(0),X_i,D_i\}$, for $i=1,2,\ldots,n$, is a random sample from an infinite population of interest (or data generating process). The main objects of interest under this framework are the conditional expectation functions of the potential outcomes, \begin{equation}\label{eq:cef} \mu_1(x)=\mathbb{E}[Y_i(1)|X_i=x]\qquad\text{and}\qquad \mu_0(x)=\mathbb{E}[Y_i(0)|X_i=x], \end{equation} which capture the population average of the potential outcomes for each value of the score. In the sharp RD design, for each value of $x$, only one of these functions is observed: $\mu_1(x)$ is observed for $x$ at or above the cutoff, and $\mu_0(x)$ is observed for values of $x$ below the cutoff. The observed conditional expectation function is \begin{equation}\label{eq:obscef} \mu(x)=\mathbb{E}[Y_i|X_i=x]=\begin{cases} \mu_1(x) &\quad \text{if }x \ge c \\ \mu_0(x) &\quad \text{if }x<c \text{.} \end{cases} \end{equation} We start by defining the function $\tau(x)$, which gives the average treatment effect conditional on $X_i=x$: \begin{equation}\label{eq:te} \tau(x)=\mathbb{E}[Y_i(1)-Y_i(0)|X_i=x]=\mu_1(x)-\mu_0(x)\text{.} \end{equation} The first step is to establish conditions for identification, that is, conditions under which we can write the parameter of interest, which depends on unobservable quantities due to the fundamental problem of causal inference, in terms of observable (i.e., identifiable) and thus estimable quantities. In the continuity-based framework, the key assumption for identification is that $\mu_1(x)$ and $\mu_0(x)$ are continuous functions of the score at the cutoff point $x=c$. Intuitively and informally, this condition states that the observable and unobservable characteristics that determine the average potential outcomes do not jump abruptly at the cutoff. 
When this assumption holds, the only difference between units on opposite sides of the cutoff whose scores are ``very close'' to the cutoff is their treatment status. Intuitively, we can think that treated and control units with very different score values will generally be very different in terms of important observable and unobservable characteristics affecting the outcome of interest but, as their scores approach the cutoff and become similar in that dimension, the only remaining difference between them will be their treatment status, thus ensuring comparability between units just above and just below the cutoff, at least in terms of their potential outcome mean regression functions. More formally, \citet*{Hahn-Todd-vanderKlaauw_2001_ECMA} showed that, when conditional expectation functions are continuous in $x$ at the cutoff level $x=c$, \begin{equation}\label{eq:sharp_id} \tau(c) = \lim_{x\downarrow c}\mathbb{E}[Y_i|X_i=x]-\lim_{x\uparrow c}\mathbb{E}[Y_i|X_i=x], \end{equation} so that the difference between average observed outcomes for units just above and just below the cutoff is equal to the average treatment effect at the cutoff, $\tau(c)=\mathbb{E}[Y_i(1)-Y_i(0)|X_i=c]$. Note that this identification result expresses the estimand $\tau(c)$, which is unobservable, as a function of two limits that depend only on observable (i.e., identifiable) quantities that are estimable from the data. As a consequence, in a sharp RD design, a natural parameter of interest is $\tau(c)$, the average treatment effect at the cutoff. This parameter captures the average effect of the treatment on the outcome of interest, given that the value of the score is equal to the cutoff. It is useful to compare this parameter to the average treatment effect, $\mathtt{ATE} = \mathbb{E}[Y_i(1)-Y_i(0)]$, which is the difference that we would see in average outcomes if all units were switched from control to treatment. In contrast to $\mathtt{ATE}$, which is a weighted average of $\tau(x)$ over $x$ because $\mathtt{ATE}=\mathbb{E}[\tau(X_i)]$, $\tau(c)$ is only the average effect of the treatment at a particular value of the score, $x=c$. For this reason, the RD parameter of interest $\tau(c)$ is often referred to as a \textit{local} average treatment effect, because it is only informative of the effect of the treatment for units whose value of the score is at (or, loosely speaking, in a local neighborhood of) the cutoff. This limits the external validity of the RD parameter $\tau(c)$. A recent and growing literature studies how to extrapolate treatment effects in RD designs \citep*{Angrist-Rokkanen_2015_JASA,Dong-Lewbel_2015_ReStat,Cattaneo-Keele-Titiunik-VazquezBare_2016_JOP,Bertanha-Imbens_2019_JBES,Cattaneo-Keele-Titiunik-VazquezBare_2019_wp}. The main advantage of the identification result in \eqref{eq:sharp_id} is that it relies on continuity conditions of $\mu_1(x)$ and $\mu_0(x)$ at $x=c$, which are nonparametric in nature and reasonable in a wide array of empirical applications. Section \ref{sec:falsification} describes several falsification strategies to provide indirect empirical evidence to assess the plausibility of this assumption. Assuming continuity holds, the estimation of the RD parameter $\tau(c)$ can proceed without making parametric assumptions about the particular form of $\mathbb{E}[Y_i|X_i=x]$. Instead, estimation can proceed by using nonparametric methods to approximate the regression function $\mathbb{E}[Y_i|X_i=x]$, separately for values of $x$ above and below the cutoff. 
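To fix ideas, the identification result in \eqref{eq:sharp_id} can be illustrated with a small simulation. In the sketch below the potential outcome regression functions are continuous at the cutoff by construction and the true effect is $\tau(c)=0.5$ (all functional forms and parameter values are hypothetical); the difference in mean observed outcomes within a window around the cutoff approaches $\tau(c)$ as the window shrinks, up to sampling noise.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, c, tau_c = 100_000, 0.0, 0.5          # sample size, cutoff, true tau(c)

x = rng.uniform(-1, 1, n)                # score
d = (x >= c)                             # sharp RD assignment: D = 1(X >= c)
mu0 = 1.0 + 0.8 * x - 0.5 * x**2         # E[Y(0)|X]  (continuous at c)
mu1 = mu0 + tau_c + 0.3 * x              # E[Y(1)|X]  (continuous at c)
y = np.where(d, mu1, mu0) + rng.normal(0, 0.5, n)   # observed outcome

for h in (0.5, 0.25, 0.1, 0.05):         # shrinking windows around the cutoff
    above = y[(x >= c) & (x < c + h)].mean()
    below = y[(x < c) & (x >= c - h)].mean()
    print(f"h = {h:4.2f}: difference in local means = {above - below:+.3f}"
          f"  (tau(c) = {tau_c})")
\end{verbatim}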
However, estimation and inference via nonparametric local approximations near the cutoff is not without challenges. When the score is continuous, there are in general no units with value of the score exactly equal to the cutoff. Thus, estimation of the limits of $\mathbb{E}[Y_i|X_i=x]$ as $x$ tends to the cutoff from above or below will necessarily require extrapolation. To this end, estimation in RD designs requires specifying a neighborhood or bandwidth around the cutoff in which to approximate the regression function $\mathbb{E}[Y_i|X_i=x]$, and then, based on that approximation, calculate the value that the function has exactly at $x=c$. In what follows, we describe different methods for estimation and bandwidth selection under the continuity-based framework. \subsection{Bandwidth Selection} Selecting the bandwidth around the cutoff in which to estimate the effect is a crucial step in RD analysis, as the results and conclusions are typically sensitive to this choice. We now briefly outline some common methods for bandwidth selection in RD designs. See also \citet*{Cattaneo-VazquezBare_2016_ObsStud} for an overview of neighborhood selection methods in RD designs. The approach for bandwidth selection used in early RD studies is what we call \textit{ad-hoc} bandwidth selection, in which the researcher chooses a bandwidth without a systematic data-driven criterion, perhaps relying on intuition or prior knowledge about the particular context. This approach is not recommended since it lacks objectivity, does not have a rigorous justification and, by leaving bandwidth selection to the discretion of the researcher, opens the door for specification searches. For these reasons, the ad-hoc approach to bandwidth selection has been replaced by systematic, data-driven criteria. In the RD continuity-based framework, the most widely used bandwidth selection criterion in empirical practice is the \textit{mean squared error} (MSE) criterion \citep*{Imbens-Kalyanaraman_2012_REStud,Calonico-Cattaneo-Titiunik_2014_ECMA,Arai-Ichimura_2018_QE,Calonico-Cattaneo-Farrell-Titiunik_2019_RESTAT}, which relies on a tradeoff between the bias and variance of the RD point estimator. The bandwidth determines the neighborhood of observations around the cutoff that will be used to approximate the unknown function $\mathbb{E}[Y_i|X_i=x]$ above and below the cutoff. Intuitively, choosing a very small bandwidth around the cutoff will tend to reduce the misspecification error in the approximation, thus reducing bias. A very small bandwidth, however, requires discarding a large fraction of the observations and hence reduces the sample, leading to estimators with larger variance. Conversely, choosing a very large bandwidth allows the researcher to gain precision using more observations for estimation and inference, but at the expense of a larger misspecification error, since the function $\mathbb{E}[Y_i|X_i=x]$ now has to be approximated over a larger range. The goal of bandwidth selection methods based on this tradeoff is therefore to find the bandwidth that optimally balances bias and variance. We let $\hat{\tau}$ denote a local polynomial estimator of the RD treatment effect $\tau(c)$---we explain how to construct this estimator in the next section. 
For a given bandwidth $h$ and a total sample size $n$, the MSE of $\hat{\tau}$ is \begin{equation}\label{eq:mse} \mathsf{MSE}(\hat{\tau})=\mathsf{Bias}^2(\hat{\tau})+\mathsf{Variance}(\hat{\tau}) = \mathscr{B}^2 + \mathscr{V}\text{,} \end{equation} which is the sum of the squared bias and the variance of the estimator. The MSE-optimal bandwidth, $h_{\mathsf{MSE}}$, is the value of $h$ that balances bias and variance by minimizing the MSE of $\hat{\tau}$, \begin{equation}\label{eq:hmse} h_{\mathsf{MSE}}=\argmin_{h>0} \mathsf{MSE}(\hat{\tau}) \text{.} \end{equation} The shape of the MSE depends on the specific estimator chosen. For example, when $\hat{\tau}$ is obtained using local linear regression (LLR), which will be discussed in the next section, the MSE can be approximated by \[\mathsf{MSE}(\hat{\tau})\approx h^4\mathsf{B}^2+\frac{1}{nh}\mathsf{V}\] where $\mathsf{B}$ and $\mathsf{V}$ are constants that depend on the data generating process and specific features of the estimator used. This expression clearly highlights how a smaller bandwidth reduces the bias term while increasing the variance and vice versa. In this case, the optimal bandwidth, simply obtained by setting the derivative of the above expression with respect to $h$ equal to zero, is \begin{equation}\label{eq:hmse_llr} h_{\mathsf{MSE}}^{\mathsf{LLR}}=\mathsf{C}_\mathsf{MSE} \cdot n^{-1/5}, \end{equation} where the constant $\mathsf{C}_\mathsf{MSE}=(\mathsf{V}/(4\mathsf{B}^2))^{1/5}$ is unknown but estimable. This shows that the MSE-optimal bandwidth for a local linear estimator is proportional to $n^{-1/5}$. While $h_{\mathsf{MSE}}$ is optimal for point estimation, it is generally not optimal for conducting inference. \citet*{Calonico-Cattaneo-Farrell_2018_JASA,Calonico-Cattaneo-Farrell_2019_CEoptimal,Calonico-Cattaneo-Farrell_2019_CERD} show how to choose the bandwidth to obtain confidence intervals minimizing the coverage error probability (CER). More precisely, let $\mathsf{IC}(\hat{\tau})$ be an $\alpha$-level confidence interval for the RD parameter $\tau(c)$ based on the estimator $\hat{\tau}$. A CER-optimal bandwidth makes the coverage probability as close as possible to the desired level $1-\alpha$: \begin{equation}\label{eq:hcer} h_{\mathsf{CER}}=\argmin_{h>0} |\P[\tau(c)\in \mathsf{IC}(\hat{\tau})]-(1-\alpha)| \text{.} \end{equation} For the case of local linear regression, the CER-optimal $h$ is \[h_\mathsf{CER}^{\mathsf{LLR}}=\mathsf{C}_\mathsf{CER}\cdot n^{-1/4},\] where again the constant $\mathsf{C}_\mathsf{CER}$ is unknown, because it depends in part on the data generating process, but it is estimable. Hence, the CER-optimal bandwidth is smaller than the MSE-optimal bandwidth, at least in large samples. Based on the ideas above, several variations of optimal bandwidth selectors exist, including one-sided CER-optimal and MSE-optimal bandwidths with and without accounting for covariate adjustment, clustering, or other specific features. In all cases, these bandwidth selectors are implemented in two steps: first the constant (e.g., $\mathsf{C}_\mathsf{MSE}$ or $\mathsf{C}_\mathsf{CER}$) is estimated, and then the bandwidth is chosen using that preliminary estimate and the appropriate rate formula (e.g., $n^{-1/5}$ or $n^{-1/4}$). \subsection{Estimation and Inference}\label{sec:estinf1} Given a bandwidth $h$, continuity-based estimation in RD designs consists of estimating the outcome regression functions, given the score, separately for treated and control units whose scores are within the bandwidth. 
Recall from Equation (\ref{eq:sharp_id}) that we need to estimate the limits of the conditional expectation function of the observed outcome from the right and from the left. One possible approach would be to simply estimate the difference in average outcomes between treated and controls within $h$. This strategy is equivalent to fitting a regression including only an intercept at each side of the cutoff. However, since the goal is to estimate two boundary points, this local constant approach will have a bias that can be reduced by including a slope term in the regression. More generally, the most common approach for point estimation in the continuity-based RD framework is to employ \textit{local polynomial} methods \citep*{Fan-Gijbels_1996_Book}, which involve fitting a polynomial of order $p$ separately on each side of the cutoff, only for observations inside the bandwidth. Local polynomial approximations usually include a weighting scheme that places more weight on observations that are closer to the cutoff; this weighting scheme is based on a \textit{kernel function}, which we denote by $K(\cdot)$. More formally, the treatment effect is estimated as: \[\hat{\tau}=\hat{\alpha}_{+}-\hat{\alpha}_{-}\] where $\hat{\alpha}_+$ is obtained as the intercept from the (possibly misspecified) regression model: \[Y_i=\alpha_++\beta_{1+}(X_i-c)+\ldots+\beta_{p+}(X_i-c)^p+u_i\] on the treated observations using weights $K((X_i-c)/h)$, and similarly $\hat{\alpha}_-$ is obtained as the intercept from an analogous regression fit employing only the control observations. Although theoretically a large value of $p$ can capture more features of the unobserved regression functions, $\mu_1(x)$ and $\mu_0(x)$, in practice high-order polynomials can have erratic behavior, especially when estimating boundary points, a fact usually known as \textit{Runge's phenomenon} \citep*[][p. 1756-57]{Calonico-Cattaneo-Titiunik_2015_JASA}. In addition, global polynomials can lead to counter-intuitive weighting schemes, as discussed by \cite{Gelman-Imbens_2019_JBES}. Common choices for $p$ are $p=1$ or $p=2$. As we can see, once the bandwidth has been appropriately chosen, the implementation of local polynomial regression reduces to simply fitting two linear or quadratic regressions via weighted least-squares---see \citet*{Cattaneo-Idrobo-Titiunik_2019_Book1} for an extended discussion and practical introduction. Despite the implementation and algebraic similarities between ordinary least squares (OLS) methods and local polynomial methods, there is a crucial difference: OLS methods assume that the polynomial used for estimation is the true form of the function, while local polynomial methods see it as just an approximation to an unknown regression function. Thus, inherent in the use of local polynomial methods is the idea that the resulting estimate will contain a certain error of approximation or misspecification bias. This difference between OLS and local polynomial methods turns out to be very consequential for inference purposes---that is, for testing statistical hypotheses and constructing confidence intervals. 
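To make the estimation step concrete, the sketch below implements a local linear ($p=1$) RD estimate with triangular kernel weights within a bandwidth $h$, using weighted least squares on each side of the cutoff. The data generating process and the bandwidth value are hypothetical; in practice $h$ would be chosen with the data-driven selectors discussed in the previous section.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, c, tau_c = 5_000, 0.0, 0.5
x = rng.uniform(-1, 1, n)
y = (1 + 0.8 * x - 0.5 * x**2 + (x >= c) * (tau_c + 0.3 * x)
     + rng.normal(0, 0.5, n))

def llr_intercept(xs, ys, h):
    """Weighted least-squares fit of y on (1, x - c) with triangular
    kernel weights K(u) = 1 - |u| for |u| < 1; returns the intercept."""
    u = (xs - c) / h
    w = np.clip(1 - np.abs(u), 0, None)
    keep = w > 0
    X = np.column_stack([np.ones(keep.sum()), xs[keep] - c])
    W = np.diag(w[keep])
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ys[keep])
    return beta[0]

# hypothetical bandwidth; an MSE- or CER-optimal selector would be used
# in practice (h proportional to n**(-1/5) or n**(-1/4), respectively)
h = 0.2
tau_hat = (llr_intercept(x[x >= c], y[x >= c], h)
           - llr_intercept(x[x < c], y[x < c], h))
print(f"local linear RD estimate: {tau_hat:.3f} (true tau(c) = {tau_c})")
\end{verbatim}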
The conventional OLS inference procedure to test the null hypothesis of no treatment effect at the cutoff, $\mathsf{H_0}: \tau(c)=0$, relies on the assumption that the distribution of the t-statistic is approximately standard normal in large samples: \begin{equation} \label{eq:conv} \frac{\hat{\tau}}{\sqrt{\mathscr{V}}}\overset{a}{\thicksim} \mathcal{N}(0,1), \end{equation} where $\mathscr{V}$ is the (conditional) variance of $\hat{\tau}$, that is, the square of the standard error. However, this will only occur in cases where the misspecification bias or approximation error of the estimator $\hat{\tau}$ for $\tau(c)$ becomes sufficiently small in large samples, so that the distribution of the t-statistic is correctly centered at zero. In general, this will not occur in RD analysis, where the local polynomials are used as a nonparametric approximation device, and do not make any specific functional form assumptions about the regression functions $\mu_1(x)$ and $\mu_0(x)$, which will be generally misspecified. The general approximation to the t-statistic in the presence of misspecification error is \begin{equation} \label{eq:robust} \frac{\hat{\tau} - \mathscr{B}}{\sqrt{\mathscr{V}}}\overset{a}{\thicksim} \mathcal{N}(0,1), \end{equation} where $\mathscr{B}$ is the (conditional) bias of $\hat{\tau}$ for $\tau(c)$. This approximation will be equivalent to the one in \eqref{eq:conv} only when $\mathscr{B}/\sqrt{\mathscr{V}}$ is small, at least in large samples. More generally, it is crucial to account for the bias $\mathscr{B}$ when conducting inference. The magnitude of the bias depends on the shape of the true regression functions and on the length of the bandwidth. As discussed before, the smaller the bandwidth, the smaller the bias. Although the conventional asymptotic approximation in (\ref{eq:conv}) will be valid in some special cases, such as when the bandwidth is small enough, it is not valid in general. In particular, if the bandwidth chosen for implementation is the MSE-optimal bandwidth discussed in the prior section, the bias will remain even in large samples, making inferences based on (\ref{eq:conv}) invalid. In other words, the MSE-optimal bandwidth, which is optimal for point estimation, is too large when conducting inference according to the usual OLS approximations. Generally valid inferences thus require researchers to use the asymptotic approximation in (\ref{eq:robust}), which contains the bias. In particular, \citet*{Calonico-Cattaneo-Titiunik_2014_ECMA} propose a way to construct a t-statistic that corrects the bias of the estimator (thus making the approximation valid for more bandwidth choices, including the MSE-optimal choice) and simultaneously adjusts the standard errors to account for the variability that is introduced in the bias correction step---this additional variability is introduced because the bias is unknown and thus must be estimated. This approach is known as \textit{robust bias-corrected} inference. Based on the approximation (\ref{eq:robust}), \citet*{Calonico-Cattaneo-Titiunik_2014_ECMA} propose robust bias-corrected confidence intervals \[\mathtt{CI}_{\mathtt{rbc}} = \left[ ~ \big(\hat{\tau}-\hat{\mathscr{B}}\big) \pm 1.96 \cdot \sqrt{\mathscr{V}_{\mathtt{bc}}} ~ \right], \] where, in general, $\mathscr{V}_{\mathtt{bc}} > \mathscr{V}$ because $\mathscr{V}_{\mathtt{bc}}$ includes the variability of estimating $\mathscr{B}$ with $\hat{\mathscr{B}}$. 
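The practical consequences of the bias term can be illustrated with a small Monte Carlo sketch. All choices below (the regression function whose curvature differs across the cutoff, the fixed bandwidth, and the use of a local quadratic fit as a simple stand-in for a bias-aware construction) are made up for illustration and do not reproduce any particular implementation of robust bias correction.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
c, h, tau, n, reps = 0.0, 0.5, 1.0, 500, 2000

def boundary_fit(xs, ys, p):
    """Weighted LS of ys on a polynomial of order p in (xs - c), triangular kernel.
    Returns the intercept and its heteroskedasticity-robust variance."""
    w = np.clip(1 - np.abs(xs - c) / h, 0, None)
    keep = w > 0
    u = xs[keep] - c
    X = np.column_stack([u**j for j in range(p + 1)])
    W = np.diag(w[keep])
    bread = np.linalg.inv(X.T @ W @ X)
    beta = bread @ X.T @ W @ ys[keep]
    resid = ys[keep] - X @ beta
    V = bread @ (X.T @ W @ np.diag(resid**2) @ W @ X) @ bread
    return beta[0], V[0, 0]

coverage = {1: 0, 2: 0}
for _ in range(reps):
    x = rng.uniform(-1, 1, n)
    y = 3 * x * np.abs(x) + tau * (x >= c) + rng.normal(0, 0.5, n)  # curvature differs across cutoff
    for p in (1, 2):      # p = 1: conventional local linear; p = 2: bias-aware fit
        a_r, v_r = boundary_fit(x[x >= c], y[x >= c], p)
        a_l, v_l = boundary_fit(x[x < c], y[x < c], p)
        t_hat, se = a_r - a_l, np.sqrt(v_r + v_l)
        covered = (t_hat - 1.96 * se <= tau) and (tau <= t_hat + 1.96 * se)
        coverage[p] += covered

# Empirical coverage of nominal 95% intervals: well below 0.95 for p = 1 (the
# smoothing bias shifts the center), much closer to 0.95 for p = 2 in this design
print({p: coverage[p] / reps for p in (1, 2)})
\end{verbatim}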
In terms of implementation, the infeasible variance $\mathscr{V}_{\mathtt{bc}}$ can be replaced by a consistent estimator $\hat{\mathscr{V}}_{\mathtt{bc}}$, which can account for heteroskedasticity and clustering as appropriate. Robust bias correction methods for RD designs have been further developed in recent years. For example, see \citet*{Calonico-Cattaneo-Farrell-Titiunik_2019_RESTAT} for robust bias correction inference in the context of RD designs with covariate adjustments, clustered data, and other empirically relevant features. In addition, see \citet*{Calonico-Cattaneo-Farrell_2018_JASA,Calonico-Cattaneo-Farrell_2019_CEoptimal,Calonico-Cattaneo-Farrell_2019_CERD} for theoretical results justifying some of the features of robust bias correction inference. Finally, see \citet*{Ganong-Jager_2018_JASA} and \citet*{Hyytinen-etal_2018_QE} for two recent applications and empirical comparisons of robust bias correction methods.\bigskip

\begin{tcolorbox}[colframe=blue!25, colback=blue!10, coltitle=blue!20!black, title = \textbf{Continuity-based framework: summary}] \begin{enumerate} \item \textbf{Key assumptions}: \begin{enumerate} \item Random potential outcomes drawn from an infinite population \item The regression functions are continuous at the cutoff \end{enumerate} \item \textbf{Bandwidth selection}: \begin{enumerate} \item Systematic, data-driven selection based on non-parametric methods \item Optimality criteria: MSE, coverage error \end{enumerate} \item \textbf{Estimation}: \begin{enumerate} \item Nonparametric local polynomial regression within bandwidth \item Choice parameters: order of the polynomial, weighting method (kernel) \end{enumerate} \item \textbf{Inference}: \begin{enumerate} \item Large-sample normal approximation \item Robust, bias-corrected \end{enumerate} \end{enumerate} \end{tcolorbox}

\section{The Local Randomization Framework}

The local randomization approach to RD analysis provides an alternative to the continuity-based framework. Instead of relying on assumptions about the continuity of regression functions and their approximation and extrapolation, this approach is based on the idea that, close enough to the cutoff, the treatment can be interpreted to be ``as good as randomly assigned''. The intuition is that, if units either have no knowledge of the cutoff or have no ability to precisely manipulate their own score, units whose scores are close enough to the cutoff will have the same chance of being barely above the cutoff as barely below it. If this is true, close enough to the cutoff, the RD design may create experimental-like variation in treatment assignment. The idea that RD designs create conditions that resemble an experiment near the cutoff has been present since the origins of the method \citep[see][]{Thistlethwaite-Campbell_1960_JEP}, and has sometimes been proposed as a heuristic interpretation of continuity-based RD results. \citet*{Cattaneo-Frandsen-Titiunik_2015_JCI} used this local randomization idea to develop a formal framework, and to derive alternative assumptions for the analysis of RD designs, which are stronger than the typical continuity conditions. The formal local randomization framework was further developed by \citet*{Cattaneo-Titiunik-VazquezBare_2017_JPAM}. The central idea behind the local randomization approach is to assume the existence of a neighborhood or window around the cutoff where the assignment to being above or below the cutoff behaves as it would have behaved in an actual experiment.
In other words, the local randomization RD approach makes the assumption that there is a window around the cutoff where assignment to treatment is as-if experimental. The formalization of these assumptions requires a more general notation. In prior sections, we used $Y_i(D_i)$ to denote the potential outcome under treatment $D_i$, which could be equal to one (treatment) or zero (control). Since $D_i = \mathbbm{1}(X_i \geq c)$, this also allowed the score $X_i$ to indirectly affect the potential outcomes; moreover, this notation did not prevent $Y_i(\cdot)$ from being a function of $X_i$, but this was not explicitly noted. We now generalize the notation to explicitly note that the potential outcomes may be a direct function of $X_i$, so we write $Y_i(D_i,X_i)$. In addition, note that here and in all prior sections we are implicitly assuming that potential outcomes only depend on unit $i$'s own treatment assignment and running variable, an assumption known as SUTVA (stable unit treatment value assumption). While some of the methods described in this section are robust to some violations of SUTVA, we impose this assumption to ease exposition. See \citet*{Cattaneo-Titiunik-VazquezBare_2017_JPAM} for more discussion. To formalize the local randomization RD approach, we assume that there exists a window $W_0$ around the cutoff where the following two conditions hold: \begin{itemize} \item \textbf{Unconfounded Assignment}. The distribution function of the score inside the window, $F_{X_i | X_i \in W_0}(x)$, does not depend on the potential outcomes, is the same for all units, and is known: \begin{equation} \label{eq:uncon} F_{X_i | X_i \in W_0}(x) = F_0(x), \end{equation} where $F_0(x)$ is a known distribution function. \item \textbf{Exclusion Restriction}. The potential outcomes do not depend on the value of the running variable inside the window, except via the treatment assignment indicator: \begin{equation}\label{eq:flatout} Y_i(d,x)=Y_i(d)\quad \forall\, i \text{ such that } X_i \in W_0\text{.} \end{equation} This condition requires the potential outcomes to be unrelated to the score inside the window. \end{itemize} Importantly, these two assumptions would not necessarily be satisfied by randomly assigning the treatment inside $W_0$, because the random assignment of $D_i$ inside $W_0$ does not by itself guarantee that the score and the potential outcomes are unrelated (the exclusion restriction). For example, imagine an RD design based on elections, where the treatment is the electoral victory of a political party, the score is the vote share, and the party wins the election if the vote share is above 50\%. Even if, in very close races, election winners were chosen randomly instead of based on their actual vote share, donors might still believe that districts where the party obtained a bare majority are more likely to support the party again, and thus they may donate more money to the races where the party's vote share was just above 50\% than to races where the party was just below 50\%. If donations are effective in boosting the party, this would induce a positive relationship near the cutoff between the running variable (vote share) and the outcome of interest (victory in the future election). The discussion above illustrates why the unconfounded assignment assumption in equation (\ref{eq:uncon}) is not enough for a local randomization approach to RD analysis. We must explicitly assume that the score and the potential outcomes are unrelated inside $W_0$, which is not implied by (\ref{eq:uncon}).
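A small simulation (with made-up parameter values) illustrates the point numerically: inside any window, treated units necessarily have higher scores than control units, so if the potential outcomes depend on the score, a simple difference in means is biased for the treatment effect even in a narrow window.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
tau, w, n = 1.0, 0.1, 5000            # true effect, half-length of W_0, sample size

def diff_in_means(gamma):
    """Difference in mean outcomes inside W_0 when Y depends on the score with slope gamma."""
    x = rng.uniform(-w, w, n)                     # scores inside the window
    d = (x >= 0).astype(float)                    # RD assignment rule
    y = tau * d + gamma * x + rng.normal(0, 0.1, n)
    return y[d == 1].mean() - y[d == 0].mean()

print(diff_in_means(0.0))   # exclusion restriction holds: estimate close to tau = 1
print(diff_in_means(5.0))   # score affects outcomes: biased upward by about gamma*w = 0.5
\end{verbatim}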
This issue is discussed in detail by \citet*{Sekhon-Titiunik_2017_AIE}, who use several examples to show that the exclusion restriction in (\ref{eq:flatout}) is neither implied by assuming statistical independence between the potential outcomes and the treatment in $W_0$, nor by assuming that the running variable is randomly assigned in $W_0$. In addition, see \citet*{Sekhon-Titiunik_2016_ObsStud} for a discussion of the status of RD designs among observational studies, and \cite{Titiunik2020-Chapter} for a discussion of the connection between RD designs and natural experiments. \subsection{Estimation and Inference within a Known Window}\label{sec:estinf2} The local randomization conditions (\ref{eq:uncon}) and (\ref{eq:flatout}) open new possibilities for RD estimation and inference. Of course, these conditions are strong and, just like the continuity conditions in Section \ref{sec:continuity}, they are not implied by the RD treatment assignment rule but rather must be assumed in addition to it \citep{Sekhon-Titiunik_2016_ObsStud}. Because these assumptions are strong and are inherently untestable, it is crucial for researchers to provide as much information as possible regarding their plausibility. We discuss this issue in Section \ref{sec:falsification}, where we present several strategies for empirical falsification of the RD assumptions. The key assumption of the local randomization approach is that there exists a neighborhood around the cutoff in which (\ref{eq:uncon}) and (\ref{eq:flatout}) hold---implying that we can treat the RD design as a randomized experiment near the cutoff. We denote this neighborhood by $W_0=[c-w,c+w]$, where $c$ continues to be the RD cutoff, but we now use the notation $w$ as opposed to $h$ to emphasize that $w$ will be chosen and interpreted differently from the previous section. Furthermore, to ease the exposition, we start by assuming that $W_0$ is known, and then discuss how to select $W_0$ based on observable information. This data-driven window selection step will be crucial in applications, as in most empirical examples $W_0$ is fundamentally unknown, if it exists at all---but see \cite{Hyytinen-etal_2018_QE} for an exception. Given a window $W_0$, the local randomization framework summarized by assumptions (\ref{eq:uncon}) and (\ref{eq:flatout}) allows us to analyze the RD design employing the standard tools of the classical analysis of experiments. Depending on the available number of observations inside the window, the experimental analysis can follow two different approaches. In the Fisherian approach, also known as a randomization inference approach, potential outcomes are considered non-random, the assignment mechanism is assumed to be known, and this assignment is used to calculate the exact finite-sample distribution of a test statistic of interest under the null hypothesis that the treatment effect is zero for every unit. On the other hand, in the large-sample approach, the potential outcomes may be fixed or random, the assignment mechanism need not be known, and the finite-sample distribution of the test statistic is approximated under the assumption that the number of observations is large. Thus, in contrast to the Fisherian approach, in the large-sample approach inferences are based on test statistics whose finite-sample properties are unknown, but whose null distribution can be approximated by a Normal distribution under the assumption that the sample size is large enough. 
Below we briefly review both Fisherian and large-sample methods for analysis of RD designs under a local randomization framework. Fisherian methods will be most useful when the number of observations near the cutoff is small, which may render large-sample methods invalid. In contrast, in applications with many observations, large-sample methods will be the most natural approach, and Fisherian methods can be used as a robustness check. \subsubsection{Fisherian approach} In the Fisherian framework, the potential outcomes are seen as fixed, non-random magnitudes from a finite population of $n$ units. The information on the observed sample of units $i=1,\ldots,n$ is not seen as a random draw from an infinite population, but as the population of interest. This feature allows for the derivation of the finite-sample-exact distribution of test statistics without relying on approximations. We follow the notation in \citet*{Cattaneo-Titiunik-VazquezBare_2017_JPAM}, adapting slightly our previous notation. Let $\mathbf{X}=(X_1,\ldots,X_n)'$ denote the $n\times 1$ column vector collecting the observed running variable of all units in the sample, and $\mathbf{D}=(D_1,\ldots,D_n)'$ be the vector collecting treatment assignments. The non-random potential outcomes for each unit $i$ are denoted by $y_i(d,x)$ where $d$ and $x$ are possible values for $D_i$ and $X_i$. All the potential outcomes are collected in the vector $\mathbf{y}(\mathbf{d},\mathbf{x})$. The vector of observed outcomes is simply the vector of potential outcomes, evaluated at the observed values of the treatment and running variable, $\mathbf{Y}=\mathbf{y}(\mathbf{D},\mathbf{X})$. Because potential outcomes are assumed non-random, all the randomness in the model enters through the running variable vector $\mathbf{X}$, and the treatment assignment $\mathbf{D}$ which is a function of it. In what follows, we let the subscript ``0'' indicate the subvector inside the neighborhood $W_0$, so that $\mathbf{X}_0$, $\mathbf{D}_0$ and $\mathbf{Y}_0$ denote the vectors of running variables, treatment assignments and observed outcomes inside $W_0$. Finally, $N_0^+$ will denote the number of observations inside the neighborhood and above the cutoff (treated units inside $W_0$), and $N_0^-$ the number of units in the neighborhood below the cutoff (control units in $W_0$), with $N_0=N_0^++N_0^-$. Note that using the fixed-potential outcomes notation, the exclusion restriction becomes $y_i(d,x)=y_i(d),\; \forall\, i \text{ in } W_0$ \citep*[see assumption 1(b) in][]{Cattaneo-Frandsen-Titiunik_2015_JCI}. In this Fisherian framework, a natural null hypothesis to test for the presence of a treatment effect is the \textit{sharp null of no effect}: \[\mathsf{H}^s_0:\quad y_i(1)=y_i(0), \quad \forall i \text{ in } W_0\text{.}\] This sharp null hypothesis states that switching treatment status does not affect potential outcomes, implying that the treatment does not have an effect on \textit{any} unit inside the window. In this context, a hypothesis is \textit{sharp} when it allows the researcher to impute all the missing potential outcomes. Thus, $\mathsf{H}^s_0$ is sharp because when there is no effect, all the missing potential outcomes are equal to the observed ones. 
Under $\mathsf{H}^s_0$, the researcher can impute all the missing potential outcomes and, since the assignment mechanism is assumed to be known, it is possible to calculate the distribution of any test statistic $T(\mathbf{D}_0,\mathbf{Y}_0)$ to assess how far in the tails the observed statistic falls. This reasoning provides a way to calculate a p-value for $\mathsf{H}^s_0$ that is finite-sample exact and does not require any distributional approximation. This randomization inference p-value is obtained by calculating the value of $T(\mathbf{D}_0,\mathbf{Y}_0)$ for all possible values of the treatment vector inside the window $\mathbf{D}_0$, and then computing the probability of $T(\mathbf{D}_0,\mathbf{Y}_0)$ being larger than the observed value $T_\mathsf{obs}$. See \citet*{Cattaneo-Frandsen-Titiunik_2015_JCI}, \citet*{Cattaneo-Titiunik-VazquezBare_2017_JPAM} and \citet*{Cattaneo-Titiunik-VazquezBare_2016_Stata} for further details and implementation issues. See also \citet*{Cattaneo-Idrobo-Titiunik_2019_Book2} for a practical introduction to local randomization methods. In addition to testing the null hypothesis of no treatment effect, the researcher may be interested in obtaining a point estimate for the effect. When condition (\ref{eq:flatout}) holds, a difference in means between treated and controls inside the window, \[\Delta=\frac{1}{N^+_0}\sum_{i=1}^n Y_iD_i - \frac{1}{N^-_0}\sum_{i=1}^n Y_i(1-D_i),\] where the sums run over all observations inside $W_0$, is unbiased for the sample average treatment effect in $W_0$, \[\tau_0=\frac{1}{N_0}\sum_{i=1}^n (y_i(1)-y_i(0)).\] However, it is important to emphasize that the randomization inference method described above cannot test hypotheses on $\tau_0$: the null hypothesis that $\tau_0=0$ is not sharp, that is, without further restrictive assumptions it does not allow the researcher to unequivocally impute all the missing potential outcomes, which is a necessary condition for using Fisherian methods. Hence, under the assumptions imposed so far, hypothesis testing on $\tau_0$ has to be based on asymptotic approximations, as described in Section \ref{sec:LR-largesample}. The assumption that the potential outcomes do not depend on the running variable, stated in Equation (\ref{eq:flatout}), can be relaxed by assuming a local parametric model for the relationship between $\mathbf{Y}_0$ and $\mathbf{X}_0$. Specifically, \citet*{Cattaneo-Titiunik-VazquezBare_2017_JPAM} assume there exists a transformation $\phi(\cdot)$ such that the transformed outcomes do not depend on $\mathbf{X}_0$. This transformation could be, for instance, a linear adjustment that removes the slope whenever the relationship between outcomes and the running variable is assumed to be linear. The case where potential outcomes do not depend on the running variable is a particular case in which $\phi(\cdot)$ is the identity function. Both inference and estimation can therefore be conducted using the transformed outcomes when the assumption that the potential outcomes and the score are unrelated is not reasonable, or as a robustness check.

\subsubsection{Large-Sample approach} \label{sec:LR-largesample}

In the most common large-sample approach, we treat potential outcomes as random variables, and often see the units in the study as a random sample from a larger population. (Though in the Neyman large-sample approach, potential outcomes are fixed; see \cite{Imbens-Rubin_2015_Book} for more discussion.)
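Before describing the large-sample approach further, the sketch below (with entirely made-up data; the window counts, outcome values, and number of permutations are arbitrary) illustrates the Fisherian calculations just described: the difference-in-means statistic and a randomization p-value for the sharp null, approximated here by random permutations of the assignment vector rather than by full enumeration. The last line previews the large-sample test discussed next.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def diff_in_means(y, d):
    return y[d == 1].mean() - y[d == 0].mean()

def fisher_pvalue(y, d, n_perm=10000):
    """Randomization p-value for the sharp null of no effect under a fixed-margins
    assignment; random permutations approximate the full enumeration."""
    t_obs = abs(diff_in_means(y, d))
    t_perm = np.array([abs(diff_in_means(y, rng.permutation(d))) for _ in range(n_perm)])
    return (t_perm >= t_obs).mean()

# Made-up data for a window with 20 treated and 19 control observations
d = np.repeat([1, 0], [20, 19])
y = np.concatenate([rng.normal(-5.0, 10.0, 20), rng.normal(5.0, 10.0, 19)])

print(diff_in_means(y, d), fisher_pvalue(y, d))
# Large-sample analogue: Welch t-test of equal means across the cutoff
print(stats.ttest_ind(y[d == 1], y[d == 0], equal_var=False).pvalue)
\end{verbatim}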
In addition to the randomness of the potential outcomes, this approach differs from the Fisherian approach in its null hypothesis of interest. Given the randomness of the potential outcomes, the focus is no longer on the sharp null but rather typically on the hypothesis that the average treatment effect is zero. In our RD context, this null hypothesis can be written as \[\mathsf{H}_0:\quad \mathbb{E}[Y_i(1)]=\mathbb{E}[Y_i(0)], \quad \forall i \text{ in } W_0\text{.}\] Inference in this case is based on the usual large-sample methods for the analysis of experiments, relying on the usual difference-in-means tests and Normal-based confidence intervals. See \cite{Imbens-Rubin_2015_Book} and \cite{Cattaneo-Idrobo-Titiunik_2019_Book2} for details.

\subsection{Window Selection}

In practice, the window $W_0$ in which the RD design can be seen as a randomized experiment is not known and needs to be estimated. \citet*{Cattaneo-Frandsen-Titiunik_2015_JCI} propose a window selection mechanism based on the idea that in a randomized experiment, the distribution of observed covariates has to be equal between treated and controls. Thus, if the local randomization assumption is plausible in any window, it should be plausible in a window where we cannot reject that the predetermined characteristics of treated and control units are on average identical. The idea of this procedure is to select a test statistic that summarizes differences in a vector of covariates between groups, such as the difference-in-means or the Kolmogorov-Smirnov statistic, and to start with an initial ``small'' window. Inside this initial window, the researcher conducts a test of the null hypothesis that covariates are balanced between treated and control groups. This can be done, for example, by assessing whether the minimum p-value from the tests of differences-in-means for each covariate is larger than some specified level, or by conducting a joint test using, for instance, a Hotelling statistic. If the null hypothesis is not rejected, the window is enlarged and the process is repeated. The selected window will be the widest window in which the null hypothesis is not rejected. Common choices for the test statistic $T(\mathbf{D}_0,\mathbf{Y}_0)$ are the difference-in-means between treated and controls, the two-sample Kolmogorov-Smirnov statistic, or the rank sum statistic. The minimum window to start the procedure should contain enough observations to ensure sufficient statistical power to detect covariate imbalances. The appropriate minimum number of observations will naturally depend on unknown, application-specific parameters, but based on standard power calculations we suggest using no less than approximately 10 observations in each group. See \citet*{Cattaneo-Frandsen-Titiunik_2015_JCI} and \citet*{Cattaneo-Titiunik-VazquezBare_2017_JPAM} for methodological details, \citet*{Cattaneo-Idrobo-Titiunik_2019_Book2} for a practical introduction, and \citet*{Cattaneo-Titiunik-VazquezBare_2016_Stata} for software implementation.
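A stylized sketch of this iterative procedure is given below (Python, simulated data). The covariates, the window grid, the 0.15 threshold, and the use of Welch t-tests in place of randomization-based balance tests are simplifications made for illustration; they are not the implementation used in the software referenced above.
\begin{verbatim}
import numpy as np
from scipy import stats

def select_window(x, Z, cutoff, grid, threshold=0.15):
    """Widest symmetric window such that, in it and in every smaller window of the
    grid, the smallest covariate-balance p-value stays above `threshold`.
    Returns None if imbalance is flagged already in the smallest window."""
    selected = None
    for w in grid:                                    # enlarge the window step by step
        inside = np.abs(x - cutoff) <= w
        d = x[inside] >= cutoff
        pvals = [stats.ttest_ind(z[d], z[~d], equal_var=False).pvalue
                 for z in Z[inside].T]                # one balance test per covariate
        if min(pvals) < threshold:                    # imbalance detected: stop
            break
        selected = w
    return selected

# Simulated data: two covariates, one mildly related to the score (values made up)
rng = np.random.default_rng(4)
n = 4000
x = rng.uniform(-1, 1, n)
Z = np.column_stack([0.5 * x + rng.normal(size=n), rng.normal(size=n)])
print(select_window(x, Z, cutoff=0.0, grid=np.arange(0.05, 1.0, 0.01)))
\end{verbatim}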
\bigskip \begin{tcolorbox}[colframe=blue!25, colback=blue!10, coltitle=blue!20!black, title=\textbf{Local randomization framework: summary}] \begin{enumerate} \item \textbf{Key assumptions}: \begin{enumerate} \item There exists a window $W_0$ in which the treatment assignment mechanism satisfies two conditions: \begin{itemize} \item Probability of receiving a particular score value in $W_0$ does not depend on the potential outcomes \item Exclusion restriction or parametric relationship between $\mathbf{Y}$ and $\mathbf{X}$ in $W_0$ \end{itemize} \end{enumerate} \item \textbf{Window selection}: \begin{enumerate} \item Goal: find a window where the key assumptions are plausible \item Iterative procedure to balance observed covariates between groups \item Choice parameters: test statistic, stopping rule \end{enumerate} \item \textbf{Estimation}: \begin{enumerate} \item Difference in means between treated and controls within the neighborhood, OR \item Flexible parametric modeling to account for the effect of $X_i$ \end{enumerate} \item \textbf{Inference}: \begin{enumerate} \item Fisherian randomization-based inference or large-sample inference \item Conditional on sample and chosen window \item Choice parameters: test statistic, randomization mechanism in the Fisherian approach \end{enumerate} \end{enumerate} \end{tcolorbox}

\section{Falsification Methods}\label{sec:falsification}

Every time researchers use an RD design, they must rely on identification assumptions that are fundamentally untestable, and that do not hold by construction. If we employ a continuity-based approach, we must assume that the regression functions are smooth functions of the score at the cutoff. If, on the other hand, we employ a local randomization approach, we must assume that there exists a window where the treatment behaves as if it had been randomly assigned. These assumptions may be violated for many reasons. Thus, it is crucial for researchers to provide as much empirical evidence as possible about their validity. Although testing the assumptions directly is not possible, there are several empirical regularities that we expect to hold in most cases where the assumptions are met. We discuss some of these tests below. Our discussion is brief, but we refer the reader to \citet*{Cattaneo-Idrobo-Titiunik_2019_Book1} for an extensive practical discussion of RD falsification methods, and additional references. \begin{enumerate} \item \textbf{Covariate Balance}. If either the continuity or local randomization assumptions hold, the treatment should not have an effect on any predetermined covariates, that is, on covariates whose values are realized before the treatment is assigned. Since the treatment effect on predetermined covariates is zero by construction, consistent evidence of non-zero effects on covariates that are likely to be confounders would raise questions about the validity of the RD assumptions. For implementation, researchers should analyze each covariate as if it were an outcome. In the continuity-based approach, this requires choosing a bandwidth and performing local polynomial estimation and inference within that bandwidth. Note that the optimal bandwidth is naturally different for each covariate. In the local randomization approach, the null hypothesis of no effect should be tested for each covariate using the same choices as used for the outcome.
If the window is chosen using the covariate balance procedure discussed above, the selected window will automatically be a region where no treatment effects on covariates are found. \item \textbf{Density of Running Variable}. Another common falsification test is to study the number of observations near the cutoff. If units cannot precisely manipulate the value of the score that they receive, we should expect as many observations just above the cutoff as just below it. In contrast, for example, if units had the power to affect their score and they knew that the treatment was very beneficial, we should expect more units just above the cutoff (where the treatment is received) than just below it. In the continuity-based framework, the procedure is to test the null hypothesis that the density of the running variable is continuous at the cutoff \citep{McCrary_2008_JoE}, which can be implemented in a more robust way via the novel density estimator proposed in \citet*{Cattaneo-Jansson-Ma_2019_JASA}. In the local randomization framework, \citet*{Cattaneo-Titiunik-VazquezBare_2017_JPAM} propose a novel implementation via a finite sample exact binomial test of the null hypothesis that the number of treated and control observations in the chosen window is compatible with a 50\% probability of treatment assignment. \item \textbf{Alternative cutoff values}. Another falsification test estimates the treatment effect on the outcome at a cutoff value different from the actual cutoff used for the RD treatment assignment, using the same procedures used to estimate the effect at the actual cutoff but only using observations that share the same treatment status (all treatment observations if the artificial cutoff is above the real one, or all control observations if the artificial cutoff is below the real cutoff). The idea is that no treatment effect should be found at the artificial cutoff, since the treatment status is not changing. \item \textbf{Alternative bandwidth and window choices}. Another approach is to study the robustness of the results to small changes in the size of the bandwidth or window. For implementation, the main analysis is typically repeated for values of the bandwidth or window that are slightly smaller and/or larger than the values used in the main analysis. If the effects completely change or disappear for small changes in the chosen neighborhood, researchers should be cautious in interpreting their results. \end{enumerate}

\section{Empirical Illustration}

To illustrate all the RD methods discussed so far, we partially re-analyze the study by \citet*{KlasnjaTitiunik2017-APSR}. These authors study municipal mayor elections in Brazil between 1996 and 2012, examining the effect of a party's victory in the current election on the probability that the party wins a future election for mayor in the same municipality. The unit of analysis is the municipality; the score is the party's margin of victory at election $t$, defined as the party's vote share minus the vote share of its strongest opponent; and the treatment is the party's victory at $t$. Their original analysis focuses on the unconditional victory of the party at $t+1$ as the outcome of interest. In this illustration, our outcome of interest is instead the party's margin of victory at $t+1$, which is only defined for those municipalities where the incumbent party runs for reelection at $t+1$. We analyze this effect for the incumbent party (defined as the party that won election $t-1$, whichever party that is) in the full sample.
\citet*{KlasnjaTitiunik2017-APSR} discuss the interpretation and validity issues that arise when conditioning on the party's decision to re-run, but we ignore such issues here for the purposes of illustration. In addition to the outcome and score variables used for the main empirical analysis, our covariate-adjusted local polynomial methods, window selection procedure, and falsification approaches employ seven covariates at the municipality level: per-capita GDP, population, number of effective parties, and indicators for whether each of four parties (the Democratas, PSDB, PT and PMDB) won the prior ($t-1$) election. We implement the continuity-based analysis with the \texttt{rdrobust} software \citep*{Calonico-Cattaneo-Titiunik_2014_Stata,Calonico-Cattaneo-Titiunik_2015_R,Calonico-Cattaneo-Farrell-Titiunik_2017_Stata}, the local randomization analysis using the \texttt{rdlocrand} software \citep*{Cattaneo-Titiunik-VazquezBare_2016_Stata}, and the density test falsification using the \texttt{rddensity} software \citep*{Cattaneo-Jansson-Ma_2018_Stata}. The packages can be obtained for \texttt{R} and \texttt{Stata} from \url{https://sites.google.com/site/rdpackages/}. We do not present the code to conserve space, but the full code employed is available on the packages' website. \citet*{Cattaneo-Idrobo-Titiunik_2019_Book1,Cattaneo-Idrobo-Titiunik_2019_Book2} offer a detailed tutorial on how to use these packages, employing a different empirical illustration.

\subsection*{Falsification Analysis}

We start by presenting a falsification analysis. To assess the validity of the continuity-based analysis, we examine the density of the running variable as well as the effect of the RD treatment on several predetermined covariates. We first report the result of a continuity-based density test, using the local polynomial density estimator developed by \citet*{Cattaneo-Jansson-Ma_2019_JASA}. The estimated difference in the density of the running variable at the cutoff is $-0.0753$, and the p-value associated with the test of the null hypothesis that this difference is zero is $0.94$. This test is illustrated in Figure \ref{fig:rddensity}, which shows the local-polynomial-estimated density of the incumbent party's margin of victory at $t$ at the cutoff, separately estimated from above and below the cutoff. These results indicate that the density of the running variable does not change abruptly at the cutoff, and are thus consistent with the assumption that parties do not precisely manipulate their margin of victory to ensure a win in close races. In addition, we also implemented the finite sample exact binomial tests proposed in \citet*{Cattaneo-Titiunik-VazquezBare_2017_JPAM}, which confirmed the empirical results obtained via local polynomial density methods. We do not report these numerical results to conserve space, but they can be consulted using the accompanying replication files. \begin{figure}[H] \centering \caption{Estimated density of running variable} \includegraphics[width=0.50\textwidth]{empapp/rddensity.pdf} \label{fig:rddensity} \end{figure} We also present local polynomial point estimates of the effect of the incumbent party's victory on each of the seven predetermined covariates mentioned above, and we perform robust local-polynomial inference to obtain confidence intervals and p-values for these effects. Since these covariates are all determined before the outcome of the election at $t$ is known, the treatment effect on each of them is zero by construction.
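As an aside, the binomial check mentioned above is straightforward to reproduce with standard statistical software; a minimal sketch with hypothetical counts (not the counts from our application) follows.
\begin{verbatim}
from scipy import stats

# Hypothetical numbers of control and treated observations in a narrow window
# around the cutoff; absent manipulation, each unit falls on either side with
# probability 1/2, which the exact binomial test evaluates
n_left, n_right = 25, 40
print(stats.binomtest(n_right, n_left + n_right, p=0.5).pvalue)
\end{verbatim}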
Our estimated effects and statistical inferences for these covariates should therefore be consistent with these known null effects. We present the results graphically in Figures \ref{fig:covs1} and \ref{fig:covs2} using typical RD plots \citep*{Calonico-Cattaneo-Titiunik_2015_JASA}, where binned means of the outcome within intervals of the score are plotted against the midpoint of the score in each interval. A fourth-order polynomial, separately estimated above and below the cutoff, is superimposed to show the global shape of the regression functions. In these plots, we also report the formal local polynomial point estimate, $95\%$ robust confidence interval, robust p-value, and number of observations within the bandwidth. The bandwidth (not reported) is chosen in each case to be MSE-optimal. As we can see, the incumbent party's bare victory at $t$ does not have an effect on any of the covariates. All 95\% confidence intervals contain zero, most of these intervals are approximately symmetric around zero, and most point estimates are small. These results show that there are no obvious or notable covariate differences at the cutoff between municipalities where the incumbent party barely won at $t$ and municipalities where the incumbent party barely lost at $t$. \begin{figure}[H] \centering \subfloat[][GDP per capita]{ \includegraphics[width=0.55\textwidth]{empapp/pibpc.pdf} \label{fig:covs2GDP}} \subfloat[][Population]{ \includegraphics[width=0.55\textwidth]{empapp/population.pdf} \label{fig:covs2Pop}} \qquad \subfloat[][No. Effective Parties]{ \includegraphics[width=0.55\textwidth]{empapp/numpar_candidates_eff.pdf} \label{fig:covs2EffP}} \subfloat[][PMDB Victory at $t-1$]{ \includegraphics[width=0.55\textwidth]{empapp/party_PMDB_wonlag1_b1.pdf} \label{fig:covs2PMDB}} \caption{RD Effects on Predetermined Covariates} \label{fig:covs1} \end{figure} \begin{figure}[H] \centering \subfloat[][DEM Victory at $t-1$]{ \includegraphics[width=0.55\textwidth]{empapp/party_DEM_wonlag1_b1.pdf} \label{fig:covs1DEM}} \subfloat[][PSDB Victory at $t-1$]{ \includegraphics[width=0.55\textwidth]{empapp/party_PSDB_wonlag1_b1.pdf} \label{fig:covs1PSDB}} \qquad \subfloat[][PT Victory at $t-1$]{ \includegraphics[width=0.55\textwidth]{empapp/party_PT_wonlag1_b1.pdf} \label{fig:covs1PT}} \caption{RD Effects on Predetermined Covariates} \label{fig:covs2} \end{figure}

\subsection*{Outcome Analysis}

Since the evidence from our falsification analysis is consistent with the validity of our RD design, we now proceed to analyze the treatment effect on the main outcome of interest---the incumbent party's margin of victory at $t+1$. This effect is illustrated in Figure \ref{fig:rdplot}. A stark jump can be seen at the cutoff, where the margin of victory of the incumbent party at $t+1$ abruptly decreases as the score crosses the cutoff. This indicates that municipalities where the incumbent party barely wins at $t$ obtain a lower margin of victory at election $t+1$ compared to municipalities where the incumbent party barely loses at $t$, one of the main substantive findings in \citet*{KlasnjaTitiunik2017-APSR}. \begin{figure}[H] \centering \caption{Effect of Victory at $t$ on Vote Margin at $t+1$ \\ Incumbent Party, Brazilian Mayoral Elections, 1996-2012 \label{fig:rdplot}} \vspace{-0.5in} \includegraphics[scale=0.8]{empapp/rdplot-Outcome.pdf} \end{figure} We now analyze this effect formally, first using the continuity-based framework with local polynomial methods ($p=1$) and an MSE-optimal bandwidth.
For inference, we use robust bias-corrected 95\% confidence intervals. As we can see in Table \ref{tab:cont}, the MSE-optimal bandwidth is estimated to be around 15.3 percentage points, and within this bandwidth, the RD local-polynomial point estimate is about -6.3. This shows that, at the cutoff, a victory at $t$ reduces the incumbent party's vote margin at $t+1$ by about 6 percentage points in those municipalities where the party seeks reelection. The 95\% robust bias-corrected confidence interval ranges from -10.224 to -2.945, rejecting the null hypothesis of no effect with a robust p-value of about 0.0004. Including covariates leads to very similar results: the MSE-optimal bandwidth changes to 14.45, and the point estimate moves from -6.28 to -6.10, a very small change, as expected when the covariates are truly predetermined. \begin{table}[H] \caption{Continuity-based RD Analysis: Effect of Victory at $t$ on Vote Margin at $t+1$ \\ Incumbent Party, Brazilian Mayoral Elections, 1996-2012 }\label{tab:cont} \begin{tabular}{lcccccc} \toprule & RD effect ($\hat{\tau}$) & 95\% Robust Conf. Int. & Robust p-val & $h$ & $N_l$ & $N_r$ \\ \midrule Standard & -6.281 & {[}-10.224 , -2.945 {]} & .0004 & 15.294 & 1533 & 1740 \\ Using covariates & -6.106 & {[}-9.881 , -2.656 {]} & .0007 & 14.453 & 1482 & 1672 \\ \bottomrule \end{tabular} \end{table} Second, we analyze the main outcome using a local randomization approach. For this, we must choose the window around the cutoff where the assumption of local randomization appears plausible (if such a window exists). We implement our window selection procedure using the list of covariates mentioned above, increments of 0.01 percentage points, and a cutoff p-value of 0.15. We use Fisherian randomization-based inference with the difference-in-means as the test statistic and assuming a fixed-margins randomization procedure using the actual number of treated and controls in each window. As shown in Table \ref{tab:winsel}, starting at the $[-0.05, 0.05]$ window and considering all symmetric windows in 0.01 increments, we see that all windows between $[-0.05, 0.05]$ and $[-0.15, 0.15]$ have a minimum p-value above 0.15. The window $[-0.16, 0.16]$ is the first window where the minimum p-value drops below 0.15 (it drops to 0.062). Thus, our selected window is $[-0.15, 0.15]$, which has exactly 38 observations on each side of the cutoff. \begin{table}[H] \caption{Minimum balance p-value in symmetric windows around the cutoff \\ Running variable is Vote Margin at $t$ of Incumbent Party, Brazilian Mayoral Elections, 1996-2012 \label{tab:winsel}} \centering \begin{tabular}{ccccc} \toprule Window & Minimum balance p-val & Covariate of min p-val & $N_0^-$ & $N_0^+$\\ \midrule {[}0.05,0.05{]} & 0.179 & PSDB previous victory & 10 & 14 \\ {[}0.06,0.06{]} & 0.302 & PSDB previous victory & 13 & 16 \\ {[}0.07,0.07{]} & 0.357 & No. effective parties & 16 & 16 \\ {[}0.08,0.08{]} & 0.231 & No. effective parties & 18 & 20 \\ {[}0.09,0.09{]} & 0.176 & No. effective parties & 18 & 22 \\ {[}0.10,0.10{]} & 0.34 & PT previous victory & 23 & 28 \\ {[}0.11,0.11{]} & 0.335 & Population & 24 & 30 \\ {[}0.12,0.12{]} & 0.208 & No. effective parties & 26 & 31 \\ {[}0.13,0.13{]} & 0.201 & PT previous victory & 28 & 33 \\ {[}0.14,0.14{]} & 0.167 & No. effective parties & 34 & 36 \\ {[}0.15,0.15{]} & 0.157 & No. effective parties & 38 & 38 \\
{[}0.16,0.16{]} & 0.062 & PT previous victory & 42 & 41 \\ {[}0.17,0.17{]} & 0.114 & PT previous victory & 43 & 43 \\ {[}0.18,0.18{]} & 0.044 & PT previous victory & 49 & 45 \\ {[}0.19,0.19{]} & 0.065 & PT previous victory & 51 & 50 \\ {[}0.20,0.20{]} & 0.054 & PT previous victory & 53 & 50 \\ \bottomrule \end{tabular} \end{table} In order to further illustrate the results in Table \ref{tab:winsel}, Figure \ref{fig:pvalues} shows the associated minimum p-values for all symmetric windows in 0.01 increments between $[-0.05, 0.05]$ and $[-2.00, 2.00]$. \begin{figure}[H] \centering \caption{Window Selector Based on Covariates \\ Incumbent Party, Brazilian Mayoral Elections, 1996-2012 \\ Running variable is Incumbent party's Margin of Victory at $t$ } \label{fig:pvalues} \includegraphics[scale=0.9]{empapp/windowsel.pdf} \end{figure} In Table \ref{tab:locran}, we present our inference results in the chosen window $[-0.15, 0.15]$, reporting both Fisherian inference (using the same choices as those used in the window selection procedure) and large-sample p-values. The treated-control difference-in-means is $-9.992$, with a Fisherian p-value of approximately $0.083$ and a large-sample p-value of about $0.070$, rejecting both the sharp null hypothesis and the hypothesis of no average effect at the 10\% level. The fact that the point estimate continues to be negative and that both p-values are below 10\% suggests that the continuity-based results are broadly robust to a local-randomization assumption, as both approaches lead to similar conclusions. The local randomization p-value is much larger than the p-value from the continuity-based local polynomial analysis, but this is likely due, at least in part, to the loss of observations, as the sample size goes from a total of 3,273 (1,533 + 1,740) observations to just 39 (19 + 20). (The discrepancy in the number of observations in $[-0.15,0.15]$ between the outcome analysis and the window-selector analysis stems from missing values in the outcome, which is only observed when the incumbent party runs for reelection at $t+1$.) \begin{table}[H] \caption{Local Randomization RD Analysis: Effect of Victory at $t$ on Vote Margin at $t+1$ \\ Incumbent Party, Brazilian Mayoral Elections, 1996-2012 \label{tab:locran}} \centering \begin{tabular}{ccccccc} \toprule & RD effect $\hat{\tau}_0$ & Fisher p-val & Large-sample p-val & Window & $N_0^-$ & $N_0^+$\\ \midrule & -9.992 & 0.083 & 0.0697 & [-0.15, 0.15] & 19 & 20\\ \bottomrule \end{tabular} \end{table} \clearpage

\section{Final Remarks}

We reviewed two alternative frameworks for analyzing sharp RD designs. First, the continuity-based approach, which is more common in empirical work, assumes that the unknown regression functions are continuous at the cutoff. Estimation is conducted nonparametrically using local polynomial methods, and bandwidth selection relies on minimizing a criterion such as the MSE or the coverage error probability. Inference under this framework relies on large sample distributional approximations, and requires robust bias correction to account for misspecification errors local to the cutoff. Second, the local randomization approach formalizes the intuition that RD designs can be interpreted as local experiments in a window around the cutoff. In this case, the window is chosen to ensure that treated and controls are comparable in terms of observed predetermined characteristics, as in a randomized experiment.
Within this window, inference is conducted using randomization inference methods assuming that potential outcomes are non-random, or other canonical analysis of experiments methods based on large sample approximations. These two approaches rely on different assumptions, each with its own advantages and disadvantages, and thus we see them as complementary. On the one hand, the continuity-based approach is agnostic about the data generating process and does not require any modeling or distributional assumptions on the regression functions. This generality comes at the expense of basing inference on large-sample approximations, which may not be reliable when the sample size is small (a case that is common in RD designs, given their local nature). On the other hand, the Fisherian local randomization approach provides tools to conduct inference that is exact in finite samples and does not rely on distributional approximations. This type of inference is more reliable than large-sample-based inference when the sample size is small. And if the sample size near the cutoff is large, the analysis can also be conducted using standard large-sample methods for the analysis of experiments. However, the conclusions drawn under the local randomization approach (either Fisherian or large-sample) require stronger assumptions (unconfounded assignment, exclusion restriction) than the continuity-based approach, are conditional on a specific sample and window, and do not generalize to other samples or populations. In sum, as in \citet*{Cattaneo-Titiunik-VazquezBare_2017_JPAM}, we recommend the continuity-based approach as the default approach for analysis, since it does not require parametric modeling assumptions and automatically accounts for misspecification bias in the regression functions when conducting estimation and inference. The local randomization approach can be used as a robustness check, especially when the sample size is small and the large-sample approximations may not be reliable. There is one particular case, however, in which the continuity-based approach is not applicable: when the running variable exhibits only a few distinct values or mass points (even if the sample size is large because of repeated values). In this case, the nonparametric methods for estimation, inference, and bandwidth selection described above do not apply, since they are developed under the assumption of local approximations and continuity of the score variable, which are violated by construction when the running variable is discrete with a small number of mass points. Thus, in settings where the running variable has few mass points, local randomization methods, possibly employing only the closest observations to the cutoff, are a more natural approach for analysis. We refer the reader to \citet*{Cattaneo-Idrobo-Titiunik_2019_Book2} for a more detailed discussion and practical illustration of this point. \newpage \bibliographystyle{jasa}
\section{Introduction}\label{sec:introduction} \par One of the fundamental characteristics of networked control systems (\text{NCSs}) \cite{zhang:2016} is the existence of an imperfect communication network between computational and physical entities. In such setups, an analytical framework to assess the impact of communication and data-rate limitations on the control performance is strongly needed. \par In this paper, we adopt information-theoretic tools to analyze these requirements. Specifically, we consider {\it sequential coding of correlated sources} {initially introduced by} \cite{viswanathan:2000} (see also \cite{tatikonda:2000}) (see Fig. \ref{fig:sequential_coding}), which is a generalization of the successive refinement source coding problem \cite{koshelev:1980,equitz:1991,rimoldi:1994}. In successive refinement, source coding is performed in (time) stages where one first describes the given source within a few bits of information and then tries to ``refine'' the description of the same source (at the subsequent stages) when more information is available. Sequential coding differs from successive refinement in that, at the second stage, encoding involves describing a correlated (in time) source as opposed to improving the description of the same source. To accomplish this task, sequential coding encompasses a spatio-temporal coding method. \begin{figure*}[htp] \centering \includegraphics[width=0.7\textwidth]{sequential_coding.pdf} \caption{Sequential coding of correlated sources.} \label{fig:sequential_coding} \end{figure*} In addition, sequential coding is a temporally zero-delay coding paradigm since both {\it encoding} and {\it decoding} must occur in real-time. The resulting zero-delay coding approach should not be confused with other existing works on zero-delay coding, see, e.g., \cite{witsenhausen:1979,gaarder-slepian:1982,teneketzis:2006,linder:2014,wood:2017,stavrou:2018stsp}, because it relies on the use of a spatio-temporal coding approach (see Fig. \ref{fig:sequential_coding}) whereas the aforementioned papers rely solely on temporal coding approaches. \subsection{{Literature review on sequential source coding}}\label{subsec:literature_review} {In what follows, we provide a detailed literature review on sequential source coding. However, in order to shed more light on the historical route of this coding paradigm, we distinguish the work of \cite{viswanathan:2000} (see also \cite{ma-ishwar:2011,yang:2011}) from the work of \cite{tatikonda:2000} because, although their results complement each other, their underlying motivation has been different. Indeed, \cite{viswanathan:2000} initiated this coding approach targeting video coding applications, whereas \cite{tatikonda:2000} aimed to develop a framework for delay-constrained systems and to study the communication theory in classical closed-loop control setups.} \paragraph*{{Sequential coding via \cite{viswanathan:2000}}} The authors of \cite{viswanathan:2000} characterized the minimum achievable rate-distortion region for two temporally correlated random variables with each being a vector of spatially independent and identically distributed ($\IID$) processes (also called ``frames'' or spatial vectors), subject to a coupled average distortion criterion. Over the last decade, the sequential coding approach of \cite{viswanathan:2000} was further studied in \cite{ma-ishwar:2011,yang:2011,yang:2014}.
In \cite{ma-ishwar:2011}, the authors used an extension of the framework of \cite{viswanathan:2000} to three time instants subject to a per-time distortion constraint to investigate the effect of sequential coding when possible coding delays occur within a multi-input multi-output system. Around the same time, \cite{yang:2011} generalized the framework of \cite{viswanathan:2000} to a finite number of time instants. Compared to \cite{viswanathan:2000} and \cite{ma-ishwar:2011}, their spatio-temporal source process is correlated over time whereas each frame is spatially jointly stationary and totally ergodic subject to a per-time average distortion criterion. More recently, the same authors in \cite{yang:2014} drew connections between sequential causal coding and {\it predictive sequential causal coding}, that is, for (first-order) Markov sources subject to a single-letter fidelity constraint, sequential causal coding and sequential predictive coding coincide. For three time instants of an $\IID$ vector source containing jointly Gaussian correlated processes (not necessarily Markov) an explicit expression of the minimum achievable sum-rate for a per-time mean-squared error ($\mse$) distortion is obtained in \cite{torbatian:2012}. Inspired by the framework of \cite{viswanathan:2000,ma-ishwar:2011}, Khina {\it et al.} in \cite{khina:2018} derived fundamental performance limitations in control-related applications. In their work, they considered a multi-track system that tracks several parallel time-varying Gauss-Markov processes with $\IID$ spatial components conveyed over a single shared wireless communication link (possibly prone to packet drops) to a minimum mean-squared error ($\mmse$) decoder. In their Gauss-Markov multi-tracking scenario, they provided lower and upper bounds in finite-time and in the per unit time asymptotic limit for the distortion-rate region of time-varying Gauss-Markov sources subject to a mean-squared error ($\mse$) distortion constraint. Their lower bound is characterized by a forward in time distortion allocation algorithm operating with {\it given data-rates at each time instant for a finite time horizon} whereas their upper bound is obtained by means of a differential pulse-code modulation (\text{DPCM}) scheme using entropy coded dithered quantization (\text{ECDQ}) using one dimensional lattice constrained by {\it data rates averaged across time} (for details on this coding scheme, see, e.g., \cite{farvardin:1985,zamir:2014}). Subsequently, they used these bounds in a scalar-valued quantized linear quadratic Gaussian (\text{LQG}) closed-loop control problem to find similar bounds on the minimum cost of control. \paragraph*{{Sequential coding via \cite{tatikonda:2000}}} {A similar framework to \cite{viswanathan:2000} was independently introduced and developed by Tatikonda in \cite[Chapter 5]{tatikonda:2000} (see also \cite{borkar:2001}) in the context of delay-constrained and control-related applications. Tatikonda in \cite{tatikonda:2000}, introduced an information theoretic quantity called {\it sequential rate distortion function ($\rdf$)} that is attributed to the works of Gorbunov and Pinsker in \cite{gorbunov-pinsker1972b,gorbunov-pinsker1972a}. 
Using the sequential $\rdf$, Tatikonda {\it et al.} in \cite{tatikonda:2004} studied the performance analysis and synthesis of a multidimensional fully observable time-invariant Gaussian closed-loop control system when a communication link exists between a stochastic linear plant and a controller whereas the performance criterion is the classical linear quadratic cost. The use of sequential $\rdf$ (also termed nonanticipative or causal $\rdf$ in the literature) in filtering applications is stressed in \cite{charalambous:2014,tanaka:2017tac,stavrou:2018siam}. Analytical expressions of lower and upper bounds for the setup of \cite{tatikonda:2004} including the cases where a linear fully observable time-invariant plant is driven by $\IID$ non-Gaussian noise processes or when the system is modeled by time-invariant partially observable Gaussian processes are derived in \cite{kostina:2019}. Tanaka {\it et al.} in \cite{tanaka:2016,tanaka:2018} studied the performance analysis and synthesis of a linear fully observable and partially observable Gaussian closed loop control problem when the performance criterion is the linear quadratic cost. Moreover, they showed that one can derive lower bounds in finite time and in the per unit time asymptotic limit by casting the problems as semidefinite representable and thus numerically computable by known solvers. An achievability bound on the asymptotic limit using a \text{DPCM}-based $\ecdq$ scheme that uses one dimensional quantizer at each dimension was also proposed. Lower and upper bounds for a general closed-loop control system subject to {\it asymptotically average total data-rate constraints} across the time are also investigated in \cite{silva:2011,silva:2016}. The lower bounds are obtained using sequential coding and directed information \cite{massey:1990} whereas the upper bounds are obtained via a sequential $\ecdq$ scheme using scalar quantizers.} \subsection{Contributions}\label{subsec:contributions} {In this paper, we first revisit the sequential coding framework developed by \cite{viswanathan:2000,tatikonda:2000,ma-ishwar:2011,yang:2011} to obtain the following main results. \begin{itemize} \item[{\bf (1)}] {Analytical} non-asymptotic {and finite-dimensional} lower and upper bounds on the minimum achievable total-rates (per-dimension) for a multi-track communication scenario similar to the one considered in \cite{khina:2018}. However, compared to \cite{khina:2018}, who derived distortion-rate bounds via forward recursions with {\it given data rates across a finite time horizon}, here we derive a lower bound subject to a dynamic reverse-waterfilling algorithm in which for a given distortion threshold $D>0$ we optimally assign the data-rates and the $\mse$ distortions at each time instant for a finite time horizon (Theorem \ref{theorem:fundam_lim_mmse}). We also implement our algorithm in Algorithm \ref{algo1}. Our lower bound is the basis to derive our upper bound on the minimum achievable total-rates (per dimension) using a sequential $\dpcm$-based $\ecdq$ scheme that is constrained by total-rates for a finite time horizon. For the specific rate constraint we use a dynamic reverse-waterfilling algorithm obtained from our lower bound to allocate the rate and the $\mse$ distortion at each time instant for the whole finite time horizon. 
This rate constraint is the fundamental difference compared to similar upper bounds derived in \cite[Theorem 6]{khina:2018} and \cite[Corollary 5.2]{silva:2011} (see also \cite{silva:2016,stavrou:2018stsp}), which {\it restrict the transmit rates to be fixed rates averaged across the time horizon or asymptotically averaged across time}. \item[{\bf (2)}] We obtain {analogous bounds to {\bf (1)}} on the minimum achievable total (across time) cost-rate function of control (per-dimension) for an \text{NCS} with time-varying quantized {$\lqg$} closed loops {operating with data-rates obtained from the solution of a reverse-waterfilling algorithm} (Theorems \ref{theorem:fund_limt_lqg}, \ref{theorem:achievability_lqg}). \end{itemize} } {\noindent{\it Discussion of the contributions and additional results.} The non-asymptotic lower bound in {\bf (1)} is obtained because for parallel processes all involved matrices in the characterization of the corresponding optimization problem {\it commute by pairs} \cite[p. 5]{harville:1997}; thus they are {\it simultaneously diagonalizable} by an orthogonal matrix \cite[Theorem 21.13.1]{harville:1997}, and the resulting optimization problem simplifies to one that resembles scalar-valued processes. The upper bound in {\bf (1)} is obtained because we are able to employ a lattice quantizer \cite{zamir:2014} within a quantization scheme with existing performance guarantees, such as the $\dpcm$-based $\ecdq$ scheme, and to use existing approximations from quantization theory for high-dimensional but possibly finite-dimensional quantizers with a $\mse$ performance criterion (see, e.g., \cite{conway-sloane1999}). {The non-asymptotic bounds derived in {\bf (2)} are obtained using the so-called ``weak separation principle'' of quantized $\lqg$ control (for details, see \S\ref{sec:ncs}) and well-known inequalities that are used in information theory. Interestingly, our lower bound in {\bf (2)} also reveals the minimum data rates allowable at each time instant in the cost-rate (or rate-cost) function of control to ensure (mean square) stability of the plant (see, e.g., \cite{nair:2004} for the definition) for the specific \text{NCS} (Remark \ref{remark:technical_remarks_lower_bound})}. Finally, for every bound in this paper, we derive the corresponding bounds in the infinite time horizon, recovering several known results in the literature (see Corollaries \ref{corollary:steady_state_rev_water}-\ref{corollary:steady_state_lqg_upper}).} \par {This paper is organized as follows. In \S\ref{sec:preliminaries} we give an overview of known results on sequential coding. In \S\ref{sec:examples} we derive non-asymptotic bounds and their corresponding per unit time asymptotic limits for a quantized state estimation problem. In \S\ref{sec:ncs}, we use the results of \S\ref{sec:examples} and the weak separation principle to derive non-asymptotic bounds and their corresponding per unit time asymptotic limits for a quantized $\lqg$ closed-loop control problem. In \S\ref{sec:discussion} we discuss several open questions that can be addressed based on this work, and we draw conclusions in \S\ref{sec:conclusions}. } \paragraph*{\bf Notation} $\mathbb{R}$ is the set of real numbers, $\mathbb{N}_{1}$ is the set of positive integers, and $\mathbb{N}_1^n\sr{\triangleqangle}{=}\{1,\ldots,n\}$ for $n\in\mathbb{N}_1$. Let $\mathbb{X}$ be a finite-dimensional Euclidean space, and ${\cal B}(\mathbb{X})$ be the Borel $\sigma$-algebra on $\mathbb{X}$.
A random variable ($\rv$) $X$ defined on some probability space ($\Omega, {\cal F}, {\bf P}$) is a map $X : \Omega\mapsto \mathbb{X}$. The probability distribution of a $\rv$ $X$ with realization $X=x$ on $\mathbb{X}$ is denoted by ${\bf P}_X\equiv{p}(x)$. The conditional distribution of a $\rv$ $Y$ with realization $Y=y$, given $X=x$ is denoted by ${\bf Q}_{Y|X}\equiv {q}(y|x)$. We denote the sequence of one-sided $\rvs$ by $X_{t,j} \sr{\triangleqangle}{=} (X_{t}, X_{t+1}, \ldots,X_j),~{t}\leq{j},~(t,j)\in {\mathbb N}_1\times\mathbb{N}_1$, and their values by $x_{t,j} \in {\mathbb X}_{t,j} \sr{\triangleqangle}{=} \times_{k={{t}}}^j {\mathbb X}_k$. We denote the sequence of ordered $\rvs$ with ``$i^{\text{th}}$'' spatial components by $X_{t,j}^i$, so that $X_{t,j}^i$ is a vector of dimension ``$i$'', and their values by $x_{t,j}^i \in {\mathbb X}_{t,j}^i \sr{\triangleqangle}{=} \times_{k={{t}}}^j {\mathbb X}^i_k$, where ${\mathbb X}^i_k\sr{\triangleqangle}{=}\left(\mathbb{X}_k(1),\ldots\mathbb{X}_k(i)\right)$. The notation ${X}\leftrightarrow{Y}\leftrightarrow{Z}$ denotes a Markov Chain ($\mc$) which means that $p(x|y,z)=p(x|y)$. We denote the diagonal of a square matrix by $\diag(\cdot)$ and the $p\times{p}$ identity matrix by $I_p$. If $A\in\mathbb{R}^{p{\times}{p}}$, we denote by $A\succeq{0}$ (resp., $A\succ{0}$) a positive semidefinite matrix (resp., positive definite matrix). We denote the determinant and trace of some matrix $A\in\mathbb{R}^{p\times{p}}$ by $|A|$ and $\trace(A)$, respectively. {We denote by $h(x)$ (resp. $h(x|y)$) the differential entropy of a distribution $p(x)$ (resp. $p(x|y)$). We denote ${\cal D}(P||Q)$ the relative entropy of probability distributions $P$ and $Q$. We denote by ${\bf E}\{\cdot\}$ the expectation operator and $||\cdot||_2$ the Euclidean norm.} Unless otherwise stated, when we say ``total'' distortion, ``total-rate'' or ``total-cost'' we mean with respect to time. Similarly, by referring to ``average total'' we mean normalized over the total finite time horizon. \section{Known Results on Sequential Coding}\label{sec:preliminaries} \par {In this section, we give an overview of the sequential causal coding introduced and analyzed independently by \cite[Chapter 5]{tatikonda:2000} and \cite{viswanathan:2000,ma-ishwar:2011,yang:2011}. We merge both frameworks because some results obtained in \cite{ma-ishwar:2011,yang:2011} complement the results of \cite[Chapter 5]{tatikonda:2000} and vice versa.} \par In the following analysis, we will consider processes for a fixed time-span $t\in\mathbb{N}^n_1$, i.e., ($X_1,\ldots,X_{n}$). Following \cite{ma-ishwar:2011,yang:2011}, we assume that the sequences of $\rvs$ are defined on alphabet spaces with finite cardinality. Nevertheless, these can be extended following for instance the techniques employed in \cite{oohama:1998} to continuous alphabet spaces as well (i.e., Gaussian processes) with $\mse$ distortion constraints. First, we use some definitions (with slight modifications to ease the readability of the paper) from \cite[\S{II}]{ma-ishwar:2011} and \cite[\S{I}]{yang:2011}. 
\begin{definition}(Sequential causal coding)\label{def:sequential_codes_iid} A spatial order $p$ sequential causal code ${\cal C}_p$ for the (joint) vector source ($X_1^p,X_2^p,\ldots,X_n^p$) is formally defined by a sequence of encoder and decoder pairs ($f^{(p)}_1,g^{(p)}_1$),$\ldots$,($f^{(p)}_n,g^{(p)}_n$) such that \begin{align} \begin{split}\label{seq:coding_functions} f_t^{(p)}&:~\mathbb{X}^p_{1,t}\times\underbrace{\{0,1\}^*\times\ldots\times\{0,1\}^*}_{t-1~\text{times}}\longrightarrow\{0,1\}^*\\ g_t^{(p)}&:~\underbrace{\{0,1\}^*\times\ldots\times\{0,1\}^*}_{t~\text{times}}\longrightarrow\mathbb{Y}^p_t,~t\in\mathbb{N}_1^n \end{split}, \end{align} where $\{0,1\}^*$ denotes the set of all binary sequences of finite length, and the encoders satisfy the property that at each time instant $t$ the range of $\{f_t:~t\in\mathbb{N}_1^n\}$, given any $t-1$ binary sequences, is an instantaneous code. Moreover, the encoded and reconstructed sequences of $\{X_t^p:~t\in\mathbb{N}_1^n\}$ are given by $S_t=f_t(X^p_{1,t},S_{1,t-1})$, with $S_t\in\mathbb{S}_t\subset\{0,1\}^*$, and $Y_t^p=g_t(S_{1,t})$, respectively, with $|\mathbb{Y}_t|<\infty$. Moreover, the expected rate in bits per symbol at each time instant (normalized over the spatial components) is defined as \begin{align} r_t\sr{\triangleqangle}{=}\frac{{\bf E}|S_t|}{p},~t\in\mathbb{N}_1^n,\label{coding_rate:sequential} \end{align} where $|S_t|$ denotes the length of the binary sequence $S_t$. \end{definition} \paragraph*{Distortion criterion} For each $t\in\mathbb{N}_1^n$, we consider a total (in dimension) single-letter distortion criterion. This means that the distortion between $X^p_t$ and $Y^p_t$ is measured by a function $d_t:~\mathbb{X}^p_t\times\mathbb{Y}^p_t\longrightarrow[0,\infty)$ with maximum distortion $d_t^{\max}=\max_{x^p_t,y^p_t}d_t(x^p_t,y_t^p)<\infty$ such that \begin{align} d_t(x^p_t,y^p_t)\sr{\triangleqangle}{=}\frac{1}{p}\sum_{i=1}^p{d}_t(x_t(i),y_t(i)).\label{distortion_function} \end{align} The per-time average distortion is defined as \begin{align} {\bf E}\left\{d_t(X_t^p,Y_t^p)\right\}\sr{\triangleqangle}{=}\frac{1}{p}\sum_{i=1}^p{\bf E}\left\{{d}_t(X_t(i),Y_t(i))\right\}.\label{average_distortion} \end{align} We remark that {the following results are still valid even if} the distortion function \eqref{distortion_function} depends on previous reproductions $\{Y^p_{1,t-1}:~t\in\mathbb{N}_1^n\}$ (see, e.g., \cite{ma-ishwar:2011}). \begin{definition}(Achievability)\label{def:seq:operation_meaning} A rate-distortion tuple $(R_{1,n},D_{1,n})\sr{\triangleqangle}{=}(R_1,\ldots,R_n,D_1,\ldots,D_n)$ for any ``$n$'' is said to be {\it achievable} for a given sequential causal coding system if for all $\epsilon>0$, there exists a sequential code $\{(f^{(p)}_t,g^{(p)}_t):~t\in\mathbb{N}_1^n\}$ {such that there exists ${\cal P}$ for which \begin{align} \begin{split}\label{seq:achievability} r_t&\leq{R_t}+\epsilon,\\ {\bf E}\left\{d_t(X_t^p,Y_t^p)\right\}&\leq{D_t}+\epsilon,~D_t\geq{0},~\forall{t}\in\mathbb{N}_1^n, \end{split} \end{align} hold $\forall{p}\geq{\cal P}$}. Moreover, let the set of all achievable rate-distortion tuples $(R_{1,n},D_{1,n})$ be denoted by ${\cal R}^{*}$.
Then, the minimum total-rate required to achieve the distortion tuple $(D_{1},~D_2,\ldots,D_n)$ is defined by the following optimization problem: \begin{align} {\cal R}^{\op}_{\ssum}(D_{1,n})\sr{\triangleqangle}{=}\inf_{(R_{1,n},D_{1,n})\in{\cal R}^{*}}\sum_{t=1}^nR_t.\label{operational_min_sum_rate} \end{align} \end{definition} \paragraph*{Source model} {The finite alphabet source randomly generates symbols $X^p_{1,n}=x^p_{1,n}\in\mathbb{X}^p_{1,n}$ according to the following temporally correlated joint probability mass function ($\pmf$) \begin{align} {p}(x_{1,n}^p)\sr{\triangleqangle}{=}\otimes_{i=1}^p{p}(x_1(i),\ldots,x_{n}(i)),\label{source_correlated_1} \end{align} where the joint process $\{(X_1(i),\ldots,X_n(i))\}_{i=1}^p$ is identically distributed. This means that for each $i=1,\ldots,p$, the temporally correlated joint process $(X_1(i),\ldots,X_{n}(i))$ is independent of every other temporally correlated joint process $(X_1(j),\ldots,X_{n}(j))$, such that $i\neq{j}$. Furthermore, each temporally correlated joint process $(X_1(i),\ldots,X_{n}(i))$ is spatially identically distributed.} \paragraph*{Achievable rate-distortion regions and minimum achievable total-rate} Next, we characterize the achievable rate-distortion regions and the minimum achievable total-rate for the source model \eqref{source_correlated_1} with the distortion constraint \eqref{average_distortion}. \par The following lemma is given in \cite[Theorem 5]{yang:2011}. \begin{lemma}(Achievable rate-distortion region)\label{lemma:coding_theorem_iid} Consider the source model \eqref{source_correlated_1} with the average distortion of \eqref{average_distortion}. {Then, the {``spatially''} single-letter characterization of the rate-distortion region $(R_{1,n},~D_{1,n})$ is given by}: \begin{align} \begin{split} {\cal R}^{\IID}&=\Bigg\{(R_{1,n},D_{1,n})\Bigg{|}\exists S_{1,n-1}, Y_{1,n},~\{g_t(\cdot)\}_{t=1}^n,\\ \text{s.t.}&\quad R_1\geq{I}(X_1;S_1),~\mbox{(initial time)}\\ &\quad R_t\geq{I}(X_{1,t};S_t|S_{1,t-1}),~t=2,\ldots,n-1,\\ &\quad R_n\geq{I}(X_{1,n};Y_n|S_{1,n-1}),~\mbox{(terminal time)},\\ &\quad D_t\geq{\bf E}\left\{d_t(X_t,Y_t)\right\},~t\in\mathbb{N}_1^n,\\ &\quad Y_1=g_1(S_1),~Y_{t}=g_t(S_{1,t}),~t=2,\ldots,n-1,\\ &\quad S_1\leftrightarrow(X_{1})\leftrightarrow{X}_{2,n},\\ &\quad S_t\leftrightarrow(X_{1,t},S_{1,t-1})\leftrightarrow{X}_{t+1,n},~t=2,\ldots,n-1\Bigg\}, \end{split}\label{single_letter_char_iid} \end{align} where $\{S_{1,n-1},Y_{1,n}\}$ are the auxiliary (encoded) and reproduction $\rvs$, respectively, taking values in some finite alphabet spaces $\{\mathbb{S}_{1,n-1},\mathbb{Y}_{1,n}\}$, and $\{g_t(\cdot):~t\in\mathbb{N}_1^n\}$ are deterministic functions. \end{lemma} \begin{remark}(Comments on Lemma \ref{lemma:coding_theorem_iid})\label{remark:lemma1} In the characterization of Lemma \ref{lemma:coding_theorem_iid}, the spatial index is excluded because the rate and distortion regions are normalized with the total number of spatial components. This point is also shown in \cite[Theorem 4]{yang:2011}. Following \cite{ma-ishwar:2011} or \cite{yang:2011}, Lemma \ref{lemma:coding_theorem_iid} gives a set ${\cal R}^{\IID}$ that is convex and closed (this can be shown by trivially generalizing the time-sharing and continuity arguments of \cite[Appendix C2]{ma-ishwar:2011} to $n$ time-steps). This in turn means that ${\cal R}^*={\cal R^{\IID}}$ (see, e.g., \cite[Theorem 5]{yang:2011}). 
Thus, \eqref{operational_min_sum_rate} can be reformulated to the following optimization problem: \begin{align} {\cal R}_{\ssum}^{\IID,\op}(D_{1,n})\sr{\triangleqangle}{=}\min_{(R_{1,n},D_{1,n})\in{\cal R}^{\IID}}\sum_{t=1}^nR_t.\label{operational_min_sum_rate_new} \end{align} \end{remark} \par {In what follows, we state a lemma that gives a lower bound on ${\cal R}_{\ssum}^{\IID,\op}(D_{1,n})$. The lemma is stated without a proof as it is already derived in various papers, e.g., \cite[Theorem 5.3.1, Lemma 5.4.1]{tatikonda:2000}, \cite[Theorem 4.1]{silva:2011}, \cite[Corollary 1.1]{ma-ishwar:2011} (for $n=3$-time steps but can be trivially generalized to an arbitrary number of time-steps).} { \begin{lemma}(Lower bound on \eqref{operational_min_sum_rate_new})\label{lemma:total_rate} For $p$ sufficiently large, the following lower bound holds: \begin{align} {\cal R}_{{\ssum}}^{\IID,\op}(D_{1,n})&\geq{\cal R}_{{\ssum}}^{\IID}(D_{1,n})\nonumber\\ &\sr{\triangleqangle}{=}\min_{\substack{{\bf E}\left\{d_t(X_t,Y_t)\right\}\leq{D}_t,~t\in\mathbb{N}_1^n\\Y_1\leftrightarrow{X_{1}}\leftrightarrow{X}_{2,n},\\~Y_t\leftrightarrow(X_{1,t},Y_{1,t-1})\leftrightarrow{X}_{t+1,n},~t=2,\ldots,n-1}}I(X_{1,n};Y_{1,n}),\label{eq:sumrate} \end{align} where $I(X_{1,n};Y_{1,n})=\sum_{t=1}^nI(X_{1,t};Y_t|Y_{1,t-1})$ is a variant of directed information \cite{marko1973,massey:1990} obtained by the conditional independence constraints imposed in the constraint set of \eqref{eq:sumrate}. \end{lemma} } \par {We note that the lower bound in Lemma \ref{lemma:total_rate} is often encountered in the literature by the name nonanticipatory $\epsilon-$entropy and sequential or nonanticipative $\rdf$. }{ \begin{remark}(When do we achieve the lower bound in \eqref{eq:sumrate}?)\label{remark:ideal_quantization} It should be noted that in \cite[Theorem 4]{yang:2011} it was shown via an algorithmic approach (see also \cite[Theorem 5]{yang:2011} for an equivalent proof via a direct and converse coding theorem) that Lemma \ref{lemma:total_rate} is achieved with equality if the number of $\IID$ spatial components tends to infinity, i.e., $p\longrightarrow\infty$, which also means that the optimal minimizer or ``test-channel'' at each time instant in \eqref{eq:sumrate}, corresponds precisely to the distribution generated by a sequential encoder, i.e., $S_t=Y_t$, for any $t\in\mathbb{N}_1^n$ (see also the derivation of \cite[Corollary 1.1]{ma-ishwar:2011}). In other words, the equality holds if the encoder (or quantizer for continuous alphabet sources) simulates exactly the corresponding ``test-channel'' distribution of \eqref{eq:sumrate}. This claim was also demonstrated via an application example for jointly Gaussian $\rvs$ and per-time $\mse$ distortion in \cite[Corollary 1.2]{ma-ishwar:2011} and also stated as a corollary referring to an ``ideal'' $\dpcm$-based $\mse$ quantizer in \cite[Corollary 1.3]{ma-ishwar:2011}. In general, however, for any $p<\infty$, the equality in \eqref{eq:sumrate} is not achievable. \end{remark} Next, we state the generalization of Lemma \ref{lemma:total_rate} when the constrained set is subject to an average total distortion constraint defined as $\frac{1}{n}\sum_{t=1}^n{\bf E}\left\{d_t(X_t,Y_t)\right\}\leq{D}$ with ${\bf E}\left\{d_t(X_t,Y_t)\right\}$ given in \eqref{average_distortion}. This lemma was derived in \cite[Theorem 5.3.1, Lemma 5.4.1]{tatikonda:2000}. 
\begin{lemma}(Generalization of Lemma \ref{lemma:total_rate})\label{lemma:total_rate_aver_dist} For $p$ sufficiently large, the following lower bound holds: \begin{align} \begin{split} {\cal R}_{{\ssum}}^{\IID,\op}(D)&\geq{\cal R}_{{\ssum}}^{\IID}(D)\\ &=\min_{\substack{\frac{1}{n}\sum_{t=1}^n{\bf E}\left\{d_t(X_t,Y_t)\right\}\leq{D},~t\in\mathbb{N}_1^n\\Y_1\leftrightarrow(X_{1})\leftrightarrow{X}_{2,n},\\~Y_t\leftrightarrow(X_{1,t},Y_{1,t-1})\leftrightarrow{X}_{t+1,n},~t\in\mathbb{N}_2^{n-1}}}I(X_{1,n};Y_{1,n}), \end{split}\label{eq:sumrate_aver_dist} \end{align} \end{lemma} Clearly, one can use the same methodology applied in \cite[Theorems 4, 5]{yang:2011} to demonstrate that the lower bound in \eqref{eq:sumrate_aver_dist} is achieved once $p\longrightarrow\infty$ (see the discussion in Remark \ref{remark:ideal_quantization}). However, we once again point out that in general, \eqref{eq:sumrate_aver_dist} is a lower bound on the minimum achievable rates achieved by causal sequential codes.} \paragraph*{{Information structures}} {Next, we state a few well-known structural results related to the bounds in Lemmas \ref{lemma:total_rate}, \ref{lemma:total_rate_aver_dist}. In particular, if the temporally correlated joint $\pmf$ in \eqref{source_correlated_1} follows a finite-order Markov process, then, the description of the rate-distortion region in Lemma \ref{lemma:coding_theorem_iid}, and the corresponding bounds on the minimum achievable total-rate in Lemmas \ref{lemma:total_rate}, \ref{lemma:total_rate_aver_dist} can be simplified considerably following for instance the framework of \cite{witsenhausen:1979,borkar:2001,stavrou:2018siam}. For the important special case of first-order Markov process, \eqref{single_letter_char_iid} simplifies to \begin{align} {\cal R}^{\IID,1}&=\Bigg\{(R_{1,n},D_{1,n})\Bigg{|}\exists S_{1,n-1}, Y_{1,n},~\{g_t(\cdot)\}_{t=1}^n,\nonumber\\ s.t.&~{R}_1\geq{I}(X_1;S_1),~\mbox{(initial time)}\nonumber\\ &~R_t\geq{I}(X_{t};S_t|S_{1,t-1}),~t=2,\ldots,n-1,\nonumber\\ & ~R_n\geq{I}(X_{n};Y_n|S_{1,n-1}),~\mbox{(terminal time)},\nonumber\\ & ~D_t\geq{\bf E}\left\{d_t(X_t,Y_t)\right\},~t\in\mathbb{N}_1^n,\nonumber\\ & ~Y_1=g_1(S_1),~Y_{t}=g_t(S_{1,t}),~t=2,\ldots,n-1,\nonumber\\ & ~S_1\leftrightarrow(X_{1})\leftrightarrow{X}_{2,n},\nonumber\\ & ~S_t\leftrightarrow(X_{t},S_{1,t-1})\leftrightarrow(X_{1,t-1},{X}_{t+1,n})\Bigg\}. 
\label{single_letter_char_iid_markov} \end{align} Using \eqref{single_letter_char_iid_markov}, the minimum achievable total-rate can now be simplified to the following optimization problem: \begin{align} {\cal R}_{\ssum}^{\IID,\op,1}(D_{1,n})\sr{\triangleqangle}{=}\min_{(R_{1,n},D_{1,n})\in{\cal R}^{\IID,1}}\sum_{t=1}^nR_t.\label{operational_min_sum_rate_new_markov} \end{align} Using the description of \eqref{operational_min_sum_rate_new_markov}, we can simplify \eqref{eq:sumrate} and \eqref{eq:sumrate_aver_dist}, respectively, as follows: \begin{align} &{\cal R}_{\ssum}^{\IID,\op,1}(D_{1,n})\geq{\cal R}_{{\ssum}}^{\IID,1}(D_{1,n})=\min_{\substack{{\bf E}\left\{d_t(X_t,Y_t)\right\}\leq{D}_t,~t\in\mathbb{N}_1^n\\Y_1\leftrightarrow{X_{1}}\leftrightarrow{X}_{2,n},\\~Y_t\leftrightarrow(X_{t},Y_{1,t-1})\leftrightarrow(X_{1,t-1},{X}_{t+1,n}),~t\in\mathbb{N}_2^{n-1}}}I(X_{1,n};Y_{1,n}),\label{eq:sumrate1}\\ &{\cal R}_{{\ssum}}^{\IID,\op,1}(D)\geq{\cal R}_{{\ssum}}^{\IID,1}(D)\sr{\triangleqangle}{=}\min_{\substack{\frac{1}{n}\sum_{t=1}^n{\bf E}\left\{d_t(X_t,Y_t)\right\}\leq{D},~t\in\mathbb{N}_1^n\\Y_1\leftrightarrow{X_{1}}\leftrightarrow{X}_{2,n},\\~Y_t\leftrightarrow(X_{t},Y_{1,t-1})\leftrightarrow(X_{1,t-1},~{X}_{t+1,n}),~t=2,\ldots,n-1}}I(X_{1,n};Y_{1,n}),\label{eq:sumrate2} \end{align} where $I(X_{1,n};Y_{1,n})=\sum_{t=1}^n{I}(X_t;Y_t|Y_{1,t-1})$.} \par {In the sequel, we use the description of \eqref{eq:sumrate2} to derive our main results.} \section{Application in Quantized State Estimation}\label{sec:examples} \par In this section, {we apply the sequential coding framework of the previous section to a state estimation problem and obtain new results in such applications.} \par We consider a similar scenario to \cite[\S{II}]{khina:2018} where a multi-track system estimates several ``parallel'' Gaussian processes over a single shared communication link as illustrated in Fig. \ref{fig:multitrack}. Following the sequential coding framework, we require the Gaussian source processes to have temporally correlated and spatially $\IID$ components, which are observed by an observer who collects the measured states into a single vector state. Then, the observer/encoder maps the states as random finite-rate packets to a $\mmse$ estimator through a noiseless link. {Compared to the result of \cite[Theorem 1]{khina:2018} which derives a dynamic forward in time recursion of a distortion-rate allocation algorithm when the rate is given at each time instant, here we derive a dynamic rate-distortion reverse-waterfilling algorithm operating forward in time for which we only consider a given distortion threshold $D>0$.} \begin{figure}[htp] \centering \includegraphics[width=\textwidth]{multitrackv_2.pdf} \caption{Multi-track state estimation system model.} \label{fig:multitrack} \end{figure} \par First, we describe the problem of interest.\\ {\bf State process.} Consider $p$-parallel time-varying Gauss-Markov processes with $\IID$ spatial components as follows: \begin{align} x_{t}(i)=\alpha_{t-1}x_{t-1}(i)+w_{t-1}(i),~i\in\mathbb{N}_1^p,~t\in\mathbb{N}_1^n,\label{example:GM_process} \end{align} where $x_1(i)\equiv{x}_1$ is given, with $x_1\sim{\cal N}(0;\sigma^2_{x_1})$; the non-random coefficient $\alpha_{t}\in\mathbb{R}$ is known at each time step $t$, and $\{w_t(i)\equiv{w}_t:~i\in\mathbb{N}_1^p\}$, $w_t\sim{\cal N}(0;\sigma^2_{w_{t}})$, is an independent Gaussian noise process at each $t$, independent of $x_1, \forall{i}\in\mathbb{N}_1^p$. 
Since \eqref{example:GM_process} has $\IID$ spatial components it can be compactly written as a vector or frame as follows: \begin{align} X_{t}=A_{t-1}X_{t-1}+W_{t-1},~X_1=\text{given},~t\in\mathbb{N}_2^n,\label{example:GM_process1} \end{align} where $A_{t-1}=\diag(\alpha_{t-1},\ldots,\alpha_{t-1})\in\mathbb{R}^{p\times{p}}$, $X_t\in\mathbb{R}^p$, and the independent Gaussian noise process $W_t\in\mathbb{R}^p\sim{\cal N}(0;\Sigma_{W_t})$, where $\Sigma_{W_t}=\diag(\sigma^2_{w_t},\ldots,\sigma^2_{w_t})\succ{0}\in\mathbb{R}^{p\times{p}}$ independent of the initial state $X_1$.\\ {\bf Observer/Encoder.} At the observer the spatially $\IID$ time-varying $\mathbb{R}^p$-valued Gauss-Markov processes are collected into a frame $X_t\in\mathbb{R}^p$ and mapped using sequential coding with encoded sequence: \begin{align} S_t=f_t(X_{1,t},S_{1,t-1}),\label{example:encoding} \end{align} where at $t=1$ we assume $S_1=f_1(X_1)$, and $R_t=\frac{{\bf E}|S_t|}{p}$ is the expected (random) rate (per dimension) at each time instant $t$ transmitted through the noiseless link.\\% $z_t\in\{1,\ldots,2^{pR_t}\}$ denotes the data packet available for transmission at each time instant. \\ {\bf \text{MMSE} Decoder.} The data packet $S_t$ is received using the following reconstructed sequence: \begin{align} Y_t=g_t(S_{1,t}),\label{example:decoding} \end{align} where at $t=1$ we have $Y_1=g_1(S_1)$.\\ {\bf Distortion.} We consider the average total $\mse$ distortion normalized over all spatial components as follows: \begin{align} \frac{1}{n}\sum_{t=1}^nD_t~\mbox{with}~D_t\sr{\triangleqangle}{=}\frac{1}{p}{\bf E}\left\{||X_t-Y_t||_2^2\right\}.\label{example:aver_dist} \end{align} \noindent{\bf Performance.} The performance of the above system {(per dimension)} for a given $D>0$ can be cast to the following optimization problem: \begin{align} {\cal R}_{\ssum}^{\IID,\op,1}(D)=\min_{\substack{(f_t,~g_t):~t=1,\ldots,n\\ \frac{1}{n}\sum_{t=1}^nD_t\leq{D}}}\sum_{t=1}^n{R}_t.\label{example:problem_form} \end{align} \par {The next theorem is our first main result in this paper. It derives a lower bound on the performance of Fig. \ref{fig:multitrack} by means of a dynamic reverse-waterfilling algorithm.} \begin{theorem}(Lower bound on \eqref{example:problem_form})\label{theorem:fundam_lim_mmse} For the multi-track system in Fig. \ref{fig:multitrack}, the minimum achievable total-rate for any ``$n$'' and any $p$, however large, is ${\cal R}_{\ssum}^{\IID,\op,1}(D)=\sum_{t=1}^n{R^{\op}_t}$ with the minimum achievable rate distortion at each time instant (per dimension) given by some $R^{\op}_t\geq{R}^*_t$ such that \begin{align} R_t^*=\frac{1}{2}\log_2\left(\frac{\lambda_t}{D_{t}}\right), \label{para_sol_eq.1} \end{align} where $\lambda_t\sr{\triangleqangle}{=}\alpha^2_{t-1}D_{t-1}+\sigma^2_{w_{t-1}}$ and $D_t$ is the distortion at each time instant evaluated based on a dynamic reverse-waterfilling algorithm operating forward in time. 
The algorithm is as follows: \begin{align} D_{t} &\sr{\triangleqangle}{=}\left\{ \begin{array}{ll} \xi_t & \mbox{if} \quad \xi_t\leq\lambda_{t} \\ \lambda_{t} & \mbox{if}\quad\xi_t>\lambda_{t} \end{array} \right.,~\forall{t},\label{rev_water:eq.1} \end{align} with $\sum_{t=1}^n D_t= nD$, and \begin{align}\label{values_of_xis} \xi_t= \begin{cases} \frac{1}{2b^2_t}\left(\sqrt{1+\frac{2b^2_t}{\theta}}-1\right),~\forall t\in\mathbb{N}_1^{n-1} \\ \frac{1}{2\theta},~t=n \end{cases}, \end{align} where $\theta>0$ is the Lagrangian multiplier tuned to obtain equality $\sum_{t=1}^n D_t= nD$, $b^2_t\sr{\triangleqangle}{=}\frac{\alpha^2_t}{\sigma^2_{w_t}}$, and $D\in(0, \infty)$. \end{theorem} \begin{IEEEproof} See Appendix \ref{proof:theorem:fundam_lim_mmse}. \end{IEEEproof} {In the next remark, we discuss some technical observations regarding Theorem \ref{theorem:fundam_lim_mmse} and draw connections with \cite[Corollary 1.2]{ma-ishwar:2011}.} {\begin{remark}\label{remark:discussion:theorem:mmse} {\bf (1)} The optimization problem in the derivation of Theorem \ref{theorem:fundam_lim_mmse} suggests that $(A_t, \Sigma_{W_t}, \Delta_t, \Lambda_t)$ commute by pairs \cite[p. 5]{harville:1997} since they are all scalar matrices, which in turn means that they are simultaneously diagonalizable by an orthogonal matrix \cite[Theorem 21.13.1]{harville:1997} (in this case the orthogonal matrix is the identity matrix, hence it is omitted from the characterization of the optimization problem). \\ {\bf (2)} Theorem \ref{theorem:fundam_lim_mmse} extends the result of \cite[Corollary 1.2]{ma-ishwar:2011}, who found an explicit expression of the minimum total-rate $\sum_{t=1}^nR_t^*$ for $n=3$ subject to a per-time $\mse$ distortion, to a similar problem constrained by an average total-distortion, which we solve using a dynamic reverse-waterfilling algorithm that allocates the rate and the distortion at each instant of time for a fixed finite time horizon. \end{remark} } {\it Implementation of the dynamic reverse-waterfilling:} It should be remarked that a way to implement the reverse-waterfilling algorithm in Theorem \ref{theorem:fundam_lim_mmse} is proposed in \cite[Algorithm 1]{stavrou:2018lcss}. A different algorithm using the {\it bisection method} (for details see, e.g., \cite[Chapter 2.1]{atkinson:1991}) is proposed in Algorithm \ref{algo1}. {The method in Algorithm \ref{algo1} guarantees linear convergence with rate $\frac{1}{2}$. On the other hand, \cite[Algorithm 1]{stavrou:2018lcss} requires a specific proportionality gain factor $\gamma\in(0,1]$ chosen appropriately at each time instant. The choice of $\gamma$ affects the rate of convergence, and it does not guarantee global convergence of the algorithm}. In Fig. \ref{fig:rate-distortion:allocation}, we illustrate a numerical simulation using Algorithm \ref{algo1} by taking $\alpha_t\in(0,2)$, $\sigma^2_{w_t}=1$, for $t=\{1,2,\ldots,200\}$ and $D=1$. \begin{figure}[htp] \centering \includegraphics[width=\columnwidth]{rate_distortion_allocation.pdf} \caption{Dynamic rate-distortion allocation for a time-horizon $t=\{1,2,\ldots,200\}$ for the system in Fig.
\ref{fig:multitrack}.} \label{fig:rate-distortion:allocation} \end{figure} \begin{varalgorithm}{1} \caption{Dynamic reverse-waterfilling algorithm} \begin{algorithmic} \STATE {\textbf{Initialize:} number of time-steps $n$; distortion level $D$; error tolerance $\epsilon$; nominal minimum and maximum values $\theta^{\min}={0}$ and $\theta^{\max}={\frac{1}{2D}}$; initial variance ${\lambda}_1=\sigma^2_{x_{1}}$ of the initial state $x_1$; values $\alpha_t$ and $\sigma^2_{w_t}$ of \eqref{example:GM_process}.} \STATE {Set $\theta=\frac{1}{2D}$; $\text{flag}=0$.} \WHILE{$\text{flag}=0$} \STATE {Compute $D_{t}~\forall~t$ as follows:} \FOR {$t=1:n$} \STATE {Compute $\xi_t$ according to \eqref{values_of_xis}.} \STATE {Compute $D_t$ according to \eqref{rev_water:eq.1}.} \IF {$t<n$} \STATE{Compute $\lambda_{t+1}$ according to $\lambda_{t+1}\sr{\triangleqangle}{=}\alpha^2_{t}D_{t}+\sigma^2_{w_{t}}$.} \ENDIF \ENDFOR \IF {$\frac{1}{n}\sum_{t=1}^n{D}_t-D\geq\epsilon$} \STATE {Set $\theta^{\min}=\frac{\theta}{n}$.} \ELSE \STATE {Set $\theta^{\max}=\frac{\theta}{n}$.} \ENDIF \IF {$\theta^{\max}-\theta^{\min}\geq\frac{\epsilon}{n}$} \STATE{Compute $\theta=\frac{n(\theta^{\min}+\theta^{\max})}{2}$.} \ELSE \STATE {$\text{flag}\leftarrow 1$} \ENDIF \ENDWHILE \STATE {\textbf{Output:} $\{D_t:~t\in\mathbb{N}_1^n\}$, $\{\lambda_t:~t\in\mathbb{N}_1^n\}$, for a given distortion level $D$.} \end{algorithmic} \label{algo1} \end{varalgorithm} { \subsection{Steady-state solution of Theorem \ref{theorem:fundam_lim_mmse}}\label{subsec:steady-state_solution}} {In this subsection, we study the steady-state case of the lower bound obtained in Theorem \ref{theorem:fundam_lim_mmse}. To do this, first, we restrict the state process of our setup to be time invariant, which means that in \eqref{example:GM_process} the coefficients $\alpha_{t-1}\equiv{\alpha},~\forall{t}$ and $w_t\sim{\cal N}(0;\sigma^2_{w}),~\forall{t}$, or similarly, in \eqref{example:GM_process1} the matrix $A_{t-1}\equiv{A}=\diag(\alpha,\ldots,\alpha),~\forall{t}$ and $W_t\sim{\cal N}(0;\Sigma_W),~\forall{t}$, where $\Sigma_W=\diag(\sigma^2_{w},\ldots,\sigma^2_{w})\succ{0}$. We also denote the steady-state average total rate and distortion as follows: \begin{align} R_{\infty}=\limsup_{n\longrightarrow\infty}\frac{1}{n}\sum_{t=1}^nR_t,~~~~D_{\infty}=\limsup_{n\longrightarrow\infty}\frac{1}{n}\sum_{t=1}^nD_t.\label{steady_state_rate_distortion} \end{align} \noindent{\bf Steady-state Performance.} The minimum achievable steady-state performance of the multi-track system of Fig. \ref{fig:multitrack} when the system is modeled by $p$-parallel time-invariant Gauss-Markov processes {(per dimension)} can be cast to the following optimization problem: \begin{align} {\cal R}_{\ssum,ss}^{\IID,\op,1}(D)=\min_{\substack{(f_t,~g_t):~t=1,\ldots,\infty\\ D_{\infty}\leq{D}}}R_{\infty}.\label{performance_steady_state} \end{align} The next corollary is a consequence of the lower bound derived in Theorem \ref{theorem:fundam_lim_mmse}. It states that, in terms of the minimum achievable steady-state total rate, the steady-state total distortion constraint is equivalent to a fixed per-time distortion budget, i.e., $D_t=D,~\forall{t}$.
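\par Before stating the corollary, and for reproducibility, we give a minimal numerical sketch (in Python) of the reverse-waterfilling recursion \eqref{rev_water:eq.1}-\eqref{values_of_xis}. The sketch uses a plain bisection on the Lagrangian multiplier $\theta$, i.e., a slight variant of the bracketing used in Algorithm \ref{algo1}; the numerical values at the end (including the initial variance $\lambda_1=1$) are chosen only for illustration, and, for a time-invariant model, the printed average of the resulting rates should approach the steady-state expression \eqref{steady_state_rdf} below, up to edge effects at $t=1$ and $t=n$.
\begin{verbatim}
import numpy as np

def waterfill(theta, alpha, sigma2_w, lam1):
    # Forward recursion for a fixed multiplier theta > 0:
    # D_t = min(xi_t, lambda_t), lambda_{t+1} = alpha_t^2 D_t + sigma_{w_t}^2.
    n = len(alpha)
    D, lam = np.empty(n), np.empty(n)
    lam[0] = lam1                           # lambda_1: variance of the initial state
    for t in range(n):
        b2 = alpha[t] ** 2 / sigma2_w[t]    # b_t^2 (assumes alpha_t != 0)
        xi = (np.sqrt(1.0 + 2.0 * b2 / theta) - 1.0) / (2.0 * b2) if t < n - 1 else 1.0 / (2.0 * theta)
        D[t] = min(xi, lam[t])
        if t < n - 1:
            lam[t + 1] = alpha[t] ** 2 * D[t] + sigma2_w[t]
    return D, lam

def reverse_waterfilling(alpha, sigma2_w, lam1, D_target, tol=1e-10):
    # Bisection on theta so that the average distortion meets the threshold D_target.
    lo, hi = 1e-12, 1.0
    while np.mean(waterfill(hi, alpha, sigma2_w, lam1)[0]) > D_target:
        hi *= 2.0                           # enlarge the bracket until the constraint is met
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        if np.mean(waterfill(theta, alpha, sigma2_w, lam1)[0]) > D_target:
            lo = theta                      # average distortion too large: increase theta
        else:
            hi = theta
    D, lam = waterfill(hi, alpha, sigma2_w, lam1)
    return 0.5 * np.log2(lam / D), D        # per-time rates R_t = (1/2) log2(lambda_t / D_t)

# Illustrative time-invariant example: average rate vs. (1/2) log2(alpha^2 + sigma_w^2 / D).
n, a, s2, D = 200, 1.2, 1.0, 1.0
R, Dt = reverse_waterfilling(np.full(n, a), np.full(n, s2), lam1=1.0, D_target=D)
print(np.mean(R), 0.5 * np.log2(a ** 2 + s2 / D))
\end{verbatim}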
This equivalence complements analogous results derived in \cite{derpich:2012}, \cite[Corollary 2]{khina:2018}.} { \begin{corollary}(Lower bound on \eqref{performance_steady_state})\label{corollary:steady_state_rev_water} The minimum achievable steady-state performance of \eqref{performance_steady_state}, under a steady-state total distortion constraint $D_{\infty}\leq{D}$ for any $p$, however large, is bounded from below by ${\cal R}_{\ssum,ss}^{\IID,\op,1}(D)\geq{R}^*_{\infty}$, such that \begin{align} R^*_{\infty}=\frac{1}{2}\log_2\left(\alpha^2+\frac{\sigma^2_w}{D}\right),\label{steady_state_rdf} \end{align} where $R^*_{\infty}\sr{\triangleqangle}{=}\lim_{n\longrightarrow\infty}\frac{1}{n}\sum_{t=1}^nR_t^*$. Consequently, assuming $D_t=D,~\forall{t}$, achieves \eqref{steady_state_rdf} as $n\longrightarrow\infty$. \end{corollary} } { \begin{IEEEproof} To obtain our result, we first take the average total-rate, i.e., $\frac{1}{n}\sum_{t=1}^nR_t$. Then, we show the following inequalities: \begin{align} \frac{1}{n}\sum_{t=1}^nR_t&\stackrel{(a)}\geq\frac{1}{n}\sum_{t=1}^nR_t^*\nonumber\\ &=\frac{1}{n}\sum_{t=1}^n\frac{1}{2}\log_2\left(\frac{\lambda_t}{D_t}\right)\nonumber\\ &\stackrel{(b)}=\frac{1}{n}\sum_{t=1}^n\left[\frac{1}{2}\log_2\left(\alpha^2{D}_{t-1}+\sigma^2_w\right)-\frac{1}{2}\log_2{D_t}\right]\nonumber\\ &=\frac{1}{2n}\log_2\left(\frac{\lambda_1}{D_n}\right)+\frac{1}{2n}\sum_{t=1}^{n-1}\log_2\left(\alpha^2+\frac{\sigma^2_w}{D_t}\right)\nonumber\\ &\stackrel{(c)}\geq\frac{1}{2n}\log_2\frac{\lambda_1}{\sum_{t=1}^nD_t}+\frac{1}{2}\frac{(n-1)}{n}\log_2\left(\alpha^2+\frac{(n-1)\sigma^2_w}{\sum_{t=1}^{n-1}D_t}\right)\nonumber\\ &\stackrel{(d)}\geq\frac{1}{2n}\log_2\left(\frac{\lambda_1}{nD}\right)+\frac{1}{2}\frac{(n-1)}{n}\log_2\left(\alpha^2+\frac{\sigma^2_w}{\frac{nD}{n-1}}\right), \label{derivation_steady_state} \end{align} where $(a)$ follows from Theorem \ref{theorem:fundam_lim_mmse}; $(b)$ follows because for time-invariant processes $\lambda_t=\alpha^2D_{t-1}+\sigma^2_{w}$; $(c)$ follows because in the first term $D_n\leq\sum_{t=1}^nD_t$ and in the second term we apply Jensen's inequality \cite[Theorem 2.6.2]{cover-thomas2006}; $(d)$ follows because in the first term $\sum_{t=1}^{n}D_t\leq{nD}$ and in the second term $\sum_{t=1}^{n-1}D_t\leq\sum_{t=1}^{n}D_t\leq{nD}$ since $D_n\geq{0}$. {We prove that ${\cal R}_{\ssum,ss}^{\IID,\op,1}(D)\geq{R}^*_{\infty}$, where $R^*_{\infty}$ is given by \eqref{steady_state_rdf}, by evaluating \eqref{derivation_steady_state} in the limit $n\longrightarrow\infty$ and then minimizing both sides. This is obtained because the first term equals zero ($\lambda_1, D$ are constants), in the second term $\lim_{n\longrightarrow\infty}\left(\frac{n-1}{n}\right)=1$ and $\lim_{n\longrightarrow\infty}\left(\frac{n}{n-1}\right)=1$, and the result follows by taking the minimum on both sides. This completes the proof.} \end{IEEEproof} \begin{remark}(Connections to existing works)\label{remark:existing_nrdf} We note that the steady-state lower bound (per dimension) obtained in Corollary \ref{corollary:steady_state_rev_water} corresponds precisely to the solution for time-invariant scalar-valued Gauss-Markov processes with a per-time MSE distortion constraint derived in \cite[Equation (14)]{tatikonda:2004} and to the solution for stationary Gauss-Markov processes with an MSE distortion constraint derived in \cite[Theorem 3]{derpich:2012}, \cite[Equation (1.43)]{gorbunov-pinsker1972b}.
\end{remark}} \subsection{Upper bounds to the minimum achievable total-rate}\label{subsec:upper_bound_mmse} In this section, we employ a sequential causal $\dpcm$-based scheme using pre/post filtered $\ecdq$ (for details, see, e.g., \cite[Chapter 5]{zamir:2014}) that ensures standard performance guarantees (achievable upper bounds) on the minimum achievable sum-rate ${\cal R}_{\ssum}^{\IID,\op,1}(D)=\sum_{t=1}^nR^{\op}_t$ of the multi-track setup of Fig. \ref{fig:multitrack}. {The reason for the choice of this quantization scheme is twofold. First, it can be implemented in practice and, second, it allows to find analytical achievable bounds and approximations on finite-dimensional quantizers which generate near-Gaussian quantization noise and Gaussian quantization noise for infinite dimensional quantizers \cite{zamir-feder:1996b}.} \par {We first describe the sequential causal $\dpcm$ scheme using $\mmse$ quantization for parallel time-varying Gauss-Markov processes. Then, we bound the rate performance of such scheme using $\ecdq$ and vector quantization followed by memoryless entropy coding. This can be seen as a generalization of \cite[Corollary 1.2]{ma-ishwar:2011} to any finite time when the rate is allocated at each time instant. Observe that because the state is modeled as a first-order Gauss-Markov process, the sequential causal coding is precisely equivalent to predictive coding (see, e.g., \cite{stavrou:2018stsp},~\cite[Theorem 3]{yang:2014}). Therefore, we can immediately apply the standard sequential causal $\dpcm$ \cite{farvardin:1985,jayant:1990} approach (with $\mathbb{R}^p$-valued $\mmse$ quantizers) to obtain an achievable rate in our system. \\ {\bf DPCM scheme.} At each time instant $t$ the {\it encoder or innovations' encoder} performs the linear operation \begin{align} \widehat{X}_t=X_t-A_{t-1}Y_{t-1},\label{dpcm_enc} \end{align} where at $t=1$ we have $\widehat{X}_{1}=X_1$ and also $Y_{t-1}\sr{\triangleqangle}{=}{\bf E}\left\{X_{t-1}|S_{1,t-1}\right\}$, i.e., an estimate of $X_{t-1}$ given the previous quantized symbols $S_{1,t-1}$.\footnote{Note that the process $\widehat{X}_t$ has a temporal correlation since it is the error of $X_t$ from all quantized symbols $S_{1,t-1}$ and not the infinite past of the source $X_{-\infty,t}=(X_{-\infty},\ldots,X_t)$. Hence, $\widehat{X}_t$ is only an estimate of the true process.} Then, by means of a $\mathbb{R}^p$-valued $\mmse$ quantizer that operates at a rate (per dimension) $R_t$, we generate the quantized reconstruction $\widehat{Y}_t$ of the residual source $\widehat{X}_t$ denoted by $\widehat{Y}_t=Y_t-A_{t-1}Y_{t-1}$. Then, we send $S_t$ over the channel (the corresponding data packet to $\widehat{Y}_t$). At the {\it decoder} we receive $S_t$ and recover the quantized symbol $\widehat{Y}_t$ of $\widehat{X}_t$.} \begin{figure}[htp] \centering \includegraphics[width=\columnwidth]{dpcm.pdf} \caption{{$\dpcm$ of parallel processes.}} \label{fig:dpcm} \end{figure} Then, we generate the estimate $Y_t$ using the linear operation \begin{align} Y_t=\widehat{Y}_t+A_{t-1}Y_{t-1}.\label{dpcm_dec} \end{align} Combining both \eqref{dpcm_enc}, \eqref{dpcm_dec}, we obtain \begin{align} X_t-Y_t=\widehat{X}_t-\widehat{Y}_t.\label{data_processing} \end{align} {\it MSE Performance.} From \eqref{data_processing}, we see that the error between $X_t$ and $Y_t$ is equal to the quantization error introduced by $\widehat{X}_t$ and $\widehat{Y}_t$. 
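\par To make the operations \eqref{dpcm_enc}-\eqref{data_processing} concrete, a minimal simulation sketch (in Python) is given below; the plain uniform quantizer and the chosen time-invariant numerical values are illustrative stand-ins only, and are not the $\mmse$/$\ecdq$ lattice quantizer analyzed in this section.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 4                      # time horizon and number of parallel (spatial) components
alpha, sigma_w = 0.9, 1.0         # illustrative time-invariant source coefficients
step = 0.25                       # step of a plain uniform quantizer (stand-in for the MMSE/ECDQ quantizer)

quantize = lambda v: step * np.round(v / step)

X = rng.normal(0.0, 1.0, p)       # initial frame X_1
Y_prev = np.zeros(p)              # no reconstruction is available before t = 1
for t in range(1, n + 1):
    X_hat = X - alpha * Y_prev                    # innovation X_t - A_{t-1} Y_{t-1}; equals X_1 at t = 1
    Y_hat = quantize(X_hat)                       # quantized innovation conveyed over the noiseless link
    Y = Y_hat + alpha * Y_prev                    # reconstruction Y_t = Yhat_t + A_{t-1} Y_{t-1}
    assert np.allclose(X - Y, X_hat - Y_hat)      # coding error equals the quantization error
    Y_prev = Y
    X = alpha * X + rng.normal(0.0, sigma_w, p)   # next frame of the Gauss-Markov source
\end{verbatim}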
In particular, \eqref{data_processing} implies that the $\mse$ distortion (per dimension) at each instant of time satisfies \begin{align} D_t=\frac{1}{p}{\bf E}\{||X_t-Y_t||_2^2\}=\frac{1}{p}{\bf E}\{||\widehat{X}_t-\widehat{Y}_t||_2^2\}.\label{dpcm_mse} \end{align} A pictorial view of the $\dpcm$ scheme is given in Fig. \ref{fig:dpcm}. {The following theorem is another main result of this section.} \begin{theorem}(Upper bound to $R_{\ssum}^{\IID,\op,1}(D)$)\label{theorem:achievability} Suppose that in \eqref{example:problem_form} we apply a sequential causal $\dpcm$-based $\ecdq$ with a lattice quantizer. Then, the minimum achievable total-rate is ${\cal R}_{\ssum}^{\IID,\op,1}(D)=\sum_{t=1}^n{R}^{\op}_t$, where at each time instant ${R}^{\op}_t$ is upper bounded as follows: \begin{align} R^{\op}_t\leq{R}^*_t+\frac{1}{2}\log_2\left(2\pi{e}G_p\right)+\frac{1}{p},~\forall{t}, ~\mbox{(bits/dimension)},\label{achievable_bound_ecdq} \end{align} where $R_t^*$ is obtained from Theorem \ref{theorem:fundam_lim_mmse}; $\frac{1}{2}\log_2\left(2\pi{e}G_p\right)$ is the divergence of the quantization noise from Gaussianity; $G_p$ is the dimensionless normalized second moment of the lattice \cite[Definition 3.2.2]{zamir:2014}; and $\frac{1}{p}$ is the additional cost due to having prefix-free (instantaneous) coding. \end{theorem} \begin{IEEEproof} See Appendix~\ref{proof:theorem:achievability}. \end{IEEEproof} \par {Next, we make some technical comments related to Theorem \ref{theorem:achievability} to better explain its novelty compared to existing similar schemes in the literature.} \begin{remark}(Comments on Theorem \ref{theorem:achievability})\label{remark:comments_on_theorem_achievability} {\bf (1)} {The bound of Theorem \ref{theorem:achievability} allows the transmit rate to vary at each time instant for a finite time horizon while it achieves the $\mmse$ distortion at each time step $t$. This is because our $\dpcm$-based $\ecdq$ scheme is constrained by total-rates that we find at each instant of time using the dynamic reverse-waterfilling algorithm of Theorem \ref{theorem:fundam_lim_mmse}. This looser rate constraint is the new ingredient of our bound compared to similar existing bounds in the literature (see, e.g., \cite[Theorem 6, Remark 16]{khina:2018}, \cite[Corollary 5.2]{silva:2011}, \cite[Theorem 5]{stavrou:2018stsp}) that assume {\it fixed rates averaged across time or asymptotically average total rate constraints}, hence restricting their transmit rate at each instant of time to be the same for any time horizon.}\\ {\bf (2)} Recently in \cite{kostina:2019} (see also \cite{khina:2018}), it is pointed out that for discrete-time processes one can assume in the $\ecdq$ coding scheme that the clocks of the entropy encoder and the entropy decoder are synchronized, thus eliminating the additional rate-loss due to prefix-free coding. This assumption will give a better upper bound in Theorem \ref{theorem:achievability} because the term $\frac{1}{p}$ will be removed. \end{remark} \noindent{{\bf Steady-state performance.} Next, we describe how to obtain an upper bound on \eqref{performance_steady_state}.
Suppose that the system is modeled by $p$-parallel time-invariant Gauss-Markov processes {(per dimension)} similar to \S\ref{subsec:steady-state_solution}.} { \begin{corollary}(Upper bound on \eqref{performance_steady_state})\label{corollary:achievability_ss} Suppose that in \eqref{example:problem_form} we apply a sequential causal $\dpcm$-based $\ecdq$ with a lattice quantizer, assuming the system is time-invariant and that $D_t=D$, $\forall{t}$. Then, the minimum achievable steady-state performance ${\cal R}_{\ssum,ss}^{\IID,\op,1}(D)={R}^{\op}_{\infty}$ is upper bounded as follows: \begin{align} {R}^{\op}_{\infty}\leq{R}^*_\infty+\frac{1}{2}\log_2\left(2\pi{e}G_p\right)+\frac{1}{p},~\mbox{(bits/dimension)},\label{achievable_bound_ecdq_steadu_state} \end{align} where $R_\infty^*$ is given by \eqref{steady_state_rdf}. \end{corollary} \begin{IEEEproof} This follows from Theorem \ref{theorem:achievability} and Corollary \ref{corollary:steady_state_rev_water}. \end{IEEEproof} We note that Corollary \ref{corollary:achievability_ss} is a known infinite time horizon bound derived in several papers in the literature, such as those discussed in Remark \ref{remark:comments_on_theorem_achievability}, {\bf (1)}. } \paragraph*{\it Computation of Theorem \ref{theorem:achievability}} Unfortunately, finding $G_p$ in \eqref{achievable_bound_ecdq} for good high-dimensional quantizers {of possibly finite dimension} is currently an open problem (although it can be approximated for any dimension using, for example, product lattices \cite{conway-sloane1999}). Therefore, in what follows we provide existing computable bounds on the achievable upper bound of Theorem \ref{theorem:achievability} for any high-dimensional lattice quantizer. Note that these bounds were derived as a consequence of the main result by Zador \cite{conway-sloane1999}, namely, that it is possible to reduce the $\mse$ distortion normalized per dimension using higher-dimensional quantizers. Toward this end, Zador introduced a lower bound on $G_p$ using the dimensionless normalized second moment of a $p$-dimensional sphere, hereinafter denoted by $G(S_p)$, for which it holds that: \begin{align} G(S_p)=\frac{1}{(p+2)\pi}\Gamma\left(\frac{p}{2}+1\right)^{\frac{2}{p}},\label{zador_lower_bound} \end{align} where $\Gamma(\cdot)$ is the gamma function. Moreover, $G_p$ and $G(S_p)$ are connected via the following inequalities: \begin{align} \frac{1}{2\pi{e}}\stackrel{(a)}\leq{G}(S_p)\stackrel{(b)}\leq{G}_p\stackrel{(c)}\leq\frac{1}{12},\label{connection_sphere_packing} \end{align} where $(a), (b)$ hold with equality for $p\longrightarrow\infty$; $(c)$ holds with equality if $p=1$.\\ Note that in \cite[equation (82)]{conway-sloane1999}, there is also an upper bound on $G_p$ due to Zador. The bound is the following: \begin{align} G_p\leq\frac{1}{p\pi}\Gamma\left(\frac{p}{2}+1\right)^{\frac{2}{p}}\Gamma\left(1+\frac{2}{p}\right).\label{zador_upper_bound} \end{align} \begin{figure}[ht] \centering \begin{subfigure}[b]{0.8\columnwidth} {\includegraphics[width=\columnwidth]{oper_rates_p24_v2.pdf}} \end{subfigure} \begin{subfigure}[b]{0.8\columnwidth} {\includegraphics[width=\columnwidth]{oper_rates_p500_v2.pdf}} \end{subfigure} \caption{Bounds on the minimum achievable total-rate.} \label{fig:oper_rates} \end{figure} In Fig. \ref{fig:oper_rates} we illustrate two plots where we compute the bounds derived in Theorems \ref{theorem:fundam_lim_mmse}, \ref{theorem:achievability} for two different scenarios.
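\par Before discussing the two experiments in detail, we note that the quantities in \eqref{zador_lower_bound} and \eqref{zador_upper_bound} that we use to approximate $G_p$ in these plots are straightforward to evaluate numerically. A minimal sketch (in Python; the log-gamma function is used only to avoid overflow at large $p$) is given below, where the printed quantity is the per-dimension rate penalty $\frac{1}{2}\log_2\left(2\pi{e}G_p\right)+\frac{1}{p}$ of \eqref{achievable_bound_ecdq} under each approximation.
\begin{verbatim}
from math import lgamma, exp, pi, e, log2

def G_sphere(p):
    # Dimensionless normalized second moment of a p-dimensional sphere (lower bound on G_p).
    return exp((2.0 / p) * lgamma(p / 2.0 + 1.0)) / ((p + 2.0) * pi)

def G_zador_upper(p):
    # Zador's upper bound on G_p.
    return exp((2.0 / p) * lgamma(p / 2.0 + 1.0)) * exp(lgamma(1.0 + 2.0 / p)) / (p * pi)

def rate_penalty(G, p):
    # Per-dimension rate penalty of the sequential DPCM-based ECDQ scheme.
    return 0.5 * log2(2.0 * pi * e * G) + 1.0 / p

for p in (1, 24, 500):
    print(p, rate_penalty(G_sphere(p), p), rate_penalty(G_zador_upper(p), p))
\end{verbatim}
For the Leech lattice used in Fig.~\ref{fig:oper_rates}(a), the tabulated value of $G_{24}$ from \cite[Table 2.3]{conway-sloane1999} is used instead of these approximations.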
{In Fig.~\ref{fig:oper_rates}, {(a)}, we choose $t=\{1,\ldots,20\}$, $\alpha_t\in(0,1.5)$,~$\sigma^2_{w_t}=1$, and~$D=1$, to illustrate the gap between the time-varying rate-distortion allocation obtained using the lower bound \eqref{para_sol_eq.1} and the upper bound \eqref{achievable_bound_ecdq} when the latter is approximated with the best known quantizer up to twenty-four dimensions, namely, the Leech lattice quantizer (for details see, e.g., \cite[Table 2.3]{conway-sloane1999}). For this experiment, the gap between the two bounds is approximately $0.126$ bits/dimension. In Fig.~\ref{fig:oper_rates}, {(b)}, we perform another experiment assuming the same values for ($\alpha_t,~\sigma^2_{w_t},~D$), whereas the quantization is performed for $500$ dimensions. We observe that the achievable bounds obtained via \eqref{zador_lower_bound} and \eqref{zador_upper_bound} are quite tight (they have a gap of approximately $0.0014$ bits/dimension), whereas the gap between the lower bound \eqref{para_sol_eq.1} and the achievable upper bound \eqref{achievable_bound_ecdq} approximated by \eqref{zador_lower_bound} is $0.0097$ bits/dimension, and the one approximated by \eqref{zador_upper_bound} is approximately $0.011$ bits/dimension. Thus, compared to the first experiment where $p=24$, the gap between the bounds on the minimum achievable rate $R^{\op}_t$ is considerably decreased because we increased the number of dimensions in the system. Clearly, when the number of dimensions in the system increases, the gap between \eqref{para_sol_eq.1} and the high-dimensional approximations of \eqref{achievable_bound_ecdq} will become arbitrarily small. The two bounds will coincide as $p\longrightarrow\infty$, because then the divergence of the coding noise from Gaussianity goes to zero (see, e.g., \cite{zador:1982}, \cite[Lemma 1]{zamir-feder:1996b}) and also because for $p\longrightarrow\infty$, \eqref{zador_lower_bound} is equal to \eqref{zador_upper_bound} (see, e.g., \cite[equation (83)]{conway-sloane1999}).} \section{Application in NCSs}\label{sec:ncs} \par In this section, {we demonstrate the sequential coding framework in the NCS setup of Fig. \ref{fig:scalar_ncs} by applying the results obtained in {\S}\ref{sec:examples}. } \begin{figure}[htp] \centering \includegraphics[width=0.9\columnwidth]{scalar_ncs.pdf} \caption{Controlled system model.} \label{fig:scalar_ncs} \end{figure} We first describe each component of Fig. \ref{fig:scalar_ncs}.\\ \noindent{\bf Plant.} Consider $p$ parallel time-varying controlled Gauss-Markov processes as follows: \begin{align} x_{t+1}(i)=\alpha_{t}x_{t}(i)+\beta_{t}u_t(i)+w_{t}(i),~i\in\mathbb{N}_1^p,~t\in\mathbb{N}_1^n,\label{example:controlled_process} \end{align} where $x_1(i)\equiv{x}_1$ is given with $x_1\sim{\cal N}(0;\sigma^2_{x_1})$,~$\forall{i}$; the non-random coefficients $(\alpha_t,\beta_t)\in\mathbb{R}$ are known to the system {with $(\alpha_t,\beta_t)\neq{0},~\forall{t}$}; {$\{u_t(i):~i\in\mathbb{N}_1^p\}$ is the control process, with $u_t(i)\neq{u}_t(\ell)$ for any $i\neq\ell$, $(i,{\ell})\in\mathbb{N}_1^p\times\mathbb{N}_1^p$}; $\{w_t(i)\equiv{w}_t:~i\in\mathbb{N}_1^p\}$ is an independent Gaussian noise process such that $w_t\sim{\cal N}(0;\sigma^2_{w_{t}})$, {$\sigma^2_{w_t}>0$}, independent of $x_1$,~$\forall{i}$.
Again, similar to \S\ref{sec:examples}, \eqref{example:controlled_process} can be compactly written as follows \begin{align} X_{t+1}=A_{t}X_{t}+B_tU_t+W_{t},~X_1=\mbox{given},~t\in\mathbb{N}_1^n,\label{example:controlled_process1} \end{align} where {$A_t=\diag\left(\alpha_{t},\ldots,\alpha_t\right)\in\mathbb{R}^{p\times{p}}$, $B_t=\diag\left(\beta_{t},\ldots,\beta_t\right)\in\mathbb{R}^{p\times{p}}$,~$U_t\in\mathbb{R}^p$,~ $W_t\in\mathbb{R}^p\sim{\cal N}(0;\Sigma_{W_t})$, $\Sigma_{W_t}=\diag\left(\sigma^2_{w_t},\ldots,\sigma^2_{w_t}\right)\succ{0}$ is an independent Gaussian noise process independent of $X_1$}. Note that in this setup, the plant is fully observable for the observer that acts as an encoder but not for the controller due to the quantization noise (coding noise).\\ {\bf Observer/Encoder.} At the encoder the controlled process is collected into a frame $X_t\in\mathbb{R}^p$ from the plant and encoded as follows: \begin{align} S_t=f_t(X_{1,t},S_{1,t-1}),\label{example2:encoding} \end{align} where at $t=1$ we have $S_1=f_1(X_1)$, and $R_t=\frac{{\bf E}|S_t|}{p}$ is the rate at each time instant $t$ available for transmission via the noiseless channel. Note that in the design of Fig. \ref{fig:scalar_ncs}, the channel is noiseless, and the controller/decoder are deterministic mappings, thus, the observer/encoder implicitly has access to earlier control signals $U_{1,t-1}\in\mathbb{U}_{1,t-1}$.\\ {\bf Decoder/Controller.} The data packet $S_t$ is received by the controller using the following reconstructed sequence: \begin{align} U_t=g_t(S_{1,t}).~\label{example2:decoding} \end{align} According to \eqref{example2:decoding}, when the sequence $S_{1,t}$ is available at the decoder/controller, all past control signals $U_{1,t-1}$ are completely specified. \\ {\bf Quadratic cost.} The cost of control (per dimension) is defined as \begin{align}\label{eq:cost} {\lqg}_{1,n}&\sr{\triangleqangle}{=} \frac{1}{p}{\bf E}\left\{{\sum_{t = 1}^{n-1}\left(X_t^{\mbox{\tiny T}}\widetilde{Q}_tX_t + U_t^{\mbox{\tiny T}}\widetilde{N}_tU_t\right) + X_n^{\mbox{\tiny T}}\widetilde{Q}_nX_n}\right\} , \end{align} where $\widetilde{Q}_t=\diag\left(Q_t\ldots,Q_t\right)\succeq{0},~\widetilde{Q}_t\in\mathbb{R}^{p\times{p}}$ and $\widetilde{N}_t=\diag\left(N_t,\ldots,N_t\right)\succ{0},~\widetilde{N}_t\in\mathbb{R}^{p\times{p}}$, are designing parameters that penalize the state variables or the control signals.\\ {\bf Performance.} The performance of Fig. \ref{fig:scalar_ncs} (per dimension) can be cast to a finite-time horizon quantized \text{LQG} control problem subject to all communication constraints as follows: \begin{align} {\Gamma^{\IID,\op}_{\ssum}}(R)=\min_{\substack{(f_t,~g_t):~t=1,\ldots,n\\ \frac{1}{n}\sum_{t=1}^nR_t\leq{R}}}{\lqg}_{1,n}.\label{example2:problem_form} \end{align} \paragraph*{Iterative Encoder/Controller Design} In general, as \eqref{example2:problem_form} suggests, the optimal performance of the system in Fig. \ref{fig:scalar_ncs} is achieved only when the encoder/controller pair is designed jointly. This is a quite challenging task especially when the channel is {noisy} because information structure {is} non-nested in such cases (for details see, e.g., \cite{yuksel-basar:2013}). There are examples, however, where the separation principle applies and the task comes much easier. More precisely, the so-called certainty equivalent controller remains optimal if the estimation errors are independent of previous control commands (i.e., dual effect is absent) \cite{bar-shalom:1974}. 
In our case, the optimal control strategy will be a certainty equivalence controller {\it if we assume a fixed and given sequence of encoders $\{f^*_t:~t\in\mathbb{N}_1^n\}$ {and the corresponding quantizer follows a predictive quantizer policy (similar to the $\dpcm$-based $\ecdq$ scheme proposed in \S\ref{subsec:upper_bound_mmse}), i.e., at each time instant it subtracts the effect of the previous control signals at the encoder and adds them at the decoder}} (see, e.g., \cite[Proposition 3]{bao-skoglund-johansson2011}, \cite{fu:2012}, \cite[\S{III}]{yuksel:2014}). Moreover, the separation principle will also be optimal if we consider an $\mmse$ estimate of the state (similar to what we have established in \S\ref{sec:examples}), and an encoder that minimizes a distortion for state estimation at the controller. The resulting separation principle is termed ``{\it weak separation principle}'' \cite{fu:2012} as it relies on the fixed and given quantization policies. This is different from the well-known full separation principle in the classical LQG stochastic control problem \cite{bertsekas:2005} where the problem separates naturally into a state estimator and a state feedback controller without any loss of optimality. {The previous analysis is described by a modified version of \eqref{example2:problem_form} as follows \begin{align} {\Gamma^{\IID,\op}_{\ssum}(R)\leq\Gamma^{\IID,\op,ws}_{\ssum}}=\min_{\substack{(f^*_t,~g_t):~t=1,\ldots,n\\ \frac{1}{n}\sum_{t=1}^nR_t\leq{R}}}{\lqg}_{1,n}.\label{example2:problem_form_weak_sep} \end{align} } Next, we give the known solution of \eqref{example2:problem_form_weak_sep} in the form of a lemma that was first derived in \cite{tatikonda:2004,fu:2012} for the more general setup of correlated vector-valued controlled Gauss-Markov processes with linear quadratic cost. \begin{lemma}(Weak separation principle for Fig. \ref{fig:scalar_ncs})\label{lemma:scalar:separation} The optimal controller that minimizes \eqref{example2:problem_form} is given by \begin{align}\label{eq:lem:scalar:separation:controller} U_t = -L_t{\bf E}\left\{X_t|S_{1,t}\right\}, \end{align} where ${\bf E}\left\{X_t|S_{1,t}\right\}$ are the fixed quantized state estimates obtained from the estimation problem in \S\ref{sec:examples}; $\widetilde{L}_t=\diag(L_t,\ldots,L_t)\in\mathbb{R}^p$ is the optimal $\lqg$ control (feedback) gain obtained as follows: \begin{align}\label{eq:LQG:scalar:Kt} \tilde{L}_t &=\left(B^2_t\widetilde{K}_{t+1}+\widetilde{N}_t\right)^{-1}B_t\widetilde{K}_{t+1}A_t, \end{align} and $\widetilde{K}_t=\diag(K_t,\ldots,K_t)\succeq{0}$ is obtained using the backward recursions: \begin{align}\label{eq:lem:LQG:Lt} \widetilde{K}_t &=A_t^2\left(\widetilde{K}_{t+1}-\widetilde{K}_{t+1}B^2_t(B_t^2\widetilde{K}_{t+1}+\widetilde{N}_t)^{-1}\widetilde{K}_{t+1}\right)+\widetilde{Q}_t, \end{align} with $\widetilde{K}_{n+1} = 0$. Moreover, this controller achieves a minimum linear quadratic cost of \begin{align} \begin{split} {\Gamma^{\IID,\op,ws}_{\ssum}}=&\frac{1}{p}\sum_{t = 1}^n\Big\{\trace(\Sigma_{W_t}\widetilde{K}_t)+\trace(A_tB_t\widetilde{L}_t \widetilde{K}_{t+1}{\bf E}\{||X_t - Y_t||_2^2\})\Big\},\label{total_min_lqg} \end{split} \end{align} where ${\bf E}\{||X_t - Y_t||_2^2\}$ is the $\mmse$ distortion obtained using any quantization (coding) in the control/estimation system. 
\end{lemma} { Before we prove our main theorem, we define the instantaneous cost of control as follows: \begin{align} \begin{split} &{\lqg}^{\op}_t\sr{\triangleqangle}{=}\frac{1}{p}\Big\{\trace(\Sigma_{W_t}\widetilde{K}_t)+\trace(A_tB_t\widetilde{L}_t \widetilde{K}_{t+1}{\bf E}\{||X_t - Y_t||_2^2\})\Big\},~t\in\mathbb{N}_1^n.\end{split}\label{per_time_cost_of_control} \end{align} } \par Next, we use Lemma \ref{lemma:scalar:separation} to derive a lower bound on \eqref{example2:problem_form_weak_sep}. { \begin{theorem}({Lower bound on \eqref{example2:problem_form_weak_sep}})\label{theorem:fund_limt_lqg} {For fixed coding policies, the minimum total-cost of control (per dimension) of \eqref{example2:problem_form_weak_sep}}, for any ``$n$'' and any $p$, however large, is ${\Gamma^{\IID,\op,ws}_{\ssum}}=\sum_{t=1}^n{\lqg}^{\op}_{t}$, with ${\lqg}^{\op}_{t}\geq{\lqg}^*_t$ such that \begin{align} {\lqg}_{t}^*=\sigma^2_{w_t}K_t + \alpha_t\beta_t L_t K_{t+1}D(R^*_t), \label{para_sol_cost_contr:eq.1} \end{align} where $D(R^*_t)$ is given by: \begin{align} D(R^*_t)&\sr{\triangleqangle}{=}\left\{ \begin{array}{ll} \frac{\sigma^2_{w_t}}{2^{2R^*_t}-\alpha^2_t}, & \mbox{$\forall{t}\in\mathbb{N}_1^{n-1}$} \\ 2^{-2R^*_n}, & \mbox{for $t=n$} \end{array} \right.,\label{min_dist:eq.1} \end{align} with the pair $(D(R^*_t),R^*_t)$ given by \eqref{para_sol_eq.1}-\eqref{values_of_xis}. \end{theorem} \begin{proof} See Appendix \ref{proof:theorem:fund_limt_lqg}. \end{proof} } {In what follows, we include a technical remark related to the {\it lower bound} on the total cost-rate function of Theorem \ref{theorem:fund_limt_lqg}.} {\begin{remark}(Technical remarks on Theorem \ref{theorem:fund_limt_lqg})\label{remark:technical_remarks_lower_bound} The expression of the lower bound in Theorem \ref{theorem:fund_limt_lqg} can be reformulated, for any $n$ and any $p$, to the equivalent expression of the total rate-cost function, denoted hereinafter by $\sum_{t=1}^nR({\lqg}_t^*)$, as follows: \begin{align} R({\lqg}_t^*)=\frac{1}{2}\log_2\left(\alpha_t^2+\frac{\alpha_t\beta_tL_tK_{t+1}\sigma^2_{w_t}}{{\lqg}^*_t-\sigma^2_{w_t}K_t}\right), ~t\in\mathbb{N}_1^{n-1},\label{rate_cost_function} \end{align} with $R({\lqg}_n^*)\equiv{R}_n^*$ as it is independent of ${\lqg}_n^*$. Interestingly, one can observe that by substituting in \eqref{rate_cost_function} the per-dimension version of \eqref{eq:LQG:scalar:Kt} we obtain \begin{align} {R}({\lqg}_t^*)&=\frac{1}{2}\log_2\left(\alpha_t^2\left(1+\frac{\frac{\beta_t^2K^2_{t+1}\sigma^2_{w_t}}{\beta_t^2K_{t+1}+N_t}}{\lqg^*_t-\sigma^2_{w_t}K_t}\right)\right),\\ &=\frac{1}{2}\left[\log_2(\alpha_t^2)+\log_2\left(1+\frac{\frac{\beta_t^2K^2_{t+1}\sigma^2_{w_t}}{\beta_t^2K_{t+1}+N_t}}{\lqg^*_t-\sigma^2_{w_t}K_t}\right)\right].\label{rate_cost_alternative_def} \end{align} The bound in \eqref{rate_cost_alternative_def} extends the result of \cite[Equation (16)]{kostina:2019} from an asymptotically average total-rate cost function to the case of a total-rate cost function where at each instant of time the rate-cost function is obtained using an allocation of $\lqg_t^*$ induced by the rate allocation of the quantized state estimation problem of Theorem \ref{theorem:fundam_lim_mmse}. Additionally, the expression in \eqref{rate_cost_alternative_def} reveals an interesting observation regarding the absolute minimum data rates for mean square stability of the plant (per dimension), i.e., $\sup_t{\bf E}\{(x_t)^2\}<\infty$ (see, e.g., \cite[Eq.
(25)]{nair:2004} for the definition) for a fixed finite time horizon. In particular, \eqref{rate_cost_alternative_def} suggests that for unstable time-varying plants with arbitrary disturbances modeled as in \eqref{example:controlled_process1}, and provided that at each time instant the cost of control (per dimension) is subject to communication constraints, i.e., ${\lqg}_t^*>\sigma^2_{{w}_t}K_t$ (without communication constraints the derivation is well known, since the separation principle holds without loss and ${\lqg}_t^*=\sigma^2_{{w}_t}K_t,~\forall{t}$ \cite{bertsekas:2005}), the minimum possible rate at each time instant $t$, namely ${R}({\lqg}_t^*)$, cannot be lower than $\log_2|\alpha_t|$ when $|\alpha_t|>1$. This result extends known observations for time-invariant plants (see, e.g., \cite[Remark 1]{kostina:2019}) to parallel and (possibly unbounded) time-varying plants for any fixed finite time horizon. \end{remark} } Next, we use Theorem \ref{theorem:achievability} to find an upper bound on ${\Gamma^{\IID,\op,ws}_{\ssum}}$. \begin{theorem}(Upper bound on \eqref{example2:problem_form_weak_sep})\label{theorem:achievability_lqg} Suppose that in the system of Fig. \ref{fig:scalar_ncs}, the fixed coding policies are obtained using the sequential causal $\dpcm$-based $\ecdq$ predictive coding scheme with an $\mathbb{R}^p$-valued lattice quantizer described in Theorem \ref{theorem:achievability}. Then, ${\Gamma^{\IID,\op,ws}_{\ssum}}=\sum_{t=1}^n{\lqg}^{\op}_t$ for any $n$, and any $p$, with the instantaneous cost of control $\{\lqg_t:~t\in\mathbb{N}_1^{n-1}\}$ (per dimension) upper bounded as follows: \begin{equation} {\lqg}^{\op}_t\leq\sigma^2_{w_t}K_t+ \alpha_t\beta_t L_t K_{t+1}\frac{4^{\frac{1}{p}}(2\pi{e}G_p)\sigma^2_{w_t}}{2^{2R^{\op}_t}-4^{\frac{1}{p}}(2\pi{e}G_p)\alpha_t^2}, \label{achievable_bound_ecdq_lqg} \end{equation} whereas, at $t=n$, ${\lqg}^{\op}_n=\sigma^2_{w_n}K_n$ and $R^{\op}_t$ is bounded above as in \eqref{achievable_bound_ecdq}. \end{theorem} \begin{IEEEproof} See Appendix \ref{proof:theorem:achievability_lqg}. \end{IEEEproof} \begin{remark}(Comments on Theorem \ref{theorem:achievability_lqg}) For infinitely large spatial components, i.e., $p\longrightarrow\infty$, the upper bound in \eqref{achievable_bound_ecdq_lqg} approaches the lower bound in Theorem \ref{theorem:fund_limt_lqg} because $G_p\longrightarrow\frac{1}{2\pi{e}}$ as $p\longrightarrow\infty$ (see, e.g., \cite[Lemma 1]{zamir-feder:1996b}). {Moreover, one can easily obtain the equivalent inverse problem of the total rate-cost function for the upper bound in \eqref{achievable_bound_ecdq_lqg}, similar to Remark \ref{remark:technical_remarks_lower_bound}}. \end{remark} { Next, we note the main technical differences of Theorems \ref{theorem:fund_limt_lqg}, \ref{theorem:achievability_lqg} compared to existing results in the literature. \begin{remark}(Connections to existing works) {\bf (1)} Our bounds on the $\lqg$ cost extend similar bounds derived in \cite[Theorems 7, 8]{khina:2018} to average total-rate constraints for any fixed finite time horizon. This constraint requires the use of a dynamic reverse-waterfilling optimization algorithm (derived in Theorem \ref{theorem:fundam_lim_mmse}) to optimally assign the rates at each instant of time over the whole fixed finite time horizon. In contrast, the fixed rate constraint (averaged across time) assumed in \cite[Theorems 7, 8]{khina:2018} does not require a similar optimization technique because at each instant of time the transmit rate is the same. 
Another structural difference compared to \cite[Theorem 7, 8]{khina:2018} is that in our bound we decouple the dependency of $D_{t-1}$ at each time instant. \\ {\bf (2)} Our results also extend the steady-state bounds on $\lqg$ cost obtained in \cite{silva:2011,silva:2016,tanaka:2016} to cost-rate functions constrained by total-rates obtained for any fixed finite time horizon. By assumption, the rate constraint in those papers implies fixed (uniform) rates at each instant of time whereas our bounds require a rate allocation algorithm to assign optimally the rate at each time slot. \end{remark} \subsection{Steady-state solution of Theorems \ref{theorem:fund_limt_lqg}, \ref{theorem:achievability_lqg}}\label{subsec:steady-state_solution_lqg}} {In this subsection, we study the steady-state case of the bounds derived in Theorems \ref{theorem:fund_limt_lqg}, \ref{theorem:achievability_lqg}. We start by making the following assumptions, i.e., \begin{itemize} \item[(A1)] we restrict the controlled process \eqref{example:controlled_process1} to be time invariant, which means that $A_t\equiv{A}=\diag\left(\alpha,\ldots,\alpha\right)\in\mathbb{R}^{p\times{p}}$, $B_t\equiv{B}=\diag\left(\beta,\ldots,\beta\right)\in\mathbb{R}^{p\times{p}}$,~$W_t\in\mathbb{R}^p\sim{\cal N}(0;\Sigma_{W})$, $\Sigma_{W}=\diag\left(\sigma^2_{w},\ldots,\sigma^2_{w}\right)\succ{0},~\forall{t}$; \item[(A2)] we restrict the design parameters that penalize the control cost \eqref{eq:cost} to also be time invariant, i.e., $\widetilde{Q}_t\equiv\diag(Q,\ldots,Q)$, $\widetilde{N}_t\equiv\diag(N, \ldots, N)$; \item[(A3)] we fix $D_t\equiv{D}$, $\forall{t}$. \end{itemize} } { We denote the steady-state value of the total cost of control, (per dimension) as follows: \begin{align} {\lqg}_{\infty}=\limsup_{n\longrightarrow\infty}\frac{1}{n}\sum_{t=1}^n{\lqg}_t.\label{steady_state_lqg} \end{align} \noindent{\bf Steady-state Performance.} The minimum achievable steady-state performance (per dimension) of the quantized $\lqg$ control problem of Fig. \ref{fig:scalar_ncs} under the weak separation principle can be cast to the following optimization problem: \begin{align} \Gamma^{\IID,\op,ws}_{\ssum,ss}=\min_{\substack{(f^*_t,~g_t):~t=1,\ldots,\infty\\ R_\infty\leq{R}}}{\lqg}_{\infty}.\label{performance_steady_state_lqg} \end{align} } {In the next two corollaries, we prove the lower and upper bounds on \eqref{performance_steady_state_lqg}. These bounds follow from the assumptions (A1)-(A3) and Corollaries \ref{corollary:steady_state_rev_water}, \ref{corollary:achievability_ss}.} { \begin{corollary}(Lower bound on \eqref{performance_steady_state_lqg})\label{corollary:steady_state_lqg_lower} The minimum achievable steady state performance of \eqref{performance_steady_state_lqg}, under the assumptions (A1)-(A3), for any $p$, is such that $\Gamma^{\IID,\op,ws}_{\ssum,ss}\geq{\lqg}^{*}_{\infty}$, where \begin{align} {\lqg}^{*}_{\infty}=\sigma^2_{w}K_\infty + \alpha\beta L_\infty K_\infty\frac{\sigma^2_{w}}{2^{2R_{\infty}^*}-\alpha^2},\label{steady_state_lqg_lower_bound} \end{align} where ${\lqg}^*_{\infty}\sr{\triangleqangle}{=}\lim_{n\longrightarrow\infty}\frac{1}{n}\sum_{t=1}^n{\lqg}_t^*$, with $L_\infty, K_\infty$ given by \eqref{eq:LQG:scalar:Kt_sted_s} and \eqref{ss_explicit_K}, respectively. \end{corollary} \begin{IEEEproof} The derivation follows from the assumptions (A1)-(A3). 
In particular, \begin{align} \frac{1}{n}\sum_{t=1}^n{\lqg}_t\stackrel{(i)}\geq&\frac{1}{n}\sum_{t=1}^n{\lqg}^{*}_t\nonumber\\ \stackrel{(ii)}=&\frac{1}{n}\sum_{t=1}^n\left(\sigma^2_{w}K_\infty + \alpha\beta L_\infty K_\infty{D}\right),\label{proof_ss_lqg_lower} \end{align} where $(i)$ follows from Theorem \ref{theorem:fund_limt_lqg}; $(ii)$ follows from the assumptions (A1)-(A3). In particular, by imposing the assumptions (A1), (A2) in Lemma \ref{lemma:scalar:separation} we obtain that the steady-state optimal $\lqg$ control (feedback) gain (per dimension) becomes: \begin{align}\label{eq:LQG:scalar:Kt_sted_s} {L}_\infty=\frac{\alpha\beta{K_\infty}}{\beta^2{K}_{\infty}+{N}}, \end{align} where $K_\infty$ is the positive solution of the quadratic equation: \begin{align}\label{eq:lem:LQG:Lt_sted_s} \beta^2{K}^2_\infty+\left((1-\alpha^2)N-\beta^2{Q}\right)K_\infty-QN =0 \end{align} given by the formula \begin{align} {K}_\infty=\frac{1}{2\beta^2}\left(\sqrt{\bar{f}^2+4\beta^2QN}-\bar{f}\right),\label{ss_explicit_K} \end{align} with $\bar{f}=(1-\alpha^2)N-\beta^2{Q}$. Finally, by assumption (A3), we obtain from Corollary \ref{corollary:steady_state_rev_water} that $D\equiv{D}(R_t^*)=\frac{\sigma^2_{w}}{2^{2R_{\infty}^*}-\alpha^2}$, $\forall{t}$. The result follows once we let $n\longrightarrow\infty$ in \eqref{proof_ss_lqg_lower}. This completes the derivation. \end{IEEEproof} } { \begin{corollary}(Upper bound on \eqref{performance_steady_state_lqg})\label{corollary:steady_state_lqg_upper} The minimum achievable steady state performance of \eqref{performance_steady_state_lqg}, under the assumptions (A1)-(A3), for any $p$, is upper bounded as follows \begin{align} \Gamma^{\IID,\op,ws}_{\ssum,ss}\leq\sigma^2_{w}K_\infty + \alpha\beta L_\infty K_\infty\frac{4^{\frac{1}{p}}(2\pi{e}G_p)\sigma^2_{w}}{2^{2R^{\op}_{\infty}}-4^{\frac{1}{p}}(2\pi{e}G_p)\alpha^2},\label{steady_state_lqg_upper_bound} \end{align} where $R^{\op}_\infty$ is upper bounded by \eqref{achievable_bound_ecdq_steadu_state} and $K_\infty$, $L_\infty$ are given by \eqref{ss_explicit_K} and \eqref{eq:LQG:scalar:Kt_sted_s}, respectively. \end{corollary} \begin{IEEEproof} We omit the derivation because it is similar to the one obtained for the lower bound. In contrast to the lower bound, here we make use of Theorem \ref{theorem:achievability_lqg} and Corollary \ref{corollary:achievability_ss}. \end{IEEEproof} Note that for Corollaries \ref{corollary:steady_state_lqg_lower}, \ref{corollary:steady_state_lqg_upper} we can remark the following. \begin{remark}(Comments on Corollaries \ref{corollary:steady_state_lqg_lower}, \ref{corollary:steady_state_lqg_upper})\label{remark:conditions_stability} The lower bound of Corollary \ref{corollary:steady_state_lqg_lower} (per dimension) is precisely the bound obtained by Tatikonda et al. in \cite[\S{V}]{tatikonda:2004} (see also \cite[\S{6}]{tatikonda:1998}) for scalar time-invariant Gauss-Markov processes. The upper bound of Corollary \ref{corollary:steady_state_lqg_upper} (per dimension) is similar to the upper bounds derived in \cite{silva:2011,silva:2016,khina:2018}. It is also similar to the upper bound obtained in \cite{tanaka:2016}, although their space-filling term is obtained differently. \end{remark} } { \section{Discussion and Open Questions}\label{sec:discussion} } { In this section, we discuss certain open problems that can be addressed based on this work, as well as certain observations that stem from our main results. 
\subsection{Dynamic reverse-waterfilling algorithm for multivariate Gaussian processes} Further to the technical observation raised in Remark \ref{remark:discussion:theorem:mmse}, {\bf (1)}, it seems that the simultaneous diagonalization of $(A_t, \Sigma_{W_t}, \Delta_t, \Lambda_t)$ by an orthogonal matrix is sufficient in order to extend the derivation of a dynamic reverse-waterfilling algorithm to the more general case of multivariate time-varying Gauss-Markov processes. Our claim is further supported by the fact that for time-invariant multidimensional Gauss-Markov processes simultaneous diagonalization is shown to be sufficient for the derivation of a reverse-waterfilling algorithm in \cite[Corollary 1]{stavrou:2020}. \subsection{Non-Gaussian processes} Although not addressed in this paper, the non-asymptotic lower bounds derived in Theorems \ref{theorem:fundam_lim_mmse}, \ref{theorem:fund_limt_lqg} can be extended to linear models driven by independent non-Gaussian noise processes ${w_t}\sim(0;\sigma^2_{w_t})$ using entropy power inequalities \cite{kostina:2019}. \subsection{Packet drops with instantaneous ACK} It would be interesting to extend our setup to the more practical scenario of communication links prone to packet drops. In such case one needs to take into account the various packet erasure models (e.g., $\IID$ or Markov models) to study their impact on the non-asymptotic bounds derived for the two application examples of this paper. Existing results for uniform (fixed) rate allocation are already studied in \cite{khina:2018}. } \section{Conclusion}\label{sec:conclusions} \par {We revisited the sequential coding of correlated sources with independent spatial components to use it in the derivation of non-asymptotic, finite dimensional lower and upper bounds for two application examples in stochastic systems. Our application examples included a parallel time-varying quantized state-estimation problem subject to a total $\mse$ distortion constraint and a parallel time-varying quantized $\lqg$ closed-loop control system with linear quadratic cost. For the latter example, its lower bound revealed the minimum possible rates for mean square stability of the plant at each instant of time when the system operates for a fixed finite time horizon. } \appendices \begin{comment} \section{Proof of Theorem \ref{theorem:markov_source}}\label{appendix:proof:theorem:markov_source} First, observe that for any point $(R_{1,n},D_{1,n})\in{\cal R}^{\IID,\op,m}$ there exist auxiliary $\rvs$ and deterministic functions that satisfy the constraint set of the rate-distortion region in \eqref{single_letter_char_iid_markov}. Choose a fixed positive integer $m\in\{1,\ldots,t\}$,~$t\in\mathbb{N}_1^n$. Then, we obtain \begin{align} \begin{split} &{\cal R}_{{\ssum}}^{\IID,\op,m}(D_{1,n})\equiv\sum_{t=1}^nR_t\\ &\geq{I}(X_1;S_1)+\ldots+{I}(X_{n+1-m,n};Y_n|S_{1,n-1})\\ &\stackrel{(a)}=I(X_{1,n};S_{1,n-1},Y_{n})\\%\sum_{t=1}^{n-1}{I}(X_{t+1-m,t};S_t|S_{1,t-1})\sum_{t=1}^n{I}(X_{1,t};Y_t|S_{1,t-1})\\ &\stackrel{(b)}=I(X_{1,n};S_{1,n-1},Y_{1,n})\\%\sum_{t=0}^n{I}(X_{t+1-m,t};Y_t|Y_{1,t-1},S_{1,t-1})\\ &\geq{I}(X_{1,n};Y_{1,n})\\ &\geq\min_{\mbox{the constraints in \eqref{eq:sumrate1} hold}}I(X_{1,n};Y_{1,n})=R_{\ssum}^{\IID,m}(D_{1,n}), \end{split}\label{upper_bound_proof} \end{align} where $(a)$ stems from the fact that the conditional independence constraints in \eqref{single_letter_char_iid_markov} hold; $(b)$ stems from the fact that $Y_{1,t-1}=g_{}(S_{1,t-1}),~\forall{t}$. 
Note that ${I}(X_{1,n};Y_{1,n})=\sum_{t=1}^n{I}(X_{t+1-m,t};Y_t|Y_{1,t-1})$ because the conditional independence constraints in \eqref{eq:sumrate1} are satisfied. Thus, a lower bound to the minimum achievable total-rate is obtained. \par Now, because a possible realization of the encoded symbols is always $\{S_t=Y_t:~\forall{t}\in\mathbb{N}_1^{n-1}\}$, we obtain \begin{align} \begin{split} &\sum_{t=0}^n{R}_t=\min_{\mbox{the constraints of \eqref{single_letter_char_iid_markov} hold}}I(X_{1,n};Y_{n},S_{1,n-1})\\ &\leq\min_{\mbox{the constraints in \eqref{eq:sumrate1} hold}}I(X_{1,n};Y_{1,n})={\cal R}_{{\ssum}}^{\IID,m}(D_{1,n}). \end{split}\label{lower_bound_proof1} \end{align} This completes the proof. $\QEDA$ \end{comment} \section{Proof of Theorem \ref{theorem:fundam_lim_mmse}}\label{proof:theorem:fundam_lim_mmse} Since the source is modeled as a time-varying first-order Gauss-Markov process, then from \eqref{eq:sumrate2} we obtain: \begin{align} \begin{split} &{\cal R}_{{\ssum}}^{\IID,\op,1}(D)\geq{\cal R}_{{\ssum}}^{\IID,1}(D),\\ &=\min_{\substack{\frac{1}{n}\frac{1}{p}\sum_{t=1}^n{\bf E}\left\{||X_t-Y_t||_2^2\right\}\leq{D},\\Y_1\leftrightarrow{X_{1}}\leftrightarrow{X}_{2,n},\\~Y_t\leftrightarrow(X_{t},Y_{1,t-1})\leftrightarrow(X_{1,t-1},{X}_{t+1,n})}}\frac{1}{p}\sum_{t=1}^nI(X_{t};Y_t|Y_{1,t-1})\end{split}.\label{exam:eq:sumrate1} \end{align} It is trivial to see that the $\rhs$ term in \eqref{exam:eq:sumrate1} corresponds precisely to the sequential or $\nrdf$ obtained for parallel Gauss-Markov processes with a total $\mse$ distortion constraint which is a simple generalization of the scalar-valued problem that has already been studied in \cite{stavrou:2018lcss}. Therefore, using the analysis of \cite{stavrou:2018lcss} we can obtain: \begin{align} &{\cal R}_{\ssum}^{\IID,1}(D)\nonumber\\ &\stackrel{(a)}=\min_{\mbox{contraint in \eqref{exam:eq:sumrate1}}}\frac{1}{p}\sum_{t=1}^n\left\{h(X_t|Y_{1,t-1})-h(X_t|Y_{1,t})\right\}\nonumber\\ &\stackrel{(b)}=\frac{1}{p}\min_{\substack{\Delta_t\succeq{0},~t\in\mathbb{N}_1^n\\ \frac{1}{n}\frac{1}{p}\sum_{t=1}^n\trace{(\Delta_t)}\leq{D}}} \sum_{t=1}^n \max\left[0,\frac{1}{2}\log_2\left(\frac{|\Lambda_t|}{|\Delta_t|}\right)\right],\nonumber\\ &=\min_{\substack{D_t\geq{0},~t\in\mathbb{N}_1^n\\\frac{1}{n}\sum_{t=1}^nD_t\leq{D}}} \sum_{t=1}^n \max\left[0,\frac{1}{2}\log_2\left(\frac{\lambda_t}{D_t}\right)\right],\label{scalar:eq.solution} \end{align} {where $(a)$ follows by definition; $(b)$ follows from the fact that $h(X_t|Y_{1,t-1})=\frac{1}{2}\log_2(2\pi{e})^p|\Lambda_t|$ where $\Lambda_t=\diag\left(\lambda_t,\ldots\lambda_t\right)\in\mathbb{R}^{p\times{p}}$ with $\lambda_t=\alpha^2_{t-1}D_{t-1}+\sigma^2_{w_{t-1}}$, and that $h(X_t|Y_{1,t})=\frac{1}{2}\log_2(2\pi{e})^p|\Delta_t|$ where $\Delta_t=\diag\left(D_t,\ldots,D_t\right)\in\mathbb{R}^{p\times{p}}$ for $D \in [0, \infty)$. The optimization problem of (\ref{scalar:eq.solution}) is already solved in \cite[Theorem 2]{stavrou:2018lcss} and is given by \eqref{para_sol_eq.1}-\eqref{values_of_xis}.} \section{Proof of Theorem \ref{theorem:achievability}}\label{proof:theorem:achievability} {In this proof we bound the rate performance of the $\dpcm$ scheme described in \S\ref{subsec:upper_bound_mmse} at each time instant for any fixed finite time $n$ using an $\ecdq$ scheme that utilizes the forward Gaussian test-channel realization that achieves the lower bound of Theorem \ref{theorem:fundam_lim_mmse}. 
In this scheme in fact we replace the quantization noise with an additive Gaussian noise with the same second moments.\footnote{See e.g., \cite{zamir-feder:1996} or \cite[Chapter 5]{zamir:2014} and the references therein.} First note that the Gaussian test-channel linear realization of the lower bound in Theorem \ref{theorem:fundam_lim_mmse} is known to be\cite{stavrou:2018lcss} \begin{align} Y_t=H_tX_t+(I_p-H_t)A_{t-1}Y_{t-1}+H^{\frac{1}{2}}V_t,~V_t\sim{\cal N}(0;\Delta_t),\label{realization} \end{align} where $H_t\sr{\triangleqangle}{=}{I_p-\Delta_t\Lambda^{-1}_t}\succeq{0}$,~$\Delta_t\sr{\triangleqangle}{=}\diag(D_t,\ldots,D_t)\succ{0}$,~$\Lambda_t=\diag\left(\lambda_t,\ldots\lambda_t\right)\succ{0}$.\\ {\bf Pre/Post Filtered ECDQ with multiplicative factors for parallel sources.} \cite{zamir-feder:1996} First, we consider a $p-$dimensional lattice quantizer $Q_p$ \cite{conway-sloane1999} such that \begin{align} {\bf E}\{Z_tZ_t^{\mbox{\tiny T}}\}=\Sigma_{V^c_t}, ~\Sigma_{V_t^c}\succ{0},\nonumber \end{align} where $Z_t\in\mathbb{R}^p$ is a random dither vector generated both at the encoder and the decoder independent of the input signals $\widehat{X}_t$ and the previous realizations of the dither, uniformly distributed over the basic Voronoi cell of the $p-$dimensional lattice quantizer $Q_p$ such that $V_t^c\sim{Unif}(0;\Sigma_{V_t^c})$. At the {\it encoder} the lattice quantizer quantize $H_{t}^{\frac{1}{2}}\widehat{X}_t+Z_t$, that is, $Q_p(H_{t}^{\frac{1}{2}}\widehat{X}_t+Z_t)$ ,where $\widehat{X}_t$ is given by \eqref{dpcm_enc}. Then, the encoder applies entropy coding to the output of the quantizer and transmits the output of the entropy coder. At the {\it decoder} the coded bits are received and the output of the quantizer is reconstructed, i.e., $Q_p(H_{t}^{\frac{1}{2}}\widehat{X}_t+Z_t)$. Then, it generates an estimate by subtracting ${Z_t}$ from the quantizer's output and multiplies the result by $\Phi_t$ as follows: \begin{align} Y_t=\Phi_t(Q_p(H_{t}^{\frac{1}{2}}\widehat{X}_t+Z_t)-Z_t),\label{dec_ecdq} \end{align} where $\Phi_t=H_{t}^{\frac{1}{2}}$. 
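To make the encoding/decoding steps in \eqref{dec_ecdq} concrete, the following is a minimal numerical sketch for the scalar case ($p=1$), where a uniform scalar quantizer with step size $\Delta$ stands in for the lattice quantizer $Q_p$ and the entropy-coding stage is omitted; all numerical values are illustrative assumptions only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def scalar_ecdq(x_hat, h, step, rng):
    # shared dither, uniform over the quantizer cell, known to encoder and decoder
    z = rng.uniform(-step / 2, step / 2)
    h_sqrt = np.sqrt(h)
    # encoder: quantize H^{1/2} x_hat + Z_t with the (scalar) lattice quantizer
    q_out = step * np.round((h_sqrt * x_hat + z) / step)
    # decoder: subtract the dither and multiply by Phi_t = H^{1/2}
    return h_sqrt * (q_out - z)

# with subtractive dither, Y_t - H x_hat behaves like H^{1/2} times a uniform
# quantization noise, so its variance should be close to h * step**2 / 12
h, step = 0.8, 0.25
x_hat = rng.normal(0.0, 1.0, size=20000)
err = np.array([scalar_ecdq(x, h, step, rng) for x in x_hat]) - h * x_hat
print(err.var(), h * step ** 2 / 12)
\end{verbatim}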
The {\it coding rate at each instant of time} of the conditional entropy of the $\mse$ quantizer is given by\cite{zamir-feder:1996} \begin{align} H(Q_p|Z_t)&={I}(H^{\frac{1}{2}}\widehat{X}_t;H\widehat{X}_t+H^{\frac{1}{2}}V_t^c)\nonumber\\ &\stackrel{(a)}={I}(H^{\frac{1}{2}}\widehat{X}_t;H\widehat{X}_t+H^{\frac{1}{2}}V_t)+{\cal D}(V^c_t||V_t)\nonumber\\ &\qquad-{\cal D}(H\widehat{X}_t+H^{\frac{1}{2}}V^c_t||H\widehat{X}_t+H^{\frac{1}{2}}V_t)\nonumber\\ &\stackrel{(b)}\leq{I}(H^{\frac{1}{2}}\widehat{X}_t;H\widehat{X}_t+H^{\frac{1}{2}}V_t)+{\cal D}(V^c_t||V_t)\nonumber\\ &\stackrel{(c)}\leq{I}(H^{\frac{1}{2}}\widehat{X}_t;H\widehat{X}_t+H^{\frac{1}{2}}V_t)+\frac{p}{2}\log(2\pi{e}G_p)\nonumber\\ &\stackrel{(d)}={I}(X_t;Y_t|Y_{1,t-1})+\frac{p}{2}\log(2\pi{e}G_p) \label{cond_entropy_quant} \end{align} where $V_t^c\in\mathbb{R}^p$ is the (uniform) coding noise in the $\ecdq$ scheme and $V_t$ is the corresponding Gaussian counterpart; $(a)$ follows because the two random vectors $V_t^c, V_t$ have the same second moments hence we can use the identity ${\cal D}(x||x')=h(x')-h(x)$; $(b)$ follows because ${\cal D}(H\widehat{X}_t+H^{\frac{1}{2}}V_t^c||H\widehat{X}_t+H^{\frac{1}{2}}V_t)\geq{0}$; $(c)$ follows because the divergence of the coding noise from Gaussianity is less than or equal to $\frac{p}{2}\log(2\pi{e}G_p)$ \cite{zamir-feder:1996b} where $G_p$ is the dimensionless normalized second moment of the lattice \cite[Definition 3.2.2]{zamir:2014}; $(d)$ follows from data processing properties, i.e., $I(X_t;Y_t|Y_{1,t-1})\stackrel{(\ast)}=I(X_t;Y_t|Y_{t-1})\stackrel{(\ast\ast)}=I(\widehat{X}_t;\widehat{Y}_t)\stackrel{(\ast\ast\ast)}={I}(H^{\frac{1}{2}}\widehat{X}_t;H\widehat{X}_t+H^{\frac{1}{2}}V_t)$ where $(\ast)$ follows from the realization of \eqref{realization}, $(\ast\ast)$ follows from the fact that $\widehat{X}_t$ and $\widehat{Y}_t$ (obtained by \eqref{dpcm_dec}) are independent of $Y_{t-1}$, and $(\ast\ast\ast)$ follows from \eqref{dpcm_enc}, \eqref{realization} and the fact that $H$ is an invertible operation. Since we assume joint (memoryless) entropy coding with lattice quantizers, then, the total coding rate per dimension is obtained as follows\cite[Chapter 5.4]{cover-thomas2006} \begin{align} \sum_{t=1}^n\frac{{\bf E}|S_t|}{p}&\leq\frac{1}{p}\sum_{t=1}^n\left({H}(Q_p|Z_t)+1\right)\nonumber\\ &\stackrel{(e)}\leq\frac{1}{p}\sum_{t=1}^n{I}({X}_t;{Y}_t|Y_{1,t-1})+\frac{n}{2}\log(2\pi{e}G_p)+\frac{n}{p}\nonumber\\ &\stackrel{(f)}=\frac{1}{2p}\sum_{t=1}^n\log_2\frac{|\Lambda_t|}{|\Delta_t|}+\frac{n}{2}\log(2\pi{e}G_p)+\frac{n}{p},\label{ecdq_ineq} \end{align} where $(e)$ follows from \eqref{cond_entropy_quant}; $(f)$ follows from the derivation of Theorem \ref{theorem:fundam_lim_mmse}. The derivation is complete once we minimize both sides of inequality in \eqref{ecdq_ineq} with the appropriate constraint sets. 
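For a quick numerical sense of this inequality (a sketch, assuming all logarithms are taken base $2$), the per-dimension, per-time-step overhead of the $\ecdq$ scheme over the lower-bound rate of Theorem \ref{theorem:fundam_lim_mmse} is $\frac{1}{2}\log_2(2\pi{e}G_p)+\frac{1}{p}$; the snippet below evaluates it for the scalar lattice, for which $G_1=\frac{1}{12}$, and for a good high-dimensional lattice, for which $G_p\rightarrow\frac{1}{2\pi{e}}$.
\begin{verbatim}
import numpy as np

def ecdq_overhead(p, G_p):
    # per-dimension, per-time-step overhead: space-filling loss + entropy-coding slack
    return 0.5 * np.log2(2 * np.pi * np.e * G_p) + 1.0 / p

print(ecdq_overhead(p=1, G_p=1 / 12))                      # scalar quantizer: ~1.25 bits
print(ecdq_overhead(p=10**6, G_p=1 / (2 * np.pi * np.e)))  # good lattice, large p: ~0 bits
\end{verbatim}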
} \section{Proof of Theorem \ref{theorem:fund_limt_lqg}}\label{proof:theorem:fund_limt_lqg} { Note that from \eqref{total_min_lqg} we obtain \begin{align} \begin{split} &{\Gamma^{\IID,\op,ws}}=\sum_{t=1}^n{\lqg}^{\op}_t\\ &=\frac{1}{p}\sum_{t=1}^n\Big\{\trace(\Sigma_{W_t}\widetilde{K}_t)\\ &+\trace(A_tB_t\widetilde{L}_t \widetilde{K}_{t+1}{\bf E}\{||X_t - Y_t||_2^2\})\Big\}\\ &\stackrel{(a)}\geq\frac{1}{p}\sum_{t=1}^n\Big\{\trace(\Sigma_{W_t}\widetilde{K}_t)\\ &+\trace(A_tB_t\widetilde{L}_t \widetilde{K}_{t+1}{\bf E}\{||X_t - {\bf E}\{X_t|S_{1,t}\}||_2^2\})\Big\}\\ &\stackrel{(b)}\geq\frac{1}{p}\sum_{t=1}^n\Bigg\{\trace(\Sigma_{W_t}\widetilde{K}_t)+\trace\Bigg(A_tB_t\widetilde{L}_t \widetilde{K}_{t+1}\\ &{\bf E}_{\bar{S}_{1,t-1}}\left\{\frac{1}{2\pi{e}}2^{\frac{2}{p}h(X_t|S_{1,t-1}=\bar{S}_{1,t-1})}\right\}2^{-2R^*_t}\Bigg)\Bigg\}\\ &\stackrel{(c)}\geq\frac{1}{p}\sum_{t=1}^n\Bigg\{\trace(\Sigma_{W_t}\widetilde{K}_t)+\trace\Bigg(A_tB_t\widetilde{L}_t \widetilde{K}_{t+1}\\ &\left\{\frac{1}{2\pi{e}}2^{\frac{2}{p}h(X_t|S_{1,t-1})}2^{-2R^*_t}\right\}\Bigg)\Bigg\},\\ &\stackrel{(d)}\geq\sum_{t=1}^n\Big\{\sigma^2_{w_t}K_t+\alpha_t\beta_tL_tK_{t+1}D(R_t^*)\Big\}\sr{\triangleqangle}{=}\sum_{t=1}^n{\lqg}^*_t, \end{split}\label{lower_bound_lqg} \end{align} where $(a)$ follows from the fact that $Y_t$ is $\mathbb{S}_{1,t}-$measurable and the $\mmse$ is obtained for $Y_t={\bf E}\{X_t|S_{1,t}\}$; $(b)$ follows from the fact that ${\bf E}\{||X_t - {\bf E}\{X_t|S_{1,t}\}||_2^2\})={\bf E}_{\bar{S}_{1,t-1}}\left\{{\bf E}\{||X_t - {\bf E}\{X_t|S_{1,t}\}||_2^2|S_{1,t-1}=\bar{S}_{1,t-1}\}\right\}$, where ${\bf E}_{\bar{S}_{1,t}}\{\cdot\}$ is the expectation with respect to some vector $\bar{S}_{1,t-1}$ that is distributed similarly to $S_{1,t-1}$, also from the $\mse$ inequality in \cite[Theorem 17.3.2]{cover-thomas2006} and finally from the fact that $R^*_t\geq{0}$, where $R^*_t=\frac{1}{p}\left\{h^*(X_t|Y_{1,t-1})-h^*(X_t|Y_{1,t})\right\}$ (see the derivation of Theorem \ref{theorem:fundam_lim_mmse}, ({\bf 1})) with $h^*(X_t|Y_{1,t-1})$, $h^*(X_t|Y_{1,t})$ being the minimized values in \eqref{scalar:eq.solution}; $(c)$ follows from Jensen's inequality \cite[Theorem 2.6.2]{cover-thomas2006}, i.e., ${\bf E}_{\bar{S}_{1,t-1}}\left\{2^{\frac{2}{p}h(X_t|S_{1,t-1}=\bar{S}_{1,t-1})}\right\}\geq{2}^{\frac{2}{p}h(X_t|S_{1,t-1})}$; $(d)$ follows from the fact that $\{h(X_t|S_{1,t-1})=h(A_{t-1}X_{t-1}+B_{t-1}U_{t-1}+W_{t-1}|S_{1,t-1}):~t\in\mathbb{N}_2^n\}$ is completely specified from the independent Gaussian noise process $\{W_{t-1}:~t\in\mathbb{N}_2^n\}$ because $\{U_{t-1}=g_{t}(S_{1,t-1}):~t\in\mathbb{N}_2^n\}$ (see \eqref{example2:decoding}) are constants conditioned on $S_{1,t-1}$. Therefore, $h(X_t|S_{1,t-1})$ is conditionally Gaussian thus equivalent to $h(X_t|Y_{1,t-1})$. This further means that $\frac{1}{2\pi{e}}{2}^{\frac{2}{p}h(X_t|Y_{1,t-1})}2^{-2R^*_t}\geq\frac{1}{2\pi{e}}{2}^{\frac{2}{p}h^*(X_t|Y_{1,t-1})}2^{-2R^*_t}\stackrel{(\star)}=\frac{1}{2\pi{e}}{2}^{\frac{1}{p}\log_2(2\pi{e})^p|\Delta_t^*|}\stackrel{(\star\star)}=\min\{D_t\}\equiv{D}(R_t^*)$, where $(\star)$ follows because $h^*(X_t|Y_{1,t})=\frac{1}{2}\log_2(2\pi{e})^p|\Delta_t^*|$ and $(\star\star)$ follows because $\Delta_t^*=\diag(\min\{D_t\},\ldots,\min\{D_t\})$.} \par {It remains to find $D(R_t^*)$ at each time instant in \eqref{lower_bound_lqg}. 
To do so, we reformulate the solution of the dynamic reverse-waterfilling solution in \eqref{para_sol_eq.1} as follows: \begin{align} &nR\equiv{\cal R}_{\ssum}^{\IID,1}=\sum_{t=1}^n{R^*_t}\equiv\frac{1}{2}\sum_{t=1}^n \log_2\left(\frac{\lambda_t}{D_t}\right)\nonumber\\ &=\frac{1}{2}\left\{\underbrace{\cancelto{0}{\log_2(\lambda_1)}}_{\text{initial~step}}+\sum_{t=1}^{n-1}\log_2\left(\alpha^2_t+\frac{\sigma^2_{w_t}}{D_t}\right)-\underbrace{\log_2 D_n}_{\text{final~step}}\right\}.\label{reform_object_func} \end{align} From \eqref{reform_object_func} we observe that at each time instant, the rate $R^*_t$ is a function of only one distortion $D_t$ since we have now decoupled the correlation with $D_{t-1}$. Moreover, we can assume without loss of generality, that the initial step is zero because it is independent of $D_0$. Thus, from \eqref{reform_object_func}, we can find at each time instant a $D_t\in(0,\infty)$ such that the rate is $R_t^*\in[{0},\infty)$. Since the rate distortion problem is equivalent to the distortion rate problem (see, e.g., \cite[Chapter 10]{cover-thomas2006}) we can immediately compute the total-distortion rate function, denoted by $D^{\IID,1}_{\ssum}(R)$, as follows: \begin{align} D^{\IID,1}_{\ssum}(R)\sr{\triangleqangle}{=}\sum_{t=1}^nD(R_t^*)=\sum_{t=1}^{n-1} \frac{\sigma^2_{w_t}}{2^{2R^*_t}-\alpha_t^2}+2^{-2R^*_n}.\label{scalar:dist_rate1} \end{align} Substituting $D(R_t^*)$ at each time instant in \eqref{lower_bound_lqg} the result follows.}\\ This completes the proof. \section{Proof of Theorem \ref{theorem:achievability_lqg}}\label{proof:theorem:achievability_lqg} {Note that from Lemma \ref{lemma:scalar:separation}, \eqref{total_min_lqg}, we obtain: \begin{align} \begin{split} &{\Gamma^{\IID,\op,ws}_{\ssum}}=\frac{1}{p}\sum_{t = 1}^n\Big\{\trace(\Sigma_{W_t}\widetilde{K}_t)\\ &+\trace(A_tB_t\widetilde{L}_t \widetilde{K}_{t+1}{\bf E}\{||X_t - Y_t||_2^2\})\Big\}\\ &=\frac{1}{p}\sum_{t = 1}^n\Big\{\trace(\Sigma_{W_t}\widetilde{K}_t)\\ &+\trace(A_tB_t\widetilde{L}_t \widetilde{K}_{t+1}D(R^{\op}_t))\Big\}\\ &\stackrel{(a)}\leq\sum_{t=1}^{n-1}\left\{\sigma^2_{w_t}K_t+ \alpha_t\beta_t L_t K_{t+1}\frac{4^{\frac{1}{p}}(2\pi{e}G_p)\sigma^2_{w_t}}{2^{2R^{\op}_t}-4^{\frac{1}{p}}(2\pi{e}G_p)\alpha_t^2}\right\}\\ &+\sigma^2_{w_n}K_n, \end{split}\label{proof:upper_bound_lqg_cost} \end{align} where $(a)$ {is obtained in two steps. As a first step, expand the inequality obtained in Theorem \ref{theorem:achievability}, \eqref{achievable_bound_ecdq} for the time horizon $n$ as follows \begin{align} R^{\op}_1+\ldots+R^{\op}_n\leq{R}_1^*+\ldots+R_n^*+c \end{align} where $c=\frac{n}{p}\log_2(2\pi{e}G_p)+\frac{n}{p}$. As a second step, we reformulate $\{R_t^*:~t\in\mathbb{N}_1^n\}$ similar to \eqref{reform_object_func} (in the derivation of Theorem \ref{theorem:fund_limt_lqg}) so that we decouple the dependence on $D_{t-1}$ at each time step.} Finally, for each $R^{\op}_t,~t=1,2\ldots,n,$ we solve the resulting inequality with respect to $D(R_t^{\op})$ which gives \begin{align} \begin{split} &D(R^{\op}_t)\leq\frac{4^{\frac{1}{p}}(2\pi{e}G_p)\sigma^2_{w_{t}}}{2^{2R^{\op}_{t}}-4^{\frac{1}{p}}(2\pi{e}G_p)\alpha_{t}^2},~t\in\mathbb{N}_1^{n-1}. \end{split}\label{achievable_distortions} \end{align} Observe that the last step $t=n$ is not needed because in \eqref{total_min_lqg} we have ${K}_{n+1}=0$. This completes the proof.} { \section*{Acknowledgement} } {The authors wish to thank the Associate Editor and the anonymous reviewers for their valuable comments and suggestions. 
They are also indebted to Prof. T. Charalambous for reading the paper and proposing the idea of bisection method for Algorithm \ref{algo1}. They are also grateful to Prof. J. {\O}stergaard for fruitful discussions on technical issues of the paper.} \bibliographystyle{IEEEtran}
\section{Introduction} Object detection is one of the fundamental tasks in computer vision, and has been researched extensively. In the past few years, state-of-the-art methods for this task are based on deep convolutional neural networks (such as Faster R-CNN \cite{ren2015faster}, RetinaNet~\cite{lin2017feature}), due to their impressive performance. Typically, the designs of object detection networks are much more complex than those for image classification, because the former need to localize and classify multiple objects in an image simultaneously while the latter only need to output image-level labels. Due to its complex structure and numerous hyper-parameters, designing effective object detection networks is more challenging and usually needs much manual effort. On the other hand, Neural Architecture Search (NAS) approaches~\cite{ghiasi2019fpn,nekrasov2018fast,zoph2016neural} have been showing impressive results on automatically discovering top-performing neural network architectures in large-scale search spaces. Compared to manual designs, NAS methods are data-driven instead of experience-driven, and hence need much less human intervention. As defined in~\cite{elsken2018neural}, the workflow of NAS can be divided into the following three processes: $1$) sampling architecture from a search space following some search strategies; $2$) evaluating the performance of the sampled architecture; and $3$) updating the parameters based on the performance. One of the main problems prohibiting NAS from being used in more realistic applications is its search efficiency. The evaluation process is the most time consuming part because it involves a full training procedure of a neural network. To reduce the evaluation time, in practice a proxy task is often used as a lower cost substitution. In the proxy task, the input, network parameters and training iterations are often scaled down to speedup the evaluation. However, there is often a performance gap for samples between the proxy tasks and target tasks, which makes the evaluation process biased. How to design proxy tasks that are both accurate and efficient for specific problems is a challenging problem. Another solution to improve search efficiency is constructing a supernet that covers the complete search space and training candidate architectures with shared parameters~\cite{liu2018darts, pham2018enas}. However, this solution leads to significantly increased memory consumption and restricts itself to small-to-moderate sized search spaces. To our knowledge, studies on efficient and accurate NAS approaches to object detection networks are rarely touched, despite its significant importance. To this end, we present a fast and memory saving NAS method for object detection networks, which is capable of discovering top-performing architectures within significantly reduced search time. Our overall detection architecture is based on FCOS~\cite{tian2019fcos}, a simple anchor-free one-stage object detection framework, in which the feature pyramid network and prediction head are searched using our proposed NAS method. Our main contributions are summarized as follows. \begin{itemize} \item In this work, we propose a fast and memory-efficient NAS method for searching both FPN and head architectures, with carefully designed proxy tasks, search space and evaluation strategies, which is able to find top-performing architectures over $3,000$ architectures using $28$ GPU-days only. Specifically, this high efficiency is enabled with the following designs. 
$-$ Developing a fast proxy task training scheme by skipping the backbone finetuning stage; $-$ Adapting progressive search strategy to reduce time cost taken by the extended search space; $-$ Using a more discriminative criterion for evaluation of searched architectures. $-$ Employing an efficient anchor-free one-stage detection framework with simple post processing; \item Using NAS, we explore the workload relationship between FPN and head, proving the importance of weight sharing in head. \item We show that the overall structure of NAS-FCOS is general and flexible in that it can be equipped with various backbones including MobileNetV$2$, ResNet-$50$, ResNet-$101$ and ResNeXt-$101$, and surpasses state-of-the-art object detection algorithms using comparable computation complexity and memory footprint. More specifically, our model can improve the AP by $1.5\sim3.5$ points on all above models comparing to their FCOS counterparts. \end{itemize} \section{Related Work} \subsection{Object Detection} The frameworks of deep neural networks for object detection can be roughly categorized into two types: one-stage detectors~\cite{lin2017focal} and two-stage detectors~\cite{he2017mask, ren2015faster}. Two-stage detection frameworks first generate class-independent region proposals using a region proposal network (RPN), and then classify and refine them using extra detection heads. In spite of achieving top performance, the two-stage methods have noticeable drawbacks: they are computationally expensive and have many hyper-parameters that need to be tuned to fit a specific dataset. In comparison, the structures of one-stage detectors are much simpler. They directly predict object categories and bounding boxes at each location of feature maps generated by a single CNN backbone. Note that most state-of-the-art object detectors (including both one-stage detectors~\cite{lin2017focal,liu2016ssd, yolov3} and two-stage detectors~\cite{ren2015faster}) make predictions based on anchor boxes of different scales and aspect ratios at each convolutional feature map location. However, the usage of anchor boxes may lead to high imbalance between object and non-object examples and introduce extra hyper-parameters. More recently, anchor-free one-stage detectors~\cite{kong2019foveabox,law2018cornernet,tian2019fcos,zhou2019objects,zhu2019fsaf} have attracted increasing research interests, due to their simple fully convolutional architectures and reduced consumption of computational resources. \subsection{Neural Architecture Search} NAS is usually time consuming. We have seen great improvements from $24,000$ GPU-days~\cite{zoph2016neural} to $0.2$ GPU-day~\cite{zhou2019bayesnas}. The trick is to first construct a supernet containing the complete search space and train the candidates all at once with bi-level optimization and efficient weight sharing~\cite{liu2019auto,liu2018darts}. But the large memory allocation and difficulties in approximated optimization prohibit the search for more complex structures. Recently researchers~\cite{cai2018proxylessnas,guo2019single,stamoulis2019single} propose to apply single-path training to reduce the bias introduced by approximation and model simplification of the supernet. DetNAS~\cite{chen2019detnas} follows this idea to search for an efficient object detection architecture. One limitation of the single-path approach is that the search space is restricted to a sequential structure. 
Single-path sampling and straight-through estimation of the weight gradients introduce large variance to the optimization process and prohibit the search for more complex structures under this framework. Within this very simple search space, NAS algorithms can only make trivial decisions like kernel sizes for manually designed modules. Object detection models are different from single-path image classification networks in their way of merging multi-level features and distributing the task to parallel prediction heads. Feature pyramid networks (FPNs)~\cite{ghiasi2019fpn,Alexander2019panoptic,lin2017feature,Liu2019AnEnd,zhao2019pyramid}, designed to handle this job, play an important role in modern object detection models. NAS-FPN~\cite{ghiasi2019fpn} aims at searching for an FPN alternative based on the one-stage framework RetinaNet~\cite{lin2017focal}. Feature pyramid architectures are sampled with a recurrent neural network (RNN) controller. The RNN controller is trained with reinforcement learning (RL). However, the search is very time-consuming even though a proxy task with a ResNet-10 backbone is trained to evaluate each architecture. Since all three of these works (\cite{chen2019detnas,ghiasi2019fpn} and ours) focus on object detection frameworks, we summarize the differences among them as follows: {\it DetNAS~\cite{chen2019detnas} aims to search for better backbone designs, while NAS-FPN~\cite{ghiasi2019fpn} searches the FPN structure, and our search space contains both the FPN and the head structure.} To speed up reward evaluation of RL-based NAS, the work of~\cite{nekrasov2018fast} proposes to use progressive tasks and other training acceleration methods. By caching the encoder features, they are able to train semantic segmentation decoders with very large batch sizes very efficiently. In the sequel of this paper, we refer to this technique as fast decoder adaptation. However, directly applying this technique to object detection tasks does not enjoy a similar speed boost, because these detectors either do not use a fully convolutional model~\cite{lin2017feature} or require complicated post-processing that is not scalable with the batch size~\cite{lin2017focal}. To reduce the post-processing overhead, we resort to a recently introduced anchor-free one-stage framework, namely, FCOS~\cite{tian2019fcos}, which significantly improves the search efficiency by eliminating the processing time of anchor-box matching in RetinaNet. Compared to its anchor-based counterpart, FCOS significantly reduces the training memory footprint while being able to improve the performance. \section{Our Approach} In our work, we search for anchor-free fully convolutional detection models with fast decoder adaptation. Thus, NAS methods can be easily applied. \subsection{Problem Formulation} We base our search algorithm upon the one-stage framework FCOS due to its simplicity. Our training tuples $\{(\mathbf x, Y)\}$ consist of input image tensors $\mathbf x$ of size $(3\times H\times W)$ and FCOS output targets $Y$ in a pyramid representation, which is a list of tensors $\mathbf y_l$ each of size $((K+4+1)\times H_l\times W_l)$, where $H_l\times W_l$ is the feature map size on level $l$ of the pyramid. $(K+4+1)$ is the number of output channels of FCOS; the three terms are the length-$K$ one-hot classification labels, the $4$ bounding box regression targets and the $1$ centerness factor, respectively. 
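For concreteness, the following small sketch prints the sizes of these target tensors for an assumed COCO-like setting ($K=80$) and an assumed input resolution of $800\times1024$; both values are illustrative choices, not values fixed by the method.
\begin{verbatim}
K, H, W = 80, 800, 1024  # assumed number of classes and input resolution
for l in range(3, 8):    # pyramid levels P3..P7
    H_l, W_l = H // 2**l, W // 2**l
    # K class labels + 4 box regression targets + 1 centerness factor per location
    print(f"y_{l}: ({K + 4 + 1} x {H_l} x {W_l})")
\end{verbatim}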
The network $g: \mathbf x\rightarrow \hat{Y}$ in the original FCOS consists of three parts: a backbone $b$, an FPN $f$ and multi-level subnets that we call prediction heads $h$ in this paper. First, the backbone $b: \mathbf x\rightarrow C$ maps the input tensor to a set of intermediate-level features $C = \{\mathbf c_3, \mathbf c_4, \mathbf c_5\}$, with resolution $(H_i\times W_i) = (H/2^i \times W/2^i)$. Then the FPN $f: C\rightarrow P$ maps the features to a feature pyramid $P=\{\mathbf p_3, \mathbf p_4, \mathbf p_5, \mathbf p_6, \mathbf p_7\}$. Finally, the prediction head $h: \mathbf p\rightarrow \mathbf y$ is applied to each level of $P$ and the results are collected to create the final prediction. To avoid overfitting, the same $h$ is often applied to all instances in $P$. Since objects of different scales require different effective receptive fields, the mechanism to select and merge the intermediate-level features $C$ is particularly important in object detection network design. Thus, most research efforts~\cite{liu2016ssd,ren2015faster} are devoted to designing $f$ and $h$ while using widely adopted backbone structures such as ResNet~\cite{he2016identity}. Following this principle, our search goal is to decide when to choose which features from $C$ and how to merge them. To improve the efficiency, we reuse the parameters in $b$ pretrained on the target dataset and search for the optimal structures after that. For convenience, we refer to the network components to be searched, namely $f$ and $h$, collectively as the decoder structure of the object detection network. $f$ and $h$ take care of different parts of the detection job: $f$ extracts features targeting different object scales in the pyramid representations $P$, while $h$ is a unified mapping applied to each feature in $P$ to avoid overfitting. In practice, people seldom discuss the possibility of using a more diversified $f$ to extract features at different levels, or how many layers in $h$ need to be shared across the levels. In this work, we use NAS as an automatic method to test these possibilities. \subsection{Search Space} Considering the different functions of $f$ and $h$, we use two separate search spaces for them. Given the particularity of the FPN structure, we build its search space upon a basic block with a new overall connection pattern and a new design of $f$'s outputs. For simplicity, a sequential search space is used for the $h$ part. We replace the cell structure with atomic operations to provide even more flexibility. To construct one basic block, we first choose two layers $\mathbf x_1$, $\mathbf x_2$ from the sampling pool $X$ at \texttt{id1}, \texttt{id2}, then two operations \texttt{op1}, \texttt{op2} are applied to each of them and an aggregation operation \texttt{agg} merges the two outputs into one feature. To build a deep decoder structure, we apply multiple basic blocks with their outputs added to the sampling pool. Our basic block $bb_t: X_{t-1}\rightarrow X_t$ at time step $t$ transforms the sampling pool $X_{t-1}$ to $X_t = X_{t-1}\cup \{\mathbf{x}_t\}$, where $\mathbf{x}_t$ is the output of $bb_t$. \begin{table}[t] \begin{center} \begin{tabular}{c|c} \hline ID & Description \T\B\\ \hline 0 & separable conv $3\times3$\T\B\\ \hline 1 & separable conv $3\times3$ with dilation rate $3$\T\B\\ \hline 2 & separable conv $5\times5$ with dilation rate $6$\T\B\\ \hline 3 & skip-connection\T\B\\ \hline 4 & deformable $3\times3$ convolution\B\\ \hline \end{tabular} \caption{Unary operations used in the search process. 
\label{table:unary}} \end{center} \end{table} The candidate operations are listed in Table~\ref{table:unary}. We include only separable/depth-wise convolutions so that the decoder can be efficient. In order to enable the decoder to apply convolutional filters on irregular grids, here we have also included deformable $3\times3$ convolutions~\cite{zhu2018deformable}. For the aggregation operations, we include element-wise sum and concatenation followed by a $1\times1$ convolution. The decoder configuration can be represented by a sequence with three components, FPN configuration, head configuration and weight sharing stages. We provide detailed descriptions to each of them in the following sections. The complete diagram of our decoder structure is shown in Fig.~\ref{fig:main}. \begin{figure*}[thb!] \centering \subfloat{\includegraphics[width = 0.95\linewidth]{images/structure.pdf}} \caption{A conceptual example of our NAS-FCOS decoder. It consists of two sub networks, an FPN $f$ and a set of prediction heads $h$ which have shared structures. One notable difference with other FPN-based one-stage detectors is that our heads have partially shared weights. Only the last several layers of the predictions heads (marked as yellow) are tied by their weights. The number of layers to share is decided automatically by the search algorithm. Note that both FPN and head are in our actual search space; and have more layers than shown in this figure. Here the figure is for illustration only. \label{fig:main}} \end{figure*} \subsubsection{FPN Search Space} As mentioned above, the FPN $f$ maps the convolutional features $C$ to $P$. First, we initialize the sampling pool as $X_0 = C$. Our FPN is defined by applying the basic block $7$ times to the sampling pool, $f:= bb_1^f\circ bb_2^f \circ \cdots \circ bb_7^f$. To yield pyramid features $P$, we collect the last three basic block outputs $\{\mathbf x_5, \mathbf x_6, \mathbf x_7\}$ as $\{\mathbf p_3, \mathbf p_4, \mathbf p_5\}$. To allow shared information across all layers, we use a simple rule to create global features. If there is some dangling layer $\mathbf x_t$ which is not sampled by later blocks $\{bb_i^f|i > t\}$ nor belongs to the last three layers $t < 5$, we use element-wise add to merge it to all output features \begin{align} \mathbf{p}^*_i = \mathbf p_i + \mathbf x_t, \,\, i\in\{3, 4, 5\}. \end{align} Same as the aggregation operations, if the features have different resolution, the smaller one is upsampled with bilinear interpolation. To be consistent with FCOS, $\mathbf{p}_6$ and $\mathbf{p}_7$ are obtained via a $3\times3$ stride-$2$ convolution on $\mathbf{p}_5$ and $\mathbf{p}_6$ respectively. \subsubsection{Prediction Head Search Space} Prediction head $h$ maps each feature in the pyramid $P$ to the output of corresponding $\mathbf y$, which in FCOS and RetinaNet, consists of four $3\times3$ convolutions. To explore the potential of the head, we therefore extend a sequential search space for its generation. Specifically, our head is defined as a sequence of six basic operations. Compared with candidate operations in the FPN structures, the head search space has two slight differences. First, we add standard convolution modules (including conv$1$x$1$ and conv$3$x$3$) to the head sampling pool for better comparison. 
Second, we follow the design of FCOS by replacing all the Batch Normalization (BN) layers to Group Normalization (GN)~\cite{wu2018group} in the operations sampling pool of head, considering that head needs to share weights between different levels, which causes BN invalid. The final output of head is the output of the last (sixth) layer. \subsubsection{Searching for Head Weight Sharing} To add even more flexibility and understand the effect of weight sharing in prediction heads, we further add an index $i$ as the location where the prediction head starts to share weights. For every layer before stage $i$, the head $h$ will create independent set of weights for each FPN output level, otherwise, it will use the global weights for sharing purpose. Considering the independent part of the heads being extended FPN branch and the shared part as head with adaptive-length, we can further balance the workload for each individual FPN branch to extract level-specific features and the prediction head shared across all levels. \subsection{Search Strategy} \label{search strategy} RL based strategy is applied to the search process. We rely on an LSTM-based controller to predict the full configuration. We consider using a progressive search strategy rather than the joint search for both FPN structure and prediction head part, since the former requires less computing resources and time cost than the latter. The training dataset is randomly split into a meta-train $D_t$ and meta-val $D_v$ subset. To speed up the training, we fix the backbone network and cache the pre-computed backbone output $C$. This makes our single architecture training cost independent from the depth of backbone network. Taking this advantage, we can apply much more complex backbone structures and utilize high quality multilevel features as our decoder's input. We find that the process of backbone finetuning can be skipped if the cached features are powerful enough. Speedup techniques such as Polyak weight averaging are also applied during the training. The most widely used detection metric is average precision (AP). However, due to the difficulty of object detection task, at the early stages, AP is too low to tell the good architectures from the bad ones, which makes the controller take much more time to converge. To make the architecture evaluation process easier even at the early stages of the training, we therefore use negative loss sum as the reward instead of average precision: \begin{equation} \begin{split} R(a) = &- \sum_{(x, Y)\in D_v}(L_{cls}(x, Y|a) \\ &+ L_{reg}(x, Y|a) + L_{ctr}(x, Y|a)) \end{split} \label{reward} \end{equation} where $L_{cls}$, $L_{reg}$, $L_{ctr}$ are the three loss terms in FCOS. Gradient of the controller is estimated via proximal policy optimization (PPO)~\cite{schulman2017proximal}. \label{search strategy} \section{Experiments} \subsection{Implementation Details} \subsubsection{Searching Phase} We design a fast proxy task for evaluating the decoder architectures sampled in the searching phase. PASCAL VOC is selected as the proxy dataset, which contains $5715$ training images with object bounding box annotations of $20$ classes. Transfer capacity of the structures can be illustrated since the search and full training phase use different datasets. The VOC training set is randomly split into a meta-train set with $4,000$ images and a meta-val set with $1715$ images. For each sampled architecture, we train it on meta-train and compute the reward~\eqref{reward} on meta-val. 
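The reward in Eq.~\eqref{reward} is simply the negative sum of the three FCOS loss terms over the meta-val images; a small sketch is given below, where the loss values are made up for illustration (in practice they come from evaluating the sampled decoder after its short training on meta-train).
\begin{verbatim}
def proxy_reward(loss_terms):
    # loss_terms: one (L_cls, L_reg, L_ctr) tuple per meta-val image
    return -sum(l_cls + l_reg + l_ctr for (l_cls, l_reg, l_ctr) in loss_terms)

# toy usage with made-up loss values for two images
print(proxy_reward([(0.9, 0.4, 0.2), (1.1, 0.5, 0.3)]))  # -> -3.4
\end{verbatim}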
Input images are resized to short size $384$ and then randomly cropped to $384\times384$. Target object sizes of interest are scaled correspondingly. We use Adam optimizer with learning rate $8$e$-4$ and batch size $200$. Polyak averaging is applied with the decay rates of $0.9$. The decoder is evaluated after $300$ iterations. As we use fast decoder adaptation, the backbone features are fixed and cached during the search phase. To enhance the cached backbone features, we first initialize them with pre-trained weights provided by open-source implementation of FCOS\footnote{https://tinyurl.com/FCOSv1} and then finetune on VOC using the training strategies of FCOS. Note that the above finetuning process is only performed once at the begining of the search phase. A progressive strategy is used for the search of $f$ and $h$. We first search for the FPN part and retain the original head. All operations in the FPN structure have $64$ output channels. The decoder inputs $C$ are resized to fit output channel width of FPN via $1\times1$ convolutions. After this step, a searched FPN structure is fixed and the second stage searching for the head will be started based on it. Most parameters for searching head are identical to those for searching FPN structure, with the exception that the output channel width is adjusted from $64$ to $128$ to deliver more information. For the FPN search part, the controller model nearly converged after searching over $2.8$K architectures on the proxy task as shown in Fig.~\ref{fig:reward}. Then, the top-$20$ best performing architectures on the proxy task are selected for the next full training phase. For the head search part, we choose the best searched FPN among the top-$20$ architectures and pre-fetch its features. It takes about $600$ rounds for the controller to nearly converge, which is much faster than that for searching FPN architectures. After that, we select for full training the top-$10$ heads that achieve best performance on the proxy task. In total, the whole search phase can be finished within $4$ days using $8$ V100 GPUs. \begin{table*}[th!] 
\centering \scalebox{0.95}{ \begin{tabular}{ll|cc|cc} \hline\noalign{\smallskip} Decoder & Backbone & FLOPs (G) & Params (M) & AP\\ \noalign{\smallskip}\hline\noalign{\smallskip} FPN-RetinaNet @$256$ & MobileNetV$2$ & $133.4$ & $11.3$ & $30.8$\\ FPN-FCOS @$256$ & MobileNetV$2$ & $105.4$ & $9.8$ & $31.2$\\ NAS-FCOS (ours) @$128$ & MobileNetV$2$ & $\mathbf{39.3}$ & $\mathbf{5.9}$ & $32.0$\\ NAS-FCOS (ours) @$128$-$256$ & MobileNetV$2$ & $95.6$ & $9.9$ & $33.8$\\ NAS-FCOS (ours) @$256$ & MobileNetV$2$ & $121.8$ & $16.1$ & $\mathbf{34.7}$\\ \noalign{\smallskip}\hline\noalign{\smallskip} FPN-RetinaNet @$256$ & R-$50$ & $198.0$ & $33.6$ & $36.1$\\ FPN-FCOS @$256$ & R-$50$ & $169.9$ & $32.0$ & $37.4$\\ NAS-FCOS (ours) @$128$ & R-$50$ & $\mathbf{104.0}$ & $\mathbf{27.8}$ & $37.9$\\ NAS-FCOS (ours) @$128$-$256$ & R-$50$ & $160.4$ & $31.8$ & $39.1$\\ NAS-FCOS (ours) @$256$ & R-$50$ & $189.6$ & $38.4$ & $\mathbf{39.8}$\\ \noalign{\smallskip}\hline\noalign{\smallskip} FPN-RetinaNet @$256$ & R-$101$ & $262.4$ & $52.5$ & $37.8$\\ FPN-FCOS @$256$ & R-$101$ & $\mathbf{234.3}$ & $\mathbf{50.9}$ & $41.5$\\ NAS-FCOS (ours) @$256$ & R-$101$ & $254.0$ & $57.3$ & $\mathbf{43.0}$\\ \noalign{\smallskip}\hline\noalign{\smallskip} FPN-FCOS @$256$ & X-$64$x$4$d-$101$ & $371.2$ & $89.6$ & $43.2$ \\ NAS-FCOS (ours) @$128$-$256$ & X-$64$x$4$d-$101$ & $\mathbf{361.6}$ & $\mathbf{89.4}$ & $\mathbf{44.5}$\\ \noalign{\smallskip}\hline\noalign{\smallskip} FPN-FCOS @$256$ w/improvements & X-$64$x$4$d-$101$ & $371.2$ & $89.6$ & $44.7$ \\ NAS-FCOS (ours) @$128$-$256$ w/improvements & X-$64$x$4$d-$101$ & $\mathbf{361.6}$ & $\mathbf{89.4}$ & $\mathbf{46.1}$\\ \noalign{\smallskip}\hline \end{tabular} } \caption{Results on test-dev set of MS COCO after full training. R-$50$ and R-$101$ represents ResNet backbones and X-$64$x$4$d-$101$ represents ResNeXt-$101$ ($64\times4$d). All networks share the same input image resolution. FLOPs and parameters are being measured on $1088\times800$, which is the median of the input size on COCO. For RetinaNet and FCOS, we use official models provided by the authors. For our NAS-FCOS, @$128$ and @$256$ means that the decoder channel width is $128$ and $256$ respectively. @$128$-$256$ is the decoder with $128$ FPN width and $256$ head width. The same improving tricks used on the newest FCOS version are used in our model for fair comparison. \label{table:det}} \end{table*} \begin{figure}[t!] \centering \includegraphics[width=0.50\textwidth]{images/reward_steps.pdf} \caption{Performance of reward during the proxy task, which has been growing throughout the process, indicating that the model of reinforcement learning works.} \label{fig:reward} \end{figure} \subsubsection{Full Training Phase} In this phase, we fully train the searched models on the MS COCO training dataset, and select the best one by evaluating them on MS COCO validation images. Note that our training configurations are exactly the same as those in FCOS for fair comparison. Input images are resized to short size $800$ and the maximum long side is set to be $1333$. The models are trained using $4$ V100 GPUs with batch size $16$ for $90$K iterations. The initial learning rate is $0.01$ and reduces to one tenth at the $60$K-th and $80$K-th iterations. The improving tricks are applied only on the final model (w/improv). \begin{figure}[h!] \centering \includegraphics[width=0.50\textwidth]{images/fpn_structure_2.pdf} \caption{Our discovered FPN structure. 
$C_2$ is omitted from this figure since it is not chosen by this particular structure during the search process.} \label{fig:fpn} \end{figure} \subsection{Search Results} The best FPN structure is illustrated in Fig.~\ref{fig:fpn}. The controller identifies that deformable convolution and concatenation are the best-performing unary and aggregation operations, respectively. From Fig.~\ref{fig:searched_head}, we can see that the controller chooses to use $4$ operations (with two skip connections), rather than the maximum allowed $6$ operations. Note that the discovered ``dconv + $1$x$1$ conv'' structure achieves a good trade-off between accuracy and FLOPs. Compared with the original head, our searched head has fewer FLOPs/Params (FLOPs $79.24$G vs.\ $89.16$G, Params $3.41$M vs.\ $4.92$M) and significantly better performance (AP $38.7$ vs.\ $37.4$). \begin{figure}[t!] \centering \includegraphics[width=0.50\textwidth]{images/searched_head.pdf} \caption{Our discovered head structure. } \label{fig:searched_head} \end{figure} We use the searched decoder together with either light-weight backbones such as MobileNet-V2~\cite{sandler2018mobilenetv2} or more powerful backbones such as ResNet-$101$~\cite{he2016identity} and ResNeXt-$101$~\cite{xie2016aggregated}. To balance the performance and efficiency, we implement three decoders with different computation budgets: one with feature dimension of $128$ (@$128$), one with $256$ (@$256$) and another with FPN channel width $128$ and prediction head $256$ (@$128$-$256$). The results on the COCO test-dev with short side being $800$ are shown in Table~\ref{table:det}. The searched decoder with feature dimension of $256$ (@$256$) surpasses its FCOS counterpart by $1.5$ to $3.5$ points in AP under different backbones. The one with $128$ channels (@$128$) has significantly fewer parameters and less computation, making it more suitable for resource-constrained environments. In particular, our searched model with $128$ channels and MobileNetV2 backbone surpasses the original FCOS with the same backbone by $0.8$ AP points with only $1/3$ of the FLOPs. The third type of decoder (@$128$-$256$) achieves a good balance between accuracy and parameters. Note that our searched model outperforms the strongest FCOS variant by $1.4$ AP points ($46.1$ vs.\ $44.7$) with slightly smaller FLOPs and Params. The comparisons of FLOPs and numbers of parameters with other models are illustrated in Fig.~\ref{fig:flops} and Fig.~\ref{fig:params}, respectively. \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{images/proportion.pdf} \caption{Trend graph of head weight sharing during search. The horizontal axis represents the index of the statistical period, where a period consists of $50$ head structures. The vertical axis represents the proportion of heads that fully share weights among these $50$ structures.} \label{fig:share_weights} \end{figure} In order to understand the importance of weight sharing in the head, we add the number of weight-sharing layers as an object of the search. Fig.~\ref{fig:share_weights} shows the trend of head weight sharing during the search. We set $50$ structures as a statistical cycle. As the search deepens, the proportion of fully shared structures increases, indicating that for multi-scale detection models, head weight sharing is a necessity. \begin{table*}[t!] 
\centering \scalebox{0.95}{ \begin{tabular}{l|c|c|c|c} \hline\noalign{\smallskip} Arch & FLOPs (G) & Search Cost (GPU-day) & Searched Archs & AP\\ \noalign{\smallskip}\hline\noalign{\smallskip} NAS-FPN @$256$ R-$50$ & \textgreater$325.0$ & $333\times$\#TPUs & $17000$ & \textless$38.0$\\ NAS-FPN $7$@$256$ R-$50$ & $1125.5$ & $333\times$\#TPUs & $17000$ & $44.8$\\ DetNAS-FPN-Faster & - & $44$ & $2200$ & $40.2$\\ DetNAS-RetinaNet & - & $44$ & $2200$ & $33.3$\\ \noalign{\smallskip}\hline\noalign{\smallskip} NAS-FCOS (ours) @$256$ R-$50$ & $\mathbf{189.6}$ & $\mathbf{28}$ & $3000$ & $39.8$\\ NAS-FCOS (ours) @$128$-$256$ X-$64$x$4$d-$101$ & $361.6$ & $\mathbf{28}$ & $3000$ & $\mathbf{46.1}$\\ \noalign{\smallskip}\hline \end{tabular} } \caption{ Comparison with other NAS methods. For NAS-FPN, the input size is $1280\times1280$ and the search cost should be multiplied by the number of TPUs used to train each architecture. Note that the FLOPs and AP of NAS-FPN @$256$ here are taken from Figure $11$ in NAS-FPN~\cite{ghiasi2019fpn}, and NAS-FPN $7$@$256$ stacks the searched FPN structure $7$ times. The input images are resized such that their shorter side is $800$ pixels in DetNASNet~\cite{chen2019detnas} and in our models. \label{table:nas}} \end{table*} \begin{figure}[b!] \centering \includegraphics[width=0.48\textwidth]{images/correlation.pdf} \caption{Correlation between the search reward obtained on the VOC meta-val dataset and the AP evaluated on COCO-val.} \label{fig:correlation} \end{figure} We also compare with other NAS methods for object detection in Table~\ref{table:nas}. Our method is able to search twice as many architectures per GPU-day as DetNAS~\cite{chen2019detnas}. Note that the AP of NAS-FPN~\cite{ghiasi2019fpn} is achieved by stacking the searched FPN $7$ times, while we do not stack our searched FPN. Our model with ResNeXt-101 ($64$x$4$d) as backbone outperforms NAS-FPN by $1.3$ AP points while using only about $1/3$ of the FLOPs and a much lower search cost. \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{images/FLOPs-with-mAP_crop_new_head.pdf} \caption{Relationship between FLOPs and AP with different backbones. Points of different shapes represent different backbones. NAS-FCOS@$128$ offers a slight gain in accuracy while also reducing computation. The variant with $256$ channels obtains the highest accuracy at a higher computational cost. Using an FPN channel width of $128$ and a prediction-head width of $256$ (@$128$-$256$) offers a trade-off. } \label{fig:flops} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{images/Params-with-mAP_crop_new_head.pdf} \caption{Relationship between the number of parameters and AP with different backbones. Adjusting the number of channels in the FPN structure and head helps to achieve a balance between accuracy and parameters.} \label{fig:params} \end{figure} We further measure the correlation between the rewards obtained on the proxy dataset during the search and the APs attained by the same architectures after training on COCO. Specifically, we randomly sample $15$ architectures from all the searched structures and train them on COCO with batch size $16$. Since full training on COCO is time-consuming, we reduce the number of iterations to $60$K. The models are then evaluated on the COCO $2017$ validation set. As visible in Fig.~\ref{fig:correlation}, there is a strong correlation between the search rewards and the APs obtained on COCO.
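Such a reward--AP correlation can be quantified with a standard correlation coefficient. The following Python sketch is only our illustration of this kind of check; the function name \texttt{pearson} is ours, and the listed reward/AP values are placeholders rather than measurements from our experiments.
\begin{verbatim}
import math

def pearson(xs, ys):
    # Plain Pearson correlation coefficient between two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Placeholder (reward, AP) pairs -- NOT the values measured in the paper.
proxy_rewards = [0.52, 0.55, 0.58, 0.60, 0.63]
coco_aps = [35.1, 35.9, 36.8, 37.5, 38.2]
print(round(pearson(proxy_rewards, coco_aps), 3))
\end{verbatim}
A rank correlation such as Spearman's could be used in the same way when only the ordering of architectures matters.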
In particular, poorly performing and well-performing architectures can be distinguished very well by their rewards on the proxy task. \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{images/reward_design.pdf} \caption{Comparison of two different RL reward designs. The vertical axis represents the AP obtained from the proxy task on the validation dataset.} \label{fig:metric} \end{figure} \subsection{Ablation Study} \subsubsection{Design of Reinforcement Learning Reward} As discussed above, it is common to use widely accepted indicators as rewards for specific tasks in the search, such as mIoU for segmentation and AP for object detection. However, we found that using AP as the reward did not show a clear upward trend within short search rounds (blue curve in Fig.~\ref{fig:metric}). A possible reason is that the controller tries to learn a mapping from the decoder to the reward, and since the calculation of AP is itself complicated, this mapping is difficult to learn within a limited number of iterations. In comparison, we clearly see the increase of AP when the validation loss is used as the RL reward (red curve in Fig.~\ref{fig:metric}). \begin{table}[t!] \centering \scalebox{0.95}{ \begin{tabular}{c|c|c} \hline\noalign{\smallskip} Decoder & Search Space & AP\\ \noalign{\smallskip}\hline\noalign{\smallskip} FPN-FCOS @$256$ & - & $37.4$\\ \noalign{\smallskip}\hline\noalign{\smallskip} NAS-FCOS @$256$ & $h$ only & $38.7$\\ NAS-FCOS @$256$ & $f$ only & $38.9$\\ NAS-FCOS @$256$ & $f$ + $h$ & $\mathbf{39.8}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} } \caption{Comparison of APs obtained under different search spaces with the ResNet-$50$ backbone. \label{table:effective}} \end{table} \subsubsection{Effectiveness of Search Space} To further assess the impact of the search spaces $f$ and $h$, we design three experiments for verification: one searches $f$ with the original head fixed, one searches $h$ with the original FPN fixed, and one searches the entire decoder ($f$+$h$). As shown in Table~\ref{table:effective}, searching $f$ brings slightly more benefit than searching $h$ only, and our progressive search, which combines both $f$ and $h$, achieves the best result. \subsubsection{Impact of Deformable Convolution} As mentioned above, deformable convolutions, which are able to adapt to the geometric variations of objects, are included in the set of candidate operations for both $f$ and $h$. For a fair comparison, we also replace all standard $3\times3$ convolutions in the FPN structure of the original FCOS with deformable $3\times3$ convolutions and repeat them twice, making the FLOPs and parameters nearly equal to those of our searched model. The resulting model is called DeformFPN-FCOS. It turns out that our NAS-FCOS model still achieves better performance (AP $= 38.9$ with FPN search only, and AP $= 39.8$ with both FPN and head searched) than the DeformFPN-FCOS model (AP $= 38.4$) in this setting. \section{Conclusion} In this paper, we have proposed to use Neural Architecture Search to further optimize the process of designing object detection networks. This work shows that top-performing detectors can be efficiently searched using carefully designed proxy tasks, search strategies, and model evaluation metrics. The experiments on COCO demonstrate the efficiency of our discovered model NAS-FCOS and its flexibility to be used with various backbone architectures. {\small \bibliographystyle{reference}
\section{Introduction} Set families are fundamental objects in combinatorics, algorithmics, machine learning, discrete geometry, and combinatorial optimization. The Vapnik-Chervonenkis dimension (the \emph{VC-dimension} for short) $\vcd(\SS)$ of a set family $\SS\subseteq 2^U$ is the size of a largest subset $X\subseteq U$ which can be \emph{shattered} by $\SS$ \cite{VaCh}, i.e., $2^{X}=\{X\cap S: S\in\mathcal{S}\}$. Introduced in statistical learning by Vapnik and Chervonenkis \cite{VaCh}, the VC-dimension was adopted in the above areas as a complexity measure and as a combinatorial dimension of $\SS$. Two important inequalities relate a set family $\SS\subseteq 2^{U}$ with its VC-dimension. The first one, the \emph{Sauer-Shelah lemma} \cite{Sauer,Shelah}, establishes that if $|U|=m$, then the number of sets in a set family $\SS\subseteq 2^{U}$ with VC-dimension $d$ is upper bounded by $\binom{m}{\leq d}$. The second, stronger inequality, called the \emph{sandwich lemma}, proves that $|\SS|$ is sandwiched between the number of \emph{strongly shattered} sets (i.e., sets $X$ such that $\SS$ contains an $X$-cube, see Section \ref{OM-COM-AMP}) and the number of shattered sets \cite{AnRoSa,BoRa,Dr,Pa}. The set families for which the Sauer-Shelah bounds are tight are called {\it maximum families} \cite{GaWe,FlWa} and the set families for which the upper bounds in the sandwich lemma are tight are called {\it ample, lopsided, and extremal families} \cite{BaChDrKo,BoRa,La}. Every maximum family is ample, but not vice versa. To take a graph-theoretical point of view on set families, one considers the subgraph $G(\SS)$ of the hypercube $Q_m$ induced by the subsets of $\SS\subseteq 2^{U}$. (Sometimes $G(\SS)$ is called the {\it 1-inclusion graph} of $\SS$ \cite{Hau,HaLiWa}.) Each edge of $G(\SS)$ corresponds to an element of $U$. Then analogously to edge-contraction and minors in graph theory, one can consider the operation of simultaneous contraction of all edges of $G(\SS)$ defined by the same element $e\in U$. The resulting graph is the 1-inclusion graph $G(\SS_e)$ of the set family $\SS_e\subseteq 2^{U\setminus \{ e\}}$ obtained by identifying all pairs of sets of $\SS$ differing only in $e$. Given $Y\subseteq U$, we call the set family $\SS_Y$ and its 1-inclusion graph $G(\SS_Y)$ obtained from $\SS$ and $G(\SS)$ by successively contracting the edges labeled by the elements of $Y$ the {\it Q-minors} of $\SS$ and $G(\SS)$. Then $X\subseteq U$ is shattered by $\SS$ if and only if the Q-minor $G(\SS_{U\setminus X})$ is a full cube. Thus, the cubes play the same role for Q-minors as the complete graphs for classical graph minors. To take a metric point of view on set families, one restricts to set families whose 1-inclusion graph satisfies further properties. The typical property here is that the 1-inclusion graph $G(\SS)$ of $\SS$ is an isometric (distance-preserving) subgraph of the hypercube $Q_m$. Such graphs are called {\it partial cubes}. Partial cubes can be characterized in an elegant and efficient way \cite{Dj} and can be recognized in quadratic time \cite{Epp}. Partial cubes comprise many important and complex graph classes occurring in metric graph theory and initially arising in completely different areas of research such as geometric group theory, combinatorics, discrete geometry, and media theory (for a comprehensive presentation of partial cubes and their classes, see the survey \cite{BaCh_survey} and the books \cite{DeLa,HaImKl,Ov1}).
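To make the shattering and VC-dimension definitions above concrete, the following short Python sketch (our own illustration; the function names are ours) enumerates the sets shattered by a finite family and computes its VC-dimension. For instance, the family of the six one- and two-element subsets of a $3$-element universe has VC-dimension $2$ and satisfies the Sauer-Shelah bound $6\le\binom{3}{\leq 2}=7$.
\begin{verbatim}
from itertools import combinations

def shattered_sets(universe, family):
    # X is shattered by the family if every subset of X occurs as X & S.
    family = [frozenset(S) for S in family]
    result = []
    for r in range(len(universe) + 1):
        for X in map(frozenset, combinations(universe, r)):
            if len({X & S for S in family}) == 2 ** len(X):
                result.append(X)
    return result

def vc_dimension(universe, family):
    return max(len(X) for X in shattered_sets(universe, family))

U = [1, 2, 3]
S = [{1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}]   # one- and two-element subsets
print(vc_dimension(U, S))                     # -> 2
\end{verbatim}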
For example, 1-inclusion graphs of ample families (and thus of maximum families) are partial cubes \cite{BaChDrKo,La} (in view of this, we will call such graphs {\it ample partial cubes} and {\it maximum partial cubes}, respectively). Other important examples comprise median graphs (aka 1-skeletons of CAT(0) cube complexes \cite{Ch_CAT,Sa}) and, more generally, 1-skeletons of CAT(0) Coxeter zonotopal complexes~\cite{HaPa}, the tope graphs of oriented matroids (OMs)~\cite{BjLVStWhZi}, of affine oriented matroids (AOMs)~\cite{KnMa}, and of lopsided sets (LOPs)~\cite{KnMa,La}, where the latter coincide with ample partial cubes (AMPs). More generally, tope graphs of complexes of oriented matroids (COMs)~\cite{BaChKn,KnMa} capture all of the above. Other classes of graphs defined by distance or convexity properties turn out to be partial cubes: bipartite cellular graphs (aka bipartite graphs with totally decomposable metrics)~\cite{BaCh_cellular}, bipartite Pasch \cite{Ch_thesis,Ch_separation} and bipartite Peano \cite{Po_Peano} graphs, netlike graphs~\cite{Po1}, and hypercellular graphs \cite{ChKnMa}. Many mentioned classes of partial cubes can be characterized via forbidden $Q$-minors; in case of partial cubes, $Q$-minors are endowed with a second operation called \emph{restriction} and are called {\it partial cube minors}, or {\it pc-minors} \cite{ChKnMa}. The class of partial cubes is closed under pc-minors. Thus, given a set $G_1,G_2,\ldots,G_n$ of partial cubes, one considers the set ${\mathcal F}(G_1,\ldots,G_n)$ of all partial cubes not having any of $G_1,G_2,\ldots,G_n$ as a pc-minor. Then $\mathcal{F}(Q_2)$ is the class of trees, $\mathcal{F}(P_3)$ is the class of hypercubes, and $\mathcal{F}(K_2\square P_3)$ consists of bipartite cacti~\cite[page 12]{Ma}. Other obstructions lead to more interesting classes, e.g., almost-median graphs ($\mathcal{F}(C_6)$~\cite[Theorem 4.4.4]{Ma}), hypercellular graphs ($\mathcal{F}(Q_3^-)$~\cite{ChKnMa}), median graphs ($\mathcal{F}(Q_3^-, C_6)$~\cite{ChKnMa}), bipartite cellular graphs ($\mathcal{F}(Q_3^-, Q_3)$~\cite{ChKnMa}), rank two COMs ($\mathcal{F}(SK_4, Q_3)$~\cite{KnMa}), and two-dimensional ample graphs ($\mathcal{F}(C_6, Q_3)$~\cite{KnMa}). Here $Q_3^-$ denotes the 3-cube $Q_3$ with one vertex removed and $SK_4$ the full subdivision of $K_4$, see Figure~\ref{fig:COMobstructions}. Bipartite Pasch graphs have been characterized in \cite{Ch_thesis,Ch_separation} as partial cubes excluding 7 isometric subgraphs of $Q_4$ as pc-minors. Littlestone and Warmuth~\cite{LiWa} introduced the sample compression technique for deriving generalization bounds in machine learning. Floyd and Warmuth~\cite{FlWa} asked whether any set family $\SS$ of VC-dimension $d$ has a sample compression scheme of size~$O(d)$. This question remains one of the oldest open problems in computational machine learning. It was recently shown in \cite{MoYe} that labeled compression schemes of size $O(2^d)$ exist. Moran and Warmuth \cite{MoWa} designed labeled compression schemes of size $d$ for ample families. Chalopin et al. \cite{ChChMoWa} designed (stronger) unlabeled compression schemes of size $d$ for maximum families and characterized such schemes for ample families via unique sink orientations of their 1-inclusion graphs. For ample families of VC-dimension 2 such unlabeled compression schemes exist because they admit corner peelings \cite{ChChMoWa,MeRo}. 
In view of this, it was noticed in \cite{RuRuBa} and \cite{MoWa} that the original sample compression conjecture of \cite{FlWa} would be solved if {\it one can show that any set family $\SS$ of VC-dimension $d$ can be extended to an ample (or maximum) partial cube of VC-dimension $O(d)$ or can be covered by $exp(d)$ ample partial cubes of VC-dimension $O(d)$.} These questions are already nontrivial for set families of VC-dimension 2. In this paper, we investigate the first question for partial cubes of VC-dimension 2, i.e., the class $\mathcal{F}(Q_3)$, that we will simply call {\it two-dimensional partial cubes}. We show that two-dimensional partial cubes can be extended to ample partial cubes of VC-dimension 2 -- a property that is not shared by general set families of VC-dimension 2. In relation to this result, we establish that all two-dimensional partial cubes can be obtained via amalgams from two types of combinatorial cells: maximal full subdivisions of complete graphs and convex cycles not included in such subdivisions. We show that all such cells are gated subgraphs. On the way, we detect a variety of other structural properties of two-dimensional partial cubes. Since two-dimensional partial cubes are very natural from the point of view of pc-minors and generalize previously studied classes such as bipartite cellular graphs~\cite{ChKnMa}, we consider these results of independent interest also from this point of view. In particular, we point out relations to tope graphs of COMs of low rank and region graphs of pseudoline arrangements. See Theorem~\ref{characterization} for a full statement of our results on two-dimensional partial cubes. Figure \ref{fig:two-dim-pc} presents an example of a two-dimensional partial cube which we further use as a running example. We also provide two characterizations of partial cubes of VC-dimension $\le d$ for any $d$ (i.e., of the class ${\mathcal F}(Q_{d+1})$) via hyperplanes and isometric expansions. However, \emph{understanding the structure of graphs from ${\mathcal F}(Q_{d+1})$ with $d\ge 3$ remains a challenging open question.} \begin{figure}[htb] \centering \includegraphics[width=.35\textwidth]{M} \caption{A two-dimensional partial cube $M$} \label{fig:two-dim-pc} \end{figure} \section{Preliminaries} \subsection{Metric subgraphs and partial cubes} All graphs $G=(V,E)$ in this paper are finite, connected, and simple. The {\it distance} $d(u,v):=d_G(u,v)$ between two vertices $u$ and $v$ is the length of a shortest $(u,v)$-path, and the {\it interval} $I(u,v)$ between $u$ and $v$ consists of all vertices on shortest $(u,v)$-paths: $I(u,v):=\{ x\in V: d(u,x)+d(x,v)=d(u,v)\}.$ An induced subgraph $H$ of $G$ is {\it isometric} if the distance between any pair of vertices in $H$ is the same as that in $G.$ An induced subgraph of $G$ (or the corresponding vertex set $A$) is called {\it convex} if it includes the interval of $G$ between any two of its vertices. Since the intersection of convex subgraphs is convex, for every subset $S\subseteq V$ there exists the smallest convex set $\conv(S)$ containing $S$, referred to as the {\it convex hull} of $S$. A subset $S\subseteq V$ or the subgraph $H$ of $G$ induced by $S$ is called {\it gated} (in $G$)~\cite{DrSch} if for every vertex $x$ outside $H$ there exists a vertex $x'$ (the {\it gate} of $x$) in $H$ such that each vertex $y$ of $H$ is connected with $x$ by a shortest path passing through the gate $x'$. 
It is easy to see that if $x$ has a gate in $H$, then it is unique and that gated sets are convex. Since the intersection of gated subgraphs is gated, for every subset $S\subseteq V$ there exists the smallest gated set $\gate( S)$ containing $S,$ referred to as the {\it gated hull} of $S$. A graph $G=(V,E)$ is {\it isometrically embeddable} into a graph $H=(W,F)$ if there exists a mapping $\varphi : V\rightarrow W$ such that $d_H(\varphi (u),\varphi (v))=d_G(u,v)$ for all vertices $u,v\in V$, i.e., $\varphi(G)$ is an isometric subgraph of $H$. A graph $G$ is called a {\it partial cube} if it admits an isometric embedding into some hypercube $Q_m$. For an edge $e=uv$ of $G$, let $W(u,v)=\{ x\in V: d(x,u)<d(x,v)\}$. For an edge $uv$, the sets $W(u,v)$ and $W(v,u)$ are called {\it complementary halfspaces} of $G$. \begin{theorem} \cite{Dj} \label{Djokovic} A graph $G$ is a partial cube if and only if $G$ is bipartite and for any edge $e=uv$ the sets $W(u,v)$ and $W(v,u)$ are convex. \end{theorem} To establish an isometric embedding of $G$ into a hypercube, Djokovi\'{c}~\cite{Dj} introduced the following binary relation $\Theta$ (called \emph{Djokovi\'{c}-Winkler relation}) on the edges of $G$: for two edges $e=uv$ and $e'=u'v'$ we set $e\Theta e'$ if and only if $u'\in W(u,v)$ and $v'\in W(v,u)$. Under the conditions of the theorem, $e\Theta e'$ if and only if $W(u,v)=W(u',v')$ and $W(v,u)=W(v',u')$, i.e. $\Theta$ is an equivalence relation. Let $E_1,\ldots,E_m$ be the equivalence classes of $\Theta$ and let $b$ be an arbitrary vertex taken as the basepoint of $G$. For a $\Theta$-class $E_i$, let $\{ G^-_i,G^+_i\}$ be the pair of complementary convex halfspaces of $G$ defined by setting $G^-_i:=G(W(u,v))$ and $G^+_i:=G(W(v,u))$ for an arbitrary edge $uv\in E_i$ such that $b\in G^-_i$. Then the isometric embedding $\varphi$ of $G$ into the $m$-dimensional hypercube $Q_m$ is obtained by setting $\varphi(v):=\{ i: v\in G^+_i\}$ for any vertex $v\in V$. Then $\varphi(b)=\varnothing$ and for any two vertices $u,v$ of $G$, $d_G(u,v)=|\varphi (u)\Delta \varphi(v)|.$ The bipartitions $\{ G^-_i,G^+_i\}, i=1,\ldots,m,$ can be canonically defined for all subgraphs $G$ of the hypercube $Q_m$, not only for partial cubes. Namely, if $E_i$ is a class of parallel edges of $Q_m$, then removing the edges of $E_i$ from $Q_m$ but leaving their end-vertices, $Q_m$ will be divided into two $(m-1)$-cubes $Q'$ and $Q''$. Then $G^-_i$ and $G^+_i$ are the intersections of $G$ with $Q'$ and $Q''$. For a $\Theta$-class $E_i$, the {\it boundary} $\partial G^-_i$ of the halfspace $G^-_i$ consists of all vertices of $G^-_i$ having a neighbor in $G^+_i$ ($\partial G^+_i$ is defined analogously). Note that $\partial G^-_i$ and $\partial G^+_i$ induce isomorphic subgraphs (but not necessarily isometric) of $G$. Figure \ref{fig:M-Theta-class}(a) illustrates a $\Theta$-class $E_i$ of the two-dimensional partial cube $M$, the halfspaces $M_i^-, M_i^+$ and their boundaries $\partial M_i^-,\partial M_i^+$. \begin{figure}[htb] \centering \includegraphics[width=0.80\textwidth]{M_theta_class} \caption{(a) The halfspaces and their boundaries defined by a $\Theta$-class $E_i$ of $M$. (b) The two-dimensional partial cube $M_*=\pi_i(M)$ obtained from $M$ by contracting $E_i$.} \label{fig:M-Theta-class} \end{figure} An \emph{antipode} of a vertex $v$ in a partial cube $G$ is a vertex $-v$ such that $G=\conv(v,-v)$. Note that in partial cubes the antipode is unique and $\conv(v,-v)$ coincides with the interval $I(v,-v)$. 
A partial cube $G$ is \emph{antipodal} if all its vertices have antipodes. A partial cube $G$ is said to be \emph{affine} if there is an antipodal partial cube $G'$, such that $G$ is a halfspace of $G'$. \subsection{Partial cube minors}\label{minors} Let $G$ be a partial cube, isometrically embedded in the hypercube $Q_m$. For a $\Theta$-class $E_i$ of $G$, an {\it elementary restriction} consists of taking one of the complementary halfspaces $G^-_i$ and $G^+_i$. More generally, a {\it restriction} is a subgraph of $G$ induced by the intersection of a set of (non-complementary) halfspaces of $G$. Such an intersection is a convex subgraph of $G$, thus a partial cube. Since any convex subgraph of a partial cube $G$ is the intersection of halfspaces \cite{AlKn,Ch_thesis}, the restrictions of $G$ coincide with the convex subgraphs of $G$. For a $\Theta$-class $E_i$, we say that the graph $\pi_i(G)$ obtained from $G$ by contracting the edges of $E_i$ is an ($i$-){\it contraction} of $G$; for an illustration, see Figure \ref{fig:M-Theta-class}(b). For a vertex $v$ of $G$, we will denote by $\pi_i(v)$ the image of $v$ under the $i$-contraction, i.e., if $uv$ is an edge of $E_i$, then $\pi_i(u)=\pi_i(v)$, otherwise $\pi_i(u)\ne \pi_i(v)$. We will apply $\pi_i$ to subsets $S\subset V$, by setting $\pi_i(S):=\{\pi_i(v): v\in S\}$. In particular we denote the $i$-{\it contraction} of $G$ by $\pi_i(G)$. From the proof of the first part of ~\cite[Theorem 3]{Ch_hamming} it easily follows that $\pi_i(G)$ is an isometric subgraph of $Q_{m-1}$, thus the class of partial cubes is closed under contractions. Since edge contractions in graphs commute, if $E_i,E_j$ are two distinct $\Theta$-classes, then $\pi_j(\pi_i(G))=\pi_i(\pi_j(G))$. Consequently, for a set $A$ of $k$ $\Theta$-classes, we can denote by $\pi_A(G)$ the isometric subgraph of $Q_{m-k}$ obtained from $G$ by contracting the equivalence classes of edges from $A$. Contractions and restrictions commute in partial cubes \cite{ChKnMa}. Consequently, any set of restrictions and any set of contractions of a partial cube $G$ provide the same result, independently of the order in which we perform them. The resulting graph $G'$ is a partial cube, and $G'$ is called a {\it partial cube minor} (or {\it pc-minor}) of $G$. For a partial cube $H$ we denote by ${\mathcal F}(H)$ the class of all partial cubes not having $H$ as a pc-minor. In this paper we investigate the class ${\mathcal F}(Q_3)$. \begin{figure}[htb] \centering \includegraphics[width=0.90\textwidth]{COMobstructions} \caption{The excluded pc-minors of isometric dimension $\le 4$ for COMs.} \label{fig:COMobstructions} \end{figure} With the observation that a convex subcube of a partial cube can be obtained by contractions as well, the proof of the following lemma is straightforward. \begin{lemma} \label{VCdim_d} A partial cube $G$ belongs to ${\mathcal F}(Q_{d+1})$ if and only if $G$ has VC-dimension $\le d$. \end{lemma} Let $G$ be a partial cube and $E_i$ be a $\Theta$-class of $G$. Then $E_i$ {\it crosses} a convex subgraph $H$ of $G$ if $H$ contains an edge $uv$ of $E_i$ and $E_i$ {\it osculates} $H$ if $E_i$ does not cross $H$ and there exists an edge $uv$ of $E_i$ with $u\in H$ and $v\notin H$. Otherwise, $E_i$ is {\it disjoint} from $H$. The following results summarize the properties of contractions of partial cubes established in \cite{ChKnMa} and \cite{KnMa}: \begin{lemma} \label{contraction-ChKnMa} Let $G$ be a partial cube and $E_i$ be a $\Theta$-class of $G$. 
\begin{itemize} \item[(i)] \cite[Lemma 5]{ChKnMa} If $H$ is a convex subgraph of $G$ and $E_i$ crosses or is disjoint from $H$, then $\pi_i(H)$ is also a convex subgraph of $\pi_i(G)$; \item[(ii)] \cite[Lemma 7]{ChKnMa} If $S$ is a subset of vertices of $G$, then $\pi_i(\conv(S))\subseteq \conv(\pi_i(S))$. If $E_i$ crosses $S$, then $\pi_i(\conv(S))= \conv(\pi_i(S))$; \item[(iii)] \cite[Lemma 10]{ChKnMa} If $S$ is a gated subgraph of $G$, then $\pi_i(S)$ is a gated subgraph of $\pi_i(G)$. \end{itemize} \end{lemma} \begin{lemma}\cite{KnMa}\label{antipodal-contraction} Affine and antipodal partial cubes are closed under contractions. \end{lemma} \subsection{OMs, COMs, and AMPs}\label{OM-COM-AMP} In this subsection, we recall the definitions of oriented matroids, complexes of oriented matroids, and ample set families. \subsubsection{OMs: oriented matroids}\label{sub:OM} Co-invented by Bland $\&$ Las Vergnas~\cite{BlLV} and Folkman $\&$ Lawrence~\cite{FoLa}, and further investigated by many other authors, oriented matroids represent a unified combinatorial theory of orientations of (orientable) ordinary matroids. OMs capture the basic properties of sign vectors representing the circuits in a directed graph or more generally the regions in a central hyperplane arrangement in ${\mathbb R}^d$. OMs obtained from a hyperplane arrangement are called \emph{realizable}. Just as ordinary matroids, oriented matroids may be defined in a multitude of distinct but equivalent ways, see the book by Bj\"orner et al.~\cite{BjLVStWhZi}. Let $U$ be a finite set and let $\covectors$ be a {\it system of sign vectors}, i.e., maps from $U$ to $\{\pm 1,0\} = \{-1,0,+1\}$. The elements of $\covectors$ are also referred to as \emph{covectors} and denoted by capital letters $X, Y, Z$, etc. We denote by $\leq$ the product ordering on $\{ \pm 1,0\}^{U} $ relative to the standard ordering of signs with $0 \leq -1$ and $0 \leq +1$. The \emph{composition} of $X$ and $Y$ is the sign vector $X\circ Y$, where for all $e\in U$ one defines $(X\circ Y)_e = X_e$ if $X_e\ne 0$ and $(X\circ Y)_e=Y_e$ if $X_e=0$. The \emph{topes} of $\covectors$ are the maximal elements of $\covectors$ with respect to $\leq$. A system of sign vectors $(U,\covectors)$ is called an \emph{oriented matroid} (OM) if $\covectors$ satisfies the following three axioms: \begin{itemize} \item [{\bf (C)}] ({\sf Composition)} $X\circ Y \in \covectors$ for all $X,Y \in \covectors$. \item[{\bf (SE)}] ({\sf Strong elimination}) for each pair $X,Y\in\covectors$ and for each $e\in U$ such that $X_eY_e=-1$, there exists $Z \in \covectors$ such that $Z_e=0$ and $Z_f=(X\circ Y)_f$ for all $f\in U$ with $X_fY_f\ne -1$. \item[{\bf (Sym)}] ({\sf Symmetry}) $-\covectors=\{ -X: X\in \covectors\}=\covectors,$ that is, $\covectors$ is closed under sign reversal. \end{itemize} Furthermore, a system of sign-vectors $(U,\covectors)$ is \emph{simple} if it has no ``redundant'' elements, i.e., for each $e \in U$, $\{X_e: X\in \covectors\}=\{+, -,0 \}$ and for each pair $e\neq f$ in $U$, there exist $X,Y \in \covectors$ with $\{X_eX_f,Y_eY_f\}=\{+, -\}$. From (C), (Sym), and (SE) it easily follows that the set ${\mathcal T}$ of topes of any simple OM $\covectors$ are $\{-1,+1\}$-vectors. Therefore ${\mathcal T}$ can be viewed as a set family (where $-1$ means that the corresponding element does not belong to the set and $+1$ that it belongs). We will only consider simple OMs, without explicitly stating it every time. 
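As a small illustration of these axioms, they can be verified by brute force on tiny systems of sign vectors. The Python sketch below is our own illustrative code (the names \texttt{compose} and \texttt{is\_oriented\_matroid} are ours); it checks (C), (Sym), and (SE) for the realizable example of two lines crossing at the origin of ${\mathbb R}^2$, whose covectors are all nine sign vectors of length $2$.
\begin{verbatim}
from itertools import product

def compose(X, Y):
    # Composition of sign vectors: (X o Y)_e = X_e if X_e != 0, else Y_e.
    return tuple(x if x != 0 else y for x, y in zip(X, Y))

def is_oriented_matroid(L):
    # Brute-force check of the axioms (C), (Sym) and (SE).
    L = set(L)
    if any(compose(X, Y) not in L for X in L for Y in L):   # (C)
        return False
    if any(tuple(-x for x in X) not in L for X in L):       # (Sym)
        return False
    for X in L:                                             # (SE)
        for Y in L:
            for e in range(len(X)):
                if X[e] * Y[e] == -1 and not any(
                        Z[e] == 0 and
                        all(Z[f] == compose(X, Y)[f]
                            for f in range(len(X)) if X[f] * Y[f] != -1)
                        for Z in L):
                    return False
    return True

# Two lines crossing at the origin of R^2: all nine sign vectors occur.
print(is_oriented_matroid(set(product((-1, 0, 1), repeat=2))))  # -> True
\end{verbatim}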
The \emph{tope graph} of an OM $\covectors$ is the 1-inclusion graph $G({\mathcal T})$ of $\mathcal T$ viewed as a set family. The \emph{Topological Representation Theorem of Oriented Matroids} of \cite{FoLa} characterizes tope graphs of OMs as region graphs of pseudo-sphere arrangements in a sphere $S^d$~\cite{BjLVStWhZi}. See the bottom part of Figure~\ref{fig:AOM} for an arrangement of pseudo-circles in $S^2$. It is also well-known (see for example~\cite{BjLVStWhZi}) that tope graphs of OMs are partial cubes and that $\covectors$ can be recovered from its tope graph $G({\mathcal T})$ (up to isomorphism). Therefore, we can define all terms in the language of tope graphs. In particular, the isometric dimension of $G({\mathcal T})$ is $|U|$ and its VC-dimension coincides with the dimension $d$ of the sphere $S^d$ hosting a representing pseudo-sphere arrangement. Moreover, a graph $G$ is the tope graph of an \emph{affine oriented matroid} (AOM) if $G$ is a halfspace of a tope graph of an OM. In particular, tope graphs of AOMs are partial cubes as well. \subsubsection{COMs: complexes of oriented matroids} Complexes of oriented matroids (COMs) have been introduced and investigated in \cite{BaChKn} as a far-reaching natural common generalization of oriented matroids, affine oriented matroids, and ample systems of sign-vectors (to be defined below). COMs have quickly been connected to other research, see e.g.~\cite{Baum,Hochstattler,Margolis}, and the tope graphs of COMs have been investigated in depth in \cite{KnMa}, see Subsection~\ref{sub:pc-Minors}. COMs are defined in a similar way to OMs, simply replacing the global axiom (Sym) by a weaker local axiom (FS) of face symmetry: a \emph{complex of oriented matroids} (COM) is a system of sign vectors $(U,\covectors)$ satisfying (SE) and the following axiom: \begin{itemize} \item[{\bf (FS)}] ({\sf Face symmetry}) $X\circ -Y \in \covectors$ for all $X,Y \in \covectors$. \end{itemize} As for OMs, we generally restrict ourselves to \emph{simple} COMs, i.e., COMs defining simple systems of sign-vectors. It is easy to see that (FS) implies (C), yielding that OMs are exactly the COMs containing the zero sign vector ${\bf 0}$, see~\cite{BaChKn}. Also, AOMs are COMs, see~\cite{BaChKn} or~\cite{Baum}. In analogy with realizable OMs, a COM is \emph{realizable} if it is the system of sign vectors of the regions in an arrangement $U$ of (oriented) hyperplanes restricted to a convex set of ${\mathbb R}^d$. See Figure~\ref{fig:COMexample} for an example in ${\mathbb R}^2$. For other examples of COMs, see \cite{BaChKn}. \begin{figure}[htb] \centering \includegraphics[width=\textwidth]{COMexample} \caption{The system of sign-vectors associated to an arrangement of hyperplanes restricted to a convex set and the tope graph of the resulting realizable COM.} \label{fig:COMexample} \end{figure} The simple twist between (Sym) and (FS) leads to a rich combinatorial and geometric structure that is built from OM cells but is quite different from OMs. Let $(U,\covectors)$ be a COM and $X$ be a covector of $\covectors$. The \emph{face} of $X$ is $F(X):=\{ X\circ Y: Y\in \covectors\}$. By \cite[Lemma 4]{BaChKn}, each face $F(X)$ of $\covectors$ is an OM. Moreover, it is shown in \cite[Section 11]{BaChKn} that, replacing each combinatorial face $F(X)$ of $\covectors$ by a PL-ball, one obtains a contractible cell complex associated to each COM. The \emph{topes} and the \emph{tope graphs} of COMs are defined in the same way as for OMs.
Again, the topes ${\mathcal T}$ are $\{-1,+1\}$-vectors, the tope graph $G({\mathcal T})$ is a partial cube, and the COM $\covectors$ can be recovered from its tope graph, see~\cite{BaChKn} or~\cite{KnMa}. As for OMs, the isometric dimension of $G({\mathcal T})$ is $|U|$. If a COM is realizable in $\mathbb{R}^d$, then the VC-dimension of $G({\mathcal T})$ is at most $d$. For each covector $X\in\covectors$, the tope graph of its face $F(X)$ is a gated subgraph of the tope graph of $\covectors$ \cite{KnMa}: the gate of any tope $Y$ in $F(X)$ is the covector $X\circ Y$ (which is obviously a tope). All this implies that the tope graph of any COM $\covectors$ is obtained by amalgamating gated tope subgraphs of its faces, which are all OMs. Let $\uparr \covectors:=\{ Y \in \{ \pm 1,0\}^{U}: X \leq Y \text{ for some } X \in \covectors\}$. Then the \emph{ample systems} (AMPs)\footnote{In the papers on COMs, these systems of sign-vectors are called lopsided (LOPs).} of sign vectors are those COMs such that $\uparr\covectors=\covectors$~\cite{BaChKn}. From the definition it follows that any face $F(X)$ consists of the sign vectors of all faces of the subcube of $[-1,+1]^{U}$ with barycenter $X$. \subsubsection{AMPs: ample set families}\label{sub:AMP} Just above we defined ample systems as COMs satisfying $\uparr\covectors=\covectors$. This is not the first definition of ample systems; all previous definitions define them as families of sets and not as systems of sign vectors. Ample sets have been introduced by Lawrence \cite{La} as asymmetric counterparts of oriented matroids and have been re-discovered independently by several works in different contexts~\cite{BaChDrKo,BoRa,Wiedemann}. Consequently, they received different names: lopsided~\cite{La}, simple~\cite{Wiedemann}, extremal~\cite{BoRa}, and ample~\cite{BaChDrKo,Dr}. Lawrence~\cite{La} defined ample sets for the investigation of the possible sign patterns realized by points of a convex set of $\mathbb{R}^d$. Ample set families admit a multitude of combinatorial and geometric characterizations \cite{BaChDrKo,BoRa,La} and comprise many natural examples arising from discrete geometry, combinatorics, graph theory, and geometry of groups \cite{BaChDrKo,La} (for applications in machine learning, see \cite{ChChMoWa,MoWa}). Let $X$ be a subset of a set $U$ with $m$ elements and let $Q_m=Q(U)$. An {\it $X$-cube} of $Q_m$ is the 1-inclusion graph of the set family $\{ Y\cup X': X'\subseteq X\}$, where $Y$ is a subset of $U\setminus X$. If $|X|=m'$, then any $X$-cube is an $m'$-dimensional subcube of $Q_m$ and $Q_m$ contains $2^{m-m'}$ $X$-cubes. We call any two $X$-cubes \emph{parallel cubes}. Recall that $X\subseteq U$ is {\it shattered} by a set family ${\mathcal S}\subseteq 2^{U}$ if $\{ X\cap S: S\in {\mathcal S}\}=2^X$. Furthermore, $X$ is \emph{strongly shattered} by $\mathcal S$ if the 1-inclusion graph $G({\mathcal S})$ of $\mathcal S$ contains an $X$-cube. Denote by $\oX(\SS)$ and $\uX(\SS)$ the families consisting of all shattered and of all strongly shattered sets of $\SS$, respectively. Clearly, $\uX(\SS)\subseteq \oX(\SS)$ and both $\oX(\SS)$ and $\uX(\SS)$ are closed under taking subsets, i.e., $\oX(\SS)$ and $\uX(\SS)$ are \emph{abstract simplicial complexes}. The \emph{VC-dimension} \cite{VaCh} $\vcd(\SS)$ of $\SS$ is the size of a largest set shattered by $\SS$, i.e., the dimension of the simplicial complex $\oX(\SS)$.
The fundamental \emph{sandwich lemma} (rediscovered independently in \cite{AnRoSa,BoRa,Dr,Pa}) asserts that $\lvert\uX(\SS)\rvert \le \lvert \SS\rvert \le \lvert \oX(\SS)\rvert$. If $d=\vcd(\SS)$ and $m=\lvert U\rvert$, then $\oX(\SS)$ cannot contain more than $\Phi_d(m):=\sum_{i=0}^d \binom{m}{i}$ simplices. Thus, the sandwich lemma yields the well-known \emph{Sauer-Shelah lemma} \cite{Sauer,Shelah,VaCh} that $\lvert \SS \rvert \le \Phi_d(m)$. A set family $\SS$ is called \emph{ample} if $\lvert \SS\rvert=\lvert \oX(\SS)\rvert$ \cite{BoRa,BaChDrKo}. As shown in those papers, this is equivalent to the equality $\oX(\SS)=\uX(\SS)$, i.e., $\SS$ is ample if and only if any set shattered by $\SS$ is strongly shattered. Consequently, the VC-dimension of an ample family is the dimension of the largest cube in its 1-inclusion graph. A nice characterization of ample set families was provided in \cite{La}: $\SS$ is ample if and only if for any cube $Q$ of $Q_m$ such that $Q\cap \SS$ is closed under taking antipodes, either $Q\cap \SS=\varnothing$ or $Q$ is included in $G(\SS)$. The paper \cite{BaChDrKo} provides metric and recursive characterizations of ample families. For example, it is shown in \cite{BaChDrKo} that $\SS$ is ample if and only if any two parallel $X$-cubes of the 1-inclusion graph $G(\SS)$ of $\SS$ can be connected in $G(\SS)$ by a shortest path of $X$-cubes. This implies that 1-inclusion graphs of ample set families are partial cubes; therefore, in what follows we will speak about \emph{ample partial cubes}. Note that maximum set families (i.e., those for which the Sauer-Shelah lemma is tight) are ample. \subsubsection{Characterizing tope graphs of OMs, COMs, and AMPs}\label{sub:pc-Minors} In this subsection we recall the characterizations of \cite{KnMa} of tope graphs of COMs, OMs, and AMPs. We say that a partial cube $G$ is a {\it COM}, an {\it OM}, an {\it AOM}, or an {\it AMP} if $G$ is the tope graph of a COM, OM, AOM, or AMP, respectively. Tope graphs of COMs and AMPs are closed under pc-minors and tope graphs of OMs and AOMs are closed under contractions but not under restrictions. Convex subgraphs of OMs are COMs and convex subgraphs of tope graphs of uniform OMs are ample. The reverse implications are conjectured in~\cite[Conjecture 1]{BaChKn} and~\cite[Conjecture]{La}, respectively. As shown in~\cite{KnMa}, a partial cube is the \emph{tope graph of a COM} if and only if all its antipodal subgraphs are gated. Another characterization from the same paper is by an infinite family of excluded pc-minors. This family is denoted by $\mathcal{Q}^-$ and defined as follows. For every $m \geq 4$ there are partial cubes $X_m^1, \ldots, X_m^{m+1}\in \mathcal{Q}^-$. Here, $X_m^{m+1}:=Q_m\setminus\{(0,\ldots,0),(0,\ldots,1,0)\}$, $X_m^{m}=X_m^{m+1}\setminus\{(0,\ldots,0,1)\}$, and $X_m^{m-i}=X_m^{m-i+1}\setminus\{e_{im}\}$. Here $e_{im}$ denotes the vector with all zeroes except positions $i$ and $m$, where it is one. See Figure~\ref{fig:COMobstructions} for the members of $\mathcal{Q}^-$ of isometric dimension at most $4$. Note in particular that $X_4^1=SK_4$. Ample partial cubes can be characterized by the set of excluded pc-minors $\mathcal{Q}^{- -}=\{Q_m^{- -}: m\geq 4\}$, where $Q_m^{- -}=Q_m\setminus\{(0,\ldots,0), (1,\ldots,1)\}$ \cite{KnMa}. Further characterizations from~\cite{KnMa} yield that OMs are exactly the antipodal COMs, and (as mentioned at the end of Subsection~\ref{sub:OM}) AOMs are exactly the halfspaces of OMs.
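These notions are easy to check by brute force on small families. The following Python sketch is our own illustrative code (the function names are ours); it computes the shattered and strongly shattered sets of a family and tests ampleness via the equality case of the sandwich lemma. For the six one- and two-element subsets of a $3$-set (whose 1-inclusion graph is the $6$-cycle) it reports that the family is not ample, consistent with the exclusion of $C_6$ in Proposition~\ref{prop:excludedminors} below.
\begin{verbatim}
from itertools import combinations

def powerset(X):
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

def shattered(U, S):
    S = [frozenset(A) for A in S]
    return {X for X in powerset(U)
            if {X & A for A in S} == set(powerset(X))}

def strongly_shattered(U, S):
    S = {frozenset(A) for A in S}
    # X is strongly shattered if S contains an X-cube {Y | X' : X' subset of X}.
    return {X for X in powerset(U)
            if any(all((Y | Xp) in S for Xp in powerset(X))
                   for Y in powerset(frozenset(U) - X))}

def is_ample(U, S):
    return len(shattered(U, S)) == len(S)

U = {1, 2, 3}
ample_family = [set(), {1}, {2}, {3}, {1, 2}, {1, 3}]
cycle_C6 = [{1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}]
print(is_ample(U, ample_family))   # -> True
print(is_ample(U, cycle_C6))       # -> False
print(len(strongly_shattered(U, cycle_C6)), len(cycle_C6),
      len(shattered(U, cycle_C6)))  # sandwich lemma: 4 <= 6 <= 7
\end{verbatim}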
On the other hand, ample partial cubes are exactly the partial cubes in which all antipodal subgraphs are hypercubes. A central notion in COMs and OMs is the one of the \emph{rank} of $G$, which is the largest $d$ such that $G$ has $Q_d$ as a pc-minor, i.e., the smallest $d$ such that $G\in\mathcal{F}(Q_{d+1})$. Hence this notion fits well with the topic of the present paper, and combining the families of excluded pc-minors $\mathcal{Q}^{- -}$ and $\mathcal{Q}^{-}$, respectively, with $Q_3$, one gets: \begin{proposition}\cite[Corollary 7.5]{KnMa}\label{prop:excludedminors} The class of two-dimensional ample partial cubes coincides with $\mathcal{F}(Q_3, C_6)$. The class of two-dimensional COMs coincides with $\mathcal{F}(Q_3, SK_4)$. \end{proposition} \begin{figure}[htb] \centering \includegraphics[width=0.80\textwidth]{diskbis} \caption{From upper left to bottom right: a disk $G$, a pseudoline arrangement $U$ whose region graph is $G$, adding a line $\ell_{\infty}$ to $U$, the pseudocircle arrangement $U'$ obtained from $U\cup\{\ell_{\infty}\}$ with a centrally mirrored copy, the pseudocircle arrangement $U'$ with region graph $G'$, the OM $G'$ with halfspace $G$.} \label{fig:AOM} \end{figure} \subsubsection{Disks} A \emph{pseudoline arrangement} $U$ is a family of simple non-closed curves where every pair of curves intersects exactly once and crosses in that point. Moreover, the curves must be extendable to infinity without introducing further crossings. Note that several curves are allowed to cross in the same point. See Figure~\ref{fig:AOM} for an example. We say that a partial cube $G$ is a \emph{disk} if it is the region graph of a pseudoline arrangement $U$. The $\Theta$-classes of $G$ correspond to the elements of $U$. Contrary to a convention sometimes made in the literature, we allow a pseudoline arrangement $U$ to be empty, to consist of only one element, or to have all pseudolines crossing in a single point. These situations yield the simplest examples of disks, namely: $K_1$, $K_2$, and the even cycles. Disks are closed under contraction, since contracting a $\Theta$-class corresponds to removing a line from the pseudoline arrangement. It is well-known that disks are tope graphs of AOMs of rank at most $2$. A quick explanation can be found around~\cite[Theorem 6.2.3]{BjLVStWhZi}. The idea is to first add a line $\ell_{\infty}$ at infinity to the pseudoline arrangement $U$ representing $G$. Then embed the disk enclosed by $\ell_{\infty}$ on a hemisphere of $S^2$, such that $\ell_{\infty}$ maps onto the equator. Now, mirror the arrangement through the origin of $S^2$ in order to obtain a \emph{pseudocircle arrangement} $U'$. The region graph of $U'$ is an OM $G'$, and the regions on one side of $\ell_{\infty}$ correspond to a halfspace of $G'$ isomorphic to $G$. See Figure~\ref{fig:AOM} for an illustration. \section{Hyperplanes and isometric expansions} In this section we characterize the graphs from ${\mathcal F}(Q_{d+1})$ (i.e., partial cubes of VC-dimension $\le d$) via the hyperplanes of their $\Theta$-classes and via the operation of isometric expansion. \subsection{Hyperplanes} Let $G$ be isometrically embedded in the hypercube $Q_m$. For a $\Theta$-class $E_i$ of $G$, recall that $G^-_i,G^+_i$ denote the complementary halfspaces defined by $E_i$ and $\partial G^-_i, \partial G^+_i$ denote their boundaries.
The {\it hyperplane} $H_i$ of $E_i$ has the middles of edges of $E_i$ as the vertex-set and two such middles are adjacent in $H_i$ if and only if the corresponding edges belong to a common square of $G$, i.e., $H_i$ is isomorphic to $\partial G^-_i$ and $\partial G^+_i$. Combinatorially, $H_i$ is the 1-inclusion graph of the set family defined by $\partial H^-_i\cup \partial H^+_i$ by removing from each set the element $i$. \begin{proposition} \label{VCdim_pc} A partial cube $G$ has VC-dimension $\le d$ (i.e., $G$ belongs to ${\mathcal F}(Q_{d+1})$) if and only if each hyperplane $H_i$ of $G$ has VC-dimension $\le d-1$. \end{proposition} \begin{proof} If some hyperplane $H_i$ of $G\in {\mathcal F}(Q_{d+1})$ has VC-dimension $d$, then $\partial G^-_i$ and $\partial G^+_i$ also have VC-dimension $d$ and their union $\partial H^-_i\cup \partial H^+_i$ has VC-dimension $d+1$. Consequently, $G$ has VC-dimension $\ge d+1$, contrary to Lemma \ref{VCdim_d}. To prove the converse implication, denote by ${\mathcal H}_{d-1}$ the set of all partial cubes of $G$ in which the hyperplanes have VC-dimension $\le d-1$. We assert that ${\mathcal H}_{d-1}$ is closed under taking pc-minors. First, ${\mathcal H}_{d-1}$ is closed under taking restrictions because the hyperplanes $H'_i$ of any convex subgraph $G'$ of a graph $G\in {\mathcal H}_{d-1}$ are subgraphs of the respective hyperplanes $H_i$ of $G$. Next we show that ${\mathcal H}_{d-1}$ is closed under taking contractions. Let $G\in {\mathcal H}_{d-1}$ and let $E_i$ and $E_j$ be two different $\Theta$-classes of $G$. Since $\pi_{j}(G)$ is a partial cube, to show that $\pi_{j}(G)$ belongs to ${\mathcal H}_{d-1}$ it suffices to show that $\partial \pi_{j}(G)^-_i=\pi_{j}(\partial G^-_i)$. Indeed, this would imply that the $i$th hyperplane of $\pi_{j}(G)$ coincides with the $j$th contraction of the $i$th hyperplane of $G$. Consequently, this would imply that the VC-dimension of all hyperplanes of $\pi_{j}(G)$ is at most $d-1$. Pick $v\in \pi_{j}(\partial G^-_i)$. Then $v$ is the image of the edge $v'v''$ of the hypercube $Q_m$ such that at least one of the vertices $v',v''$, say $v'$, belongs to $\partial G^-_i$. This implies that the $i$th neighbor $u'$ of $v'$ in $Q_m$ belongs to $\partial G^+_i$. Let $u''$ be the common neighbor of $u'$ and $v''$ in $Q_m$ and $u$ be the image of the edge $u'u''$ by the $j$-contraction. Since $u'\in \partial G^+_i$, the $i$th edge $vu$ belongs to $\pi_j(G)$, whence $v\in \partial \pi_{j}(G)^-_i$ and $u\in \partial \pi_{j}(G)^+_i$. This shows $\pi_{j}(\partial G^-_i)\subseteq \partial \pi_{j}(G)^-_i$. To prove the converse inclusion, pick a vertex $v\in \partial \pi_{j}(G)^-_i$. This implies that the $i$-neighbor $u$ of $v$ in $Q_m$ belongs to $\partial \pi_{j}(G)^+_i$. As in the previous case, let $v$ be the image of the $j$-edge $v'v''$ of the hypercube $Q_m$ and let $u'$ and $u''$ be the $i$-neighbors of $v'$ and $v''$ in $Q_m$. Then $u$ is the image of the $j$-edge $u'u''$. Since the vertices $u$ and $v$ belong to $\pi_{j}(G)$, at least one vertex from each of the pairs $\{ u',u''\}$ and $\{ v',v''\}$ belongs to $G$. If one of the edges $u'v'$ or $u''v''$ of $Q_m$ is an edge of $G$, then $u\in \pi_{j}(\partial G^+_i)$ and $v\in \pi_{j}(\partial G^-_i)$ and we are done. Finally, suppose that $u'$ and $v''$ are vertices of $G$. Since $G$ is an isometric subgraph of $Q_m$ and $d(u',v'')=2$, a common neighbor $v',u''$ of $u'$ and $v''$ also belongs to $G$ and we fall in the previous case. 
This shows that $\partial \pi_{j}(G)^-_i\subseteq \pi_{j}(\partial G^-_i)$. Consequently, ${\mathcal H}_{d-1}$ is closed under taking pc-minors. Since $Q_{d+1}$ does not belong to ${\mathcal H}_{d-1}$, if $G$ belongs to ${\mathcal H}_{d-1}$, then $G$ does not have $Q_{d+1}$ as a pc-minor, i.e., $G\in {\mathcal F}(Q_{d+1})$. \end{proof} \begin{corollary} \label{VCdim_1} A partial cube $G$ belongs to ${\mathcal F}(Q_{3})$ if and only if each hyperplane $H_i$ of $G$ has VC-dimension $\le 1$. \end{corollary} \begin{remark} In Proposition~\ref{VCdim_pc} it is essential for $G$ to be a partial cube. For example, let ${\mathcal S}$ consist of all subsets of even size of an $m$-element set. Then the 1-inclusion graph $G({\mathcal S})$ of ${\mathcal S}$ consists of isolated vertices (i.e., $G({\mathcal S})$ does not contain any edge). Therefore, any hyperplane of $G({\mathcal S})$ is empty, however the VC-dimension of $G({\mathcal S})$ depends on $m$ and can be arbitrarily large. \end{remark} By Corollary \ref{VCdim_1}, the hyperplanes of graphs from ${\mathcal F}(Q_{3})$ have VC-dimension 1. However they are not necessarily partial cubes: any 1-inclusion graph of VC-dimension 1 may occur as a hyperplane of a graph from ${\mathcal F}(Q_{3})$. Thus, it will be useful to establish the metric structure of 1-inclusion graphs of VC-dimension 1. We say that a 1-inclusion graph $G$ is a {\it virtual isometric tree} of $Q_m$ if there exists an isometric tree $T$ of $Q_m$ containing $G$ as an induced subgraph. Clearly, each virtually isometric tree is a forest in which each connected component is an isometric subtree of $Q_m$. \begin{proposition} \label{virtual_isometric_tree} An induced subgraph $G$ of $Q_m$ has VC-dimension $1$ if and only if $G$ is a virtual isometric tree of $Q_m$. \end{proposition} \begin{proof} Each isometric tree of $Q_m$ has VC-dimension $1$, thus any virtual isometric tree has VC-dimension $\le 1$. Conversely, let $G$ be an induced subgraph of $Q_m$ of VC-dimension $\le 1$. We will say that two parallelism classes $E_i$ and $E_j$ of $Q_m$ are {\it compatible} on $G$ if one of the four intersections $G^-_i\cap G^-_j, G^-_i\cap G^+_j,G^+_i\cap G^-_j, G^+_i\cap G^+_j$ is empty and {\it incompatible} if the four intersections are nonempty. From the definition of VC-dimension immediately follows that $G$ has VC-dimension 1 if and only if any two parallelism classes of $Q_m$ are compatible on $G$. By a celebrated result by Buneman \cite{Bu} (see also \cite[Subsection 3.2]{DrHuKoMouSp}), on the vertex set of $G$ one can define a weighted tree $T_0$ with the same vertex-set as $G$ and such that the bipartitions $\{G^-_i,G^+_i\}$ are in bijection with the splits of $T_0$, i.e., bipartitions obtained by removing edges of $T_0$. The length of each edge of $T_0$ is the number of $\Theta$-classes of $Q_m$ defining the same bipartition of $G$. The distance $d_{T_0}(u,v)$ between two vertices of $T_0$ is equal to the number of parallelism classes of $Q_m$ separating the vertices of $T_0$. We can transform $T_0$ into an isometrically embedded tree $T$ of $Q_m$ in the following way: if the edge $uv$ of $T_0$ has length $k>1$, then replace this edge by any shortest path $P(u,v)$ of $Q_m$ between $u$ and $v$. Then it can be easily seen that $T$ is an isometric tree of $Q_m$, thus $G$ is a virtual isometric tree. \end{proof} \subsection{Isometric expansions}\label{subsec:isometric} In order to characterize median graphs Mulder \cite{Mu} introduced the notion of a convex expansion of a graph. 
A similar construction of isometric expansion was introduced in \cite{Ch_thesis,Ch_hamming}, with the purpose to characterize isometric subgraphs of hypercubes. A triplet $(G^1,G^0,G^2)$ is called an {\it isometric cover} of a connected graph $G$, if the following conditions are satisfied: \begin{itemize} \item $G^1$ and $G^2$ are two isometric subgraphs of $G$; \item $V(G)=V(G^1)\cup V(G^2)$ and $E(G)=E(G^1)\cup E(G^2)$; \item $V(G^1)\cap V(G^2)\ne \varnothing$ and $G^0$ is the subgraph of $G$ induced by $V(G^1)\cap V(G^2)$. \end{itemize} A graph $G'$ is an {\it isometric expansion} of $G$ with respect to an isometric cover $(G^1,G^0,G^2)$ of $G$ (notation $G'=\psi(G)$) if $G'$ is obtained from $G$ in the following way: \begin{itemize} \item replace each vertex $x$ of $V(G^1)\setminus V(G^2)$ by a vertex $x_1$ and replace each vertex $x$ of $V(G^2)\setminus V(G^1)$ by a vertex $x_2$; \item replace each vertex $x$ of $V(G^1)\cap V(G^2)$ by two vertices $x_1$ and $x_2$; \item add an edge between two vertices $x_i$ and $y_i,$ $i=1,2$ if and only if $x$ and $y$ are adjacent vertices of $G^i$, $i=1,2$; \item add an edge between any two vertices $x_1$ and $x_2$ such that $x$ is a vertex of $V(G^1)\cap V(G^2)$. \end{itemize} \begin{figure}[htb] \centering \includegraphics[width=.80\textwidth]{M_isom_exp} \caption{(a) The graph $M_*$. (b) The isometric expansion $M'_*$ of $M_*$.} \label{fig:isom_exp_M_*} \end{figure} In other words, $G'$ is obtained by taking a copy of $G^1$, a copy of $G^2$, supposing them disjoint, and adding an edge between any two twins, i.e., two vertices arising from the same vertex of $G^0$. The following result characterizes all partial cubes by isometric expansions: \begin{proposition} \label{pc-expansion} \cite{Ch_thesis,Ch_hamming} A graph is a partial cube if and only if it can be obtained by a sequence of isometric expansions from a single vertex. \end{proposition} We also need the following property of isometric expansions: \begin{lemma}\label{convex_expansion} \cite[Lemma 6]{ChKnMa} If $S$ is a convex subgraph of a partial cube $G$ and $G'$ is obtained from $G$ by an isometric expansion $\psi$, then $S':=\psi(S)$ is a convex subgraph of $G'$. \end{lemma} \begin{example} For partial cubes, the operation of isometric expansion can be viewed as the inverse to the operation of contraction of a $\Theta$-class. For example, the two-dimensional partial cube $M$ can be obtained from the two-dimensional partial cube $M_*$ (see Figure \ref{fig:M-Theta-class}(b)) via an isometric expansion. In Figure~\ref{fig:isom_exp_M_*} we present another isometric expansion $M'_*$ of $M_*$. By Proposition \ref{pc-expansion}, $M'_*$ is a partial cube but one can check that it is no longer two-dimensional. \end{example} Therefore, contrary to all partial cubes, the classes ${\mathcal F}(Q_{d+1})$ are not closed under arbitrary isometric expansions. In this subsection, we characterize the isometric expansions which preserve the class ${\mathcal F}(Q_{d+1})$. Let $G$ be isometrically embedded in the hypercube $Q_m=Q(X)$. Suppose that $G$ shatters the subset $Y$ of $X$. For a vertex $v_A$ of $Q(Y)$ (corresponding to a subset $A$ of $Y$), denote by $F(v_A)$ the set of vertices of the hypercube $Q_m$ which projects to $v_A$. In set-theoretical language, $F(v_A)$ consists of all vertices $v_B$ of $Q(X)$ corresponding to subsets $B$ of $X$ such that $B\cap Y=A$. Therefore, $F(v_A)$ is a subcube of dimension $m-|Y|$ of $Q_m$. Let $G(v_A)=G\cap F(v_A)$. 
Since $F(v_A)$ is a convex subgraph of $Q_m$ and $G$ is an isometric subgraph of $Q_m$, $G(v_A)$ is also an isometric subgraph of $Q_m$. Summarizing, we obtain the following property: \begin{lemma} \label{intersection_with_faces} If $G$ is an isometric subgraph of $Q_m=Q(X)$ which shatters $Y\subseteq X$, then for any vertex $v_A$ of $Q(Y)$, $G(v_A)$ is a nonempty isometric subgraph of $G$. \end{lemma} The following lemma establishes an interesting separation property in partial cubes: \begin{lemma} \label{isometric cover} If $(G^1,G^0,G^2)$ is an isometric cover of an isometric subgraph $G$ of $Q_m=Q(X)$ and $G^1$ and $G^2$ shatter the same subset $Y$ of $X$, then $G^0$ also shatters $Y$. \end{lemma} \begin{proof} To prove that $G^0$ shatters $Y$ it suffices to show that for any vertex $v_A$ of $Q(Y)$, $G^0\cap F(v_A)$ is nonempty. Since $G^1$ and $G^2$ both shatter $Q(Y)$, $G^1\cap F(v_A)$ and $G^2\cap F(v_A)$ are nonempty subgraphs of $G$. Pick any vertices $x\in V(G^1\cap F(v_A))$ and $y\in V(G^2\cap F(v_A))$. Then $x$ and $y$ are vertices of $G(v_A)$. Since by Lemma \ref{intersection_with_faces}, $G(v_A)$ is an isometric subgraph of $Q_m$, there exists a shortest path $P(x,y)$ of $Q_m$ belonging to $G(v_A)$. Since $(G^1,G^0,G^2)$ is an isometric cover of $G$, $P(x,y)$ contains a vertex $z$ of $G^0$. Consequently, $z\in V(G^0\cap F(v_A))$, and we are done. \end{proof} \begin{proposition} \label{expansion-Qd+1} Let $G'$ be obtained from $G\in {\mathcal F}(Q_{d+1})$ by an isometric expansion with respect to $(G^1,G^0,G^2)$. Then $G'$ belongs to ${\mathcal F}(Q_{d+1})$ if and only if $G^0$ has VC-dimension $\le d-1$. \end{proposition} \begin{proof} The fact that $G'$ is a partial cube follows from Proposition \ref{pc-expansion}. Let $E_{m+1}$ be the unique $\Theta$-class of $G'$ which does not exist in $G$. Then the halfspaces $(G')^-_{m+1}$ and $(G')^+_{m+1}$ of $G'$ are isomorphic to $G^1$ and $G^2$ and their boundaries $\partial (G')^-_{m+1}$ and $\partial (G')^+_{m+1}$ are isomorphic to $G^0$. If $G'$ belongs to ${\mathcal F}(Q_{d+1})$, by Proposition \ref{VCdim_pc} necessarily $G^0$ has VC-dimension $\le d-1$. Conversely, let $G^0$ be of VC-dimension $\le d-1$. Suppose that $G'$ has VC-dimension $d+1$. Since $G$ has VC-dimension $d$, this implies that any set $Y'$ of size $d+1$ shattered by $G'$ contains the element $m+1$. Let $Y=Y'\setminus \{ m+1\}$. The $(m+1)$th halfspaces $(G')^-_{m+1}$ and $(G')^+_{m+1}$ of $G'$ shatter the set $Y$. Since $(G')^-_{m+1}$ and $(G')^+_{m+1}$ are isomorphic to $G^1$ and $G^2$, the subgraphs $G^1$ and $G^2$ of $G$ both shatter $Y$. By Lemma \ref{isometric cover}, the subgraph $G^0$ of $G$ also shatters $Y$. Since $|Y|=d$, this contradicts our assumption that $G^0$ has VC-dimension $\le d-1$. \end{proof} Let us end this section with a useful lemma with respect to antipodal partial cubes: \begin{lemma}\label{lem:antipodal} If $G$ is a proper convex subgraph of an antipodal partial cube $H\in\mathcal{F}(Q_{d+1})$, then $G\in\mathcal{F}(Q_{d})$. \end{lemma} \begin{proof} Suppose by way of contradiction that $G$ has $Q_d$ as a pc-minor. Since convex subgraphs of $H$ are intersections of halfspaces, there exists a $\Theta$-class $E_i$ of $H$ such that $G$ is included in the halfspace $H_i^+$. Since $H$ is antipodal, the subgraph $-G \subseteq H_i^-$ consisting of antipodes of vertices of $G$ is isomorphic to $G$. As $G\subseteq H^+_i$, $-G$ and $G$ are disjoint. 
Since $G$ has $Q_d$ as a pc-minor, $-G$ also has $Q_{d}$ as a pc-minor: both those minors are obtained by contracting the same set $I$ of $\Theta$-classes of $H$; note that $E_i\notin I$. Thus, contracting the $\Theta$-classes from $I$ and all other $\Theta$-classes not crossing the $Q_d$ except $E_i$, we will get an antipodal graph $H'$, since antipodality is preserved by contractions. Now, $H'$ consists of two copies of $Q_d$ separated by $E_i$. Take any vertex $v$ in $H'$. Then there is a path from $v$ to $-v$ first crossing all $\Theta$-classes of the cube containing $v$ and then $E_i$, to finally reach $-v$. Thus, $-v$ is adjacent to $E_i$ and hence every vertex of $H'$ is adjacent to $E_i$. Thus $H'=Q_{d+1}$, contrary to the assumption that $H\in\mathcal{F}(Q_{d+1})$. \end{proof} \section{Gated hulls of 6-cycles} In this section, we prove that in two-dimensional partial cubes the gated hull of any 6-cycle $C$ is either $C$, or $Q^-_3$, or a maximal full subdivision of $K_n$. \subsection{Full subdivisions of $K_n$} A {\it full subdivision of $K_n$} (or {\it full subdivision} for short) is the graph $SK_n$ obtained from the complete graph $K_n$ on $n$ vertices by subdividing each edge of $K_n$ once; $SK_n$ has $n+\binom{n}{2}$ vertices and $n(n-1)$ edges. The $n$ vertices of $K_n$ are called {\it original} vertices of $SK_n$ and the new vertices are called {\it subdivision} vertices. Note that $SK_3$ is the 6-cycle $C_6$. Each $SK_n$ can be isometrically embedded into the $n$-cube $Q_n$ in such a way that each original vertex $u_i$ is encoded by the one-element set $\{ i\}$ and each vertex $u_{i,j}$ subdividing the edge $ij$ of $K_n$ is encoded by the 2-element set $\{ i,j\}$ (we call this embedding of $SK_n$ a {\it standard embedding}). If we add to $SK_n$ the vertex $v_{\varnothing}$ of $Q_n$ which corresponds to the empty set $\varnothing$, we will obtain the partial cube $SK^*_n$. Since both $SK_n$ and $SK^*_n$ are encoded by subsets of size $\le 2$, those graphs have VC-dimension 2. Consequently, we obtain: \begin{lemma} \label{SKn} For any $n$, $SK_n$ and $SK^*_n$ are two-dimensional partial cubes. \end{lemma} \begin{figure}[htb] \centering \includegraphics[width=.70\textwidth]{SK_4} \caption{(a) An isometric embedding of $SK_4$ into $Q_4$. (b) A standard embedding of $SK_4$. (c) A completion of $SK_4$ to $SK^*_4$.} \label{fig:representation_SK_4} \end{figure} \begin{example} Our running example $M$ contains two isometrically embedded copies of $SK_4$. In Figure~\ref{fig:representation_SK_4}(a)\&(b) we present two isometric embeddings of $SK_4$ into the 4-cube $Q_4$, the second one is the standard embedding of $SK_4$. The original and subdivision vertices are illustrated by squares and circles, respectively. Figure~\ref{fig:representation_SK_4}(c) describes the completion of $SK_4$ to $SK^*_4$. \end{example} \begin{lemma} \label{standardSKn} If $H=SK_n$ with $n \ge 4$ is an isometric subgraph of a partial cube $G$, then $G$ admits an isometric embedding into a hypercube such that the embedding of $H$ is standard. \end{lemma} \begin{proof} Pick any original vertex of $H$ as the base point $b$ of $G$ and consider the standard isometric embedding $\varphi$ of $G$ into $Q_m$. Then $\varphi(b)=\varnothing$. In $H$ the vertex $b$ is adjacent to $n-1\ge 3$ subdivision vertices of $H$. Then for each of those vertices $v_i, i=1,\ldots,n-1,$ we can suppose that $\varphi(v_i)=\{ i\}$. Each $v_i$ is adjacent in $H$ to an original vertex $u_i\ne b$. 
Since $H$ contains at least three such original vertices and they have pairwise distance 2, one can easily check that the label $\varphi(u_i)$ consists of $i$ and an element common to all such vertices, which we denote by $n$. Finally, the label of any subdivision vertex $u_{i,j}$ adjacent to the original vertices $u_i$ and $u_j$ is $\{ i,j,n\}$. Now consider an isometric embedding $\varphi'$ of $G$ defined by setting $\varphi'(v)=\varphi(v)\Delta \{ n\}$ for any vertex $v$ of $G$. Then $\varphi'$ provides a standard embedding of $H$: $\varphi'(b)=\{ n\},$ $\varphi'(u_i)=\{ i\}$ for any original vertex $u_i$, $\varphi'(v_i)=\{ i,n\}$ for any subdivision vertex $v_i$ adjacent to $b$, and $\varphi'(u_{i,j})=\{ i,j\}$ for any other subdivision vertex $u_{i,j}$.
\end{proof}
By Lemma \ref{standardSKn}, when a full subdivision $H=SK_n$ of a graph $G\in {\mathcal F}(Q_3)$ is fixed, we assume that $G$ is isometrically embedded in a hypercube so that $H$ is standardly embedded. We describe next the isometric expansions of $SK_n$ which result in two-dimensional partial cubes. An isometric expansion of a partial cube $G$ with respect to $(G^1,G^0,G^2)$ is called {\it peripheral} if at least one of the subgraphs $G^1,G^2$ coincides with $G^0$, i.e., $G^1\subseteq G^2$ or $G^2\subseteq G^1$.
\begin{lemma} \label{expansionSKn} If $G'$ is obtained from $G:=SK_n$ with $n \ge 4$ by an isometric expansion with respect to $(G^1,G^0,G^2)$, then $G'\in {\mathcal F}(Q_3)$ if and only if this is a peripheral expansion and $G^0$ is an isometric tree of $SK_n$. \end{lemma}
\begin{proof} The fact that an isometric expansion of $SK_n$, such that $G^0$ is an isometric tree, belongs to ${\mathcal F}(Q_3)$ follows from Proposition \ref{expansion-Qd+1} and Lemma \ref{SKn}. Conversely, suppose that $G'$ belongs to ${\mathcal F}(Q_3)$. By Proposition \ref{expansion-Qd+1}, $G^0$ has VC-dimension $\le 1$ and by Proposition \ref{virtual_isometric_tree} $G^0$ is a virtual tree. It suffices to prove that $G^1$ or $G^2$ coincides with $G^0$. Indeed, since $G^1$ and $G^2$ are isometric subgraphs of $SK_n$, this will also imply that $G^0$ is an isometric tree. We distinguish two cases.
\medskip\noindent {\bf Case 1.} First, let $G^0$ contain two original vertices $u_{i}$ and $u_{j}$. Since $u_{i}$ and $u_{j}$ belong to $G^1$ and $G^2$ and those two subgraphs are isometric subgraphs of $G$, the unique common neighbor $u_{i,j}$ of $u_{i}$ and $u_{j}$ must belong to $G^1$ and $G^2$, and thus to $G^0$. If another original vertex $u_{k}$ belongs to $G^0$, then the four vertices $u_{i,j}, u_{i}, u_{j},u_{k}$ of $G^0$ shatter the set $\{ i,j\}$, contrary to the assumption that $G^0$ has VC-dimension $\le 1$ (Proposition \ref{expansion-Qd+1}). This implies that each other original vertex $u_{k}$ either belongs to $G^1\setminus G^2$ or to $G^2\setminus G^1$. If there exist original vertices $u_{k}$ and $u_{\ell}$ such that $u_{k}$ belongs to $G^1\setminus G^2$ and $u_{\ell}$ belongs to $G^2\setminus G^1$, then their unique common neighbor $u_{k,\ell}$ necessarily belongs to $G^0$. But in this case the four vertices $u_{i,j}, u_{i}, u_{j},u_{k,\ell}$ of $G^0$ shatter the set $\{ i,j\}$. Thus we can suppose that all other original vertices $u_{k}$ belong to $G^1\setminus G^2$. Moreover, for the same reason and since $G^1$ is an isometric subgraph of $G$, any vertex $u_{k,\ell}$ with $\{ k,\ell\}\ne \{ i,j\}$ also belongs to $G^1\setminus G^2$.
Since $G^1$ is an isometric subgraph of $G$, for any $k\ne i,j$, the vertices $u_{i,k}, u_{j,k}$ belong to $G^1$. Therefore $G^1=G$ and $G^0=G^2$. Since $G^2$ is an isometric subgraph of $G$ and $G^0$ has VC-dimension $\le 1$, $G^0$ is an isometric subtree of $G$.
\medskip\noindent {\bf Case 2.} Now, suppose that $G^0$ contains at most one original vertex. Let $A^1$ be the set of original vertices belonging to $G^1\setminus G^2$ and $A^2$ be the set of original vertices belonging to $G^2\setminus G^1$. First suppose that $|A^1|\ge 2$ and $|A^2|\ge 2$, say $u_{1},u_{2}\in A^1$ and $u_{3},u_{4}\in A^2$. But then the vertices $u_{1,3},u_{1,4},u_{2,3},u_{2,4}$ must belong to $G^0$. Since those four vertices shatter the set $\{ 1,3\}$, we obtain a contradiction with the assumption that $G^0$ has VC-dimension $\le 1$. Hence, one of the sets $A^1$ or $A^2$ contains at most one vertex. Suppose without loss of generality that $A^1$ contains at least $n-2$ original vertices $u_{1},u_{2},\ldots, u_{n-2}$. First suppose that $G^1$ contains all original vertices. Then since $G^1$ is an isometric subgraph of $G$, each subdivision vertex $u_{i,j}$ also belongs to $G^1$. This implies that $G^1=G$ and we are done. Thus suppose that the vertex $u_{n}$ does not belong to $A^1$. Since $G^0$ contains at most one original vertex, one of the vertices $u_{n-1},u_{n}$, say $u_{n}$, must belong to $A^2$ (i.e., to $G^2\setminus G^1$). This implies that all vertices $u_{i,n}, i=1,\ldots,n-2$ belong to $G^0$. Since $n\ge 4$ and $u_{n}$ is the unique common neighbor of the vertices $u_{i,n}$ and $u_{j,n}$ with $i\ne j$ and $1\le i,j\le n-2$ and $G^1$ is an isometric subgraph of $G$, necessarily $u_{n}$ must be a vertex of $G^1$, contrary to our assumption that $u_{n}\in A^2$. This contradiction concludes the proof of the lemma. \end{proof}
\begin{corollary}\label{SKnCopie} If $G \in \mathcal{F}(Q_3)$ and $G$ contains $SK_n$ with $n \ge 4$ as a pc-minor, then $G$ contains $SK_n$ as a convex subgraph. \end{corollary}
\begin{proof} Suppose by way of contradiction that $G'$ is a smallest graph in $\mathcal{F}(Q_3)$ which contains $SK_n$ as a pc-minor but does not contain $SK_n$ as a convex subgraph. This means that the contraction of $G'$ along any $\Theta$-class of $G'$ that does not cross the $SK_n$ pc-minor still contains this $SK_n$ as a pc-minor. We denote the resulting graph by $G$. Since $G\in \mathcal{F}(Q_3)$, by the minimality of $G'$, $G$ contains $SK_n$ as a convex subgraph; denote this subgraph by $H$. Now, $G'$ is obtained from $G$ by an isometric expansion. By Lemma \ref{convex_expansion}, $H'=\psi(H)$ is a convex subgraph of $G'$. Since $G'\in \mathcal{F}(Q_3)$, by Lemma \ref{expansionSKn} this isometric expansion restricted to $H=SK_n$ is a peripheral expansion. This implies that the image of $H$ under this expansion is a convex subgraph $H'$ of $G'$ which contains a copy of $SK_n$ as a convex subgraph, and thus $G'$ contains a convex copy of $SK_n$. \end{proof}
\begin{lemma} \label{6cycle} If $C=SK_3$ is an isometric 6-cycle of $G\in {\mathcal F}(Q_3)$, then $C$ is convex or its convex hull is $Q^-_3$. \end{lemma}
\begin{proof} The convex hull of $C$ in $Q_m$ is a 3-cube $Q$ and $\conv(C)=Q\cap V(G)$. Since $G$ belongs to ${\mathcal F}(Q_3)$, $Q$ cannot be included in $G$. Hence either $\conv(C)=C$ or $\conv(C)=Q^-_3$.
\end{proof}
\subsection{Gatedness of full subdivisions of $K_n$}
The goal of this subsection is to prove the following result:
\begin{proposition} \label{gatedSKn} If $H=SK_n$ with $n \ge 4$ is a convex subgraph of $G\in {\mathcal F}(Q_3)$ and $H$ is not included in a larger full subdivision of $G$, then $H$ is a gated subgraph of $G$. \end{proposition}
\begin{proof} The proof of Proposition \ref{gatedSKn} uses the results of the previous subsection and two claims.
\begin{claim} \label{convexSKn} If $H=SK_n$ with $n \ge 4$ is an isometric subgraph of $G\in {\mathcal F}(Q_3)$, then either $H$ extends in $G$ to $SK^*_n$ or $H$ is a convex subgraph of $G$. \end{claim}
\begin{proof} Suppose by way of contradiction that $H=SK_n$ does not extend in $G$ to $SK^*_n$ and yet $H$ is not convex. Then there exists a vertex $v\in V(G)\setminus V(H)$ such that $v\in I(x,y)$ for two vertices $x,y\in V(H)$. First note that $x$ and $y$ cannot both be original vertices. Indeed, if $x=u_i$ and $y=u_j$, then in $Q_m$ the vertices $x$ and $y$ have two common neighbors: the subdivision vertex $u_{i,j}$ and $v_{\varnothing}$. But $v_{\varnothing}$ is adjacent in $Q_m$ to all original vertices of $H$, thus it cannot belong to $G$ because $H=SK_n$ does not extend to $SK^*_n$. Thus, further we can suppose that the vertex $x$ is a subdivision vertex, say $x=u_{i,j}$. We distinguish several cases depending on the value of $d(x,y)$.
\medskip\noindent {\bf Case 1.} $d(x,y)=2$. This implies that $y=u_{i,k}$ is also a subdivision vertex and $x$ and $y$ belong in $H$ to a common isometric 6-cycle $C$. Since $v$ belongs to $\conv(C)$, Lemma \ref{6cycle} implies that $v$ is adjacent to the third subdivision vertex $z=u_{j,k}$ of $C$. Hence $v=\{ i,j,k\}$. Since $n\ge 4$, there exists $\ell\ne i,j,k$ such that $\{ \ell\}$ is an original vertex of $H$ and $\{ i,\ell\},\{ j,\ell\},$ and $\{ k,\ell\}$ are subdivision vertices of $H$. Contracting all $\Theta$-classes except $E_i$, $E_j$, and $E_k$, we will obtain a forbidden $Q_3$.
\medskip\noindent {\bf Case 2.} $d(x,y)=3$. This implies that $y=u_k$ is an original vertex with $k\ne i,j$. Then again the vertices $x$ and $y$ belong in $H$ to a common isometric 6-cycle $C$. Since $v$ belongs to $\conv(C)$, Lemma \ref{6cycle} implies that either $v$ is adjacent to $u_i,u_j$, and $u_k$ or to $u_{i,j},u_{i,k}$, and $u_{j,k}$. In the first case $v=v_{\varnothing}$ and $H$ extends in $G$ to $SK^*_n$, contrary to our assumption; the second case was covered by Case 1.
\medskip\noindent {\bf Case 3.} $d(x,y)=4$. This implies that $y=u_{ k,\ell}$ is a subdivision vertex with $k,\ell\ne i,j$. In view of the previous cases, we can suppose that $v$ is adjacent to $x$ or to $y$, say $v$ is adjacent to $x$. Let $Q$ be the convex hull of $\{ x,y\}$ in $Q_m$. Then $Q$ is a 4-cube and $x=\{ i,j\}$ has 4 neighbors in $Q$: $\{ i\}, \{ j\}, \{ i,j,k\}$ and $\{ i,j,\ell\}$. The vertices $\{ i\}, \{ j\}$ are original vertices of $H$. Thus suppose that $v$ is one of the vertices $\{ i,j,k\},\{ i,j,\ell\}$, say $v=\{ i,j,k\}$. But then $v$ is adjacent to $\{ j,k\}$, which is a subdivision vertex of $H$, and we are in the conditions of Case 1. Hence $H$ is a convex subgraph of $G$. \end{proof}
\begin{claim} \label{claimSKn} If $H=SK_n$ with $n \ge 4$ is a convex subgraph of $G\in {\mathcal F}(Q_3)$ and $H$ is not included in a larger full subdivision in $G$, then the vertex $v_{\varnothing}$ of $Q_m$ is adjacent only to the original vertices $u_1,\ldots,u_n$ of $H$. \end{claim}
\begin{proof} Since $H$ is convex, the vertex $v_{\varnothing}$ of $Q_m$ is not a vertex of $G$. Let $u_i=\{ i\}, i=1,\ldots,n$ be the original vertices of $H$.
Suppose that in $Q_m$ the vertex $v_{\varnothing}$ is adjacent to a vertex $u$ of $G$, which is not included in $H$, say $u=\{ n+1\}$. Since $u$ and each $u_i$ have in $Q_m$ two common neighbors $v_{\varnothing}$ and $u_{i,n+1}=\{ i,n+1\}$ and since $G$ is an isometric subgraph of $Q_m$, necessarily each vertex $u_{i,n+1}$ is a vertex of $G$. Consequently, the vertices of $H$ together with the vertices $u,u_{1,n+1},\ldots,u_{n,n+1}$ define an isometric subgraph $H'=SK_{n+1}$ of $Q_m$. Since $v_{\varnothing}$ does not belong to $G$, by Claim \ref{convexSKn} $H'$ is convex, contrary to the assumption that $H$ is not included in a larger convex full subdivision of $G$. Consequently, the neighbors in $G$ of $v_{\varnothing}$ are only the original vertices $u_1,\ldots,u_n$ of $H$. \end{proof}
Now, we prove Proposition \ref{gatedSKn}. Let $G\in {\mathcal F}(Q_3)$ be an isometric subgraph of the cube $Q_m$ such that the embedding of $H$ is standard. Let $Q$ be the convex hull of $H$ in $Q_m$; $Q$ is a cube of dimension $n$ and a gated subgraph of $Q_m$. Let $v$ be a vertex of $G$ and $v_0$ be the gate of $v$ in $Q$. To prove that $H$ is gated it suffices to show that $v_0$ is a vertex of $H$. Suppose by way of contradiction that $H$ is not gated in $G$ and among the vertices of $G$ without a gate in $H$ pick a vertex $v$ minimizing the distance $d(v,v_0)$. Suppose that $v$ is encoded by the set $A$. Then its gate $v_0$ in $Q$ is encoded by the set $A_0:=A\cap \{ 1,\ldots,n\}$. If $|A_0|\in\{1,2\}$, then $A_0$ encodes an original or a subdivision vertex of $H$, therefore $v_0$ would belong to $H$, contrary to the choice of $v$. So, $A_0=\varnothing$ or $|A_0|>2$. First suppose that $A_0=\varnothing$, i.e., $v_0=v_{\varnothing}$. By Claim \ref{claimSKn}, $v_\varnothing$ is adjacent only to the original vertices of $H$; hence all original vertices of $H$ have distance $k=d(v,v_{\varnothing})+1\ge 3$ to $v$. From the choice of $v$ it follows that $I(v,u_i)\cap I(v,u_j)=\{ v\}$ for any two original vertices $u_i$ and $u_j$, $i\ne j$. Indeed, if $I(v,u_i)\cap I(v,u_j)\ne \{ v\}$ and $w$ is a neighbor of $v$ in $I(v,u_i)\cap I(v,u_j)$, then $d(w,u_i)=d(w,u_j)=k-1$. Therefore the gate $w_0$ of $w$ in $Q$ has distance at most $k-2$ from $w$, yielding that $d(v,w_0)=k-1$. This is possible only if $w_0=v_0$. Therefore, replacing $v$ by $w$ we will get a vertex of $G$ whose gate $w_0=v_0$ in $Q$ does not belong to $H$ and for which $d(w,w_0)<d(v,v_0)$, contrary to the minimality in the choice of $v$. Thus $I(v,u_i)\cap I(v,u_j)=\{ v\}$. Let $A=\{ n+1,\ldots, n+k-1\}$. If $k=3$, then $v$ is encoded by $A=\{ n+1,n+2\}$. By Claim \ref{claimSKn}, any shortest path of $G$ from $u_i=\{ i\}$ to $v$ must be of the form $(\{ i\}, \{ i,\ell\}, \{ \ell\}, \{ n+1,n+2\})$, where $\ell\in \{ n+1,n+2\}$. Since we have at least four original vertices, at least two of such shortest paths of $G$ will pass via the same neighbor $\{ n+1\}$ or $\{ n+2\}$ of $v$, contrary to the assumption that $I(v,u_i)\cap I(v,u_j)=\{ v\}$ for any $u_i$ and $u_j$, $i\ne j$. If $k\ge 4$, let $G'=\pi_{n+1}(G)$ and $H'=\pi_{n+1}(H)$ be the images of $G$ and $H$ by contracting the edges of $Q_m$ corresponding to the coordinate $n+1$. Then $G'$ is an isometric subgraph of the hypercube $Q_{m-1}$ and $H'$ is a full subdivision isomorphic to $SK_n$ isometrically embedded in $G'$. Let also $v',v'_{\varnothing},$ and $u'_i, i=1,\ldots,n,$ denote the images of the vertices $v, v_{\varnothing},$ and $u_i$.
Then $u'_1,\ldots, u'_n$ are the original vertices of $H'$. Notice also that $v'$ has distance $k-1$ to all original vertices of $H'$ and distance $k-2$ to $v'_{\varnothing}$. Thus in $G'$ the vertex $v'$ does not have a gate in $H'$. By the minimality in the choice of $v$ and $H$, either $H'$ is not convex in $G'$ or $H'$ is included in a larger full subdivision of $G'$. If $H'$ is not convex in $G'$, by Claim \ref{convexSKn} $v'_{\varnothing}$ must be a vertex of $G'$. Since $v_{\varnothing}$ is not a vertex of $G$, this is possible only if the set $\{ n+1\}$ corresponds to a vertex of $G$. But we showed in Claim \ref{claimSKn} that the only neighbors of $v_{\varnothing}$ in $G$ are the original vertices of $H$. This contradiction shows that $H'$ is a convex. Therefore, suppose that $H'$ is included in a larger full subdivision $H''=SK_{n+1}$ of $G'$. Denote by $u'_{\ell}=\{ \ell\}$ the original vertex of $H''$ different from the vertices $u'_i, i=1,\ldots,n$; hence $\ell\notin \{ 1,\ldots,n\}$. Since $u'_{\ell}$ is a vertex of $G'$ and in $Q_m$ the set $\{ \ell\}$ does not correspond to a vertex of $G$, necessarily the set $\{ n+1,\ell\}$ is a vertex of $G$ in $Q_m$. Therefore, we are in the conditions of the previous subcase, which was shown to be impossible. This concludes the analysis of case $A_0=\varnothing$. Now, suppose that $|A_0|\ge 3$ and let $A_0=\{ 1,2,3,\ldots,k\}$. This implies that the vertices $u_1,u_2,u_3$ are original vertices and $u_{1,2},u_{1,3},u_{2,3}$ are subdivision vertices of $H$. Since $H=SK_n$ with $n\ge 4$, $H$ contains an original vertex $u_{\ell}$ with $\ell\ge 4$, say $\ell=4$. But then the sets corresponding to the vertices $u_1,u_2,u_3,u_4,u_{1,2},u_{1,3},u_{2,3},$ and $v$ of $G$ shatter the set $\{ 1,2,3\}$, contrary to the assumption that $G\in {\mathcal F}(Q_3)$. This concludes the case $|A_0|\ge 3$. Consequently, for any vertex $v$ of $G$ the gate $v_0$ of $v$ in $Q$ belongs to $H$. This shows that $H$ is a gated subgraph of $G$ and concludes the proof of the proposition. \end{proof} \subsection{Gated hulls of 6-cycles}\label{gh-6cycle} The goal of this subsection is to prove the following result: \begin{proposition} \label{gatedhullC6} If $C$ is an induced (and thus isometric) 6-cycle of $G\in {\mathcal F}(Q_3)$, then the gated hull $\gate(C)$ of $C$ is either $C$, or $Q_3^-$, or a full subdivision. \end{proposition} \begin{proof} If $C$ is included in a maximal full subdivision $H=SK_n$ with $n\ge 4$, by Proposition \ref{gatedSKn} $H$ is gated. Moreover, one can directly check that any vertex of $H\setminus C$ must be included in the gated hull of $C$, whence $\gate( C )=H$. Now suppose that $C$ is not included in any full subdivision $SK_n$ with $n\ge 4$. By Lemma \ref{6cycle}, $S:=\conv(C)$ is either $C$ or $Q^-_3$. In this case we assert that $S$ is gated and thus $\gate(C)=\conv(C)$. Suppose that $G$ is a two-dimensional partial cube of smallest size for which this is not true. Let $v$ be a vertex of $G$ that has no gate in $S$ and is as close as possible to $S$, where $d_G(v,S)=\min \{ d_G(v,z): z\in S\}$ is the distance from $v$ to $S$. Given a $\Theta$-class $E_i$ of $G$, let $G':=\pi_i(G)$, $S':=\pi_i(S)$, and $C'=\pi_i(C)$. For a vertex $u$ of $G$, let $u':=\pi_i(u)$. Since any convex subgraph of $G$ is the intersection of halfspaces, if all $\Theta$-classes of $G$ cross $S$, then $S$ coincides with $G$, contrary to the choice of $G$. Thus $G$ contains $\Theta$-classes not crossing $S$. 
First suppose that there exists a $\Theta$-class $E_i$ of $G$ not crossing $S$ such that $S'$ is convex in $G'$. Since $G'\in {\mathcal F}(Q_3)$, by Lemma \ref{6cycle} either the 6-cycle $C'$ is convex or its convex hull in $G'$ is $Q^-_3$. Since the distance in $G'$ between $v'$ and any vertex of $S'$ is either the same as the distance in $G$ between $v$ and the corresponding vertex of $S$ (if $E_i$ does not separate $v$ from $S$) or is one less than the corresponding distance in $G$ (if $v$ and $S$ belong to complementary halfspaces defined by $E_i$), $S'$ is not gated in $G'$, namely the vertex $v'$ has no gate in $S'$. Therefore, if $S'=Q^-_3$, then contracting all $\Theta$-classes of $G'$ separating $S'$ from $v'$, we will get $Q_3$ as a pc-minor, contrary to the assumption that $G$ and $G'$ belong to $\mathcal{F}(Q_3)$. This implies that $S'=C'$ and thus that $S=C$. Moreover, by minimality of $G$, the 6-cycle $C'$ is included in a maximal full subdivision $H'=SK_n$ of $G'$. By Proposition \ref{gatedSKn}, $H'$ is a gated subgraph of $G'$. Let $w'$ be the gate of $v'$ in $H'$ (it may happen that $w'=v'$). Since $C'$ is not gated, necessarily $w'$ is not a vertex of $C'$. For the same reason, $w'$ is not adjacent to a vertex of $C'$. The graph $G$ is obtained from $G'$ by an isometric expansion $\psi_i$ (inverse to $\pi_i$). By Lemma \ref{expansionSKn}, $\psi_i$, restricted to $H'$, is a peripheral expansion along an isometric tree of $H'$. By Corollary \ref{SKnCopie}, $G$ contains an isometric subgraph isomorphic to $H'$. By the choice of $E_i$, $C$ does not cross $E_i$, and this implies that in $G$ the convex cycle $C$ is contained in a full subdivision of $K_n$, contrary to the choice of $C$. Now, suppose that for any $\Theta$-class $E_i$ of $G$ not crossing $S$, $S'$ is not convex in $G'$. Since $C'$ is an isometric 6-cycle of $G'$, $G'\in {\mathcal F} (Q_3)$, and the 6-cycle $C'$ is not convex in $G'$, by Lemma \ref{6cycle} we conclude that the convex hull of $C'$ in $G'$ is $Q^-_3$ and this $Q^-_3$ is different from $S'$. Hence $S'=C'$ and $S=C$. This implies that there exists a vertex $z'$ of $G'$ adjacent to three vertices $z'_1,z'_2,$ and $z'_3$ of $C'$. Let $z_1,z_2,z_3$ be the three preimages in $C$ of the vertices $z'_1,z'_2,z'_3$. Let also $y,z$ be the preimages in the hypercube $Q_m$ of the vertex $z'$. Suppose that $y$ is adjacent to $z_1,z_2,z_3$ in $Q_m$. Since $C'$ is the image of the convex 6-cycle of $G$, this implies that $y$ is not a vertex of $G$ while $z$ is a vertex of $G$. Since $G$ is an isometric subgraph of $Q_m$, $G$ contains a vertex $w_1$ adjacent to $z$ and $z_1$, a vertex $w_2$ adjacent to $z$ and $z_2$, and a vertex $w_3$ adjacent to $z$ and $z_3$. Consequently, the vertices of $C$ together with the vertices $z,w_1,w_2,w_3$ define a full subdivision $SK_4$, contrary to our assumption that $C$ is not included in such a subdivision. This shows that the convex hull of the 6-cycle $C$ is gated. \end{proof} \section{Convex and gated hulls of long isometric cycles} In the previous section we described the structure of gated hulls of 6-cycles in two-dimensional partial cubes. In this section, we provide a description of convex and gated hulls of {\it long isometric cycles}, i.e., of isometric cycles of length $\ge 8$. We prove that convex hulls of long isometric cycles are disks, i.e., the region graphs of pseudoline arrangements. Then we show that all such disks are gated. 
In particular, this implies that convex long cycles in two-dimensional partial cubes are gated. \subsection{Convex hulls of long isometric cycles} A two-dimensional partial cube $D$ is called a \emph{pseudo-disk} if $D$ contains an isometric cycle $C$ such that $\conv(C)=D$; $C$ is called the {\it boundary} of $D$ and is denoted by $\partial D$. If $D$ is the convex hull of an isometric cycle $C$ of $G$, then we say that $D$ is a pseudo-disk of $G$. Admitting that $K_1$ and $K_2$ are pseudo-disks, the class of all pseudo-disks is closed under contractions. The main goal of this subsection is to prove the following result: \begin{proposition} \label{prop:disks} A graph $D\in {\mathcal F}(Q_3)$ is a pseudo-disk if and only if $D$ is a disk. In particular, the convex hull $\conv(C)$ of an isometric cycle $C$ of any graph $G\in {\mathcal F}(Q_3)$ is an $\AOM$ of rank $2$. \end{proposition} \begin{proof} The fact that disks are pseudo-disks follows from the next claim: \begin{claim}\label{lem:AOMdisk} If $D\in\mathcal{F}(Q_3)$ is a disk, then $D$ is the convex hull of an isometric cycle $C$ of $D$. \end{claim} \begin{proof} By definition, $D$ is the region graph of an arrangement $\mathcal A$ of pseudolines. The cycle $C$ is obtained by traversing the unbounded cells of the arrangement in circular order, i.e., $C=\partial D$. This cycle $C$ is isometric in $D$ because the regions corresponding to any two opposite vertices $v$ and $-v$ of $C$ are separated by all pseudolines of $\mathcal A$, thus $d_D(v,-v)=|\mathcal A|$. Moreover, $\conv(C)=D$ because for any other vertex $u$ of $D$, any pseudoline $\ell\in \mathcal A$ separates exactly one of the regions corresponding to $v$ and $-v$ from the region corresponding to $u$, whence $d_D(v,u)+d_D(u,-v)=d_D(v,-v)$. \end{proof} The remaining part of the proof is devoted to prove that any pseudo-disk is a disk. Let $D$ be a pseudo-disk with boundary $C$. Let $A_D:=\{v\in D: v \text{ has an antipode}\}$. As before, for a $\Theta$-class $E_i$ of $D$, by $D^+_i$ and $D^-_i$ we denote the complementary halfspaces of $D$ defined by $E_i$. \begin{claim}\label{lem:cycle} If $D$ is a pseudo-disk with boundary $C$, then $A_D=C$. \end{claim} \begin{proof} Clearly, $C\subseteq A_D$. To prove $A_D\subseteq C$, suppose by way of contradiction that $v,-v$ are antipodal vertices of $D$ not belonging to $C$. Contract the $\Theta$-classes until $v$ is adjacent to a vertex $u\in C$, say via an edge in class $E_i$ (we can do this because all such classes crosses $C$ and by Lemma \ref{contraction-ChKnMa}(ii) their contraction will lead to a disk). Let $u\in D^+_i$ and $v\in D^-_i$. Since $D=\conv(C)$, the $\Theta$-class $E_i$ crosses $C$. Let $xy$ and $zw$ be the two opposite edges of $C$ belonging to $E_i$ and let $x,z\in D^+_i, y,w\in D^-_i$. Let $P,Q$ be two shortest paths in $D^-_i$ connecting $v$ with $y$ and $w$, respectively. Since the total length of $P$ and $Q$ is equal to the shortest path of $C$ from $x$ to $z$ passing through $u$, the paths $P$ and $Q$ intersect only in $v$. Extending $P$ and $Q$, respectively within $D^-_i\cap C$ until $-u$, yields shortest paths $P', Q'$ that are crossed by all $\Theta$-classes except $E_i$. Therefore, both such paths can be extended to shortest $(v,-v)$-paths by adding the edge $-u-v$ of $E_i$. Similarly to the case of $v$, there are shortest paths $P'', Q''$ from the vertex $-v\in D^+_i$ to the vertices $x,z\in C\cap D^+_i$. Again, $P''$ and $Q''$ intersect only in $-v$. 
Let $E_j$ be any $\Theta$-class crossing $P$ and $E_k$ be any $\Theta$-class crossing $Q$. We assert that the set $S:=\{ u,v,x,y,z,w,-u,-v\}$ of vertices of $D$ shatter $\{ i,j,k\}$, i.e., that contracting all $\Theta$-classes except $E_i,E_j$, and $E_k$ yields a forbidden $Q_3$. Indeed, $E_i$ separates $S$ into the sets $\{ u,x,-v,z\}$ and $\{ v,y,-u,w\}$, $E_j$ separates $S$ into the sets $\{ x,y,-v,-u\}$ and $\{ u,v,z,w\}$, and $E_k$ separates $S$ into the sets $\{ u,v,x,y\}$ and $\{ -v,-u,z,w\}$. This contradiction shows that $A_D\subseteq C$, whence $A_D=C$. \end{proof} \begin{claim}\label{lem:affine} If $D$ is a pseudo-disk with boundary $C$, then $D$ is an affine partial cube. Moreover, there exists an antipodal partial cube $D'\in \mathcal{F}(Q_{4})$ containing $D$ as a halfspace. \end{claim} \begin{proof} First we show that $D$ is affine. Let $u,v\in D$. Using the characterization of affine partial cubes provided by \cite[Proposition 2.16]{KnMa} we have to show that for all vertices $u,v$ of $D$ one can find $w,-w\in A_D$ such that the intervals $I(w,u)$ and $I(v,-w)$ are not crossed by the same $\Theta$-class of $D$. By Claim~\ref{lem:cycle} this is equivalent to finding such $w,-w$ in $C$. Let $I$ be the index set of all $\Theta$-classes crossing $I(u,v)$. Without loss of generality assume that $u\in D_i^+$ (and therefore $v\in D^-_i$) for all $i\in I$. We assert that $(\bigcap_{i\in I}D_i^+)\cap C\neq \emptyset$. Then any vertex from this intersection can play the role of $w$. For $i\in I$, let $C^+_i=C\cap D^+_i$ and $C^-_i=C\cap D^-_i$; $C^+_i$ and $C^-_i$ are two disjoint shortest paths of $C$ covering all vertices of $C$. Viewing $C$ as a circle, $C^+_i$ and $C^-_i$ are disjoint arcs of this circle. Suppose by way of contradiction that $\bigcap_{i\in I}C_i^+=\bigcap_{i\in I}D_i^+\cap C=\emptyset$. By the Helly property for arcs of a circle, there exist three classes $i,j,k\in I$ such that the paths $C^+_i,C^+_j,$ and $C^+_k$ pairwise intersect, together cover all the vertices and edges of the cycle $C$, and all three have empty intersection. This implies that $C$ is cut into 6 nonempty paths: $C_i^+\cap C_j^+\cap C_k^-$, $C_i^+\cap C_j^-\cap C_k^-$, $C_i^+\cap C_j^-\cap C_k^+$, $C_i^-\cap C_j^-\cap C_k^+$, $C_i^-\cap C_j^+\cap C_k^+$, and $C_i^-\cap C_j^+\cap C_k^-$. Recall also that $u\in D_i^+\cap D_j^+\cap D_k^+$ and $v\in D_i^-\cap D_j^-\cap D_k^-$. But then the six paths partitioning $C$ together with $u,v$ will shatter the set $\{ i,j,k\}$, i.e., contracting all $\Theta$-classes except $i,j,k$ yields a forbidden $Q_3$. Consequently, $D$ is an affine partial cube, i.e., $D$ is a halfspace of an antipodal partial cube $G$, say $D=G^+_i$ for a $\Theta$-class $E_i$. Suppose that $G$ can be contracted to the 4-cube $Q_4$. If $E_i$ is a coordinate of $Q_4$ (i.e., the class $E_i$ is not contracted), since $D=G^+_i$, we obtain that $D$ can be contracted to $Q_3$, which is impossible because $D\in {\mathcal F}(Q_3)$. Therefore $E_i$ is contracted. Since the contractions of $\Theta$-classes commute, suppose without loss of generality that $E_i$ was contracted last. Let $G'$ be the partial cube obtained at the step before contracting $E_i$. Let $D'$ be the isometric subgraph of $G'$ which is the image of $D$ under the performed contractions. Since the property of being a pseudo-disk is preserved by contractions, $D'$ is a pseudo-disk, moreover $D'$ is one of the two halfspaces of $G'$ defined by the class $E_i$ restricted to $G'$. 
Analogously, by Lemma \ref{antipodal-contraction}, antipodality is preserved by contractions, whence $G'$ is an antipodal partial cube such that $\pi_i(G')=Q_4$. This implies that $G'$ was obtained from $H:=Q_4$ by an isometric antipodal expansion $(H^1,H^0,H^2)$. Notice that one of the isometric subgraphs $H^1$ or $H^2$ of the 4-cube $H$, say $H^1$, coincides with the disk $D'':=\pi_i(D')$. Since $H$ is antipodal, by \cite[Lemma 2.14]{KnMa}, $H^0$ is closed under antipodes in $Q_4$ and $-(H^1\setminus H^0)=H^2\setminus H^0$. Since $H^0$ is included in the isometric subgraph $H^1=D''$ of $H$, $H^0$ is closed under antipodes also in $D''$. By Claim~\ref{lem:cycle} we obtain $H^0=A_{D''}=\partial D''$. Consequently, $H^0$ is an isometric cycle of $H=Q_{4}$ that separates $Q_{4}$ into two sets of vertices. However, no isometric cycle of $Q_4$ separates the graph. \end{proof}
\begin{figure}[htb] \centering \includegraphics[width=.25\textwidth]{OM} \caption{An OM containing $Q_3^-$ as a halfspace.} \label{fig:OM} \end{figure}
If $D\notin\mathcal{F}(Q_3)$ is the convex hull of an isometric cycle, then $D$ is not necessarily affine; see $X_4^5$ in Figure~\ref{fig:COMobstructions}. On the other hand, $SK_4\in\mathcal{F}(Q_3)$ is affine but is not a pseudo-disk. Let us introduce the distinguishing feature.
\begin{claim}\label{lem:diskAOM} If $D$ is a pseudo-disk with boundary $C$, then $D$ is a disk, i.e., the region graph of a pseudoline arrangement. \end{claim}
\begin{proof} By Claim~\ref{lem:affine} we know that $D$ is a halfspace of an antipodal partial cube $G$. Suppose by contradiction that $G$ is not an $\OM$. By~\cite{KnMa} $G$ has a pc-minor $X$ from the family $\mathcal{Q}^{-}$. Since the members of this class are non-antipodal, to obtain $X$ from $G$ not only contractions but also restrictions are necessary. We first perform all the contractions, indexed by a set $I$, to obtain a pseudo-disk $D':=\pi_I(D)\in \mathcal{F}(Q_3)$ that is a halfspace of the antipodal graph $G':=\pi_I(G)$. By the second part of Claim~\ref{lem:affine} we know that $G'\in\mathcal{F}(Q_4)$. Now, since $G'$ contains $X$ as a proper convex subgraph, by Lemma~\ref{lem:antipodal} we get $X\in\mathcal{F}(Q_3)$. Since $SK_4$ is the only member of the class $\mathcal{Q}^-$ containing $SK_4$ as a convex subgraph, by Proposition~\ref{prop:excludedminors}, we obtain $X=SK_4$. Assume minimality in this setting, in the sense that any further contraction destroys all copies of $X$ present in $D'$. We distinguish two cases. First, suppose that there exists a copy of $X$ which is a convex subgraph of $D'$. Let $n\geq 4$ be maximal such that there is a convex $H=SK_n$ in $D'$ extending a convex copy of $X$. By Proposition~\ref{gatedSKn}, $H$ is gated. If $H\ne D'$, there exists a $\Theta$-class $E_i$ of $D'$ not crossing $H$. Contracting $E_i$, by Lemma \ref{contraction-ChKnMa}(iii) we will obtain a gated full subdivision $\pi_i(H)=SK_n$, contrary to the minimality in the choice of $D'$. Therefore $D'=H=SK_n$, but it is easy to see that no $SK_n$ with $n\ge 4$ is a pseudo-disk, a contradiction. Now, suppose that no copy of $X$ is a convex subgraph of $D'$. Since $G'$ contains $X$ as a convex subgraph, $D'$ is a halfspace of $G'$ (say $D'=(G')^+_i$) defined by a $\Theta$-class $E_i$, and $G'$ is an antipodal partial cube, we conclude that $E_i$ crosses all convex copies $H$ of $X=SK_4$. Then $E_i$ partitions $H$ into a 6-cycle $C$ and a $K_{1,3}$ such that all edges between them belong to $E_i$.
The antipodality map of $G'$ maps the vertices of $(G')^+_i$ to vertices of $(G')^-_i$ and vice-versa. Therefore in $D'$ there must be a copy of $K_{1,3}$ and a copy of $C=C_6$, and both such copies belong to the boundary $\partial (G')^+_i$. The antipodality map is also edge-preserving. Therefore, it maps edges of $E_i$ to edges of $E_i$ and vertices of $(G')^+_i\setminus \partial (G')^+_i$ to vertices of $(G')^-_i\setminus \partial (G')^-_i$. Consequently, all vertices of $\partial (G')^-_i$ have antipodes in the pseudo-disk $D'=(G')^+_i$ and their antipodes also belong to $\partial (G')^+_i$. This and Claim~\ref{lem:cycle} imply that $\partial (G')^+_i\subset A_{D'}=\partial D'$. Therefore the isometric cycle $\partial D'$ contains an isometric copy of $C_6$, whence $\partial D'=C_6$. Since $\partial D'$ also contains the leafs of a $K_{1,3}$ we conclude that the pseudo-disk $D'$ coincides with $Q^-_3$. However, the only antipodal partial cube containing $Q_3^-$ as a halfspace is depicted in Figure~\ref{fig:OM} and it is an $\OM$, leading to a contradiction. \end{proof} Note that Claim~\ref{lem:diskAOM} generalizes Lemma~\ref{6cycle}. Together with Claim~\ref{lem:AOMdisk} it yields that pseudo-disks are disks, i.e., tope graphs of $\AOM$s of rank two, concluding the proof of Proposition \ref{prop:disks}. \end{proof} \subsection{Gated hulls of long isometric cycles} By Proposition \ref{prop:disks} disks and pseudo-disks are the same, therefore, from now on we use the name ``disk'' for both. We continue by showing that in two-dimensional partial cubes all disks with boundary of length $>6$ are gated. \begin{proposition} \label{gatedcycle} If $D$ is a disk of $G\in {\mathcal F}(Q_3)$ and $|\partial D|>6$, then $D$ is a gated subgraph of $G$. In particular, convex long cycles of $G$ are gated. \end{proposition} \begin{proof} Let $G$ be a minimal two-dimensional partial cube in which the assertion does not hold. Let $D$ be a non-gated disk of $G$ whose boundary $C:=\partial D$ is a long isometric cycle. Let $v$ be a vertex of $G$ that has no gate in $D$ and is as close as possible to $D$, where $d_G(v,D)=\min \{ d_G(v,z): z\in D\}$. We use some notations from the proof of \cite[Proposition 1]{ChKnMa}. Let $P_v:=\{ x\in D: d_G(v,x)=d_G(v,D)\}$ be the \emph{metric projection} of $v$ to $D$. Let also $R_v:=\{ x\in D: I(v,x)\cap D=\{ x\}\}.$ Since $D$ is not gated, $R_v$ contains at least two vertices. Obviously, $P_v\subseteq R_v$ and the vertices of $R_v$ are pairwise nonadjacent. We denote the vertices of $P_v$ by $x_1,\ldots,x_k$. For any $x_i\in P_v$, let $v_i$ be a neighbor of $v$ on a shortest $(v,x_i)$-path. By the choice of $v$, each $v_i$ has a gate in $D$. By the definition of $P_v$, $x_i$ is the gate of $v_i$ in $D$. This implies that the vertices $v_1,\ldots,v_k$ are pairwise distinct. Moreover, since $x_i$ is the gate of $v_i$ in $D$, for any two distinct vertices $x_i,x_j\in P_v$, we have $d_G(v_i,x_i)+d_G(x_i,x_j)=d_G(v_i,x_j)\le 2+d_G(v_j,x_j)$. Since $d_G(x_i,v_i)=d_G(x_j,v_j)$, necessarily $d_G(x_i,x_j)=2$. We assert that any three distinct vertices $x_j,x_k,x_\ell \in P_v$ do not have a common neighbor. Suppose by way of contradiction that there exists a vertex $x$ adjacent to $x_j,x_k,x_\ell$. Then $x$ belongs to $D$ by convexity of $D$ and $x_j,x_k,x_\ell\in I(x,v)$ since $x_j,x_k,x_\ell \in P_v$. 
Let $E_j$ be the $\Theta$-class of the edge $v_jv$ and let $C_k$ be the cycle of $G$ defined by a $(v,x_j)$-shortest path $P$ passing via $v_j$, the 2-path $(x_j,x,x_k)$, and a shortest $(x_k,v)$-path $Q$ passing via $v_k$. Then $E_j$ must contain another edge of $C_k$. Necessarily this cannot be an edge of $P$. Since $v$ is a closest vertex to $D$ without a gate, this edge cannot be an edge of $Q$. Since $x_j\in I(x,v)$, this edge is not $xx_j$. Therefore the second edge of $E_j$ in $C_k$ is the edge $xx_k$. This implies that $v$ and $x_k$ belong to the same halfspace defined by $E_j$, say $G^+_j$, and $v_j$ and $x$ belong to its complement $G^-_j$. Using an analogously defined cycle $C_{\ell}$, one can show that the edge $xx_{\ell}$ also belong to $E_j$, whence the vertices $x_k$ and $x_{\ell}$ belong to the same halfspace $G^+_j$. Since $x\in I(x_k,x_{\ell})$ and $x\in G_j^-$, we obtain a contradiction with convexity of $G^+_j$. Therefore, if $x_j,x_k,x_\ell \in P_v$, then $\conv(x_j,x_k,x_\ell)$ is an isometric 6-cycle of $D$. In particular, this implies that each of the intervals $I(x_j,x_k),I(x_k,x_{\ell}), I(x_j,x_{\ell})$ consists of a single shortest path. Next we show that $|P_v|\le 3$. Suppose by way of contradiction that $|P_v|\ge 4$ and pick the vertices $x_1,x_2,x_3,x_4\in P_v$. Let $H$ be the subgraph of $D$ induced by the union of the intervals $I(x_j,x_k)$, with $j,k\in \{ 1,2,3,4\}$. Since these intervals are 2-paths intersecting only in common end-vertices, $H$ is isomorphic to $SK_4$ with $x_1,x_2,x_3,x_4$ as original vertices. Since $D$ is a two-dimensional partial cube, one can directly check that $H$ is an isometric subgraph of $D$. Since the intervals $I(x_j,x_k)$ are interiorly disjoint paths, $H=SK_4$ cannot be extended to $SK_4^*$. By Claim \ref{convexSKn}, $H=SK_4$ is a convex subgraph of $D$. Since $D$ is an AOM of rank 2 and thus a COM of rank 2, by Proposition \ref{prop:excludedminors}, $D$ cannot contain $SK_4$ as a pc-minor. This contradiction shows that $|P_v|\le 3$. Let $S:= \conv(P_v)$. Since $|P_v|\le 3$ and $d_G(x_j,x_k)=2$ for any two vertices $x_j,x_k$ of $P_v$, there exists at most three $\Theta$-classes crossing $S$. Since the length of the isometric cycle $C$ is at least 8, there exists a $\Theta$-class $E_i$ crossing $C$ (and $D$) and not crossing $S$. We assert that $v$ and the vertices of $P_v$ belong to the same halfspace defined by $E_i$. Indeed, if $E_i$ separates $v$ from $S$, then for any $j$, $E_i$ has an edge on any shortest $(v_j,x_j)$-path. This contradicts the fact that $x_j$ is the gate of $v_j$ in $D$. Consequently, $v$ and the set $S$ belong to the same halfspace defined by $E_i$. Consider the graphs $G':=\pi_i(G)$, $D':=\pi_i(D)$ and the cycle $C':=\pi_i(C)$. By Lemma \ref{contraction-ChKnMa}(i), $D'$ is a disk with boundary $C'$ (and thus an $\AOM$) of the two-dimensional partial cube $G'$. Notice that the distance in $G'$ between $v'$ and the vertices $x'_j$ of $P_v$ is the same as the distance between $v$ and $x_j$ in $G$ and that the distance between $v'$ and the images of vertices of $R_v\setminus P_v$ may eventually decrease by 1. This implies that $D'$ is not gated. By minimality of $G$, this is possible only if $C'$ is a 6-cycle. In this case, by Proposition \ref{gatedhullC6}, we conclude that $D'$ is included in a maximal full subdivision $H'=SK_n$, which is a gated subgraph of $G'$. The graph $G$ is obtained from $G'$ by an isometric expansion $\psi_i$ (inverse to $\pi_i$). 
By Lemma \ref{expansionSKn}, $\psi_i$, restricted to $H'$, is a peripheral expansion along an isometric tree of $H'$. This implies that in $G$ the convex $\AOM$ $D$ is contained in a full subdivision of $K_n$, contrary to the assumption that $D$ is the convex hull of the isometric cycle $C$ of length at least 8. \end{proof} Summarizing Propositions \ref{gatedhullC6}, \ref{prop:disks}, and \ref{gatedcycle}, we obtain the following results: \begin{theorem} \label{FS+AOM} Let $G$ be a two-dimensional partial cube and $C$ be an isometric cycle of $G$. If $C=C_6$, then the gated hull of $C$ is either $C$, $Q^-_3$, or a maximal full subdivision. If otherwise $C$ is long, then $\conv(C)$ is a gated disk. \end{theorem} \begin{corollary} Maximal full subdivisions, convex disks with long cycles as boundaries (in particular, long convex cycles) are gated subgraphs in two-dimensional partial cubes. \end{corollary} \section{Completion to ample partial cubes} In this section, we prove that any partial cube $G$ of VC-dimension 2 can be completed to an ample partial cube $G^{\top}$ of VC-dimension 2. We perform this completion in two steps. First, we canonically extend $G$ to a partial cube $G\urcorner\in {\mathcal F}(Q_3)$ not containing convex full subdivisions. The resulting graph $G\urcorner$ is a COM of rank 2: its cells are the gated cycles of $G$ and the 4-cycles created by extensions of full subdivisions. Second, we transform $G\urcorner$ into an ample partial cube $(G\urcorner)\ulcorner\in {\mathcal F}(Q_3)$ by filling each gated cycle $C$ of length $\ge 6$ of $G$ (and of $G\urcorner$) by a planar tiling with squares. Here is the main result of this section and one of the main results of the paper: \begin{theorem}\label{thm:extendtoample} Any $G\in\mathcal{F}(Q_3)$ can be completed to an ample partial cube $G^{\top}:=(G\urcorner)\ulcorner\in\mathcal{F}(Q_3)$. \end{theorem} \subsection{Canonical completion to two-dimensional COMs} The {\it 1-extension} graph of a partial cube $G\in {\mathcal F}(Q_3)$ of $Q_m$ is a subgraph $G'$ of $Q_m$ obtained by taking a maximal by inclusion convex full subdivision $H=SK_n$ of $G$ such that $H$ is standardly embedded in $Q_m$ and adding to $G$ the vertex $v_{\varnothing}$. \begin{lemma} \label{1extension} If $G'$ is the 1-extension of $G\in {\mathcal F}(Q_3)$ and $G'$ is obtained with respect to the maximal by inclusion convex full subdivision $H=SK_n$ of $G$, then $G'\in {\mathcal F}(Q_3)$ and $G$ is an isometric subgraph of $G'$. Moreover, any convex full subdivision $SK_{r}$ with $r\ge 3$ of $G'$ is a convex full subdivision of $G$ and any convex cycle of length $\ge 6$ of $G'$ is a convex cycle of $G$. \end{lemma} \begin{proof} Let $G$ be an isometric subgraph of $Q_m$. To show that $G'$ is an isometric subgraph of $Q_m$ it suffices to show that any vertex $v$ of $G$ can be connected in $G'$ with $v_{\varnothing}$ by a shortest path. By Proposition \ref{gatedSKn} $H$ is a gated subgraph of $G$ and the gate $v_0$ of $v$ in $Q=\conv(H)$ belongs to $H$. This means that if $v$ is encoded by the set $A$ and $v_0$ is encoded by the set $A_0=A\cap \{ 1,\ldots,n\}$, then either $A_0=\{ i\}$ or $A_0=\{ i,j\}$ for an original vertex $u_i$ or a subdivision vertex $u_{i,j}$. This means that $d(v,v_0)=d(v,u_i)=|A|-1$ in the first case and $d(v,v_0)=d(v,u_{i,j})=|A|-2$ in the second case. 
Since $d(v,v_{\varnothing})=|A|$, we obtain a shortest $(v,v_{\varnothing})$-path in $G'$ first going from $v$ to $v_0$ and then from $v_0$ to $v_{\varnothing}$ via an edge (if $v_0$ is an original vertex of $H$) or via a path of length 2 through an original vertex of $H$ (if $v_0$ is a subdivision vertex). This establishes that $G'$ is an isometric subgraph of $Q_m$. Since any two neighbors of $v_{\varnothing}$ in $H$ have distance 2 in $G$ and $v_{\varnothing}$ is adjacent in $G'$ only to the original vertices of $H$, we also conclude that $G$ is an isometric subgraph of $G'$. Now we will show that $G'$ belongs to ${\mathcal F}(Q_3)$. Suppose by way of contradiction that the sets corresponding to some set $S$ of 8 vertices of $G'$ shatter the set $\{ i,j,k\}$. Since $G\in {\mathcal F}(Q_3)$, one of the vertices of $S$ is the vertex $v_{\varnothing}$: namely, $v_{\varnothing}$ is the vertex whose trace on $\{ i,j,k\}$ is $\varnothing$. Thus the sets corresponding to the remaining 7 vertices of $S$ contain at least one of the elements $i,j,k$. Now, since $H=SK_n$ with $n\ge 4$, necessarily there exists an original vertex $u_{\ell}$ of $H$ with $\ell\notin \{ i,j,k\}$. Clearly, $u_{\ell}$ is not a vertex of $S$. Since the trace of $\{ \ell\}$ on $\{ i,j,k\}$ is $\varnothing$, replacing in $S$ the vertex $v_{\varnothing}$ by $u_{\ell}$ we will obtain a set of 8 vertices of $G$ still shattering the set $\{ i,j,k\}$, contrary to $G\in {\mathcal F}(Q_3)$. It remains to show that any convex full subdivision of $G'$ is a convex full subdivision of $G$. Suppose by way of contradiction that $H'=SK_r, r\ge 3,$ is a convex full subdivision of $G'$ containing the vertex $v_{\varnothing}$. By Claim \ref{claimSKn}, in $G'$ the vertex $v_{\varnothing}$ is adjacent only to the original vertices of $H$. Hence, if $v_{\varnothing}$ is an original vertex of $H'$ then at least two original vertices of $H$ are subdivision vertices of $H'$ and if $v_{\varnothing}$ is a subdivision vertex of $H'$ then the two original vertices of $H'$ adjacent to $v_{\varnothing}$ are original vertices of $H$. In both cases, denote those two original vertices of $H$ by $x=u_i$ and $y=u_j$. Since $H'$ is convex and $u_{i,j}$ is adjacent to $u_i$ and $u_j$, $u_{i,j}$ must belong to $H'$. But this implies that $H'$ contains the 4-cycle $(x=u_i,v_{\varnothing},y=u_j,u_{i,j})$, which is impossible in a convex full subdivision. In a similar way, using Claim \ref{claimSKn}, one can show that any convex cycle of length $\ge 6$ of $G'$ is a convex cycle of $G$. \end{proof}
Now suppose that we successively perform the operation of 1-extension with respect to all gated full subdivisions of $G$ and of the occurring intermediate partial cubes. By Lemma \ref{1extension} all such isometric subgraphs of $Q_m$ have VC-dimension 2 and all occurring convex full subdivisions are already convex full subdivisions of $G$. After a finite number of 1-extension steps (by the Sauer--Shelah--Perles lemma, after at most $\binom{m}{\leq 2}$ 1-extensions), we will get an isometric subgraph $G\urcorner$ of $Q_m$ such that $G\urcorner\in {\mathcal F}(Q_3)$, $G$ is an isometric subgraph of $G\urcorner$, and all maximal full subdivisions $SK_n$ of $G\urcorner$ are included in $SK^*_n$. We call $G\urcorner$ the {\it canonical 1-completion} of $G$.
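The bound above only uses the fact that every intermediate graph still lies in ${\mathcal F}(Q_3)$, i.e., has VC-dimension at most 2, so that the Sauer--Shelah--Perles lemma bounds its number of vertices. For the reader's convenience, this membership test can be made completely explicit; the following sketch (in Python; the function names and the representation of vertices as subsets of the coordinate set $X$ are purely illustrative, and the sketch is not used anywhere in the proofs) simply checks that no triple of coordinates is shattered:
\begin{verbatim}
from itertools import combinations

def shatters(vertex_sets, triple):
    # traces of the vertex sets on the 3-element set `triple`
    traces = {frozenset(v) & frozenset(triple) for v in vertex_sets}
    return len(traces) == 8          # all 2^3 traces are realized

def in_F_Q3(vertex_sets, X):
    # VC-dimension <= 2: no 3-element subset of X is shattered
    return not any(shatters(vertex_sets, t) for t in combinations(X, 3))
\end{verbatim}
In particular, adding the vertex $v_{\varnothing}$ in a 1-extension step preserves this test, exactly as in the proof of Lemma \ref{1extension}.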
We summarize the properties of the canonical 1-completion in the following proposition:
\begin{proposition}\label{prop:extendtoCOM} If $G\in {\mathcal F}(Q_3)$ is an isometric subgraph of the hypercube $Q_m$, then after at most $\binom{m}{\leq 2}$ 1-extension steps, $G$ can be canonically completed to a two-dimensional COM $G\urcorner$ and $G$ is an isometric subgraph of $G\urcorner$. \end{proposition}
\begin{proof} To prove that $G\urcorner$ is a two-dimensional COM, by the second assertion of Proposition \ref{prop:excludedminors} we have to prove that $G\urcorner$ belongs to $\mathcal{F}(Q_3, SK_4)=\mathcal{F}(Q_3)\cap \mathcal{F}(SK_4)$. The fact that $G\urcorner$ belongs to $\mathcal{F}(Q_3)$ follows from Lemma \ref{1extension}. Suppose now that $G\urcorner$ contains $SK_4$ as a pc-minor. By Corollary \ref{SKnCopie}, $G\urcorner$ contains a convex subgraph $H$ isomorphic to $SK_4$. Then $H$ extends in $G\urcorner$ to a maximal by inclusion $SK_n$, which we denote by $H'$. Since $G\urcorner\in \mathcal{F}(Q_3)$ and $H$ does not extend to $SK^*_4$, $H'$ does not extend to $SK^*_n$ either. By Claim \ref{convexSKn} and Proposition \ref{gatedSKn} applied to $G\urcorner$, we conclude that $H'$ is a convex and thus gated subgraph of $G\urcorner$. Applying the second assertion of Lemma \ref{1extension} (in the reverse order) to all pairs of graphs occurring in the construction transforming $G$ to $G\urcorner$, we conclude that $H'$ is a convex and thus gated full subdivision of $G$. But this is impossible because all maximal full subdivisions $SK_n$ of $G\urcorner$ are included in $SK^*_n$. This shows that $G\urcorner$ belongs to $\mathcal{F}(SK_4)$, thus $G\urcorner$ is a two-dimensional COM. That $G$ is isometrically embedded in $G\urcorner$ follows from Lemma \ref{1extension} and the fact that if $G$ is an isometric subgraph of $G'$ and $G'$ is an isometric subgraph of $G''$, then $G$ is an isometric subgraph of $G''$. \end{proof}
\subsection{Completion to ample two-dimensional partial cubes}
Let $G\in {\mathcal F}(Q_3)$, $C$ a gated cycle of $G$, and $E_j$ a $\Theta$-class crossing $C$. Set $C:=(v_1,v_2,\ldots, v_{2k})$, where the edges $v_{2k}v_1$ and $v_kv_{k+1}$ are in $E_j$. The graph $G_{C,E_j}$ is defined by adding a path with vertices $v_{2k}=v'_1, v'_2,\ldots, v'_{k}=v_{k+1}$ (where $v'_2,\ldots,v'_{k-1}$ are new vertices) together with the edges $v_iv'_i$ for all $2\leq i\leq k-1$. Let $C'=(v'_1,\ldots,v'_k,v_{k+2},\ldots, v_{2k-1})$. Then we recursively apply the same construction to the cycle $C'$ (the construction becomes trivial once the current cycle is a 4-cycle), and we call the resulting graph a \emph{cycle completion} of $G$ \emph{along a gated cycle $C$}; see Figure~\ref{fig:extendcycle} for an illustration. Proposition \ref{prop:extendcycle} establishes the basic properties of this construction; in particular, it shows that the cycle completion along a gated cycle is well defined.
\begin{figure}[htb] \centering \includegraphics[width=.80\textwidth]{extendcycleBW} \caption{(a) $G_{C,E_j}$ is obtained by adding the white vertices to a graph $G$ with a gated cycle $C=(v_1,v_2,\ldots, v_{8})$. (b) A cycle completion of $G$ along the cycle $C=(v_1,v_2,\ldots, v_{8})$.} \label{fig:extendcycle} \end{figure}
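In concrete terms, one round of the construction, together with the recursion, can be sketched as follows. The sketch is only illustrative: the representation of vertices, the helper name \texttt{complete\_cycle}, and the convention that the two $E_j$-edges of the current cycle are $v_{2k}v_1$ and $v_kv_{k+1}$ are assumptions of the sketch, not part of the formal definition; in particular, the $\Theta$-class used at each recursive round is just one possible choice.
\begin{verbatim}
def complete_cycle(edges, cycle):
    # `cycle` is the list (v_1, ..., v_{2k}); the two E_j-edges are
    # assumed to be v_{2k} v_1 and v_k v_{k+1}.  `edges` is a set of
    # frozenset pairs; new vertices are fresh tags ('new', t).
    edges, t = set(edges), 0
    while len(cycle) > 4:
        k = len(cycle) // 2
        path = [cycle[-1]]                  # v'_1 = v_{2k}
        for i in range(2, k):               # new vertices v'_2, ..., v'_{k-1}
            t += 1
            path.append(('new', t))
        path.append(cycle[k])               # v'_k = v_{k+1}
        for a, b in zip(path, path[1:]):    # path edges v'_i v'_{i+1}
            edges.add(frozenset((a, b)))
        for i in range(2, k):               # rung edges v_i v'_i, 2 <= i <= k-1
            edges.add(frozenset((cycle[i - 1], path[i - 1])))
        cycle = path + cycle[k + 1:-1]      # C' = (v'_1,...,v'_k,v_{k+2},...,v_{2k-1})
    return edges
\end{verbatim}
Applied to an 8-cycle, the sketch fills it with squares, in the spirit of Figure~\ref{fig:extendcycle}(b).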
\begin{proposition}\label{prop:extendcycle} Let $G$ be a partial cube, $C$ a gated cycle of $G$, and $E_j$ a $\Theta$-class crossing $C$.
\begin{enumerate}[(1)]
\item \label{cond1:prop_extendcycle} $G_{C,E_j}$ is a partial cube and $G$ is an isometric subgraph of $G_{C,E_j}$,
\item \label{cond2:prop_extendcycle} $C'=(v'_1,\ldots,v'_k,v_{k+2},\ldots, v_{2k-1})$ is a gated cycle,
\item \label{cond3:prop_extendcycle} If $G\in {\mathcal F}(Q_3)$, then so is $G_{C,E_j}$,
\item \label{cond4:prop_extendcycle} If $G$ contains no convex $SK_n$, then neither does $G_{C,E_j}$.
\end{enumerate}
\end{proposition}
\begin{proof} To prove \eqref{cond1:prop_extendcycle}, notice that the $\Theta$-classes of $G$ extend to $G_{C,E_j}$ in a natural way, i.e., edges of the form $v_iv'_i$ for all $2\leq i\leq k-1$ belong to $E_j$, while an edge $v'_iv'_{i+1}$ belongs to the $\Theta$-class of the edge $v_iv_{i+1}$ for all $1\leq i\leq k-1$. Clearly, the distances between old vertices have not changed, and the new vertices are embedded as an isometric path. If $w\in C$ and $u\in C'$ is a new vertex, then it is easy to see that there is a shortest path using each $\Theta$-class at most once. In fact, since $w$ is at distance at most one from $C'$, it has a gate in $C'$, and besides the $\Theta$-classes crossing $C'$ such a path only uses $E_j$. Finally, let $v$ be an old vertex of $G \setminus C$, $w$ be its gate in $C$, and $u$ be a new vertex, i.e., $u\in G_{C, E_j} \setminus G$. Let $P$ be a path from $v$ to $u$ that is a concatenation of a shortest $(v,w)$-path $P_1$ and a shortest $(w,u)$-path $P_2$. Since $C$ is gated and all $\Theta$-classes crossing $P_2$ also cross $C$, the $\Theta$-classes of $G$ crossing $P_1$ and the $\Theta$-classes crossing $P_2$ are distinct. Since $P_1$ and $P_2$ are shortest paths, the $\Theta$-classes in each of the two groups are also pairwise different. Consequently, $P$ is a shortest $(v,u)$-path and thus $G_{C,E_j}$ is a partial cube. Moreover, $G$ is an isometric subgraph of $G_{C,E_j}$ by construction. To prove \eqref{cond2:prop_extendcycle}, let $v \in G \setminus C'$. If $v \in G \setminus C$, let $w$ be its gate in $C$. Thus there is a shortest $(v,w)$-path which does not cross the $\Theta$-classes crossing $C$. Suppose that $w \notin C'$, otherwise we are done. Then there exists a vertex $w'$ such that the edge $ww'$ belongs to $E_j$. Since $E_j$ crosses $C$ and not $C'$, $w'$ is the gate of $v$ in $C'$. If $v \in C \setminus C'$, using the previous argument, there exists an edge $vv'$ belonging to $E_j$ and we conclude that $v'$ is the gate of $v$ in $C'$. To prove \eqref{cond3:prop_extendcycle}, suppose by way of contradiction that $G_{C,E_j}$ has $Q_3$ as a pc-minor. Then there exists a sequence $s$ of restrictions $\rho_s$ and contractions $\pi_s$ such that $s(G_{C,E_j}) = Q_3$. Recall that restrictions and contractions commute in partial cubes \cite{ChKnMa}. Hence, performing the contractions first, we get a graph $G'=\pi_s(G_{C,E_j})$ which contains $Q_3$ as a convex subgraph. Clearly, $E_j$ must be among the uncontracted classes, because $\pi_j(G_{C,E_j})=\pi_j(G)$. Furthermore, if only one other $\Theta$-class of $C$ is not contracted in $G_{C,E_j}$, then the contraction identifies all new vertices with (contraction) images of old vertices, and again by the assumption $G\in {\mathcal F}(Q_3)$ we get a contradiction. Thus, the three classes that constitute the copy of $Q_3$ are $E_j$ and two other classes, say $E_j'$ and $E_j''$, of $C$. Thus, the augmented $C$ yields a $Q_3^-$ in the contraction of $G_{C,E_j}$, but the last vertex of the $Q_3$ comes from a part of $G$. In other words, there is a vertex $v\in G$ such that all shortest paths from $v$ to $C$ cross $E_j$, $E_j'$, or $E_j''$. This contradicts the fact that $C$ is gated, establishing that $G_{C,E_j}\in {\mathcal F}(Q_3)$. To prove \eqref{cond4:prop_extendcycle}, suppose by way of contradiction that $G_{C,E_j}$ contains a convex $SK_n$. Since $SK_n$ has neither 4-cycles nor vertices of degree one, the restrictions leading to $SK_n$ must either include $E_j$ or the class of the edge $v_1v_2$ or $v_{2k-1}v_{2k}$.
The only way to restrict here in order to obtain a graph that is not a convex subgraph of $G$ is restricting to the side of $E_j$ that contains the new vertices. But the obtained graph cannot use new vertices in a convex copy of $SK_n$ because they form a path of vertices of degree two, which does not exist in a $SK_n$. Thus $G_{C,E_j}$ does not contain a convex $SK_n$. \end{proof}
Propositions \ref{prop:extendtoCOM} and \ref{prop:extendcycle} allow us to prove Theorem \ref{thm:extendtoample}. Namely, applying Proposition \ref{prop:extendtoCOM} to a graph $G \in \mathcal{F}(Q_3)$, we obtain a two-dimensional COM $G\urcorner$, i.e., a graph $G\urcorner\in\mathcal{F}(Q_3,SK_4)$. Then, we recursively apply the cycle completion along gated cycles to the graph $G\urcorner$ and to the graphs resulting from $G\urcorner$. By Proposition~\ref{prop:extendcycle} \eqref{cond3:prop_extendcycle}, \eqref{cond4:prop_extendcycle}, all intermediate graphs belong to $\mathcal{F}(Q_3,SK_4)$, i.e., they are two-dimensional COMs. This explains why we can recursively apply the cycle completion construction cycle-by-cycle. Since this construction does not increase the VC-dimension, by the Sauer--Shelah lemma, after a finite number of steps we will get a graph $(G\urcorner)\ulcorner\in\mathcal{F}(Q_3,SK_4)$ in which all convex cycles must be gated (by Propositions \ref{gatedhullC6} and \ref{gatedcycle}) and must have length $4$. This implies that $(G\urcorner)\ulcorner\in\mathcal{F}(C_6)$. Consequently, $(G\urcorner)\ulcorner\in\mathcal{F}(Q_3,C_6)$ and by Proposition \ref{prop:excludedminors} the final graph $G^{\top}=(G\urcorner)\ulcorner$ is a two-dimensional ample partial cube. This completes the proof of Theorem \ref{thm:extendtoample}. For an illustration, see Figure \ref{fig:completionM}.
\begin{figure}[htb] \centering \includegraphics[width=.40\textwidth]{M_completion} \caption{An ample completion $M^{\top}$ of the running example $M$.} \label{fig:completionM} \end{figure}
\begin{remark} One can generalize the construction in Proposition~\ref{prop:extendcycle} by replacing a gated cycle $C$ by a gated $\AOM$ that is the convex hull of $C$, such that all its convex cycles are gated. In a sense, this construction captures the set of all possible extensions of the graph $G$. \end{remark}
\section{Cells and carriers}
This section uses concepts and techniques developed for COMs \cite{BaChKn} and for hypercellular graphs \cite{ChKnMa}. Let ${\mathcal C}(G)$ denote the set of all convex cycles of a partial cube $G$ and let ${\mathbf C}(G)$ be the 2-dimensional cell complex whose 2-cells are obtained by replacing each convex cycle $C$ of length $2j$ of $G$ by a regular Euclidean polygon $[C]$ with $2j$ sides. It was shown in \cite{KlSh} that the set ${\mathcal C}(G)$ of convex cycles of any partial cube $G$ constitutes a basis of cycles. This result was extended in \cite[Lemma 13]{ChKnMa}, where it was shown that the 2-dimensional cell complex ${\mathbf C}(G)$ of any partial cube $G$ is simply connected. Recall that a cell complex $\bf X$ is {\it simply connected} if it is connected and if every continuous map of the 1-dimensional sphere $S^1$ into $\bf X$ can be extended to a continuous mapping of the (topological) disk $D^2$ with boundary $S^1$ into $\bf X$. Let $G$ be a partial cube. For a $\Theta$-class $E_i$ of $G$, we denote by $N(E_i)$ the \emph{carrier} of $E_i$ in ${\mathbf C}(G)$, i.e., the subgraph of $G$ induced by the union of all cells of ${\mathbf C}(G)$ crossed by $E_i$.
The carrier $N(E_i)$ of $G$ splits into its positive and negative parts $N^+(E_i):=N(E_i)\cap G^+_i$ and $N^-(E_i):=N(E_i)\cap G^-_i$, which we call {\it half-carriers}. Finally, call $G^+_i\cup N^-(E_i)$ and $G^-_i\cup N^+(E_i)$ the {\it extended halfspaces} of $E_i$. By Djokovi\'c's Theorem \ref{Djokovic}, halfspaces of partial cubes $G$ are convex subgraphs and therefore are isometrically embedded in $G$. However, this is no longer true for carriers, half-carriers, and extended halfspaces of all partial cubes. However this is the case for two-dimensional partial cubes: \begin{proposition} \label{carriers} If $G\in {\mathcal F}(Q_3)$ and $E_i$ is a $\Theta$-class of $G$, then the carrier $N(E_i)$, its halves $N^+(E_i), N^-(E_i)$, and the extended halfspaces $G^+_i\cup N^-(E_i), G^-_i\cup N^+(E_i)$ are isometric subgraphs of $G$, and thus belong to ${\mathcal F}(Q_3)$. \end{proposition} \begin{proof} Since the class ${\mathcal F}(Q_3)$ is closed under taking isometric subgraphs, it suffices to show that each of the mentioned subgraphs is an isometric subgraph of $G$. The following claim reduces the isometricity of carriers and extended halfspaces to isometricity of half-carriers: \begin{claim} \label{half-carrier} Carriers and extended halfspaces of a partial cube $G$ are isometric subgraphs of $G$ if and only if half-carriers are isometric subgraphs of $G$. \end{claim} \begin{proof} One direction is implied by the equality $N^+(E_i):=N(E_i)\cap G^+_i$ and the general fact that the intersection of a convex subgraph and an isometric subgraph of $G$ is an isometric subgraph of $G$. Conversely, suppose that $N^+(E_i)$ and $N^-(E_i)$ are isometric subgraphs of $G$ and we want to prove that the carrier $N(E_i)$ is isometric (the proof for $G^+_i\cup N^-(E_i)$ and $G^-_i\cup N^+(E_i)$ is similar). Pick any two vertices $u,v\in N(E_i)$. If $u$ and $v$ belong to the same half-carrier, say $N^+(E_i)$, then they are connected in $N^+(E_i)$ by a shortest path and we are done. Now, let $u\in N^+(E_i)$ and $v\in N^-(E_i)$. Let $P$ be any shortest $(u,v)$-path of $G$. Then necessarily $P$ contains an edge $u',v'$ with $u'\in \partial G^+_i\subseteq N^+(E_i)$ and $v'\in \partial G^-_i\subseteq N^-(E_i)$. Then $u,u'$ can be connected in $N^+(E_i)$ by a shortest path $P'$ and $v,v'$ can be connected in $N^-(E_i)$ by a shortest path $P''$. The path $P'$, followed by the edge $u'v'$, and by the path $P''$ is a shortest $(u,v)$-path included in $N(E_i)$. \end{proof} By Claim \ref{half-carrier} it suffices to show that the half-carriers $N^+(E_i)$ and $N^-(E_i)$ of a two-dimensional partial cube $G$ are isometric subgraphs of $G$. By Proposition \ref{prop:extendtoCOM}, $G$ is an isometric subgraph of its canonical COM-extension $G\urcorner$. Moreover from the construction of $G\urcorner$ it follows that the carrier $N(E_i)$ and its half-carriers $N^+(E_i)$ and $N^-(E_i)$ are subgraphs of the carrier $N\urcorner(E_i)$ and its half-carriers $N\urcorner^+(E_i), N\urcorner^-(E_i)$ in the graph $G\urcorner$. By \cite[Proposition 6]{BaChKn}, carriers and their halves of COMs are also COMs. Consequently, $N\urcorner^+(E_i)$ and $N\urcorner^-(E_i)$ are isometric subgraphs of $G\urcorner$. Since the graph $G\urcorner$ is obtained from $G$ via a sequence of 1-extensions, it easily follows that any shortest path $P\subset N\urcorner^+(E_i)$ between two vertices of $N^+(E_i)$ can be replaced by a path $P'$ of the same length lying entirely in $N^+(E_i)$. 
Therefore $N^+(E_i)$ is an isometric subgraph of the partial cube $N\urcorner^+(E_i)$, thus the half-carrier $N^+(E_i)$ is also an isometric subgraph of $G$. \end{proof} A partial cube $G=(V,E)$ is a {\it 2d-amalgam} of two-dimensional partial cubes $G_1=(V_1,E_1), G_2=(V_2,E_2)$ both isometrically embedded in the cube $Q_m$ if the following conditions are satisfied: \begin{itemize} \item[(1)] $V=V_1\cup V_2, E=E_1\cup E_2$ and $V_2\setminus V_1,V_1\setminus V_2,V_1\cap V_2\ne \varnothing;$ \item[(2)] the subgraph $G_{12}$ of $Q_m$ induced by $V_1\cap V_2$ is a two-dimensional partial cube and each maximal full subdivision $SK_n$ of $G_{12}$ is maximal in $G$; \item[(3)] $G$ is a partial cube. \end{itemize} As a last ingredient for the next proposition we need a general statement about COMs. \begin{lemma}\label{lem:rankofcell} If $G$ is a COM and the cube $Q_d$ is a pc-minor of $G$, then there is an antipodal subgraph $H$ of $G$ that has $Q_d$ as a pc-minor. \end{lemma} \begin{proof} By \cite[Lemma 6.2]{KnMa}, if $H$ is an antipodal subgraph of a COM $G$ and $G'$ is an expansion of $G$, then the expansion $H'$ of $H$ in $G'$ is either antipodal as well or is peripheral, where the latter implies that $H'$ contains $H$ as a convex subgraph. In either case $G'$ contains an antipodal subgraph that has $H$ as a minor. Since $Q_d$ is antipodal, considering a sequence of expansions from $Q_d=G_0, \ldots, G_k=G$, every graph at an intermediate step contains an antipodal subgraph having $Q_d$ as a minor. \end{proof} \begin{proposition} \label{amalgam} Two-dimensional partial cubes are obtained via successive 2d-amalgamations from their gated cycles and gated full subdivisions. Conversely, the 2d-amalgam of two-dimensional partial cubes $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ of $Q_m$ is a two-dimensional partial cube of $Q_m$ in which every gated cycle or gated full subdivision belongs to at least one of the two constituents. \end{proposition} \begin{proof} Let $G=(V,E)$ be a two-dimensional partial cube which is not a single cell. We can suppose that $G$ is 2-connected, otherwise we can do an amalgam along an articulation vertex. We assert that $G$ contains two gated cells intersecting in an edge. Since the intersection of two gated sets is gated and no cell contains a proper gated subgraph other than vertices and edges, the intersection of any two cells of $G$ is either empty, a vertex, or an edge. If the last case never occurs, then, since any convex cycle of $G$ is included in a single cell, any cycle of $G$ containing edges of several cells (such a cycle exists because $G$ is 2-connected) cannot be written as a modulo 2 sum of convex cycles. This contradicts the result of \cite{KlSh} that the set of convex cycles of any partial cube $G$ constitutes a basis of cycles. Pick two gated cells $C_1$ and $C_2$ intersecting in an edge $e$. Let $E_i$ be a $\Theta$-class crossing $C_1$ and not containing $e$. Since $C_2$ is gated, $C_2$ is contained in one of the halfspaces $G^+_i$ or $G^-_i$, say $C_2\subseteq G^+_i$. Notice also that $C_2$ is not included in the carrier $N(E_i)$. Set $G_1:=G^-_i\cup N^+(E_i)$ and $G_2:=G^+_i$. By Proposition \ref{carriers}, $G_1,G_2,$ and $G_1\cap G_2=N^+(E_i)$ are two-dimensional partial cubes, thus $G$ is a 2d-amalgam of $G_1$ and $G_2$. Conversely, suppose that a partial cube $G$ is a 2d-amalgam of two-dimensional partial cubes $G_1$ and $G_2$.
Consider the canonical COM completions $G_1\urcorner$ and $G_2\urcorner$ of $G_1$ and $G_2$, which are in $\mathcal{F}(Q_3)$ by Lemma \ref{1extension}. Then $G_1\urcorner\cap G_2\urcorner$ coincides with $G_{12}\urcorner$. Therefore, by \cite[Proposition 7]{BaChKn} this provides a COM $G'$, which is a COM amalgam of $G_1\urcorner$ and $G_2\urcorner$ along $G_{12}\urcorner$ without creating new antipodal subgraphs. Using Lemma \ref{lem:rankofcell}, we deduce that $G'\in {\mathcal F}(Q_3)$. Since the graph $G$ is isometrically embedded in $G'$, $G\in {\mathcal F}(Q_3)$, which concludes the proof. \end{proof} The 2-dimensional cell complex ${\mathbf C}(G)$ of a partial cube $G$ is simply connected but not necessarily contractible, even if $G$ is two-dimensional. However, for a two-dimensional partial cube $G$ there is a simple remedy: one can consider the {\it (combinatorial) cell complex} having gated cycles and gated full subdivisions of $G$ as cells. On the other hand, since full subdivisions cannot be directly represented by Euclidean cells, this complex does not have a direct geometric meaning. One possibility is to replace each gated full subdivision $SK_n$ by a regular Euclidean simplex with sides of length 2 and each gated cycle by a regular Euclidean polygon. Denote the resulting polyhedral complex by ${\mathbf X}(G)$. Notice that two cells of ${\mathbf X}(G)$ can intersect in an edge of a polygonal cell or in a half-edge of a simplex. This way, with each two-dimensional partial cube $G$ we associate a polyhedral complex ${\mathbf X}(G)$ which may have cells of arbitrary dimensions. Alternatively, one can associate to $G$ the cell complex ${\mathbf C}(G\urcorner)$ of the canonical COM completion $G\urcorner$ of $G$. Recall that in ${\mathbf C}(G\urcorner)$, each gated cycle of $G$ is replaced by a regular Euclidean polygon and each gated full subdivision $SK_n$ of $G$ is extended in $G\urcorner$ to $SK^*_n$ and this corresponds to a bouquet of squares in ${\mathbf C}(G\urcorner)$. Thus ${\mathbf C}(G\urcorner)$ is a 2-dimensional cell complex. \begin{corollary} \label{contractible} If $G\in {\mathcal F}(Q_3)$, then the complexes ${\mathbf X}(G)$ and ${\mathbf C}(G\urcorner)$ are contractible. \end{corollary} \begin{proof} That ${\mathbf C}(G\urcorner)$ is contractible follows from the fact that $G\urcorner$ is a two-dimensional COM (Proposition \ref{prop:extendtoCOM}) and that the cell complexes of COMs are contractible (Proposition 15 of \cite{BaChKn}). The proof that ${\mathbf X}(G)$ is contractible uses the same arguments as the proof of \cite[Proposition 15]{BaChKn}. We prove the contractibility of ${\mathbf X}(G)$ by induction on the number of maximal cells of ${\mathbf X}(G)$ by using the gluing lemma \cite[Lemma 10.3]{Bj} and Proposition \ref{carriers}. By the gluing lemma, if $\bf X$ is a cell complex which is the union of two contractible cell complexes ${\bf X}_1$ and ${\bf X}_2$ such that their intersection ${\bf X}_1\cap {\bf X}_2$ is contractible, then $\bf X$ is contractible. If ${\mathbf X}(G)$ consists of a single maximal cell, then this cell is either a polygon or a simplex, thus is contractible. If ${\mathbf X}(G)$ contains at least two cells, then by the first assertion of Proposition \ref{amalgam}, $G$ is a 2d-amalgam of two-dimensional partial cubes $G_1$ and $G_2$ along a two-dimensional partial cube $G_{12}$.
By the induction assumption, the complexes ${\mathbf X}(G_1)$, ${\mathbf X}(G_2)$, and ${\mathbf X}(G_{12})={\mathbf X}(G_1)\cap {\mathbf X}(G_2)$ are contractible, thus ${\mathbf X}(G)$ is contractible by the gluing lemma. \end{proof} \section{Characterizations of two-dimensional partial cubes} The goal of this section is to give a characterization of two-dimensional partial cubes, summarizing all the properties established in the previous sections: \begin{theorem} \label{characterization} For a partial cube $G=(V,E)$ the following conditions are equivalent: \begin{itemize} \item[(i)] $G$ is a two-dimensional partial cube; \item[(ii)] the carriers $N(E_i)$ of all $\Theta$-classes of $G$, defined with respect to the cell complex ${\mathbf C}(G)$, are two-dimensional partial cubes; \item[(iii)] the hyperplanes of $G$ are virtual isometric trees; \item[(iv)] $G$ can be obtained from the one-vertex graph via a sequence $\{ (G_i^1,G^0_i,G^2_i): i=1,\ldots,m\}$ of isometric expansions, where each $G^0_i, i=1,\ldots,m$ has VC-dimension $\le 1$; \item[(v)] $G$ can be obtained via 2d-amalgams from even cycles and full subdivisions; \item[(vi)] $G$ has an extension to a two-dimensional ample partial cube. \end{itemize} Moreover, any two-dimensional partial cube $G$ satisfies the following condition: \begin{itemize} \item[(vii)] the gated hull of each isometric cycle of $G$ is a disk or a full subdivision. \end{itemize} \end{theorem} \begin{proof} The implication (i)$\Rightarrow$(ii) is the content of Proposition \ref{carriers}. To prove that (ii)$\Rightarrow$(iii), notice that since $N(E_i)$ is a two-dimensional partial cube, by Propositions \ref{VCdim_pc} and \ref{virtual_isometric_tree} it follows that the hyperplane of the $\Theta$-class $E_i$ of $N(E_i)$ is a virtual isometric tree. Since this hyperplane of $N(E_i)$ coincides with the hyperplane $H_i$ of $G$, we deduce that all hyperplanes of $G$ are virtual isometric trees, establishing (ii)$\Rightarrow$(iii). The implication (iii)$\Rightarrow$(i) follows from Propositions \ref{VCdim_pc} and \ref{virtual_isometric_tree}. The equivalence (i)$\Leftrightarrow$(iv) follows from Proposition \ref{expansion-Qd+1}. The equivalence (i)$\Leftrightarrow$(v) follows from Proposition \ref{amalgam}. The implication (i)$\Rightarrow$(vi) follows from Theorem \ref{thm:extendtoample} and the implication (vi)$\Rightarrow$(i) is evident. Finally, the implication (i)$\Rightarrow$(vii) is the content of Theorem \ref{FS+AOM}. \end{proof} Note that it is not true that if in a partial cube $G$ the convex hull of every isometric cycle is in $\mathcal{F}(Q_3)$, then $G\in\mathcal{F}(Q_3)$; see $X_2^4$ in Figure~\ref{fig:COMobstructions}. However, we conjecture that the condition (vii) of Theorem \ref{characterization} is equivalent to conditions (i)-(vi): \begin{conjecture} Any partial cube $G$ in which the gated hull of each isometric cycle is a disk or a full subdivision is two-dimensional. \end{conjecture} \section{Final remarks} In this paper, we provided several characterizations of two-dimensional partial cubes via hyperplanes, isometric expansions and amalgamations, cells and carriers, and gated hulls of isometric cycles. One important feature of such graphs is that gated hulls of isometric cycles have a precise structure: they are either full subdivisions of complete graphs or disks, which are plane graphs representable as graphs of regions of pseudoline arrangements.
Using these results, we first show that any two-dimensional partial cube $G$ can be completed in a canonical way to a COM $G\urcorner$ of rank $2$ and that $G\urcorner$ can be further completed to an ample partial cube $G^{\top}:=(G\urcorner)\ulcorner$ of VC-dimension $2$. Notice that $G$ is isometrically embedded in $G\urcorner$ and that $G\urcorner$ is isometrically embedded in $G^{\top}$. This answers in the positive (and in a strong way) the question of \cite{MoWa} for partial cubes of VC-dimension $2$. However, for Theorem~\ref{thm:extendtoample} it is essential that the input is a partial cube: Figure~\ref{fig:T2} presents a (non-isometric) subgraph $Z$ of $Q_4$ of VC-dimension $2$, such that any ample partial cube containing $Z$ has VC-dimension $3$. Therefore, it seems to us interesting and nontrivial to solve the question of \cite{RuRuBa} and \cite{MoWa} {\it for all (non-isometric) subgraphs of hypercubes of VC-dimension $2$} (that is, for arbitrary set families of VC-dimension 2). \begin{figure} \centering \includegraphics[width=.25\textwidth]{T2} \caption{A subgraph $Z$ of $Q_4$ of VC-dimension $2$, such that any ample partial cube containing $Z$ has VC-dimension $3$.} \label{fig:T2} \end{figure} It is also important to investigate the completion questions of \cite{MoWa} and \cite{RuRuBa} for all partial cubes from ${\mathcal F}(Q_{d+1})$ (i.e., for partial cubes of VC-dimension $\le d$). For this, it will be interesting to see which results for partial cubes from ${\mathcal F}(Q_3)$ can be extended to graphs from ${\mathcal F}(Q_{d+1})$. We have the impression that some of the results on disks can be extended to balls; a partial cube $G$ is a \emph{$d$-ball} if $G\in\mathcal{F}(Q_{d+1})$ and $G$ contains an isometric antipodal subgraph $C\in\mathcal{F}(Q_{d+1})$ such that $G=\conv(C)$. With this in mind, a natural next step would be to study the class $\mathcal{F}(Q_{4})$. \section*{Acknowledgements} We are grateful to the anonymous referees for a careful reading of the paper and numerous useful comments and improvements. This work was supported by the ANR project DISTANCIA (ANR-17-CE40-0015). The second author was moreover supported by the Spanish \emph{Ministerio de Econom\'ia, Industria y Competitividad} through grant RYC-2017-22701. \bibliographystyle{siamplain}
\section{Introduction} Let $p$ be a prime number and $G$ a compact Lie group. In \cite{quillen-1971}, Quillen defined a homomorphism, \[ q_G \colon H^{*}(BG;\mathbb{Z}/p)\to \lim_{A\in \mathfrak{A}} H^{*}(BA;\mathbb{Z}/p), \] where $\mathfrak{A}$ is a category of elementary abelian $p$-subgroups of $G$, and proved that $q_G$ is an $F$-isomorphism. In particular, an element in its kernel is nilpotent. For $p=2$, an element in the image of $q_G$ is not nilpotent, so the nilradical of $H^{*}(BG;\mathbb{Z}/2)$ is exactly the kernel of $q_G$. In \cite{kono-yagita-1993}, Kono and Yagita showed that $q_G$ is not injective for $p=2$, $G=\mathrm{Spin}(11), E_7$ by showing the existence of a nonzero nilpotent element in $H^{*}(BG;\mathbb{Z}/2)$. For an odd prime number $p$, Adams conjectured that $q_G$ is injective for all compact connected Lie groups. Adams' conjecture still remains an open problem today. On the other hand, for a compact connected Lie group $G$, there exists a maximal torus $T$. Let $W$ be the Weyl group $N(T)/T$. We denote by $H^{*}(BT;\mathbb{Z})^W$ the ring of invariants of $W$. We denote by $\mathrm{Tor}$ the torsion part of $H^{*}(BG;\mathbb{Z})$. Then, the inclusion map of $T$ induces a homomorphism, \[ \iota_T^{*}\colon H^{*}(BG;\mathbb{Z})/\mathrm{Tor}\to H^{*}(BT;\mathbb{Z})^W. \] Borel showed that $\iota_T^*$ is injective. In \cite{feshbach-1981}, Feshbach gave a criterion for $\iota_T^*$ to be surjective, hence an isomorphism. If $H_{*}(G;\mathbb{Z})$ has no odd torsion, $\iota_T^*$ is surjective if and only if the $E_\infty$-term of the mod $2$ Bockstein spectral sequence of $BG$, \[ H^{*}(BG;\mathbb{Z})/\mathrm{Tor}\otimes \mathbb{Z}/2, \] has no nonzero nilpotent element. Feshbach also showed that for $G=\mathrm{Spin}(12)$, the $E_\infty$-term of the mod $2$ Bockstein spectral sequence of $BG$ has a nonzero nilpotent element. As for spin groups $\mathrm{Spin}(n)$, Benson and Wood \cite{benson-wood-1995} computed the ring of invariants of the Weyl group and they showed that $\iota_T^*$ is not surjective if and only if $n\geq 11$ and $n \equiv 3, 4, 5 \mod 8$. However, as in the case of Adams' conjecture, for an odd prime $p$, no example of a compact connected Lie group $G$ such that the $E_\infty$-term of the mod $p$ Bockstein spectral sequence of $BG$ has a nonzero nilpotent element is known. So, nilpotent elements in the cohomology of classifying spaces of compact connected Lie groups are an interesting subject for study. However, no example of a compact connected Lie group $G$ such that $H^{*}(BG;\mathbb{Z}/2)$ has a nonzero nilpotent element is known except for spin groups and the exceptional Lie group $E_7$. The purpose of this paper is to give a simpler example. First, we define a compact connected Lie group $G$. Let us consider the threefold product $SU(2)^3$ of the special unitary groups $SU(2)$. Its center is an elementary abelian $2$-group $(\mathbb{Z}/2)^3$. Let $\Gamma$ be the kernel of the group homomorphism $\det\colon (\mathbb{Z}/2)^3 \to \mathbb{Z}/2$ defined by $\det(a_1, a_2, a_3)=a_1a_2a_3$. We define $G$ to be $SU(2)^3/\Gamma$. Next, we state our results saying that $G=SU(2)^3/\Gamma$ satisfies the required conditions. Since $SU(2)^3/(\mathbb{Z}/2)^3=SO(3)^3$, we have the following fibre sequence: \[ B\mathbb{Z}/2 \to BG \stackrel{\pi}{\longrightarrow} BSO(3)^3. \] Let $\pi_i\colon BSO(3)^3 \to BSO(3)$ be the projection onto the $i^{\mathrm{th}}$ factor.
Let $w_2$, $w_3$ be the generators of $H^2(BSO(3);\mathbb{Z}/2)$, $H^3(BSO(3);\mathbb{Z}/2)$, respectively. Let $w_k'=\pi^*(\pi_1^{*}(w_k))$ and $w_k''=\pi^*(\pi_2^{*}(w_k))$. Let $w_{16}$ be the Stiefel-Whitney class $w_{16}(\rho)$ of a real representation $\rho\colon G\to O(16)$. We will give the definition of $\rho$ in Section~\ref{section:2}. Let $f_5$, $f_9$, $g_4$, $g_7$, $g_8$ be polynomials defined by \begin{align*} f_5&=w_2'w_3''+w_2''w_3', \\ f_9& =w_3'^2 w_3''+w_3''^2w_3', \\ g_4&=w_2'w_2'', \\ g_7&=w_2'w_2''(w_3'+w_3''), \\ g_8 &=w_3'w_3''(w_2'+w_2''), \end{align*} respectively. Then, our results are stated as follows: \begin{theorem} \label{theorem:1.1} The mod $2$ cohomology ring of $BG$ is \[ \mathbb{Z}/2[ w_2', w_2'', w_3', w_3'', w_{16}]/(f_5, f_9) \] and its nilradical is generated by $ g_7$, $g_8$. \end{theorem} \begin{theorem} \label{theorem:1.2} The $E_\infty$-term of the mod $2$ Bockstein spectral sequence of $BG$ is \[ \mathbb{Z}/2[w_2'^{2}, w_2''^{2}, w_{16}] \otimes \Delta(g_4, g_8), \] where $\Delta(g_4, g_8)$ is the vector space over $\mathbb{Z}/2$ spanned by $1$, $g_4$, $g_8$ and $g_4g_8$. Its nilradical is generated by $g_8$. \end{theorem} The rank of $SU(2)^3/\Gamma$ is $3$. If the rank of a compact connected Lie group is lower than $3$, then it is homotopy equivalent to one of $T$, $SU(2)$, $T^2$, $T \times SU(2)$, $SU(2)\times SU(2)$, $SU(3)$, $G_2$, or their quotient groups by their central subgroups. For such a compact connected Lie group, the mod $2$ cohomology ring of its classifying space is a polynomial ring, so that it has no nonzero nilpotent element. Thus our example is one of the lowest possible rank. We hope our results shed some light on Adams' conjecture since, in contrast to spin groups, we have an odd primary analogue of the group $SU(2)^3/\Gamma$. Let $\Gamma_2$ be the kernel of the determinant homomorphism $\det\colon (S^1)^3 \to S^1$. Consider the quotient group $ U(p)^3/\Gamma_2. $ It is the odd primary counterpart, as the group $U(2)^3/\Gamma_2$ is a central extension of the group $SU(2)^3/\Gamma$ by $S^1$. But that's another story and we wish to deal with the group $U(p)^3/\Gamma_2$ in another paper. In what follows, we assume that $G$ is the compact connected Lie group $SU(2)^3/\Gamma$. We also denote the mod $2$ cohomology ring of $X$ by $H^{*}(X)$ rather than $H^{*}(X;\mathbb{Z}/2)$. This paper is organized as follows: In Section~\ref{section:2}, we compute the Leray-Serre spectral sequence associated with the fibre sequence \[ B\mathbb{Z}/2 \stackrel{\iota}{\longrightarrow} BG \stackrel{\pi}{\longrightarrow} BSO(3)^3 \] to compute the mod $2$ cohomology ring $H^{*}(BG)$ and prove Theorem~\ref{theorem:1.1}. In Section~\ref{section:3}, we compute the $Q_0$-cohomology of $H^{*}(BG)$ to complete the proof of Theorem~\ref{theorem:1.2}. \section{The mod $2$ cohomology ring}\label{section:2} In this section, we compute the mod $2$ cohomology ring of $BG$ by computing the Leray-Serre spectral sequence associated with the fibre sequence \[ B\mathbb{Z}/2 \stackrel{\iota}{\longrightarrow} BG \stackrel{\pi}{\longrightarrow} BSO(3)^3. \] First, we recall the mod $2$ cohomology rings of $BSO(3)$ and $BSO(3)^3$. Let $Q_i$ be the Milnor operation \[ Q_i\colon H^k(X) \to H^{k+2^{i+1}-1}(X) \] defined inductively by \[ Q_0=\mathrm{Sq}^1, \quad Q_{i+1}= \mathrm{Sq}^{2^{i+1}} Q_i + Q_i \mathrm{Sq}^{2^{i+1}} \] for $i\geq 0$.
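For orientation, the following sample computation shows how this inductive definition is applied to a degree two class; it uses only the unstable condition, the Cartan formula and the values on $H^{*}(BSO(3))=\mathbb{Z}/2[w_2,w_3]$ recalled in the next paragraph ($\mathrm{Sq}^1 w_2=w_3$ and $\mathrm{Sq}^2 w_3=w_2w_3$ by the Wu formula): \begin{align*} Q_1(w_2)&=\mathrm{Sq}^{2}Q_0(w_2)+Q_0\mathrm{Sq}^{2}(w_2)=\mathrm{Sq}^{2}(w_3)+\mathrm{Sq}^{1}(w_2^2)=w_2w_3, \\ Q_2(w_2)&=\mathrm{Sq}^{4}Q_1(w_2)+Q_1\mathrm{Sq}^{4}(w_2)=\mathrm{Sq}^{4}(w_2w_3)=\mathrm{Sq}^{1}(w_2)\,\mathrm{Sq}^{3}(w_3)+\mathrm{Sq}^{2}(w_2)\,\mathrm{Sq}^{2}(w_3)=w_3^3+w_2^3w_3, \end{align*} where $\mathrm{Sq}^{1}(w_2^2)=0$ by the Cartan formula, $\mathrm{Sq}^{4}(w_2)=0$ by the unstable condition, and $\mathrm{Sq}^{3}(w_3)=w_3^2$.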
The mod $2$ cohomology ring of $BSO(3)$ is a polynomial ring generated by two elements $w_2$, $w_3$ of degree $2$, $3$, respectively, i.e. \[ H^{*}(BSO(3))=\mathbb{Z}/2[w_2, w_3]. \] The action of Steenrod squares is given by the Wu formula. However, what we use in this paper is the action of $Q_0$, $Q_1$ and $Q_2$ only. It is given by \begin{align*} Q_0(w_2)&=w_3, \\ Q_1(w_2)&=w_2w_3, \\Q_2(w_2)&= w_2^3 w_3+w_3^3. \end{align*} Recall that $\pi_i\colon BSO(3)^3\to BSO(3)$ ($i=1,2,3$) is the projection onto the $i^{\mathrm{th}}$ factor. By abuse of notation, we define elements $w_k'$, $w_k''$, $w_k'''$ ($k=2, 3$) in $H^{*}(BSO(3)^3)$ by $ w_k'=\pi_1^*(w_k)$, $w_k''=\pi_2^*(w_k)$, $w_k'''=\pi_3^*(w_k). $ Let us define elements $v_2$, $v_3$ by \begin{align*} v_2&=w_2'+w_2''+w_2''', \\ v_3&=w_3'+w_3''+w_3''', \end{align*} and ideals $I_1$, $I_2$ by \begin{align*} I_1&=(v_2, v_3), \\ I_2&=(v_2, v_3, Q_1(v_2)). \end{align*} Again, by abuse of notation, we consider $f_5$, $f_9$ as polynomials $w_2'w_3''+w_2''w_3'$, $w_3'^2w_3''+w_3''^2w_3'$ in $H^{*}(BSO(3)^3)$, respectively. Then, by direct calculations, we have \begin{align*} Q_0 v_2&=v_3, \\ Q_1 v_2&\equiv f_5 \quad \mod I_1, \\ Q_2v_2&\equiv f_9 \quad \mod I_2. \end{align*} Now, we compute the Leray-Serre spectral sequence. The $E_2$-term is given by \[ E_2^{p,q}=H^p(BSO(3)^3) \otimes H^q(B\mathbb{Z}/2), \] so that \[ E_2=\mathbb{Z}/2[w_2', w_2'', v_2, w_3', w_3'', v_3, u_1], \] where $u_1$ is the generator of $H^{1}(B\mathbb{Z}/2)$. The first nontrivial differential is $d_2$. Let $\iota_i\colon SU(2) \to SU(2)^3$ be the inclusion map to the $i^{\mathrm{th}}$ factor, that is, \[ \iota_{1}(g)=(g, 1, 1),\quad \iota_{2}(g)=(1, g, 1), \quad \iota_{3}(g)=(1, 1, g). \] Then, they induce the following commutative diagrams. \[ \begin{diagram} \node{B\mathbb{Z}/2} \arrow{e,t}{=} \arrow{s} \node{B\mathbb{Z}/2} \arrow{s} \\ \node{BSU(2)} \arrow{e,t}{\iota_i} \arrow{s} \node{BG} \arrow{s} \\ \node{BSO(3)} \arrow{e,t}{\iota_i} \node{BSO(3)^3.} \end{diagram} \] Since the differential in the Leray-Serre spectral sequence associated with the fibre sequence \[ B\mathbb{Z}/2\to BSU(2) \to BSO(3) \] is \[ d_2(u_1)=w_2, \] we have \[ d_2(u_1)=v_2 \] in the Leray-Serre spectral sequence for $H^{*}(BG)$. To compute the higher differentials, we use the transgression theorem. In order to use the transgression theorem, we use the following lemma on Milnor operations and Steenrod squares. \begin{lemma}\label{lemma:2.1} For $x\in H^{2}(X)$ and $k\geq 1$, we have \[ Q_k(x)= \mathrm{Sq}^{2^k}\cdots \mathrm{Sq}^{2^0} (x). \] \end{lemma} \begin{proof} For $k \geq 2$, by the definition of $Q_{i+1}$ and the unstable condition, we have \begin{align*} Q_{k}(x)&=\mathrm{Sq}^{2^{k}} Q_{k-1} (x)+ Q_{k-1} \mathrm{Sq}^{2^{k}} (x) \\ &=\mathrm{Sq}^{2^{k}} Q_{k-1} (x). \end{align*} Suppose $k=1$. By the unstable condition, we have $\mathrm{Sq}^2(x)=x^2$. By the Cartan formula, we have $\mathrm{Sq}^1 (x^2)=0$. Hence, we have \begin{align*} Q_{1}(x)&=\mathrm{Sq}^{2} Q_{0} (x)+ Q_{0}{\mathrm{Sq}^2}(x) \\ &=\mathrm{Sq}^{2} Q_0 (x). \qedhere \end{align*} \end{proof} From $d_2(u_1)=v_2$ and the action of $Q_0$, $Q_1$, $Q_2$ on $v_2$, by Lemma~\ref{lemma:2.1} and the transgression theorem, we have \begin{align*} d_3(u_1^2)&= v_3, \\ d_5(u_1^4)&=f_5, \\ d_9(u_1^8)&=f_9.
\end{align*} It is easy to see that \begin{align*} E_3&=\mathbb{Z}/2[w_2', w_2'', w_3', w_3'', v_3, u_1^2], \\ E_4&=\mathbb{Z}/2[w_2', w_2'', w_3', w_3'', u_1^4], \\ E_6&=\mathbb{Z}/2[w_2', w_2'', w_3', w_3'', u_1^8]/(f_5). \end{align*} In $\mathbb{Z}/2[w_2', w_2'', w_3', w_3'']$, the sequence $ f_5$, $f_9 $ is a regular sequence since their greatest common divisor is $1$. Therefore, we have \[ E_{10}=\mathbb{Z}/2[w_2', w_2'', w_3', w_3'', u_1^{16}]/(f_5,f_9). \] In order to prove that the spectral sequence collapses at the $E_{10}$-level, we consider the Stiefel-Whitney class of a real representation \[\rho\colon G \to O(16). \] Let us define the representation $\rho$. Since \[ SO(4)=SU(2)\times_{\mathbb{Z}/2} SU(2), \] we may regard \[ G=SU(2) \times_{\mathbb{Z}/2} (SU(2)\times_{\mathbb{Z}/2} SU(2)) = SU(2) \times_{\mathbb{Z}/2} SO(4) \] as a subgroup of \[ SO(4) \times_{\mathbb{Z}/2} SO(4) \] by regarding the first factor $SU(2)$ as the subgroup of $SO(4)$ in the usual manner. Let \[ \varphi\colon SO(4) \times SO(4) \to O(16) \] be the real representation given by \[ (g_1, g_2) m= g_1 m g_2^{-1} \] where $(g_1, g_2) \in SO(4) \times SO(4)$ and $m$ is a $4\times 4$ matrix with real coefficients. Then, $\varphi$ induces a $16$-dimensional real representation \[ \varphi'\colon SO(4)\times_{\mathbb{Z}/2} SO(4) \to O(16). \] We define the representation $\rho$ to be the restriction of $\varphi'$ to $G$. \begin{proposition} The Stiefel-Whitney class $w_{16}(\rho)$ of the real representation $\rho$ is not decomposable in $H^{*}(BG)$. It is represented by $u_1^{16}$ in the Leray-Serre spectral sequence. \end{proposition} \begin{proof} Let $\iota\colon \mathbb{Z}/2\to G$ be the inclusion map of the center $\mathbb{Z}/2$. The restriction of $\rho$ to the center of $G$ is $16\lambda$ where $\lambda$ is the nontrivial $1$-dimensional real representation of $\mathbb{Z}/2$. So, the Stiefel-Whitney class $w_{16}(\rho \circ \iota)$ is nonzero. If $u_1^{16}$ supports a nontrivial differential in the Leray-Serre spectral sequence then, up to degree $\leq 16$, $H^{*}(BG)$ is generated by $w_2', w_2'', w_3', w_3''$. However, since $B\iota$ factors through $BSU(2)^2$, the induced homomorphism sends $w_2', w_2'', w_3', w_3''$ to zero. So, $w_{16}(\rho\circ \iota)$ is zero. This is a contradiction. Therefore, $u_1^{16}$ is a permanent cycle in the Leray-Serre spectral sequence and it is represented by $w_{16}(\rho)$. \end{proof} Hence, the spectral sequence collapses at the $E_{10}$-level, that is, $E_\infty=E_{10}$ and we obtain the first half of Theorem~\ref{theorem:1.1}. \begin{proposition} \label{proposition:2.3} We have \[ H^{*}(BG)=\mathbb{Z}/2[w_2', w_2'', w_3', w_3'', w_{16}]/(f_5, f_{9}), \] where $w_{16}$ is the Stiefel-Whitney class $w_{16}(\rho)$. \end{proposition} In order to prove the second half of Theorem~\ref{theorem:1.1}, let us define a ring homomorphism \[ \eta \colon \mathbb{Z}/2[w_2', w_2'', w_3', w_3'', w_{16}]\to \mathbb{Z}/2[w_2', w_2'', u, w_{16}] \] by $\eta(w_2')=w_2'$, $\eta(w_2'')=w_2''$, $\eta(w_3')=w_2' u$, $\eta(w_3'')=w_2''u$, $\eta(w_{16})=w_{16}$. It induces the following ring homomorphism \[ \eta'\colon H^{*}(BG)\to \mathbb{Z}/2[w_2', w_2'', u , w_{16}]/(u^3 w_2'w_2''(w_2'+w_2'')). \] Let \[ R_0=\mathbb{Z}/2[w_2', w_2'', w_3', w_3'', w_{16}]. \] From Proposition~\ref{proposition:2.3}, using the fact that $f_5$, $f_9$ is a regular sequence in $R_0$, we have the Poincar\'{e} series of $H^{*}(BG)$, \[ PS(H^{*}(BG), t)=\dfrac{(1-t^5)(1-t^9)}{(1-t^2)^2(1-t^3)^2(1-t^{16})}.
\] On the other hand, it is also easy to see that the image of $\eta'$ is spanned by monomials \[ u^\ell w_2'^m w_2''^nw_{16}^k, \] where $k$ ranges over all non-negative integers; for $\ell=0,1,2$, $(m, n)$ satisfies the condition $m+n\geq \ell$; and for $\ell\geq 3$, $(m,n)$ satisfies one of the following conditions: $m\geq \ell$, $n=0$; or $m=1$, $n\geq \ell-1$; or $m=0$, $n\geq \ell$. Thus, the Poincar\'{e} series $PS(\mathrm{Im}\, \eta', t)$ is \[ \dfrac{1}{1-t^{16}}\left( \dfrac{1}{(1-t^2)^2}+ t\left( \dfrac{1}{(1-t^2)^2}-1\right)+t^2 \left( \dfrac{1}{(1-t^2)^2}-1-2t^2\right) +\sum_{\ell=3}^\infty\dfrac{3t^{3\ell}}{1-t^2}\right). \] By computing this, we have \[ PS(H^{*}(BG),t)=PS(\mathrm{Im}\, \eta', t). \] Thus, $\eta'$ is injective. In view of this injective homomorphism $\eta'$, it is easy to see that the elements $g_7$, $g_8$ corresponding to $u w_2'w_2''(w_2'+w_2'')$, $u^2 w_2'w_2''(w_2'+w_2'')$, respectively, are nilpotent. So we obtain the following second half of Theorem~\ref{theorem:1.1}. \begin{proposition}\label{proposition:2.4} The nilradical of $H^{*}(BG)$ is the ideal generated by two elements $g_7$ and $g_8$. \end{proposition} \section{The mod $2$ Bockstein spectral sequence}\label{section:3} In this section, in order to show that the mod $2$ Bockstein spectral sequence of $BG$ collapses at the $E_2$-level and to compute its $E_\infty$-term, we compute the $Q_0$-cohomology, i.e. \[ H^{*}(H^{*}(BG),Q_0)=\mathrm{Ker}\, Q_0/\mathrm{Im}\, Q_0, \] of $H^{*}(BG)$. First, we recall the action of $Q_0$ on $H^{*}(BG)$. The action of $Q_0$ on $w_2', w_2'', w_3', w_3''$ is clear from that on $H^{*}(BSO(3))$. We need to determine the action of $Q_0$ on $w_{16}$. \begin{proposition} In $H^{*}(BG)$, we have $Q_0(w_{16})=0$. \end{proposition} \begin{proof} The generator $w_{16}$ is defined as the Stiefel-Whitney class $w_{16}(\rho)$ of the $16$-dimensional real representation $\rho\colon G\to O(16)$. Hence, $w_{17}(\rho)=0$. Since $BG$ is simply-connected, we also have $w_1(\rho)=0$. By the Wu formula $\mathrm{Sq}^1 w_{16}(\rho)=w_{17}(\rho)+w_1(\rho)w_{16}(\rho)$, we have the desired result. \end{proof} Let \[ R_0=\mathbb{Z}/2[w_2', w_2'', w_3', w_3'', w_{16}]. \] We consider the action of $Q_0$ on $w_2'$, $w_2''$, $w_3'$, $w_3''$, $w_{16}$ in $R_0$ by \[ Q_0(w_2')=w_3', \quad Q_0(w_2'')=w_3'', \quad Q_0(w_3')=0, \quad Q_0(w_3'')=0, \quad Q_0(w_{16})= 0. \] Let \[ R_1=R_0/(f_5), \quad R_2=R_0/(f_5, f_9). \] It is clear that $R_2=H^{*}(BG)$ and $H^*(H^{*}(BG), Q_0)=H^*(R_2, Q_0)$. We will prove the following Proposition~\ref{proposition:3.2} at the end of this section. \begin{proposition} \label{proposition:3.2} We have \[ H^{*}(R_2, Q_0)=\mathbb{Z}/2[w_2'^2, w_2''^2, w_{16}]\otimes \Delta(g_4, g_8). \] \end{proposition} The $E_1$-term of the mod $2$ Bockstein spectral sequence of $BG$ is the mod $2$ cohomology ring of $BG$ and $d_1$ is $Q_0$. Since, by Proposition~\ref{proposition:3.2}, the $E_2$-term has no nonzero odd degree element, the spectral sequence collapses at the $E_2$-level. It is also clear that $g_4^2=w_2'^2w_2''^2\not=0$, $g_8^2=0$ from Theorem~\ref{theorem:1.1}. Now, we complete the proof of Theorem~\ref{theorem:1.2} by proving Proposition~\ref{proposition:3.2}. \begin{proof}[Proof of Proposition~\ref{proposition:3.2}] We start with $H^*(R_0, Q_0)$. It is clear that \[ H^*(R_0,Q_0)=\mathbb{Z}/2[w_2'^2, w_2''^2, w_{16}].
\] We denote by $(-)\times a$ the multiplication by $a$. Consider a short exact sequence \[ 0 \to R_0 \stackrel{(-)\times f_5}{\longrightarrow} R_0 \to R_1 \to 0. \] Since $Q_0$ commutes with the multiplication by $f_5$, this short exact sequence induces a long exact sequence in $Q_0$-cohomology: \[ \cdots \to H^{i}(R_0,Q_0) \to H^{i}(R_1, Q_0) \stackrel{\delta_4}{\longrightarrow} H^{i-4}(R_0,Q_0) \to \cdots \] Since $H^{odd}(R_0,Q_0)=0$, this long exact sequence splits into short exact sequences: \[ 0 \to H^{2i}(R_0,Q_0) \to H^{2i}(R_1, Q_0) \stackrel{\delta_4}{\longrightarrow} H^{2i-4}(R_0,Q_0) \to 0 \] and $H^{odd}(R_1, Q_0)=0$. Since $Q_0 g_4=f_5$ in $R_0$, $g_4$ defines a $Q_0$-cocycle in $R_1$ and $\delta_4(g_4)=1$. Therefore, we have \[ H^{*}(R_1,Q_0)=\mathbb{Z}/2[w_2'^2, w_2''^2, w_{16}]\otimes \Delta(g_4). \] Next, let us consider a short exact sequence \[ 0 \to R_1 \stackrel{(-)\times f_9}{\longrightarrow} R_1 \to R_2 \to 0. \] As above, since $H^{odd}(R_1, Q_0)=\{ 0\}$, we have short exact sequences \[ 0 \to H^{2i}(R_1,Q_0) \to H^{2i}(R_2, Q_0) \stackrel{\delta_8}{\longrightarrow} H^{2i-8}(R_1,Q_0) \to 0 \] and $H^{odd}(R_2, Q_0)=\{0\}$. Since $Q_0g_8=f_9$, we obtain the desired result \[ H^{*}(R_2, Q_0)=\mathbb{Z}/2[w_2'^2, w_2''^2, w_{16}]\otimes \Delta(g_4, g_8). \qedhere \] \end{proof} \begin{bibdiv} \begin{biblist} \bib{benson-wood-1995}{article}{ author={Benson, D. J.}, author={Wood, Jay A.}, title={Integral invariants and cohomology of $B{\rm Spin}(n)$}, journal={Topology}, volume={34}, date={1995}, number={1}, pages={13--28}, issn={0040-9383}, doi={10.1016/0040-9383(94)E0019-G}, } \bib{feshbach-1981}{article}{ author={Feshbach, Mark}, title={The image of $H^{\ast} (BG,\,{\bf Z})$ in $H^{\ast} (BT,\,{\bf Z})$ for $G$ a compact Lie group with maximal torus $T$}, journal={Topology}, volume={20}, date={1981}, number={1}, pages={93--95}, issn={0040-9383}, doi={10.1016/0040-9383(81)90015-X}, } \bib{kono-yagita-1993}{article}{ author={Kono, Akira}, author={Yagita, Nobuaki}, title={Brown-Peterson and ordinary cohomology theories of classifying spaces for compact Lie groups}, journal={Trans. Amer. Math. Soc.}, volume={339}, date={1993}, number={2}, pages={781--798}, issn={0002-9947}, doi={10.2307/2154298}, } \bib{quillen-1971}{article}{ author={Quillen, Daniel}, title={The spectrum of an equivariant cohomology ring. I, II}, journal={Ann. of Math. (2)}, volume={94}, date={1971}, pages={549--572; ibid. (2) 94 (1971), 573--602}, issn={0003-486X}, doi={10.2307/1970770}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} The need for data integration from various sources is growing in information systems in order to improve data quality and reusability of the data, e.g. for retrieval or data analysis. The procedure of finding records in a database that correspond to the same entity (e.g. files, publications, data sets, ...) across another data set is typically called record linkage. Record linkage has been used in different domains~\cite{christen2012data,herzog2007data}. The application of record linkage in the domain of bibliographic data is known as citation matching or reference matching. High quality citation data for research publications is the basis for areas like bibliometrics but also for integrated digital libraries (DL). Citation data are valuable since they show the linkage between publications. The extraction of reference information from full text is called citation extraction. One key challenge for the aforementioned tasks is to match the extracted reference information to given DLs. The process of mapping an extracted reference string to one entity of a given DL is called citation matching \cite{lawrence1999digital}. Proper citation matching is an essential step for every citation analysis \cite{moed2006citation} and the improvement of citation matching leads to a higher quality of bibliometric studies. In the DL context, citation data is one important source of effective information retrieval, recommendation systems and knowledge discovery processes \cite{mayr-IJDL2018}. Despite the widely acknowledged benefits of citation data, the open access to them is still insufficient. Some commercial companies such as Clarivate Analytics, Elsevier or Google possess citation data at large scale and use them to provide services for their users. Recently, some initiatives and projects, e.g. the "Open Citations" project or the "Initiative for Open Citations", focus on publishing citation data openly\footnote{\url{https://i4oc.org/}}. The "Extraction of Citations from PDF Documents" (EXCITE)\footnote{\url{http://excite.west.uni-koblenz.de/website/}} project is one of these projects. The aim of EXCITE is extracting and matching citations from social science publications \cite{koerner2017} and making more citation data available to researchers. With respect to this objective, a set of algorithms for information extraction and matching has been developed focusing on social science publications in the German language. The shortage of citation data for the international and German social sciences is well known to researchers in the field and has itself often been subject to academic studies \cite{moed2006citation}. This paper is dedicated to the step of citation matching in the EXCITE pipeline and the responsible algorithm for this task is called EXmatcher\footnote{\url{https://github.com/exciteproject/EXmatcher}}. For the matching task in EXCITE, different target databases/DLs are defined: a) sowiport \cite{Hienert2015}, b) GESIS Search\footnote{\url{https://search.gesis.org/}} and c) Crossref\footnote{\url{https://search.crossref.org}}. The matching target for the study in this paper is solely sowiport.
Sowiport contains bibliographic metadata records of more than 9 million references on publications and research projects in the social sciences.\\ This paper makes the following contributions: \begin{itemize} \item Introduction of a gold standard for the citation matching task, \item Evaluation of the effect of different inputs in the citation matching steps, and \item Investigation of the effect of the utilization of reference segmentation probabilities as features in the citation matching procedure. \end{itemize} The remainder of this paper is structured as follows. In Section 2, we organize the related work around the concepts of the record linkage pipeline known from~\cite{christen2012data}. Section 3 describes the set-up of citation matching in the EXCITE project. Section 4 is about creating a citation matching gold standard corpus and the evaluation of our algorithm with different configurations. Finally, Section 5 summarizes the key outcomes of our improvements on citation matching. \section{Related Work} Christen et al.~\cite{christen2012data} suggested general steps for the matching process after reviewing different matching approaches: \textbf{(1)} Input pre-processing, \textbf{(2)} Blocking technique, \textbf{(3)} Feature extraction and classification. EXmatcher also follows these steps for citation matching and considers different input configurations to investigate their effects. In the following, we organize the related work according to these steps. \subsection{Input pre-processing} As the first step, input data need to be pre-processed in a way that they become suitable for the matching algorithm. To identify similar strings during all parts of the matching process, a common method to increase robustness is to normalize the input strings. A simple normalization is to lowercase the input string and remove punctuation and stop words. If an algorithm depends on reference segments for matching, these data need to be extracted from the reference strings. PDFX \cite{constantin2013pdfx}, Exparser~\cite{jcdlyezd}, GROBID \cite{lopez2009grobid}, ParsCit \cite{councill2008parscit} are a few examples of tools that perform reference segmentation. Wellner et al.~\cite{wellner2004integrated} investigated the effect of extraction probabilities on citation matching by considering different numbers of best Viterbi segmentations. EXmatcher considers only the best Viterbi segmentation and uses the probability of each segment in the feature vector provided to a binary classifier for the citation matching task. Phonetic encoding is another technique used in this step; the common idea behind all phonetic encoding functions is that they attempt to convert a string into a code based on its pronunciation~\cite{christen2006comparison}. Phonetic algorithms are mainly used for name segments. Pre-processing functions can also be used in other steps. For example, data which have been prepared by phonetic functions can be used as blocking keys in the indexing step since indexing brings similar values together. These techniques can also be used in the feature extraction step to generate vectors of features for classifiers. This encoding process is often language dependent. The Soundex algorithm was developed by Russell and Odell in 1918~\cite{odell1918soundex} for English language pronunciation. Phonex~\cite{lait1996assessment}, NYSIIS~\cite{taft1970name}, and Cologne functions~\cite{postel1969kolner} are some other examples of phonetic functions.
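To make the idea of phonetic keys concrete, the following minimal Python sketch implements a simplified Soundex-style encoding (function names, the reduced rule set and the example surnames are illustrative only; production systems typically use library implementations and, for German data, the Cologne phonetics discussed next): \begin{verbatim}
def simple_soundex(name: str) -> str:
    """Simplified Soundex: first letter plus three digits encoding consonant groups."""
    groups = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
              **dict.fromkeys("dt", "3"), "l": "4",
              **dict.fromkeys("mn", "5"), "r": "6"}
    letters = [ch for ch in name.lower() if ch.isalpha()]
    if not letters:
        return ""
    digits, prev = [], groups.get(letters[0], "")
    for ch in letters[1:]:
        code = groups.get(ch, "")
        if code and code != prev:   # drop immediately repeated codes
            digits.append(code)
        if ch not in "hw":          # 'h' and 'w' do not separate equal codes
            prev = code
    return (letters[0].upper() + "".join(digits) + "000")[:4]

# Similar sounding surnames receive the same key and thus end up in the same block:
assert simple_soundex("Meier") == simple_soundex("Mayer") == "M600"
\end{verbatim}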
The Cologne phonetics is based on the Soundex algorithm and is optimized for the German language. We also used the Cologne phonetic function in our implementation since our main focus was on German-language papers. \subsection{Blocking Technique} The next step is the blocking technique in order to decrease the number of pairs required to be compared. Imagine we need to match a set of $n$ references extracted from publications to a bibliographic database with $m$ entries. In a naive way, comparisons of every reference with every entry in the database are required, which results in a complexity of $n\times m$. Considering a set of 100,000 references and a database with 10 million bibliographic entries this results in $10^{12}$ comparisons. In the blocking approach, we split target and source into blocks of data depending on a common attribute or combination of attributes. After finding corresponding blocks in source and target we reduce the number of necessary comparisons to the number of combinations between corresponding blocks. For example, if we have a reference with "2001" as publication year, it is not necessary to compare this reference with all records in the target database. We only need to compare it to the entries in the block of records published in 2001. Related works (e.g.,~\cite{olensky2016evaluation}) even use blocks covering one year before and after that year. Several blocking or indexing techniques have been introduced so far~\cite{christen2012data}. As an example, D-Dupe~\cite{kang2008interactive} is a tool implemented for data matching and network visualizations. D-Dupe implemented an indexing technique based on standard blocking~\cite{fellegi1969theory}. Hernandez et al. suggested a sorted neighborhood approach~\cite{hernandez1995merge,hernandez1998real}. This technique, instead of generating a key for each block, sorts the data for matching based on a 'sorting key'. In suffix- or q-gram-based indexing approaches there is a higher chance to have correct matches in the same block since they are designed to handle different forms of entities and errors. In the citation matching field, Fedoryszak et al. presented a blocking method based on hash functions~\cite{fedoryszak2014efficient}. Another research field deals with the identification of efficient blocking keys. Koo et al. tried to find the best combination of citation record fields~\cite{koo2011effects} that helps increase citation matching performance. \subsection{Feature Extraction and Classification} In the third step of the citation matching process, each candidate record pair (i.e., the reference (string and related segments) and each item retrieved by blocking) is compared using a variety of attributes and comparison functions. The output of this step is a feature vector for each pair. In the final step, each compared candidate record pair is classified into one of the classes (i.e., match, non-match) using the related feature vector. Comparison functions such as Jaro-Winkler, Jaccard, or Levenshtein are often used for analyzing textual values. As an example, the D-Dupe tool includes string comparison functions such as Levenshtein distance, Jaro, Jaccard, and Monge-Elkan~\cite{kang2008interactive}. For the classification step of citation matching, the reference can be represented by a reference string or by extracted segments. A combination of both is also possible, as we show in this work.
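As an illustration of how such comparison functions yield feature values, the following minimal Python sketch (standard library only; function names and example strings are illustrative) computes a token-based Jaccard score and a character-based similarity for a candidate pair: \begin{verbatim}
import difflib
import re

def tokens(text: str) -> set:
    """Lowercase word tokens of a string."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the token sets of two strings."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def char_similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]; difflib's ratio serves here as a
    stand-in for normalized edit-distance measures such as Levenshtein."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

ref_title = "Data Matching, Concepts and Techniques"
candidate_title = "Data matching: concepts and techniques for record linkage"
feature_vector = [jaccard(ref_title, candidate_title),
                  char_similarity(ref_title, candidate_title)]
print(feature_vector)
\end{verbatim}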
Foufoulas et al.~\cite{foufoulas2017high} suggested an algorithm which matches reference strings without reference segmentation. Their approach first tries to detect the reference section by some heuristic and then attempts to identify the title of a record in the target repository in the reference section. Finally, it validates this match with more metadata of the record in the target repository. Their title detection and citation validation steps are mostly based on the combination of simple search and comparison functions. One of the classification approaches is threshold-based. Here, the similarity between the vectors of two items is calculated (e.g., using the cosine similarity) and if the similarity score is higher than a predefined threshold, the two items are matched. A rule-based classification employs some rules for classification~\cite{cohen2000data,hernandez1998real,naumann2010introduction}. These rules consist of a combination of smaller parts and the links between these parts are the logical operators "AND", "OR" and "NOT". These rules define the similarity of pairs. In the optimal case, each rule in a set of rules should have a high precision and recall~\cite{mining2006data}. Stricter or more specific rules usually have high precision, while more general or lax rules often have low precision but high recall. The iterative, rule-based citation matching algorithm of CWTS (Center for Science and Technology Studies)~\cite{olensky2016evaluation} relies on a series of matching rules. These rules are applied iteratively in decreasing order of strictness. The citation matching algorithm starts with the most restrictive matching rules (e.g., exact match on first author, publication year, publication title, volume number, starting page number, and DOI). Afterward, it proceeds with less restrictive matching rules (e.g. match on Soundex encoding of the last name of the first author, publication year plus or minus one, volume number, and starting page number). The less restrictive matching rules allow for various types of inaccuracies in the bibliographic fields of cited references. In all rules, the Levenshtein distance is used to match the publication name of a cited reference to the publication name of a cited article. Viewing probabilistic record linkage from a Bayesian perspective has also been discussed by Fortini et al.~\cite{fortini2001bayesian} and Herzog et al.~\cite{herzog2007data}. If training data are available, then a supervised classification approach can be employed. Many binary classification techniques have been introduced~\cite{mining2006data,mitchell1997artificial}, and many of these techniques are used for matching. The decision tree is one of these supervised classification techniques~\cite{mining2006data}. As an example, Cochinwala et al.~\cite{cochinwala2001efficient} built a training set and trained a Regression Tree (CART) classifier~\cite{Breiman1984Classification} for data matching. The TAILOR tool~\cite{elfeky2002tailor} for data matching uses, e.g., an ID3 decision tree. The Support Vector Machine (SVM) classification algorithm~\cite{Vapnik2000} is based on the idea of mapping the input data of the classifier into a higher dimensional vector space using a kernel function. This is done to be able to separate samples for the target classes using a hyperplane even if this is not possible in the lower dimension. As a large margin classifier, the SVM maximizes during training the distance between the training samples and the hyperplane.
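For illustration, the following minimal scikit-learn sketch trains such a binary match/non-match classifier on pairwise feature vectors and returns a match probability for a new candidate pair (the feature values and their interpretation are invented for the example; the actual feature set used by EXmatcher is described in the next section): \begin{verbatim}
import numpy as np
from sklearn.svm import SVC

# Toy feature vectors for (reference, candidate) pairs:
# [author similarity, title similarity, year match, page match]
X_train = np.array([
    [0.9, 0.8, 1.0, 1.0], [0.8, 0.9, 1.0, 0.0], [0.9, 0.7, 1.0, 1.0],
    [0.7, 0.9, 0.0, 1.0], [0.8, 0.8, 1.0, 1.0],
    [0.2, 0.3, 0.0, 0.0], [0.1, 0.4, 1.0, 0.0], [0.3, 0.2, 0.0, 0.0],
    [0.2, 0.5, 0.0, 0.0], [0.4, 0.1, 1.0, 0.0],
])
y_train = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # 1 = match, 0 = non-match

# RBF-kernel SVM; probability=True enables probability estimates (Platt scaling),
# so each candidate pair receives a match probability instead of only a label.
clf = SVC(kernel="rbf", probability=True, random_state=0)
clf.fit(X_train, y_train)

candidate_pair = np.array([[0.7, 0.85, 1.0, 1.0]])
print(clf.predict(candidate_pair), clf.predict_proba(candidate_pair))
\end{verbatim}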
Fedoryszak et al.~\cite{fedoryszak2013large} presented a citation matching solution using Apache Hadoop. Their algorithm is based on reference segments and also uses an SVM to confirm the status of items (i.e., match or non-match) based on the created features. \section{Matching Procedure in EXCITE} \subsection{Input Data for Matching} \label{sssec:DPMEXcite} For matching we used two types of information from each reference: the raw reference string and structured information (i.e. segments). The segmentation is done with Exparser\footnote{\url{https://github.com/exciteproject/Exparser}} which is a CRF-based algorithm \cite{jcdlyezd}. The output of Exparser includes a probability for each predicted segment. This information is taken into account as additional information to enhance the results of the matching procedure. To enhance the results for the publication year information we extract year mentions from the raw reference strings with a regular expression independently from the parser. We also remove extra characters in the year segment (e.g. b in 1989b). As a last pre-processing step we combined volume and issue into one segment called number because during parsing the issue was often recognized as a volume and vice versa. \subsection{Blocking Step} \label{sssec:bsu} We used the search platform Solr\footnote{\url{http://lucene.apache.org/solr/}} for blocking. For each reference, EXmatcher retrieves the corresponding block with the help of blocking queries. The whole blocking procedure is described in Algorithm 1. \begin{algorithm} \KwData{Pre-processed reference $r$, indexed bibliographic database $D$, cutoff parameter $c$} \KwResult{A set of suggested matching records $S$} \BlankLine Generate a query set $Q$ based on segments or reference string\; Initialize an empty suggestion set $S$\; \ForEach{query q in query set $Q$}{ Retrieve ranked result list $Rl$ with query $q$\; \If{size of result list $Rl$ $>$ 0}{ Cut off ranked result list at position $c$\; Join reduced list $Rl$ to $S$\; } } \caption{Blocking step for matching in EXmatcher} \end{algorithm} First, queries are formulated with the help of the parsed segments and the raw reference strings. For this we used the operators OR and AND from the Solr query syntax\footnote{\url{https://lucene.apache.org/solr/guide/6\_6/query-syntax-and-parsing.html}}. Additionally, we use fuzzy search ($\sim$-operator) which reflects a fuzzy string similarity search based on the Levenshtein distance. The output of the blocking step is a ranked list of retrieved items from the target database. The items are ranked by the Lucene score based on tf/idf\footnote{https://lucene.apache.org/core/7\_0\_0/core/index.html?org/apache/lucene/search/\\similarities/TFIDFSimilarity.html}. To get the best trade-off between retrieving all possible matching items and the reduction of necessary comparisons in the following classification task we identified two opportunities for influence. One is varying the query and selecting the best query formulation. The other is the selection of a cut-off threshold which determines how many of the retrieved items per query are used for further processing. As mentioned, first, queries based on segment combinations have to be generated. For six segments (i.e., 1-Author, 2-Title, 3-Year, 4-Page, 5-Number (Volume/Issue), 6-Source) this results in a maximum of 63 segment combinations. Each query generated from one of these combinations requires at least one correct piece of information for each of the segments queried.
For example, if year of publication and authors' names are used, one of the author names has to be correct and also the year of publication. For title and source we used a fuzzy query on the whole segment string. For numbers at least one found number has to be in the volume or the issue field of the record in our database. To exclude poorly performing segment combinations for query generation, we measure the precision at one (P@1) of the queries on our gold data. We only select segment combinations where at least 60\% of the retrieved items are a correct match. This reduces the number of maximum combinations we consider for query generation by 25\% to 48. As an alternative strategy we generated queries only from the reference strings without using information from the segmentation. This strategy tries to deal with the problem that title information is often not correctly identified during segmentation. But since the title is the most effective field for matching, the following approach is used which can act independently of the quality of the segmentation. For this we consider all tokens of the reference string as potentially including title information. The idea is to formulate a bigram search of the whole reference string. The resulting query leads to results which need to include at least one bigram of the reference string in the title field, and the more bigrams of the reference string are included in the title, the more preferred the results are. Therefore a query based on only these bigrams of the reference string will be added to the set of queries. In addition, to increase the precision, a query based on year and bigrams of the reference string will also be considered. For this, the year information extracted with a regular expression is taken into account. The effect on blocking for the two strategies of query generation and even a comparison with a mixture of both strategies is described in Section~\ref{ssec:evaluation-blocking}. \subsection{Classification for citation matching task} \label{sssec:cfcmt} After retrieving candidates for matches with our blocking procedure we need to decide which of the found candidates our system identifies as a match, i.e., whether a retrieved item is a match and hence the reference and the entry in the database represent the same entity. For this we train and evaluate a binary classifier which is able to judge a pair of reference and match candidate as a match or non-match. It is worth noting that our approach is able to handle duplicates in our reference database. The crucial step for building this classifier is feature selection. We combine features generated from the raw reference string and from the segmentation. One novelty of our approach is to test the usefulness of utilizing the certainty of our parser for the detected segments as an additional input feature for our classifier. The output of Exparser contains for each token of all segments a probability value reflecting the certainty of the model. If we have a high probability for a segment, the chance of having a wrong predicted label is low. Therefore, we expect that the usage of features reflecting these probabilities will have a noticeable effect on the performance of citation matching.
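The following minimal Python sketch (names and values are illustrative and do not represent the full EXmatcher feature set) shows one way such probabilities can be folded into a similarity feature, namely the probability-weighted Jaccard score for author last names that is described in prose below; the same numerical example, 0.5 versus about 0.42, is worked out there: \begin{verbatim}
def weighted_jaccard(ref_items, cand_items, probabilities):
    """Jaccard score in which every reference item found in the candidate
    record contributes its segmentation probability instead of 1."""
    union = set(ref_items) | set(cand_items)
    if not union:
        return 0.0
    overlap = sum(probabilities.get(item, 1.0)
                  for item in set(ref_items) & set(cand_items))
    return overlap / len(union)

# Last names extracted from a reference string, with their CRF probabilities ...
ref_authors = ["mayr", "koerner"]
seg_probs = {"mayr": 0.8, "koerner": 0.9}
# ... and last names of a candidate record retrieved in the blocking step.
cand_authors = ["mayr", "koerner", "mueller", "schmidt"]

plain = len(set(ref_authors) & set(cand_authors)) / \
        len(set(ref_authors) | set(cand_authors))
weighted = weighted_jaccard(ref_authors, cand_authors, seg_probs)
print(plain, weighted)   # 0.5 and (0.8 + 0.9) / 4 = 0.425
\end{verbatim}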
The first group of features is based on the comparison of the reference segments and the retrieved items in the blocking step: \begin{itemize} \item Some examples of features based on the author segment: \begin{enumerate} \item Levenshtein score (phono-code and exact), \item Segmentation probability of first author (surname) \end{enumerate} \item Some examples of features based on titles and source: \begin{enumerate} \item Jaccard score (including segmentation probabilities), \item Levenshtein score (token and letter level) \end{enumerate} \item Some examples of features based on numbers, pages, and publication year: \begin{enumerate} \item Jaccard score, and \item Segmentation probability \end{enumerate} \end{itemize} An example for the usage of the probability is the extended version of the Jaccard score for author names. The Jaccard similarity for the last names is the size of the intersection of the last names over the size of the union of the sets of last names of the two records. If the size of the intersection of the last names of two records is 2 and the size of the union of them is 4, then the Jaccard score would be 0.5 ((1+1)/4). Our enhanced metric uses the extracted probabilities as weights in the intersection. If the probabilities of these items in the intersection of last names are 0.8 and 0.9, then the new Jaccard score would be about 0.42 ((0.8+0.9)/4). For the creation of the features of the second group all information based on segmentation is excluded. These are features based only on the comparison of the raw reference string with the information of the retrieved record. You can find some examples of this group in the following list: \begin{enumerate} \item Longest common sub-string of title and reference string, and \item Occurrence of the abbreviation of the source field of the indexed record (e.g., a journal abbreviation) in the reference string. \end{enumerate} \section{Evaluation} \label{SS:Evaluation} \subsection{Gold Standard for Matching Algorithm} \label{SS:gsfmE} The computation of off-line evaluation metrics such as precision, recall and F-measure needs a ground truth. A manually checked gold standard was generated to assess the performance of the algorithms. For creating this gold standard, we applied a simple matching algorithm based on blocking on a randomly selected set of reference strings from the EXCITE corpus. The EXCITE corpus contains the SSOAR\footnote{\url{https://www.gesis.org/ssoar/home/}} corpus (about 35k), the SOJ (Springer Online Journal Archives 1860-2001) corpus (about 80k), and sowiport papers (about 116k). We used queries based on different combinations of title, author and publication year segments and considered the top hit in the retrieved blocks based on the Solr score. The result was a document id (from sowiport) for each reference, if the approach could find any match. In the second step, these ids detected by the matching algorithm were completed by duplication information in sowiport to reach a list of all candidate match items for each reference. Afterwards, a trained human assessor checked the results. If the previous step led to an empty result set, the assessor was asked to retrieve a record based on manually curated search queries. These manual queries used the information from the correct segments by manually extracting them from the reference strings. If the corresponding item was found, it was added to the gold standard. It also happened that not only one match was found, but also duplicates. In this case the duplicates were also added as matching items.
When matching items were found in the previous step, the assessor checked this list to remove wrong items and add missing items. The result of this process is a corpus containing 816 reference strings. 517 of these items have at least one matched item in sowiport. We published this corpus and a part of the sowiport data (18,590 bibliographic items) openly for interested researchers in our Github repository\footnote{\url{https://github.com/exciteproject/EXgoldstandard/tree/master/Goldstandard\_EXmatcher}}. \subsection{Evaluation of Blocking Step} \label{ssec:evaluation-blocking} In this evaluation, three different configurations for the input of blocking (i.e., 1- using \textit{only reference strings}, 2- using \textit{only reference segments}, and 3- the \textit{combination of reference segments and strings}) were examined. In addition, the effect of the consideration of different numbers of top items from the blocking step was checked. Fig.~\ref{fig:precisionExmatchdup} shows that the precision curve of blocking based on reference strings is higher than those of the two other configurations. This is not a big surprise because using only reference strings in our approach means focusing on the title and year fields (as explained in Sections~\ref{sssec:DPMEXcite} and~\ref{sssec:bsu}) and the usage of these two fields retrieves items with high precision. On the one hand, considering more items of the blocking list decreases the precision. On the other hand, the recall shown in Fig.~\ref{fig:recallExmatchdup} reaches a score higher than 0.9 after consideration of the top 4 items of blocking. The highest recall has been achieved using the combination of reference strings and segments. Surprisingly, the curve of reference strings comes closer to that of the combination of reference strings and segments when more top items are considered in blocking and almost reaches it at 14 items. All three curves become almost steady after consideration of the top 11 retrieved items for each blocking query. \begin{figure}[h!] \centering \begin{minipage}{.5\textwidth} \includegraphics[width=\textwidth]{images/blocking-precision.png} \caption{Precision of blocking} \label{fig:precisionExmatchdup} \end{minipage}% \begin{minipage}{.465\textwidth} \centering \includegraphics[width=\textwidth]{images/blocking-recall.png} \caption{Recall of blocking} \label{fig:recallExmatchdup} \end{minipage} \end{figure} Since we have another step after blocking which improves the precision, the important point in blocking is to keep the recall high and at the same time to shrink the number of items for comparison. The precision of these three curves was not significantly different; therefore, the combination of reference strings and segments was picked in the blocking step to generate the input for the evaluation of the classification step. For the number of top items in blocking, which are used for further processing in our pipeline, five was selected because considering more than five items does not lead to a higher recall value. The selected configuration leads to a number of 1 to 39 retrieved items per reference. The average number was 14 records with a standard deviation of 6.5. For the 816 references of the gold standard 10,997 match candidates are generated with our configuration. For each pair of reference and corresponding match candidate in our reference database sowiport we know if it is a match or not based on our gold standard. In these 10,997 pairs, 1,026 (9.3\%) are correct matches and 9,971 (90.7\%) are non-matches.
After blocking, 507 reference strings have at least one correct match, and 302 references are left without any correct pair. This means that only ten references (1.2\%) which have at least one match in the gold standard did not pass blocking successfully, i.e., the blocking step could not suggest any correct match for them.
\subsection{Evaluation of Classification Step}
In this section we present the results of the classification task. We applied ten-fold cross-validation for testing different classifier and feature combinations. The blocking results for the 809 references were split into ten separate groups, and their related pairs were placed in the corresponding group to form the ten folds for cross-validation. Table~\ref{classeval} contains precision, recall and F-measure for the compared configurations.
\begin{table*}[h!] \centering \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline Ref\_String & Ref\_Segments & Seg\_probability & SVM & Random Forest & Precision & Recall & F1 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & - & 0.947 * & 0.904 & 0.925 * \\ \hline \checkmark & \checkmark & \checkmark & - & \checkmark & 0.938 & 0.906 & 0.921 \\ \hline \hline \checkmark & \checkmark & - & \checkmark & - & 0.941 & 0.908 * & 0.924 \\ \hline \checkmark & \checkmark & - & - & \checkmark & 0.923 & 0.899 & 0.910 \\ \hline \hline \hline - & \checkmark & \checkmark & \checkmark & - & 0.942 & 0.865 & 0.901 \\ \hline - & \checkmark & \checkmark & - & \checkmark & 0.918 & 0.874 & 0.895 \\ \hline \hline - & \checkmark & - & \checkmark & - & 0.836 & 0.869 & 0.852 \\ \hline - & \checkmark & - & - & \checkmark & 0.876 & 0.883 & 0.879 \\ \hline \hline \hline \checkmark & - & - & \checkmark & - & 0.843 & 0.903 & 0.871 \\ \hline \checkmark & - & - & - & \checkmark & 0.879 & 0.855 & 0.866 \\ \hline \end{tabular} \caption{Evaluation macro-metrics of different classifiers, including duplicate matches for each reference string; the highest value in each column is marked with an asterisk (*).} \label{classeval} \end{table*}
The results show that the SVM classifier using the combination of reference string and segment features, with segmentation probabilities considered, achieves the highest F1 and precision scores. The second highest F1 score belongs to the SVM classifier which uses the combined input features, but this time without the segmentation probabilities. Our interpretation is that using the combination of inputs (reference segments and strings) has the main impact on the accuracy scores. The average number of references per fold, grouped by the number of correct match predictions of the classifier with the highest F1 score, is shown in Fig.~\ref{fig:duppre}.
\begin{figure} \centering \begin{minipage}[t]{.47\textwidth} \includegraphics[width=\textwidth]{images/numberofgoldstandard.png} \caption{Frequency of references in gold standard with the number of matches in target database sowiport} \label{fig:numberofgoldstandard} \end{minipage}%
\hfill \begin{minipage}[t]{.47\textwidth} \centering \includegraphics[width=\textwidth]{images/matching-avg-predict.png} \caption{Average number of references in folds with true prediction of match class} \label{fig:duppre} \end{minipage} \end{figure}
In most real-world scenarios it is only necessary to find exactly one match in a bibliographic database. Because of this, we evaluated our matching algorithm again; in this evaluation, only one correct match has to be found for each reference.
For this purpose, we pick, for each reference, the match pair with the highest probability generated by the classifier\footnote{The decision threshold between the two classes is 0.5, the default threshold for the SVM classifier in the scikit-learn Python package.}. For this evaluation, we used the combination of features based on reference strings and segments as the input (including segment probabilities). In this case, the average precision and recall scores for the SVM algorithm are 0.97 and 0.92. For the random forest algorithm, the average precision and recall scores are 0.96 and 0.93. To calculate a final score for the complete pipeline, the ten references which could not pass the blocking step must also be considered. Since the consideration of these items changes the number of false negatives, the effect is seen in the recall score. Consequently, the recall of the pipeline with SVM is 0.913, and that of the pipeline using the Random Forest classifier is 0.917. These evaluation scores are included in Table~\ref{my-labeSvm67}.
\begin{table}[h!] \centering \caption{Ten-fold cross-validation results of the SVM and Random Forest classifiers when only one match has to be found for each reference} \label{my-labeSvm67} \begin{tabular}{|l|c|c|c|c|c|c|} \hline & Precision & Recall & F1 & Precision-Pipeline & Recall-Pipeline & F1-Pipeline \\ \hline SVM & 0.972 & 0.926 & 0.948 & 0.972 & 0.913 & 0.941 \\ \hline Random Forest & 0.967 & 0.931 & 0.948 & 0.967 & 0.917 & 0.941 \\ \hline \end{tabular} \end{table}
\section{Discussion and Conclusions}
In this paper, we explained our approach for handling the task of citation matching in the EXCITE project. The implemented algorithm (EXmatcher) follows the classic solution for this task, which contains three steps (i.e., 1- data normalization and cleaning, 2- blocking, 3- feature vector creation and classification). We analyzed the impact of different inputs (i.e., reference strings, segments and the combination of both) on the performance of our citation matching algorithm. In addition, we investigated the benefit of using segment probabilities in the citation matching task. The segmentation probabilities are considered directly and as weights for creating specific features for the classifier of EXmatcher. Using the combination of reference strings and segments as input with an SVM classifier outperforms the other configurations in terms of F1 and precision scores. Segment probabilities have a positive impact on the precision score when the citation matching algorithm uses segments as input. For example, in the configuration using only segments as the input with SVM, segment probabilities improve the precision by about 11\% (Table~\ref{classeval}). The combination of reference strings and segments can also cover the effect of considering segment probabilities; that is, including or excluding segment probabilities does not noticeably affect the accuracy when the citation matching algorithm uses the combination of the two inputs. The effect of utilizing different classifiers on the results depends strongly on other parameters of the citation matching configuration, such as the input type (i.e., reference strings, segments or both) and the consideration of segment probabilities. The combination of reference strings and segments as the input for citation matching shows a higher recall than using either of them alone. Still, ten references which have at least one match could not pass the blocking step, even with the combination of both inputs.
One reason for this is that, when generating queries, EXmatcher combines the information from reference strings and from reference segments in one query and links them with the logical OR operator. Decreasing the number of failures in the blocking step leads to a higher recall. One solution could be to send separate queries based on reference strings and based on segments, and to let the algorithm combine the retrieved items afterwards. In addition, more fields could be extracted from the reference string input (such as pages, issue, volume and DOI) with some rule-based steps and used in blocking. The citation matching approach which has been described and evaluated in this paper is implemented in a demonstrator which connects all important steps, from reference extraction and reference segmentation to matching, in the EXCITE toolchain (see \cite{hosseini2019}, \url{http://excite.west.uni-koblenz.de/excite}).
\section{Introduction} In recent years, the use of wavefunction-based post-Kohn--Sham or post-Hartree--Fock methods to solve problems in materials science has proliferated. \cite{muller_wavefunction-based_2012} This is in part driven by an interest in obtaining precise energies (accurate to within 1 mHa) for complex systems using hierarchies of methods found in quantum chemistry such as coupled cluster theory. While growing in popularity, wavefunction methods have yet to see widespread adoption, in large part due to their significant computational cost scaling with system size. This is especially of note in coupled cluster theory using a plane wave basis, and as a result, some authors are seeking methods to control finite size errors in order to run calculations using smaller system sizes.~\cite{gruber_applying_2018} Finite size errors arise when attempts are made to simulate an infinite system Hamiltonian with a periodic supercell containing a necessarily finite particle number.\cite{fraser_finite-size_1996,drummond_finite-size_2008} The finite size of a supercell places a limitation on the minimum momenta in Fourier sums (e.g., with a cubic box of length $L$, the smallest momentum transfer is $2\pi/L$). These limitations ultimately lead to errors in the correlation energy; \cite{gruber_applying_2018,ruggeri_correlation_2018} this has been attributed to long range van der Waals forces. \cite{gruber_applying_2018,gruber_ab_2018} Since these finite size errors are large and slowly converging with increasing supercell size, {which has been analyzed in detail for coupled cluster theory, \cite{mcclain_gaussian-based_2017}} there has been significant interest in developing wavefunction methods with reduced computational cost to circumvent finite size error and allow the treatment of larger supercells. These include embedding methods,\cite{sun_quantum_2016} such as density matrix embedding,\cite{knizia_density_2012,knizia_density_2013,bulik_electron_2014,bulik_density_2014,ricke_performance_2017,zheng_cluster_2017,pham_can_2018} wavefunction-in-DFT embedding,\cite{henderson_embedding_2006,tuma_treating_2006,gomes_calculation_2008,sharifzadeh_all-electron_2009,huang_quantum_2011,libisch_embedded_2014, manby_simple_2012,goodpaster_accurate_2014,chulhai_projection-based_2018} electrostatic embedding,\cite{hirata_fast_2005,dahlke_electrostatically_2007,hirata_fast_2008,leverentz_electrostatically_2009,bygrave_embedded_2012} QM/MM-inspired schemes,\cite{shoemaker_simomm:_1999,sherwood_quasi:_2003,herschend_combined_2004,beran_predicting_2010,chung_oniom_2015} and others.\cite{eskridge_local_2018,lan_communication:_2015,rusakov_self-energy_2019,voloshina_embedding_2007,masur_fragment-based_2016} Local correlation methods\cite{collins_energy-based_2015,usvyat_periodic_2018} such as fragment-based schemes,\cite{gordon_fragmentation_2012,li_generalized_2007,li_cluster--molecule_2016,rolik_general-order_2011,li_divide-and-conquer_2004,kobayashi_alternative_2007,kristensen_locality_2011,ghosh_noncovalent_2010,kitaura_fragment_1999,fedorov_extending_2007,netzloff_ab_2007,ziolkowski_linear_2010} incremental methods,\cite{stoll_correlation_1992,paulus_method_2006,friedrich_fully_2007,stoll_approaching_2012,friedrich_incremental_2013,voloshina_first_2014,kallay_linear-scaling_2015,fertitta_towards_2018} and hierarchical methods,\cite{deev_approximate_2005,manby_extension_2006,nolan_calculation_2009,collins_ab_2011} break the system into smaller subsystems, then extrapolate or stitch together the energies.
Some methods take advantage of range separation\cite{toulouse_adiabatic-connection_2009,bruneval_range-separated_2012,shepherd_range-separated_2014} or other distance-based schemes\cite{spencer_efficient_2008,maurer_efficient_2013,kats_sparse_2013,kats_speeding_2016,ayala_extrapolating_1999} to reduce computational cost. In addition to work on developing or modifying electronic structure methods, much work on reducing the cost of wavefunction methods has been focused on modifying basis sets in order to accelerate convergence and decrease computation time. Local orbital methods have been popular,\cite{pisani_local-mp2_2005,ayala_atomic_2001,usvyat_periodic_2015,werner_fast_2003,flocke_natural_2004,werner_scalable_2015,rolik_efficient_2013,forner_coupled-cluster_1985,schutz_low-order_2000,neese_efficient_2009,sun_gaussian_2017,booth_plane_2016,blum_ab_2009,subotnik_local_2005} often based on the local ansatz of Pulay and Saebo\cite{saebo_local_1993} or Stollhoff and Fulde.\cite{stollhoff_local_1977} Other common methods include progressive downsampling,\cite{shimazaki_brillouin-zone_2009,hirata_fast_2009,ohnishi_logarithm_2010} downfolding,\cite{purwanto_frozen-orbital_2013} use of explicitly-correlated basis sets\cite{adler_local_2009,shiozaki_communications:_2010,gruneis_explicitly_2013,usvyat_linear-scaling_2013,gruneis_efficient_2015} or natural orbitals,\cite{gruneis_natural_2011} and tensor manipulations.\cite{hohenstein_tensor_2012,benedikt_tensor_2013,hummel_low_2017,peng_highly_2017,motta_efficient_2018} Discussion of the details and relative merits of these methods is beyond the scope of this paper; for a review, we direct the interested reader to Refs. \onlinecite{huang_advances_2008,muller_wavefunction-based_2012,beran_modeling_2016,andreoni_coupled_2018}. However, there has been some work on developing corrections for finite size errors.\cite{fraser_finite-size_1996,kent_finite-size_1999,kwee_finite-size_2008,drummond_finite-size_2008,holzmann_theory_2016,liao_communication:_2016} Many-body methods can sometimes be integrated to the thermodynamic limit (TDL),\cite{gell-mann_correlation_1957,nozieres_correlation_1958,onsager_integrals_1966,bishop_electron_1982,bishop_electron_1978,bishop_overview_1991,ziesche_selfenergy_2007} allowing for the derivation of analytic finite-size correction expressions.\cite{chiesa_finite-size_2006} Several studies from the last year have particular relevance to our work here. Gr{\"u}neis \emph{et al.}\cite{gruber_applying_2018,gruber_ab_2018} employed a grid integration within periodic coupled cluster for \emph{ab initio} Hamiltonians with applications to various solids. In another study, Alavi \emph{et al.}\cite{ruggeri_correlation_2018} devised a novel extrapolation relationship that links different electron gas calculations through the density parameter. Both of these papers use a technique known as twist averaging to try to remove finite size error. Twist averaging is a method that attempts to control finite size errors by first offsetting the $k$-point grid by a small amount, ${\bf k}_s$, and then averaging over all possible offsets.\cite{lin_twist-averaged_2001} { We refer to ${\bf k}_s$ here as a twist angle. One of the main purposes of twist averaging is to provide for a smoother extrapolation to the thermodynamic limit by reducing severe energy fluctuations as the particle number varies.
} When performed with a fixed particle number and box length, this process is referred to as twist averaging in the canonical ensemble, which is what we study here. When employed in stochastic methods, such as variational Monte Carlo,\cite{lin_twist-averaged_2001} diffusion Monte Carlo\cite{drummond_finite-size_2008} or full configuration interaction quantum Monte Carlo,\cite{ruggeri_correlation_2018,shepherd_quantum_2013} the grid can be stochastically sampled at the same time as the main stochastic algorithm, and both stochastic error and error in twist-averaging related to approximate integration can be removed at the same time. As a result, the scaling with the number of twist angles sampled is extremely modest. Unfortunately, the same cost savings cannot be realized for deterministic methods. In this case, in order to achieve a reasonable estimate for the average, one must use a large number of individual energy calculations. This results in the cost scaling linearly with the number of twist angles used, {although the lessening of finite size effects with rising electron number would alleviate this scaling to some extent.\cite{mcclain_gaussian-based_2017}} Here, we seek to remedy the linear scaling of twist averaging for deterministic methods by devising a way to provide an energy that is as accurate as twist-averaging, but with single-calculation cost. {\color{black} In principle, it is possible to find a single twist angle which exactly reproduces the total twist-averaged energy by recognizing that it is an integral of the energy over the twist angles for a system. This was the same logic used in analysis by Baldereschi to find a special $k$-point\cite{baldereschi_mean-value_1973} and has been used by others in the QMC community to find a special twist angle.\cite{dagrada_exact_2016,Rajagopal_quantum_1994,Rajagopal_variational_1995} We are motivated similarly and wish to find a single twist angle that yields an energy approximately equal to the full twist-averaged energy for CCD and related wavefunction methods.} { We take advantage of the similarity between the MP2 and CCD correlation energy expressions, using the much cheaper MP2 method to find a single twist angle that produces a system with the most similar number of allowed excitations to the twist-averaged system. We refer to this set of allowed excitations as the `connectivity'. We then use this twist angle to calculate the CCD energy, which is in good agreement with the fully twist-averaged CCD energy. Finally, we compare our energies to those obtained using one twist angle at the Baldereschi point.\cite{baldereschi_mean-value_1973} } { We do not seek to completely remedy the whole of the finite size error, instead noting that other authors have come up with corrections or extrapolations that can be used after twist-averaging is applied.\cite{chiesa_finite-size_2006,drummond_quantum_2009,gruber_ab_2018}} \section{Twist averaging \& Connectivity} Both continuum/real-space and basis-set twist averaging have been used effectively in quantum Monte Carlo calculations;\cite{lin_twist-averaged_2001,drummond_finite-size_2008,ruggeri_correlation_2018,shepherd_quantum_2013} however, twist averaging remains relatively rare in coupled cluster calculations. 
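To make the procedure concrete, a minimal sketch of canonical-ensemble twist averaging for a deterministic method is given below. This is our own illustration rather than code from the repository used in this work; \texttt{energy\_at\_twist} is a placeholder standing in for a full HF, MP2 or CCD calculation at a given twist angle, and the standard error is estimated in the same fashion as in the figure captions that follow.
\begin{verbatim}
import numpy as np

# Minimal sketch of canonical-ensemble twist averaging (fixed N and L).
# `energy_at_twist(ks)` is a placeholder for a full deterministic
# calculation (HF, MP2, CCD, ...) performed at twist angle ks.

def twist_average(energy_at_twist, L, n_twists=100, seed=0):
    rng = np.random.default_rng(seed)
    # Each component of the twist angle is drawn uniformly from the first
    # Brillouin zone of the simulation cell, i.e. from (-pi/L, pi/L).
    twists = rng.uniform(-np.pi / L, np.pi / L, size=(n_twists, 3))
    energies = np.array([energy_at_twist(ks) for ks in twists])
    mean = energies.mean()
    stderr = np.sqrt(energies.var() / n_twists)
    return mean, stderr
\end{verbatim}
Every call to \texttt{energy\_at\_twist} is a full calculation, which is the origin of the $N_\mathrm{s}\,\mathcal{O}\mathrm{[CCD]}$ cost discussed below.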
In \reffig{fig:TADemonstration}, the total $\Gamma$-point CCD energy ($N=38$ to $N=922$) and twist-averaged CCD energy ($N=38$ to $N=294$) are plotted alongside the extrapolation to the TDL for the uniform electron gas ($0.609(3)$ Ha/electron, where the error in the last digit is in parentheses). The CCD calculation is performed in a finite basis that is analogous to a minimal basis.\cite{shepherd_many-body_2013} The $\Gamma$-point energy is highly non-monotonic; it does not fit well with the extrapolation. The twist-averaged data shows a much better fit with the extrapolation, resulting in a better estimate of the TDL. The drawback of twist averaging, however, is that it costs $N_\mathrm{s}\,\mathcal{O}\mathrm{[CCD]}$ for $N_\mathrm{s}$ twist angles (here, 100). The twist-averaged energy becomes too costly to calculate with CCD for system sizes above 294 electrons.
\begin{figure} \includegraphics[width=0.49\textwidth,height=\textheight,keepaspectratio]{./Figure1.pdf} \caption{Comparison between the twist-averaged (TA) CCD energy and the $\Gamma$-point CCD energy for a uniform electron gas with $r_s=1.0$ as the system size changes (up to $N=294$ and $N=922$, respectively). In general, an extrapolation (here, red line) is performed to calculate the TDL energy. Twist averaging makes this extrapolation easier, because the noise around the extrapolation is smaller, leading to a smaller extrapolation error. Twist averaging is performed over 100 twist angles. Standard errors are calculated in the normal fashion for twist averaging, $\sigma\approx\sqrt{\mathrm{Var}(E_{\mathrm{CCD}}({{\bf k}_s})) / N_s}$ (these are too small to be shown on the graph; on average 0.2 mHa/el).} \label{fig:TADemonstration} \end{figure}
Figure \ref{fig:TADemonstration} is a clear statement of the problem we wish to resolve here. { Twist averaging resolves some finite size errors that are present at an individual particle number $N$, and allows for improved extrapolation to the thermodynamic limit. } That said, the scaling with the number of twist angles is cost-prohibitive. We aim to develop an approximation to twist averaging that gives comparable accuracy at a fraction of the cost. We begin by analyzing how the Hartree-Fock energy and the MP2 correlation energy are modified by twist averaging. This analysis then allows us to build an algorithm that produces twist-averaged CCD accuracy at only MP2 cost. \subsection{Hartree-Fock and single-particle eigenvalues} {
\begin{figure} \includegraphics[width=0.49\textwidth,height=\textheight,keepaspectratio]{./Figure2.pdf} \caption{{The degeneracy pattern in the energy levels of the $\Gamma$-point calculation can be identified by plotting the HF eigenvalues in ascending order. Here, we show $N=14$ and $N=54$, two systems that are closed shell at the $\Gamma$-point. Averaging the eigenvalues in the manner described in the text removes these degeneracies. The gap between the eigenvalues themselves and across the band gap goes to zero as the TDL is approached, giving rise to the metallic character of the gas.}} \label{fig:AverageEigenvalues} \end{figure} }
A finite-sized electron gas at the $\Gamma$-point is only closed-shell at certain so-called magic numbers, which are determined by the symmetry of the lattice (for example $N=2$, 14, 38, and 54).
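These magic numbers follow directly from filling the degenerate shells of the simple-cubic reciprocal lattice. The short sketch below is our own illustration (not part of the production code); it groups plane-wave states by $|{\bf n}|^2$ and fills each shell with two electrons per state.
\begin{verbatim}
import itertools
import numpy as np

# Closed-shell ("magic") electron numbers of the Gamma-point UEG:
# group plane-wave states by |n|^2 and fill shells with 2 electrons/state.

def magic_numbers(n_max=3, n_shells=8):
    grid = range(-n_max, n_max + 1)
    n2 = [nx * nx + ny * ny + nz * nz
          for nx, ny, nz in itertools.product(grid, repeat=3)]
    _, counts = np.unique(n2, return_counts=True)   # degeneracy of each shell
    return (2 * np.cumsum(counts))[:n_shells].tolist()

print(magic_numbers())  # [2, 14, 38, 54, 66, 114, 162, 186]
\end{verbatim}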
{ One of the reasons that the $\Gamma$-point calculations are so noisy (\reffig{fig:TADemonstration}) is that there are degeneracies in the HF eigenvalues, which can be seen in \reffig{fig:AverageEigenvalues} and have long been recognized.\cite{drummond_quantum_2009} This can be partially remedied by modifying the Hartree--Fock eigenvalues. The starting point for this is} writing the HF energy as follows: \begin{equation} E_\mathrm{HF}({\bf k}_s)= \sum_{i} T_i ({\bf k}_s)- \frac{1}{2} \sum_{ij} v_{ijji}({\bf k}_s) \label{eq:fullHF} \end{equation} where $T_i$ is the kinetic energy of orbital $i$ and $v_{ijji}$ is the exchange integral between electrons in orbitals $i$ and $j$. Here, we have included the explicit form of the dependence on the twist angle, ${\bf k}_s$. The twist-averaged energy is found by summing \refeq{eq:fullHF} over all possible ${\bf k}_s$: \begin{equation} \langle E_\mathrm{HF} \rangle_{\bf{k}_s} = \frac{1}{N_\mathrm{s}}\sum_{{\bf k}_s}^{N_\mathrm{s}} \sum_{i} T_i ({\bf k}_s) - \frac{1}{N_\mathrm{s}}\sum_{{\bf k}_s}^{N_\mathrm{s}}\frac{1}{2} \sum_{ij} v_{ijji}({\bf k}_s) \end{equation} where $N_s$ indicates the number of twist angles used. Swapping the sums yields: \begin{equation} \langle E_\mathrm{HF} \rangle_{\bf{k}_s} = \sum_{i} \left[\frac{1}{N_\mathrm{s}} \sum_{{\bf k}_s}^{N_\mathrm{s}} T_i ({\bf k}_s) \right]- \frac{1}{2} \sum_{ij} \left[ \frac{1}{N_\mathrm{s}}\sum_{{\bf k}_s}^{N_\mathrm{s}} v_{ijji}({\bf k}_s) \right] . \label{eq:ks_sums} \end{equation} Therefore, twist averaging the HF energy is numerically identical to twist averaging the individual matrix elements: \begin{equation} \langle E_\mathrm{HF} \rangle_{\bf{k}_s} = \sum_{i} \langle T_i \rangle_{\bf{k}_s} - \frac{1}{2} \sum_{ij} \langle v_{ijji} \rangle_{\bf{k}_s} \end{equation} Overall, then, we can use twist-averaged HF eigenvalues in place of twist-averaging the HF energy, obtaining a more reasonable density of states (\reffig{fig:AverageEigenvalues}). We will use this in our subsequent scheme. \subsection{Beyond Hartree--Fock} { The above approach does not generalize to correlated theories because they have more complex energy expressions. For example, the second-order M{\o}ller-Plesset theory (MP2) correlation energy averaged over all possible twist angles can be written: \begin{equation} \langle E_\mathrm{corr} \rangle_{\bf{k}_s}=\frac{1}{N_\mathrm{s}}\sum_{{\bf k}_s}^{N_\mathrm{s}} \frac{1}{4}\sum_{ijab} \bar{t}_{ijab}({\bf k}_s) \bar{v}_{ijab} ({\bf k}_s), \label{eq:Mp2} \end{equation} where $i$ and $j$ refer to occupied orbitals and $a$ and $b$ refer to unoccupied orbitals. The symbols $\bar{v}$ and $\bar{t}$ refer to the antisymmetrized electron-repulsion integral and amplitude respectively. For MP2: \begin{equation} \bar{t}_{ijab} ({\bf k}_s) \bar{v}_{ijab} ({\bf k}_s)= \frac{ |\bar{v}_{ijab}({\bf k}_s)|^2 }{\epsilon_i({\bf k}_s)+\epsilon_j({\bf k}_s)-\epsilon_a({\bf k}_s)-\epsilon_b({\bf k}_s)} \end{equation}} { Even though MP2 diverges in the thermodynamic limit, the energy expression (\refeq{eq:Mp2}) has a similar structure to coupled cluster theory, the random phase approximation, and even full configuration interaction quantum Monte Carlo. As such, we can make generalized observations using the MP2 energy expression, and then use these observations to derive a scheme to find an optimal ${\bf k}_s$ twist angle that works for all of these methods.} \subsection{The connectivity approach} The MP2 correlation energy can vary substantially as the twist angle is changed.
For example, in the $N=14$ electron system with a basis set of $M=38$ orbitals, the MP2 energy can vary between $-0.0171$ Ha/electron and $-0.0001$ Ha/electron. This arises, in particular, because the number of low-momentum excitations (minimum $|{\bf k}_i-{\bf k}_a|$) will vary significantly. Since the contribution of each excitation to the MP2 sum scales as $|{\bf k}_i-{\bf k}_a|^{-4}$, there is a rapid decay of an excitation's contribution to the correlation energy beyond the minimum vector. { This effect arises because, when the twist angle is changed,} different orbitals now fall into the occupied ($ij$) space, and different orbitals fall into the virtual ($ab$) space. This changes the value of the sum over both occupied and virtual orbitals, since many individual terms in the sum are now substantively different. We illustrate this using a diagram in the Supplementary Information. By contrast, the integrals themselves do not change; { to show this, the integral can be written: {\color{black} \begin{equation} v_{ijab}=\frac{4\pi}{L^3} \frac{1}{({\bf k}_i-{\bf k}_a)^2} \delta_{{\bf k}_i-{\bf k}_a , {\bf k}_b-{\bf k}_j} \delta_{\sigma_i \sigma_a}\delta_{\sigma_j \sigma_b} . \label{eq:ERIs} \end{equation} } The Kronecker deltas, $\delta$, ensure that momentum and spin symmetry (denoted $\sigma$) are conserved.} On changing ${\bf k}_p \rightarrow {\bf k}_p+{\bf k}_s$ for all ${\bf k}$'s, the difference in the denominator here does not change, since $({\bf k}_i+{\bf k}_s-{\bf k}_a-{\bf k}_s)^2=({\bf k}_i-{\bf k}_a)^2$. {\color{black} In general, our calculations were set up using details which can be found in our prior work, e.g., Ref.~\onlinecite{shepherd_convergence_2012}}. At this stage, we conjecture that \emph{if} one of the mechanisms by which twist averaging is affecting the MP2 energy (and other correlation energies) is to smooth out the inconsistent contributions between different momenta, \emph{then} it might be possible for us to find a `special twist angle' where the number of low-momentum states for that single twist angle is a good match to the average number of momentum states across all twist angles. { Further, we will show this special twist angle is transferable to other, more sophisticated methods such as coupled cluster doubles theory. } To find this special twist angle, we proceed as follows: \begin{enumerate} \item For a given twist angle ${\bf k}_s$, loop over the same $ijab$ as the MP2 sum $\sum_{ijab}$. For each $ijab$ set: \begin{enumerate} \item Determine the momentum transfer $x=|{\bf n}_i-{\bf n}_a|^2$ where ${\bf n}_a$ is the integer equivalent of the quantum number: ${\bf k}_a=\frac{2\pi}{L}{\bf n}_a$. \item Increment a histogram element $h_x$ by one. \end{enumerate} \item Create a vector ${\bf h}$, whose elements are $h_x$, which correspond to the number of $v_{ijab}$ matrix elements with magnitude $\frac{1}{\pi L}\frac{1}{x}$ that are encountered during the MP2 sum. \item Average ${\bf h}$ over all twist angles, yielding $\langle {\bf h} \rangle_{\bf{k}_s}$. \item Loop over the twist angles again, and find the single ${\bf h}$ (and corresponding twist angle) that best matches $\langle {\bf h} \rangle_{\bf{k}_s}$ using: \begin{equation} \min_{\bf{k}_s} \sum_x \frac{1}{x^2} \left( h_x - \langle h_x \rangle_{\bf{k}_s} \right)^2 \end{equation} The weight term $1/x^2$ was chosen empirically to diminish the contributions of the large number of high-momentum elements that contribute relatively little to the energy; an illustrative code sketch of this selection procedure is given below.
\end{enumerate} { Looking at \refeq{eq:Mp2}, there are two ways to proceed. We could either use this special ${\bf k}_s$ for all aspects of the calculation (e.g. for both the integral evaluation and the eigenvalue difference), or we could use the special ${\bf k}_s$ for the integral only, and twist-average the eigenvalues before performing the CCD calculation. We found that the latter was more numerically effective for $N=14$ and decided to use this approach to generate the results presented here. In general, though, for larger systems it does not make a large difference.} In practice, we implemented this algorithm within an MP2 and CCD code; we call the MP2 calculation at each twist angle and then the CCD calculation once at the end. For the remainder of this work, we will call this application of the above algorithm the ``connectivity scheme,'' referencing the idea that the pattern of non-zero matrix elements $v_{ijab}$ resembles a connected network. \section{Results} {We demonstrate the effectiveness of this algorithm for coupled cluster calculations on the uniform electron gas in \reffig{fig:results}. In general, our results show that the connectivity scheme works for different electron numbers, basis sets, and $r_s$ values. Furthermore, evaluation of the connectivity scheme is approximately 100x cheaper than twist averaging. }
\begin{figure} \begin{center} \vspace{-1cm} \subfigure[\mbox{}]{%
\includegraphics[width=0.4\textwidth,height=\textheight,keepaspectratio]{./Figure3a.pdf} \label{subfig:diffN} } \subfigure[\mbox{}]{%
\includegraphics[width=0.4\textwidth,height=\textheight,keepaspectratio]{./Figure3b.pdf} \label{subfig:diffM} } \subfigure[\mbox{}]{%
\includegraphics[width=0.4\textwidth,height=\textheight,keepaspectratio]{./Figure3c.pdf} \label{subfig:diffRs} } \caption{All energies shown represent the difference in correlation energy between the $\Gamma$-point and the relevant calculation, since, by design, the Hartree-Fock energy is identical between the connectivity scheme and standard twist averaging (TA). The connectivity scheme delivers comparable corrections to the correlation energy (relative to the $\Gamma$-point) when compared with twist averaging across a wide range of (a) electron numbers (using a minimal basis set, where $M \approx 2N$, as mentioned in Ref. \onlinecite{shepherd_many-body_2013} and tabulated in the Supplementary Information), (b) different basis sets ($M=36-2838$ orbitals, with $N=54$ electrons), and (c) $r_s$ values (0.01 -- 50.0 a.u., with $N=54$ electrons). Twist averaging is performed over 100 twist angles. Standard errors are calculated in the normal fashion for twist averaging, $\sigma\approx\sqrt{\mathrm{Var}(E_{\mathrm{CCD}}({{\bf k}_s})) / N_s}$. }\label{fig:results} \end{center} \end{figure}
In \reffig{subfig:diffN}, we compare the connectivity scheme to full twist-averaging for CCD calculations on the uniform electron gas. Energy differences from the $\Gamma$-point energy are plotted for each electron number. Our results show that the connectivity scheme delivers comparable accuracy (mean absolute deviation = 0.3 mHa/electron) to twist averaging, with the benefit of being much faster to compute. The connectivity scheme is substantially cheaper than the twist-averaging scheme: the $N=294$ twist-averaged calculation, for example, costs $58$ hours, which is about the same time it takes to run the $N=922$ connectivity scheme calculation. A complete set of timings is provided in the Supplementary Information.
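The special-twist selection outlined in steps 1--4 above can be sketched as follows. This is our own simplified illustration rather than the implementation in the repository cited at the end of this paper: \texttt{occupied\_and\_virtual(ks)} is a placeholder returning the integer quantum numbers ${\bf n}$ of the occupied and virtual orbitals at twist angle \texttt{ks}, and spin bookkeeping is omitted for brevity.
\begin{verbatim}
import numpy as np

def momentum_histogram(occ, virt, x_max):
    # Count v_ijab elements of the MP2 sum, binned by x = |n_i - n_a|^2.
    virt_set = {tuple(n) for n in virt}
    h = np.zeros(x_max + 1)
    for ni in occ:
        for na in virt:
            q = ni - na                       # momentum transfer i -> a
            x = int(np.dot(q, q))
            if x > x_max:
                continue
            # momentum conservation fixes n_b = n_j + (n_i - n_a)
            h[x] += sum(tuple(nj + q) in virt_set for nj in occ)
    return h

def special_twist(twist_angles, occupied_and_virtual, x_max=50):
    # Pick the twist whose histogram best matches the twist average.
    hists = []
    for ks in twist_angles:
        occ, virt = occupied_and_virtual(ks)
        hists.append(momentum_histogram(occ, virt, x_max))
    hists = np.array(hists)
    h_avg = hists.mean(axis=0)
    weights = 1.0 / np.arange(1, x_max + 1) ** 2   # empirical 1/x^2 weight
    costs = ((hists[:, 1:] - h_avg[1:]) ** 2 * weights).sum(axis=1)
    return twist_angles[int(np.argmin(costs))]
\end{verbatim}
The CCD calculation is then run once at the returned twist angle, optionally with twist-averaged HF eigenvalues as described above.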
In \reffig{subfig:diffM}, we compare our connectivity scheme to full twist-averaging over a range of basis set sizes ($M= 36 - 2838$ orbitals) for 54 electrons. In \reffig{subfig:diffRs}, we compare the connectivity scheme to full twist-averaging over a range of $r_s$ values ($0.01 - 50.0$ a.u.) for 54 electrons. In both cases there is good agreement between the two methods across the whole range, showing that the connectivity scheme delivers good accuracy when compared with twist averaging for a range of both basis set sizes (mean absolute deviation $<$ 0.35 mHa/electron) and $r_s$ values (mean absolute deviation $<$ 0.25 mHa/electron) at a decreased cost.
\begin{figure} \includegraphics[width=0.5\textwidth,height=\textheight,keepaspectratio]{./Figure4.pdf} \label{subfig:TDL1} \caption{Connectivity scheme CCD correlation energies for electron numbers up to $N=922$ for $r_s=1.0$ in the uniform electron gas (yellow triangles). We fit 10 points (dotted red line) to the function $E=a+bN^{-1}$, as proposed by other authors; \cite{drummond_finite-size_2008} we then use this fit to extrapolate to the thermodynamic limit.} \label{fig:TDLresults} \end{figure}
In \reffig{fig:TDLresults}, we show the extrapolation of our connectivity scheme CCD correlation energy to the thermodynamic limit for the $r_s=1.0$ uniform electron gas. We perform calculations up to $N=922$ electrons, and fit these results to the equation $E=a+bN^{-1}$, as proposed by other authors. \cite{drummond_finite-size_2008} We then use this fit to extrapolate the correlation energy to the thermodynamic limit. We also performed the same extrapolation for the twist-averaged data set up to $N=294$ electrons (not shown). The extrapolations predict the TDL energy to be $-0.0340(8)$ Ha/electron for the connectivity scheme and $-0.033(4)$ Ha/electron for the twist-averaged scheme, a difference of $0.001(4)$ Ha/electron. The numbers in parentheses are errors in the final digit. These agree within error, and the connectivity scheme has an improved error due to having more data points. {Next, we demonstrate how to use this method to obtain a complete basis set and thermodynamic limit estimate for the uniform electron gas. Connectivity scheme CCD energies were collected for the $N=54$ electron system with basis sets varying from $M=922$ to $M=2838$ orbitals, and for systems with electron numbers varying between $N=162$ and $610$, with $M\approx 4N$. These data allow us to extrapolate to both the complete basis set limit and the thermodynamic limit by using the numerical approach set out in our previous work.\cite{shepherd_communication:_2016} This yields an energy of 0.0566(6) Ha/electron, with the error in parentheses resulting from the extrapolations; this is in good agreement with our prior estimate, with significantly less error.\cite{shepherd_communication:_2016} For more details the reader is referred to the Supplementary Information. } { Finally, in \reffig{fig:BPCompresults}, we compare the CCD energies from full twist-averaging, our connectivity scheme, and performing a single calculation using the Baldereschi point as a twist angle. This point, first developed for insulators, is well known for the role it played in developing efficient thermodynamic integrations\cite{baldereschi_mean-value_1973,chadi_special_1973,cunningham_special_1974,monkhorst_special_1976} and was subsequently used as the center point of uniform-grid twist averaging by Drummond \emph{et al.}\cite{drummond_quantum_2009}.
At higher electron numbers ($N\geq 162$) the difference between BP and the TA energies falls below 1mHa/electron as all of the approaches converge to the same energy. At small electron numbers, however, the Baldereschi point significantly deviates from the twist-averaged energy, while the connectivity scheme is a much better approximation.} \begin{figure} \includegraphics[width=0.4\textwidth,height=\textheight,keepaspectratio]{./Figure5.pdf} \label{subfig:BPComp} \caption{{All energies shown reflect the difference in correlation energy between the $\Gamma$-point and the relevant calculation. The connectivity algorithm delivers comparable corrections to the correlation energy (relative to the $\Gamma$-point) when compared with twist averaging across a wide range of electron numbers. The Baldereschi point only delivers comparable corrections to the correlation energy (relative to the $\Gamma$-point) at higher electron numbers ($N \geq 162$) when compared with twist averaging. Twist averaging is performed over 100 twist angles. Standard errors are calculated in the normal fashion for twist averaging, $\sigma\approx\sqrt{\mathrm{Var}(E_{\mathrm{CCD}}({{\bf k}_s})) / N_s}$.}} \label{fig:BPCompresults} \end{figure} \section{Discussion \& concluding remarks} Our results show that a finite electron gas is best able to reproduce the twist-averaged total and correlation energies when a special $\bf{k}_s$-point is chosen to minimize the differences between the momentum connectivity of the finite system and a reference (here, a twist-averaged finite system). {Our interpretation of the connectivity-derived special $\bf{k}_s$-point's utility is that the low-momentum two-particle excitations from HF often suffer from finite size errors due to the shape of the Fermi surface in $k$-space. By finding a particularly representative $k_s$-point, we aim to take the `best case' of a representative shape--or, at least, as best as can be managed by a truly finite system. } When we examine the occupied orbitals in $k$-space at the special $\bf{k}_s$-point, they adopt low-symmetry patterns that tend more toward the shape of a sphere than the $\Gamma$-point distribution. Though we have made significant progress here towards ameliorating finite size error, {there are still two open questions. First, could our method be modified in order to minimize the energy difference to the thermodynamic limit rather than just to the twist-averaged energy? The second open question surrounds the extrapolation -- in particular, what is the \emph{actual} form of the energy as the system size tends to infinity?} We could investigate this source of error by comparing with the known high-density limit of RPA, which CCD is expected to be able to capture. We leave both of these investigations for future work. Overall, the results here should improve our ability to understand infinite-sized model systems that are necessarily represented as finite systems, such as the electron gas with varying dimensions, the Hubbard model, and the models of nuclear matter we previously studied. 
\cite{shepherd_communication:_2016,Baardsen2,Baardsen1} This communication is timely due to a resurgence of interest in the uniform electron gas~\cite{neufeld_study_2017,white_time-dependent_2018,spencer_large_2018,mcclain_spectral_2016,spencer_hande-qmc_2019,malone_accurate_2016,shepherd_many-body_2013,shepherd_range-separated_2014,gruneis_explicitly_2013} and in twist-averaged coupled cluster calculations.~\cite{gruber_applying_2018,hagen_coupledcluster_2014} We expect that this work can immediately be applied to improve calculations. { Our long-term goals are to use this approach to study realistic systems. Though calculations are left for future manuscripts, we expect to follow a similar approach to our prior work in this area. In particular, we start by observing that twist averaging works similarly in plane-wave \emph{ab initio} calculations, where the energy is still obtained as a sum over matrix elements $v_{ijab}$ (as in \refeq{eq:ERIs}) which are offset by a twist angle. Specifically, then, it should be possible to choose the twist angle in the same way as we propose here, so for a cubic system with $N$ electrons and a box length of $L$, the same twist angle as used here should work. As such, we will soon be applying this to real solids and leave this for a future study. } {\bf \emph{Supplementary Material.--} } The reader is directed to the supplementary material for raw data tables and illustrations mentioned in the text. {\bf \emph{Acknowledgements.--} } JJS and TM acknowledge the University of Iowa for funding. JJS thanks the University of Iowa for an Old Gold Award. ARM was supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374. The code used throughout this work is a locally modified version of a github repository used in previous work~\cite{shepherd_range-separated_2014,shepherd_coupled_2014}: https://github.com/jamesjshepherd/uegccd.
\section{Introduction} \IEEEPARstart{I}{t} is commonly accepted that sonographers are exposed to an increased risk of repetitive strain injury \cite{Seto2008, Janga2012, Harrison2015}. A representative study amongst diagnostic medical sonographers and vascular technologists indicates that a significant majority of sonographers experience pain while performing ultrasound scans \cite{Evans2009}. This suggests a high demand for improving ergonomics and offloading sonographers during clinical scan procedures. Recent investigations show that, besides diagnostic sonography, there is an increased demand for intraoperative transthoracic \cite{Ben-Dor2006, Hori2015} and transoesophageal \cite{Shanewise1999} ultrasound imaging, particularly for cardiac and lung procedures. Sonographers performing intraoperative ultrasound in, for example, cardiac catheterization procedures therefore presumably have an increased risk of radiation exposure \cite{McIlwain2014}.
\begin{figure}[t!] \center{\includegraphics[width=\linewidth]{Linde1.PNG}\caption{Soft robotic end-effector (SEE) performing ultrasound scan on abdominal prenatal phantom}} \end{figure}
Automating diagnostic and intraoperative ultrasound procedures through robot-guidance or -assistance can help address the aforementioned problems and lay the groundwork for more intelligent image acquisition. Robotic ultrasound guidance has found particular application in procedures involving steering orthopaedic \cite{Goncalves2014} or minimally-invasive surgical tools \cite{Antico2019UltrasoundProcedures} and biopsy needles \cite{Mahmoud2018EvolutionSystems}. Various robotic hardware solutions have been proposed. Researchers have adopted robotic platforms originally aimed at collaborative scenarios in industrial settings, such as Universal Robot’s UR-series \cite{Mathiassen2016, Sen2016} or the KUKA LWR \cite{Goncalves2014} and LBR iiwa \cite{Kojcev2016, Zettinig2016}. A commercial robotic manipulator has been released (LBR Med, KUKA AG, Augsburg, Germany) which is suitable for use in clinical environments due to its conformity with medical device safety (ISO 60601) and medical software regulations (ISO 62304). Current research suggests that such robots can be applied in diagnostics to autonomously perform aorta measurements \cite{Virga2016}, in combination with previously acquired MRI scans to autonomously find standard view-planes \cite{Hennersperger2017}, and in intraoperative procedures to autonomously track surgical tools \cite{Salcudean2013}, amongst others. Whilst such robotic platforms allow for great flexibility through a large workspace and high manipulability, the use of large-scale robotic manipulators can pose various disadvantages for clinical integration. Diagnostic ultrasound scans are divided according to their respective body area of interest. For an individual procedure such as a lower abdominal ultrasound scan, a robotic system is therefore only required to achieve a workspace covering a fraction of the human body. This means that common robotic manipulators could be oversized for such applications, which unnecessarily poses risks to patient safety. Despite high degrees of electrical safety, a mechanical system with a high mass can potentially be more dangerous \cite{Haddadin2008}. To address this issue, researchers have developed customized solutions which are tailored to the application-specific requirements of diagnostic and interventional sonography.
Researchers \cite{Salcudean1999, Salcudean1999a, PurangAbolmaesumiSeptimiuESalcudeanWen-hongZhuMohammadRezaSirouspour2002} have proposed a mechanism which achieves a high degree of probe manipulability and safety. The robot actuation has been moved to the base of the system, thus minimizing its size and weight. Other systems have been developed which separate the probe positioning into two stages: approximate probe placement and finer view-plane adjustments. The former can be achieved by a passive positioning mechanism, which is operated by a clinician, while the latter is obtained with an active end-effector. A system based on cables which are driven by McKibben actuators has been proposed \cite{VilchisGonzales2001}. The antagonistic configuration of the cables is employed to position the ultrasound probe on a patient. The system is tele-operated by a sonographer. Researchers from Waseda University first proposed this concept and a corresponding design in \cite{Nakadate2009}, in which the end-effector is driven through a parallel mechanism. Similarly, a consortium of researchers has developed a system with an active end-effector aimed at remote tele-diagnosis \cite{Gourdon1999, Arbeille2003, Vieyres2003}. The system has since been trialled for remote scans \cite{Arbeille2005} and translated to a commercial product (MELODY, AdEchoTech, Naveil, France). Despite the scanning being performed remotely, the design of the system suggests that the assisting operator is still required to apply the necessary force to maintain a stable contact. Maintaining stable mechanical coupling between the ultrasound probe and the patient tissue is of paramount importance for ensuring a high-quality image. Approaches to achieve this involve controlling the contact force directly or establishing an elastic contact between the position-controlled device and the patient. While the former has been researched extensively \cite{Siciliano1999RobotControl}, \cite{Fang2017} and can commonly be found in various forms of industrial applications, the latter has found more attention in recent years due to an increased demand for cost-effective force-control and force-limiting solutions for human-robot collaboration tasks \cite{McMahan2006, Eiberger2008}. Series-elastic actuators have been developed to provide passive compliance in actuated robotic joints \cite{PrattSeriesActuators}. While providing a degree of compliance, this has the disadvantage that a collision or undesired contact in a direction other than the joint axis cannot be compensated for. We have trialled safety clutches for use in ultrasound robots, which exhibit compliant behaviour once disengaged by an excess force \cite{wang2019analysis, wang2019design}, \cite{Mathur2019}. This, however, renders the system uncontrollable and requires reengaging the clutch mechanism for further operation. In this work, we make use of an elastic soft robotic system, which is aimed at overcoming the aforementioned limitations. Soft robotics technologies have opened up new design paradigms for robotic systems through the use of elastic and deformable materials and structures \cite{Laschi2016, Polygerinos2017}. Soft robotics systems are commonly designed to interact with or conform to environmental contacts. This allows soft robotic manipulators to exhibit highly dexterous manoeuvrability in, for example, surgical \cite{Cianchetti2014, Marchese2014, Kahrs2015} or search and rescue operations \cite{Hawkes2017}.
In these scenarios, however, soft robots are not applied to tasks which require significant loadbearing capabilities, predominantly due to their low stiffness. To bridge the trade-off between manoeuvrability and stiffness, research has been driven towards systems with variable stiffness capabilities. A comprehensive overview of stiffening technologies is given in \cite{Manti2016}. For applications in which softness is desired, high loads are demanded, and stiffening mechanisms are not suitable, soft robotic systems tend to be combined with external constraints to ensure structural integrity. This is commonly found in exoskeleton research and rehabilitation robotics. Examples include full-body soft exosuits \cite{Wehner2013AAssistance}, lower limb exoskeletons \cite{Costa2006} and hand exoskeletons for post-stroke rehabilitation \cite{Chiri2012, Stilli2018AirExGlovePatients}. In our previous work, we identified the advantages of soft robotics technology in ultrasound interaction tasks compared to rigid state-of-the-art robots and showed an initial proof-of-concept of a parallel soft robotic end-effector with the right characteristics for medical ultrasound tasks \cite{Lindenroth2017}. We now derive a novel soft robotic end-effector which is capable of safely acquiring standard views in extracorporeal diagnostic foetal ultrasound (US). We select foetal US as an initial application due to its high demands on robot safety. We evaluate the performance of our system with respect to the derived specifications and show that the proposed system is capable of acquiring a set of standard view-planes required for the assessment of the foetus. The robot utilizes linear soft fluidic actuators (SFAs) which are arranged in parallel around the ultrasound probe to provide high axial loadbearing capabilities and high lateral compliance, thus enabling adaptability and safety in the patient interaction. The individual contributions of this study are:
\begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{Linde2.pdf} \caption{Proposed design of the soft robotic end-effector (a) and workflow (b) for obtaining a desired view through manual placement in the approximate region of interest (i) and active steering of the probe towards the desired view-plane (ii).} \label{fig:SEE_design} \end{figure*}
\begin{itemize} \item Clinical investigation to determine workspace and force requirements for view-plane adjustments in foetal diagnostic ultrasound imaging. \item Design and verification of a soft robotic end-effector which satisfies the derived clinical requirements in workspace and force. It employs robust linear soft fluidic actuators, for which a novel injection-based fabrication is derived, and undesired twist is prevented through a mesh constraint. \item Definition and validation of a lumped stiffness model to describe the motion of the soft robotic end-effector in the absence and presence of external loading. \end{itemize}
The controllability and imaging capabilities of the integrated system are validated in position control and US phantom experiments respectively. The paper is structured in the following way. In Section \ref{Design} the system requirements are determined, and the robot design is introduced. Based on the design of the system, Section \ref{Modelling} derives a kinetostatic model. Methodologies for the actuation and control of the system are presented in Section \ref{ActuationAndControl}. In Section \ref{Experiments} the mechanical properties of the system and its workspace are evaluated.
Results are presented in Section \ref{Results}. The proposed model is validated and the position controller performance, as well as the imaging capabilities of the system, are assessed.
\section{Methods}
Prenatal foetal ultrasound is a routine diagnostic procedure for pregnant women to determine birth defects and abnormalities in the foetus. Common checks include measuring the foetus’ biparietal diameter (BPD), its head and abdominal circumferences (HC and AC) as well as its femur length (FL) \cite{Salomon2011}. In this work we focus on obtaining HC, AC and FL standard view-planes. We establish the clinical requirements for the contact force and movement range of the ultrasound probe for the bespoke application and derive a suitable design for a soft robotic end-effector (SEE).
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{Linde3.pdf} \caption{Braided nylon mesh uncrimped (a) and crimped (b).} \label{fig:Mesh} \end{figure}
\subsection{Design} \label{Design} \subsubsection{Clinical data acquisition and processing} Pregnant women between 18 and 24 weeks of gestation underwent research ultrasound scans at St Thomas’ Hospital (Study title: \emph{Intelligent Fetal Imaging and Diagnosis (iFIND)-2: Further Ultrasound and MR Imaging}, Study reference: 14/LO/1806). Trained sonographers performed the foetal ultrasound scan using a standard ultrasound probe (X6-1, Philips, Amsterdam, Netherlands) which is connected to an ultrasound scanner (EPIQ7, Philips, Amsterdam, Netherlands). The probe was placed in a holder as detailed in \cite{Noh2015}. This holder incorporated an electromagnetic (EM) tracking sensor (Aurora, NDI, Ontario, Canada) and a six-axis force-torque sensor (Nano 17, ATI, Apex, USA), which allowed the position and orientation of the probe, as well as the force applied at the probe face, to be measured throughout the scan. The recorded tracking and force data of six patients were analysed by extracting time ranges during which standard fetal anomaly views were imaged. These included HC, AC and FL views. Each time range consisted of the few seconds when the sonographer had placed the probe in the correct anatomical region and was adjusting the probe to find the ideal view. For each view the tracking data were analysed to find the range of positions and orientations in the three axes separately. The X and Y axes show movement in the horizontal plane of the scanning bed (left to right on the patient, and foot to head, respectively), and the Z axis shows vertical movement. Orientation ranges are given in probe coordinates, with yaw showing axial rotation, pitch showing elevational tilting out of the image plane, and roll showing in-plane lateral tilting. Forces were analysed by dividing the measured force vector into normal and tangential components applied to the surface. The local surface angle was determined at each measurement by fitting an ellipsoidal shape to the tracking data of the scan. The 95th percentile of the forces measured within a time range gives an indication of the maximum force that must be applied by the probe. \subsubsection{Mechanism requirements and synthesis} Following the results of the clinical data analysis, it is found that the soft robotic end-effector must at least satisfy the following requirements: \begin{itemize} \item Be able to withstand mean axial and transversal contact forces of 8.01N and 4.42N without significant deterioration of the imaging view.
\item Achieve an axial extension along Z of 5.22mm and transversal translations in X and Y of 7.75mm. \item Achieve rotations of 5.08$\degree$ around X and Y. \end{itemize}
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{Linde4.pdf} \caption{Free body diagram of the SEE model definition} \label{fig:Model} \end{figure}
To maintain a high degree of safety when interacting with the device, the SEE should furthermore exhibit a low transversal stiffness. This allows both the operating clinician and the patient to manually displace the probe in case of discomfort. As the investigated system is compliant, its deflection has to be considered when determining if a position is achievable. Taking into account normal and tangential forces applied during the scanning, the system must satisfy the following condition \[ \boldsymbol{\delta}_{SEE} \geq \boldsymbol{\delta}_{req} +\boldsymbol{\delta}_{f} \quad \text{with} \quad \boldsymbol{\delta}_{f} = \boldsymbol{K}^{-1}_{min}\boldsymbol{f}_{req} \] Where $\boldsymbol{\delta}_{f}$ is a deformation induced by external forces, $\boldsymbol{f}_{req}$ is a vector of the required forces and $\boldsymbol{K}_{min}$ is the minimum system stiffness throughout the workspace. $\boldsymbol{\delta}_{req}$ and $\boldsymbol{\delta}_{SEE}$ are vectors of the required and achievable translations respectively. As only tip forces are considered in this work, tilting effects induced by external moments at the SEE tip are ignored and forces are assumed to only affect the tip position. A soft robotic design based on soft fluidic actuators (SFAs), which have previously been presented in \cite{Lindenroth2017}, is proposed. It consists of two rigid platforms which serve as the base and the transducer holder, respectively. The platforms are connected through a set of three soft fluidic actuators which are arranged in a parallel fashion at $120\degree$ intervals. To allow for sufficient space for the ultrasound transducer cable, the actuators are tilted at an angle of 15$\degree$. An overview of the design is shown in Fig. \ref{fig:SEE_design}a). Whilst a rigid mechanism of such a configuration would be over-constrained and thus unable to move, the elasticity of the SFAs allows the SEE to perform bending (coupled translation and rotation) and axial extension motions. As the SFAs are tilted, axial extension causes the SFAs to bend into an S-shaped configuration. This allows the SEE to be axially compliant whilst exhibiting a high degree of load-bearing capability, which is further investigated in Section \ref{Stiffness}. Furthermore, curving into an S-shaped configuration eliminates the possibility of unstable buckling occurring in the SFAs, as shown in Section \ref{Results_Stiffness}. A common problem in such a proposed soft robotic system is the low stiffness along its twist axis. To improve the stability of the system against twist deformations, a nylon fibre mesh is attached to the base and transducer platforms, which acts as a mechanical constraint between the two. To reduce unwanted buckling behaviour, crimps can be added to the mesh by deforming and heat-treating it. Examples of uncrimped and crimped meshes are shown in Fig. \ref{fig:Mesh}. Axial rotation of the ultrasound transducer is therefore not considered in this study, as it could be added by simply applying a rotating mechanism to the base of the SEE, which would function as a stiff rotational axis in conjunction with the mesh constraint. The workflow of imaging using the SEE is shown in Fig. \ref{fig:SEE_design}b).
Once the SEE is manually placed in the approximate area of the target view using a passive positioning arm, it is fixed on the patient. The ultrasound probe is then actively steered either in a tele-operated manner by a sonographer or in an autonomous fashion using pose or image feedback. As the load-bearing is achieved by the SEE, the contact forces the sonographer is required to apply are minimized, which is expected to benefit the sonographer's ergonomics. \subsection{Kinetostatic modelling} \label{Modelling} To determine the ultrasound probe pose under internal fluid volume variation and external loading, a kinetostatic model is derived according to \cite{Klimchik2018}. A free body diagram of the model is shown in Fig. \ref{fig:Model}. In the following, a vector denoted as $\boldsymbol{w}_f$ represents a 6-degree-of-freedom wrench in an arbitrary frame $f$ such that $\boldsymbol{w}_f=[F_x^f,F_y^f,F_z^f,M_x^f,M_y^f,M_z^f ]^T$ with forces $\boldsymbol{F}$ and moments $\boldsymbol{M}$. Similarly, $\boldsymbol{\tau}_f$ denotes a reaction wrench in the local SFA frame, which is of the same form as $\boldsymbol{w}_f$. Vectors denoted as $\delta \boldsymbol{x}_f$ indicate infinitesimally small displacements in frame $f$ of the form $\delta \boldsymbol{x}_f=[u_x^f,u_y^f,u_z^f,v_x^f,v_y^f,v_z^f ]^T$ with translations $u$ and rotations $v$. Let $\boldsymbol{w}_{ext}$ be a vector of forces and moments applied to the tip of the ultrasound transducer. Under static equilibrium conditions, the following holds for a single actuator \begin{equation}\label{eq:StatEq} \boldsymbol{w}_{ext}=\boldsymbol{w}_\theta + \boldsymbol{w}_V \end{equation} Where $\boldsymbol{w}_\theta$ is the wrench caused by the elastic deformation of the SFA and $\boldsymbol{w}_V$ is the reaction wrench caused by the constrained hydraulic chamber. Both are expressed in the tip frame of the system. The tip wrenches $\boldsymbol{w}_\theta$ and $\boldsymbol{w}_V$ can be expressed relative to their local frames by \begin{equation}\label{eq:Ad} \begin{split} \boldsymbol{w}_\theta & =\boldsymbol{J}_\theta(\boldsymbol{x}) \boldsymbol{\tau}_\theta \\ \boldsymbol{w}_V & =\boldsymbol{J}_V(\boldsymbol{x})\tau_V \end{split} \end{equation} Where $\boldsymbol{\tau}_\theta$ is a vector of local reaction forces and moments caused by the SFA deformation and $\tau_V$ is the uniaxial reaction force of the volumetric constraint in the actuator. The matrices $\boldsymbol{J}_\theta(\boldsymbol{x})$ and $\boldsymbol{J}_V(\boldsymbol{x})$ are defined by \begin{equation}\label{eq:Jac} \begin{aligned} \boldsymbol{J}_\theta(\boldsymbol{x}) &= \begin{bmatrix} \boldsymbol{R}(\boldsymbol{x}) & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{R}(\boldsymbol{x}) \end{bmatrix}\boldsymbol{Ad} \\ \boldsymbol{J}_V(\boldsymbol{x}) &= \begin{bmatrix} \boldsymbol{R}(\boldsymbol{x}) & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{R}(\boldsymbol{x}) \end{bmatrix}\boldsymbol{Ad}_z = \begin{bmatrix} \boldsymbol{R}(\boldsymbol{x}) & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{R}(\boldsymbol{x}) \end{bmatrix}\boldsymbol{\hat{H}} \end{aligned} \end{equation} $\boldsymbol{R}(\boldsymbol{x})$ is the rotation matrix of the current tip deflection.
Matrix $\boldsymbol{Ad}$ is the wrench transformation matrix relating the local SFA frame to the tip frame by \begin{equation} \boldsymbol{Ad} = \begin{bmatrix} \boldsymbol{R}_0 & \boldsymbol{0} \\ \boldsymbol{D}_0\boldsymbol{R}_0 & \boldsymbol{R}_0 \end{bmatrix} \end{equation} Where $\boldsymbol{R}_0$ is the spatial rotation of the respective frame and $\boldsymbol{D}_0$ is the cross-product matrix of the translation vector $\boldsymbol{d}_0 = [d_x, d_y, d_z]$. For a single SFA, $\boldsymbol{\hat{H}}$ is a $6\times1$ vector containing the third column of $\boldsymbol{Ad}$. Considering the elastic behaviour of the SFA, its reaction force $\boldsymbol{\tau}_\theta$ caused by an infinitesimally small, local displacement $\delta\boldsymbol{x}_\theta$ can be written as \begin{equation}\label{eq:Hook} \boldsymbol{\tau}_\theta=\boldsymbol{K}_\theta \delta \boldsymbol{x}_\theta \end{equation} Where the SFA stiffness $\boldsymbol{K}_\theta$ is defined as a Timoshenko beam element with \[ \boldsymbol{K}_\theta = \begin{bmatrix} \frac{12EI}{(1+\Phi)L^3} & 0 & 0 & 0 & \frac{6EI}{(1+\Phi)L^2} & 0\\ 0 & \frac{12EI}{(1+\Phi)L^3} & 0 & \frac{-6EI}{(1+\Phi)L^2} & 0 & 0 \\ 0 & 0 & \frac{EA}{L} & 0 & 0 & 0\\ 0 & \frac{-6EI}{(1+\Phi)L^2} & 0 & \frac{(4+\Phi)EI}{(1+\Phi)L} & 0 & 0\\ \frac{6EI}{(1+\Phi)L^2} & 0 & 0 & 0 & \frac{(4+\Phi)EI}{(1+\Phi)L} & 0\\ 0 & 0 & 0 & 0 & 0 & \frac{GJ}{L} \end{bmatrix} \] $L$ describes the length of the SFA, $A$ its cross-sectional area, $E$ its Young's modulus, $I$ the area moment of inertia, $G$ its shear modulus and $J$ the torsion constant. The Timoshenko shear parameter $\Phi$ is defined as \[ \Phi=\frac{12EI}{\frac{A}{\alpha}GL^3} \] with the Timoshenko coefficient $\alpha$. An overview of the SFA constants is given in Table \ref{tab:ModelParameters}. \begin{table}[h!] \centering \caption{SFA model parameters} \label{tab:ModelParameters} \setlength\tabcolsep{2pt} \begin{tabular}{lll} \toprule Constant & Value & Description\\ \midrule $L$ & $45$mm & Initial length\\ $A$ & $\pi \cdot 10^2\text{mm}^2$ & Cross-sectional area\\ $a$ & $\pi \cdot 6.9^2 \text{mm}^2$* & Fluid channel area\\ $E$ & $301.51$kPa ** & Young's modulus\\ $I$ & $1200\text{cm}^4$ ** & Area moment of inertia\\ $G$ & $0.5$E & Shear modulus\\ $J$ & $0.5\pi\cdot10^4\text{mm}^4$ & Torsion constant\\ $\alpha$ & $5/6 $ & Timoshenko coefficient\\ \bottomrule \multicolumn{3}{l}{* obtained in Section \ref{SFA_characterization}; ** obtained in Section \ref{Model_validation}} \end{tabular} \end{table} Whilst parameters $L$ and $A$ are obtained from the SFA geometry, the torsion constant of a beam with circular cross-section can be expressed as $J = 0.5\pi r^4$ and its Timoshenko coefficient is defined as $5/6$ \cite{Matrix}. The shear modulus $G$ is approximated as half the Young's modulus. For a given SFA volume, the kinematic relationship between an infinitesimally small volume change $\delta V$ of the SFA and the displacement of the ultrasound tip frame is given by \begin{equation}\label{eq:Constraint} \delta V/a=\boldsymbol{J}_V^T \delta\boldsymbol{x}_{tip} \end{equation} Where $a$ is the cross-sectional area of the fluid actuation channel.
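As an illustration, the following minimal sketch (not part of the original implementation) assembles $\boldsymbol{K}_\theta$ numerically from the constants of Table \ref{tab:ModelParameters}, with all values converted to N and mm.
\begin{verbatim}
import numpy as np

# Constants from the SFA parameter table, converted to N and mm.
L = 45.0                  # initial SFA length [mm]
r = 10.0                  # outer radius [mm]
A = np.pi * r**2          # cross-sectional area [mm^2]
E = 301.51e-3             # Young's modulus [N/mm^2] (= 301.51 kPa)
I = 1200.0e4              # area moment of inertia [mm^4] (= 1200 cm^4)
G = 0.5 * E               # shear modulus [N/mm^2]
J = 0.5 * np.pi * r**4    # torsion constant [mm^4]
alpha = 5.0 / 6.0         # Timoshenko coefficient

def timoshenko_stiffness(L, A, E, I, G, J, alpha):
    """6x6 local stiffness matrix K_theta of one SFA (Timoshenko beam)."""
    Phi = 12.0 * E * I / ((A / alpha) * G * L**3)
    c = E * I / ((1.0 + Phi) * L**3)
    K = np.zeros((6, 6))
    K[0, 0] = K[1, 1] = 12.0 * c
    K[0, 4] = K[4, 0] = 6.0 * c * L
    K[1, 3] = K[3, 1] = -6.0 * c * L
    K[2, 2] = E * A / L
    K[3, 3] = K[4, 4] = (4.0 + Phi) * E * I / ((1.0 + Phi) * L)
    K[5, 5] = G * J / L
    return K

K_theta = timoshenko_stiffness(L, A, E, I, G, J, alpha)
\end{verbatim}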
The local SFA deflection associated with a motion of the tip frame can be defined as \begin{equation}\label{eq:Kin} \delta\boldsymbol{x}_{\theta}=\boldsymbol{J}_\theta^T \delta \boldsymbol{x}_{tip} \end{equation} Substituting Equation \ref{eq:Kin} into \ref{eq:Hook} yields \begin{equation}\label{eq:DeflForce} \boldsymbol{\tau}_\theta=\boldsymbol{K}_\theta \boldsymbol{J}_\theta^T \delta \boldsymbol{x}_{tip} \end{equation} Applying Equations \ref{eq:Ad} and \ref{eq:DeflForce}, the static equilibrium condition in Equation \ref{eq:StatEq} can be written as \begin{equation}\label{eq:StatEqFinal} \boldsymbol{w}_{ext}=\boldsymbol{J}_\theta\boldsymbol{K}_\theta\boldsymbol{J}^T_\theta\delta\boldsymbol{x}_{tip} + \boldsymbol{J}_V\tau_V \end{equation} Equation \ref{eq:StatEqFinal} can be combined with the imposed kinematic constraint defined by Equation \ref{eq:Constraint} into a linear equation system of the form \begin{equation}\label{eq:EqSys1} \begin{bmatrix} \boldsymbol{w}_{ext}\\ \delta V/a \end{bmatrix} = \begin{bmatrix} \boldsymbol{J}_\theta\boldsymbol{K}_\theta\boldsymbol{J}_\theta^T & \boldsymbol{J}_V\\ \boldsymbol{J}_V^T & \boldsymbol{0} \end{bmatrix} \begin{bmatrix} \delta \boldsymbol{x}_{tip}\\ \tau_V \end{bmatrix} \end{equation} The deflection of the ultrasound transducer tip and the internal reaction of the system can consequently be found through matrix inversion \begin{equation}\label{eq:EqSys2} \begin{bmatrix} \delta \boldsymbol{x}_{tip}\\ \tau_V \end{bmatrix} = \begin{bmatrix} \boldsymbol{J}_\theta\boldsymbol{K}_\theta\boldsymbol{J}_\theta^T & \boldsymbol{J}_V\\ \boldsymbol{J}_V^T & \boldsymbol{0} \end{bmatrix}^{-1} \begin{bmatrix} \boldsymbol{w}_{ext}\\ \delta V/a \end{bmatrix} \end{equation} The formulation can be expanded to a number of $n$ SFAs by considering a lumped stiffness $\boldsymbol{K}$ in the probe tip frame. As the actuators are aligned in a parallel configuration, it can be defined by \begin{equation}\label{eq:Lump} \boldsymbol{K} = \sum_{i=1}^{n}\boldsymbol{J}_\theta^i \boldsymbol{K}_\theta^i {\boldsymbol{J}_\theta^i}^T \end{equation} The matrix $\boldsymbol{J}_V$ is adapted by appending the respective columns of the wrench transformation matrix of actuator $i$, $\boldsymbol{Ad}_z^i$, to $\boldsymbol{\hat{H}}$ such that \begin{equation}\label{eq:J_V} ^n\boldsymbol{J}_V = \begin{bmatrix} \boldsymbol{R}(\boldsymbol{x}) & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{R}(\boldsymbol{x}) \end{bmatrix} [\boldsymbol{Ad}_z^1, \boldsymbol{Ad}_z^2, ..., \boldsymbol{Ad}_z^n] \end{equation} The kinematic constraint relationship then becomes \begin{equation}\label{eq:KinVector} \delta \boldsymbol{V}/a = ^n\!\boldsymbol{J}_V^T \delta\boldsymbol{x}_{tip} \end{equation} Where $\delta \boldsymbol{V}$ is an $n \times 1$ vector of SFA volume changes. Consequently, $\tau_V$ is expanded to an $n \times 1$ vector containing the $n$ local reactions in the form $\boldsymbol{\tau}_V=[\tau_{V,1},\tau_{V,2},...,\tau_{V,n}]^T$. To account for changes in the matrices $\boldsymbol{J}_\theta$ and $\boldsymbol{J}_V$ over a given motion, the model is solved numerically by dividing the applied external wrench and induced volume vectors into small increments $[\Delta \boldsymbol{w}_{ext}, \Delta \boldsymbol{V}]^T$. After each iteration, $\boldsymbol{R}(\boldsymbol{x})$ is updated according to the previous tip pose.
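To make the incremental solution concrete, a minimal numerical sketch (not taken from the authors' code) is given below. It solves one increment of the coupled system in Equation \ref{eq:EqSys1} and accumulates the tip pose; the helpers \texttt{lumped\_stiffness} and \texttt{volume\_jacobian} are hypothetical placeholders for the pose-dependent quantities defined above, and the explicit three-SFA update rule is given below.
\begin{verbatim}
import numpy as np

def solve_increment(K, J_V, dw_ext, dV, a):
    """One step of the kinetostatic model: solve the bordered linear system
    for the tip displacement increment and the hydraulic reaction forces.
    K is 6x6, J_V is 6xn for n SFAs."""
    n = J_V.shape[1]
    A_sys = np.block([[K, J_V],
                      [J_V.T, np.zeros((n, n))]])
    rhs = np.concatenate([dw_ext, dV / a])
    sol = np.linalg.solve(A_sys, rhs)
    return sol[:6], sol[6:]          # delta x_tip, tau_V

# Outer loop (sketch): split w_ext and V into small increments and update
# the pose-dependent matrices after every step.
# x_tip = np.zeros(6)
# for k in range(n_steps):
#     K = lumped_stiffness(x_tip)    # hypothetical: sum_i J_theta K_theta J_theta^T
#     J_V = volume_jacobian(x_tip)   # hypothetical: 6 x 3 matrix for three SFAs
#     dx_tip, tau_V = solve_increment(K, J_V, dw_ext_step, dV_step, a)
#     x_tip = x_tip + dx_tip
\end{verbatim}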
\begin{figure}[!t] \centering \includegraphics[width=\linewidth]{Linde5.pdf} \caption{Actuation unit (a) with syringe pumps (b) and controller system} \label{fig:SP} \end{figure} For the given number of three SFAs, the update rule for the numerical solution is defined by \begin{equation}\label{eq:Update} \begin{bmatrix} \boldsymbol{x}^{k+1}_{tip}\\ \boldsymbol{\tau}_V^{k+1} \end{bmatrix} = \begin{bmatrix} \boldsymbol{x}^{k}_{tip}\\ \boldsymbol{\tau}_V^{k} \end{bmatrix} + \begin{bmatrix} \boldsymbol{K} & ^3\boldsymbol{J}^k_V\\ ^3{\boldsymbol{J}^k_V}^T & \boldsymbol{0} \end{bmatrix}^{-1} \begin{bmatrix} \Delta \boldsymbol{w}_{ext}\\ \Delta \boldsymbol{V}/a \end{bmatrix} \end{equation} for iteration step $k$. \subsection{Actuation and control} \label{ActuationAndControl} The SEE is actuated by inflating the respective SFAs with a working fluid. As shown in our previous work \cite{Lindenroth2016}, we utilize custom hydraulic syringe pumps (Fig. \ref{fig:SP}b)) which are driven by stepper motors (Nema 17, Pololu Corporation, Las Vegas, USA) to induce volume changes in the SFAs. The pumps are controlled with a microcontroller (Teensy 3.5, PJRC, Sherwood, USA) which communicates via a serial interface with a PC running ROS (Intel Core I7-7700HQ, XPS15 9560, Dell, Texas, USA). The PC generates demand velocities or positions for the microcontroller and solves the previously-defined kinetostatic model to determine the system Jacobian for a given pose. Furthermore, the PC interfaces with peripherals such as a joystick for teleoperation (Spacemouse Compact, 3dconnexion, Monaco) and an electromagnetic (EM) tracking system for closed-loop position control (Aurora, NDI, Ontario, Canada). The linear soft fluidic actuators which are utilized to drive the system have first been conceptualized in our previous work \cite{Lindenroth2017}. They are composed of a silicone rubber body (Dragonskin 10-NV, SmoothOn Inc, Pennsylvania, USA) and stiffer silicone rubber endcaps (SmoothSil 945, SmoothOn Inc, Pennsylvania, USA). A helical constraint is inserted into the silicone to counteract radial expansion of the actuator upon inflation. This, in combination with the stiff endcaps, allows the actuators to maintain their form and only expand in the direction of actuation. The moulding process for creating SFAs has been significantly improved from our previous work. For the radial constraint an extension spring (Fig. \ref{fig:Mould}(v)) is used. The liquid silicone rubber is injected through an inlet (Fig. \ref{fig:Mould}(ii)) using a syringe instead of being poured into the mould. This has the significant advantage that the mould can be pre-assembled without having to manually wind the constraint helix, as is commonly done in soft fluidic actuators \cite{Suzumori2002}. In combination with the injection of the silicone, this could reduce variations in the fabrication process. A drawing of a finished actuator is shown in Fig. \ref{fig:Mould}(vii). The combination of radial constraint and stiff endcaps allows for the actuators to be driven efficiently with a volumetric input without exhibiting a nonlinear relationship between input volume and output length change due to bulging, which is investigated in Section \ref{SFA_characterization}. \begin{figure}[t!]
\centering \includegraphics[width=\linewidth]{Linde6.pdf} \caption{Overview of mould components (i)-(vi) and drawing of final SFA (vii)} \label{fig:Mould} \end{figure} In this work, two methods for controlling the ultrasound probe pose are investigated. A joystick-based teleoperated open-loop controller is implemented to allow a sonographer to steer the probe according to the acquired ultrasound image stream. For this purpose, the aforementioned joystick is used. The axial motion of the joystick is linked to a translation of the SEE in Z-direction while the two tilt axes of the joystick are mapped to the X- and Y-rotation axes of the SEE. The high-level controller generates syringe pump velocities according to \begin{equation} \boldsymbol{\dot{V}}_d = \boldsymbol{J}_V^T \boldsymbol{v}_{cart} \end{equation} Where $\boldsymbol{\dot{V}}_d$ is the vector of desired SFA volume rates, $\boldsymbol{v}_{cart}$ the target velocity in Cartesian space and $\boldsymbol{J}_V^T$ the actuation matrix of the system which has been derived in Section \ref{Modelling}. \textcolor{black}{ A closed-loop controller is integrated to drive the ultrasound probe tip position according to EM tracker feedback. For this purpose, a high-level trajectory generator continuously updates the demand position for the position controller, which in turn generates demand volumes for the three syringe pumps according to the control law \begin{equation} \boldsymbol{\Delta{V}_d} = \boldsymbol{J}_V^T \boldsymbol{U} \end{equation} Where $\Delta\boldsymbol{V}_d$ is the desired change in volume and $\boldsymbol{U}$ the control signal. A linear PI controller of the form \begin{equation} \boldsymbol{U} = \boldsymbol{K}_P\boldsymbol{X}_e + \boldsymbol{K}_I\int\boldsymbol{X}_e dt \end{equation} is employed, where $\boldsymbol{X}_e = \boldsymbol{X}_d - \boldsymbol{X}_c$. $\boldsymbol{X}_d$ and $\boldsymbol{X}_c$ are the demanded and measured probe tip positions respectively.} \textcolor{black}{The gain matrices $\boldsymbol{K}_P=diag(k_P,k_P,k_P)$ and $\boldsymbol{K}_I=diag(k_I,k_I,k_I)$ contain the gain constants $k_P$ and $k_I$, which have been verified experimentally and are defined as $0.3 \frac{\text{ml}}{\text{mm}}$ and $0.03 \frac{\text{ml}}{\text{mm}\cdot\text{s}}$ respectively.} The target points are generated at 2Hz while both the position controller and the kinetostatic model are updated at 30Hz. The low-level step generation for driving the syringe pumps is achieved with an update rate of 6kHz. \section{\textcolor{black}{Experimental validation}} \label{Experiments} \subsection{SFA characterization} Using the \textcolor{black}{three SFAs to control the SEE pose} in an open-loop configuration requires the volume-extension relation to be predictable at any given point in time. \textcolor{black}{From the radial mechanical constraint incorporated in the SFA design it is assumed that the relationship between induced volume and SFA length change is linear. To verify this, the extension behaviour of a single SFA is investigated for different working fluid volume changes} using a linear rail setup. The tip of the actuator is equipped with a slider and its position is tracked using a linear potentiometer. Contact friction between the linear bearings and rails is minimized using lubrication, and friction forces are therefore neglected in the evaluation of the results. Volume and extension data are tracked and synchronized using ROS.
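As a minimal sketch of the intended data processing (not the authors' implementation), the following fits the assumed linear volume-extension relation by least squares; the arrays are synthetic stand-ins for the synchronized recordings, and the linear-region threshold of about 1.25ml is an assumption anticipating the characterization results.
\begin{verbatim}
import numpy as np

# Placeholder arrays standing in for the synchronized ROS recording of
# injected volume [ml] and measured extension [mm]; synthesized here only
# so that the snippet runs end-to-end.
volume_ml = np.linspace(0.0, 5.0, 200)
extension_mm = np.clip(6.6 * volume_ml - 5.5, 0.0, None)

# Fit the (assumed) linear region above ~1.25 ml with a first-order polynomial.
mask = volume_ml > 1.25
slope, intercept = np.polyfit(volume_ml[mask], extension_mm[mask], 1)

# Effective actuation area a = dV/dL, converting ml/mm to mm^2 (1 ml = 1000 mm^3).
a_mm2 = 1000.0 / slope
print("dL/dV = %.2f mm/ml, a = %.1f mm^2" % (slope, a_mm2))
\end{verbatim}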
\subsection{Stiffness characterization} \label{Stiffness} \textcolor{black}{As the SEE is highly compliant, knowledge of its deformability under external loads is required to determine its suitability for the given task. To verify the structural behaviour of the SEE under the contact forces required for the clinical application, the stiffness of the system is characterized} with the setup shown in Fig. \ref{fig:UR3}. The SEE is mounted to a base plate and its tip is connected through a force-torque sensor (Gamma, ATI, Apex, USA) to a robot manipulator (UR3, Universal Robots, Odense, Denmark). To determine the stiffness of the SEE in a given direction, the manipulator moves the SEE in said direction and the resulting reaction force is measured\textcolor{black}{. The robot allows for an accurate, repeatable displacement of the SEE in a defined direction, thus isolating the desired DOFs. Its payload of 3kg is sufficiently high to withstand the reaction forces induced by the elastic deformation of the SEE}. The motions are repeated 10 times for each configuration. The linearized relationship between reaction force and manipulator displacement corresponds to the stiffness of the SEE. The mesh reinforcement’s effect on the axial twist stiffness is determined by twisting the SEE repeatedly by $10\degree$ and measuring the z-axis moment. This is done for a configuration without mesh reinforcement, mesh reinforcement without crimps and mesh reinforcement with crimps. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Linde7.pdf} \caption{Experimental setup for stiffness characterization} \label{fig:UR3} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Linde8.pdf} \caption{SEE moving in contact with soft rubber patch. \textcolor{black}{The tip pose change with respect to the SEE's origin is highlighted with an arrow.}} \label{fig:ContactPatch} \end{figure} The directional lateral stiffness is obtained by displacing the SEE tip radially in a defined direction over a distance of 10mm. This is repeated for four inflation levels (25\%, 50\%, 75\% and 100\% of the maximum SFA volume) and for directions between 0$\degree$ and $345\degree$ in $15\degree$ increments around the z-axis. The axial stiffness corresponding to each extension is determined by displacing the SEE tip in negative z-direction by 1.5mm for 25\% and 50\% inflation, and by 2.5mm for 75\% and 100\% extension. \subsection{Workspace and repeatability} \label{Experimental_WS} \textcolor{black}{To verify whether the attainable motions of the SEE satisfy the imposed clinical requirements for the ultrasound probe motion, the workspace of the SEE is mapped for achievable volumetric inputs.} The \textcolor{black}{SEE pose} is measured using an electromagnetic tracker (6DOF Reference, Aurora, NDI, Ontario, Canada) which is attached to the side of the SEE tip. The pose of the ultrasound probe tip is calculated with the known homogeneous transformation between tracker and tip. The SFA volumes are varied between 0\% and 100\% in 10\% increments and the resulting static tip pose is determined with respect to its deflated state. The repeatability in positioning the tip of the SEE is determined by repeatedly approaching defined SFA volume states and measuring the tip pose. A set of 6 states is defined and the resultant trajectory is executed 50 times.
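For illustration only, the sketch below computes repeatability statistics for one repeatedly approached volume configuration; the pose array is a hypothetical stand-in for the EM tracker measurements, and orientation deviations are treated with a simple Euclidean norm over the tilt angles.
\begin{verbatim}
import numpy as np

def repeatability(poses):
    """Mean and standard deviation of the Euclidean deviation from the mean
    pose. poses is an (n, 6) array of repeated tip poses, columns
    [x, y, z] in mm and [rx, ry, rz] in deg (n = 50 in the experiment)."""
    mean_pose = poses.mean(axis=0)
    pos_err = np.linalg.norm(poses[:, :3] - mean_pose[:3], axis=1)
    ang_err = np.linalg.norm(poses[:, 3:] - mean_pose[3:], axis=1)
    return (pos_err.mean(), pos_err.std()), (ang_err.mean(), ang_err.std())

# Hypothetical usage with tracker data for one volume configuration:
# (pos_mu, pos_sd), (ang_mu, ang_sd) = repeatability(measured_poses)
\end{verbatim}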
\subsection{Model validation} The derived model is validated by comparing the measured workspace and corresponding SFA volumes to the calculated tip pose of the SEE. \textcolor{black}{For this purpose, tip poses are calculated for each configuration achieved in Section \ref{Experimental_WS} and the error between model and measurement is determined.} \subsection{Indentation behaviour} \textcolor{black}{Whilst the abdomen exhibits an increased stiffness with the duration of the pregnancy and thus counteracts indentation of the ultrasound probe, deep tissue indentation in the early weeks can affect the positioning behaviour of the SEE.} To verify the effect a soft tissue-like contact has on the SEE, a soft mechanical phantom is created. The cylindrical phantom is moulded from a layer of Ecoflex Gel and a structural layer of Ecoflex 00-30 (SmoothOn Inc, Pennsylvania, USA). The tip of the SEE is controlled to perform a line trajectory from its negative to positive x-axis limits at 60\% inflation. The tip pose is monitored with a magnetic tracker and contact forces between SEE and phantom are measured using the aforementioned force sensor at the base of the phantom. The manipulator is used to test for different indentation depths from 0mm to 15mm in 5mm increments. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Linde9.pdf} \caption{Sonographer performing SEE-assisted ultrasound scanning of a prenatal abdominal phantom (iii). The SEE (i) is attached to a passive arm (ii) and manually placed on the phantom. A joystick (iv) is used to manipulate the ultrasound probe under visual guidance of the acquired image (v).} \label{fig:Imaging} \end{figure} \subsection{Controllability} \textcolor{black}{To achieve a desired view-plane in the ultrasound image, the probe attached to the SEE needs to be steerable accurately across the patient's body.} The controllability of the SEE is verified with the closed-loop position control system described in Section \ref{ActuationAndControl}. Target trajectories are defined as isosceles triangles with a base of 12.33mm and a height of 10mm. For the tilted trajectory, the triangle is tilted about one of its sides by 19$\degree$. The trajectory is tested in a planar and a tilted configuration and tracked 3 times each. To determine the controllability under an external load, a stiff silicone rubber patch is created as shown in Fig. \textcolor{black}{\ref{fig:ContactPatch}}. The patch is lubricated and positioned with its center at the tip of the SEE. To ensure contact with the patch, an initial axial force of 5N is generated by displacing the patch and running the position controller. This is repeated for the planar and tilted configurations, where each trajectory is tracked 3 times. \subsection{Sonographer-guided teleoperation} The imaging capabilities of an ultrasound transducer guided by the SEE are verified using a prenatal abdominal phantom (SPACE FAN-ST, Kyoto Kagaku, Japan). The SEE is equipped with an abdominal ultrasound probe (X6-1, Philips, Amsterdam, Netherlands) which is connected to an ultrasound scanner (EPIQ7, Philips, Amsterdam, Netherlands). A passive positioning arm (Field Generator Mounting Arm, NDI, Ontario, Canada) is used to manually position the SEE in the region of interest on the phantom. The sonographer uses the provided ultrasound image feedback to steer the SEE with the connected joystick towards a desired view-plane. The target view-planes are manually acquired using a handheld ultrasound probe.
An overview of the experimental setup is shown in Fig. \ref{fig:Imaging}. \section{Results} \label{Results} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Linde10.pdf} \caption{\textcolor{black}{Time series of probe pose (a) and tip force (b) for subject 5. Data between motions corresponding to standard views HC, AC and FL have been omitted.}} \label{fig:ClinicalData} \end{figure} \subsection{Clinical data} \label{ClinicalData} \textcolor{black}{The results of the clinical data acquisition are presented in Table \ref{tab:ClinicalData}. For each subject the maximum observed motion range in translation and rotation of the ultrasound probe is presented for the HC, AC and FL standard views. The presented forces correspond to the 95th percentile of the occurring normal and tangential force magnitudes. A time series of the probe pose and force data obtained for subject 5 is shown in Fig. \ref{fig:ClinicalData}.} For subject 2, only HC and AC views were obtained. Translations and rotations are shown with respect to the patient bed. The normal force is assumed to act only in the negative probe direction and the tangential force is the vector magnitude of the tangential forces in X and Y. \textcolor{black}{To obtain workspace requirements that can later be combined with the measured contact forces, the observed probe motion is divided into transversal and axial translations and transversal rotations. In this study, axial rotations of the probe are ignored. Workspace requirements for the SEE are consequently obtained by selecting the larger translation between X and Y for the transversal motion $\delta_{req}^{tr}$ and the translation in Z for the axial motion $\delta_{req}^{ax}$, thus resulting in a required cylindrical workspace of radius $\delta_{req}^{tr}$ and height $\delta_{req}^{ax}$. For the orientation, the required rotation is defined by $\theta_{req}^{tr}$. The mean required workspace from the clinical data is therefore \begin{equation} \begin{split} \boldsymbol{\delta}_{req} = &[\delta_{req}^{ax}, \delta_{req}^{tr}]^T = [5.22\text{mm}, 7.75\text{mm}]^T\\ \theta_{req} = &5.08\degree \end{split} \end{equation}} The corresponding maximum pitch and roll tilts lie in ranges of $\pm9.8\degree$ and $\pm12.9\degree$. The maximum occurring normal and tangential forces are 20.77N and 10.67N respectively. \begin{table}[htbp] \centering \caption{Range of motion and contact force required to obtain a desired view in foetal ultrasound. Values used to generate the required SEE workspace are marked in blue.} \setlength\tabcolsep{2pt} \resizebox{\linewidth}{!}{ \begin{tabular}{rccccccccc} \multicolumn{1}{c}{\multirow{2}[3]{*}{Subj.}} & \multirow{2}[3]{*}{View} & \multicolumn{3}{c}{Max. translation [mm]} & \multicolumn{3}{c}{Max.
rotation [deg]} & \multicolumn{2}{c}{Force range [N]} \\ \cmidrule{3-10} & & X & Y & Z & Yaw & Pitch & Roll & Normal & Tangential \\ \midrule \multicolumn{1}{c}{\multirow{3}[2]{*}{1}} & HC & \textbf{8.50} & 3.41 & \textbf{5.67} & \textbf{6.19} & \textbf{4.98} & \textbf{7.72} & \textbf{13.81} & 1.92 \\ & AC & 4.49 & 5.60 & 2.76 & 3.44 & 3.45 & 3.17 & 4.10 & 2.60 \\ & FL & 6.03 & \textbf{7.86} & 4.51 & 5.51 & 4.84 & 2.72 & 7.44 & \textbf{4.15} \\ \midrule \multicolumn{1}{c}{\multirow{3}[2]{*}{2}} & HC & 4.95 & \textbf{6.45} & 2.96 & 5.74 & \textbf{5.91} & 10.09 & \textbf{14.09} & \textbf{6.15} \\ & AC & \textbf{13.53} & 6.33 & \textbf{7.53} & \textbf{10.80} & 5.88 & \textbf{12.90} & 8.73 & 1.78 \\ & FL & - & - & - & - & - & - & - & - \\ \midrule \multicolumn{1}{c}{\multirow{3}[2]{*}{3}} & HC & \textbf{9.73} & \textbf{12.41} & 6.52 & 7.26 & \textbf{9.80} & 4.92 & \textbf{13.27} & \textbf{4.87} \\ & AC & 1.53 & 9.37 & 2.60 & 4.23 & 3.62 & 4.10 & 5.55 & 1.40 \\ & FL & 5.81 & 6.93 & \textbf{7.96} & \textbf{10.34} & 3.80 & \textbf{7.78} & 6.61 & 2.36 \\ \midrule \multicolumn{1}{c}{\multirow{3}[2]{*}{4}} & HC & \textbf{11.87} & \textbf{8.36} & \textbf{7.02} & \textbf{14.84} & \textbf{4.70} & \textbf{9.39} & 3.47 & \textbf{3.62} \\ & AC & 2.76 & 2.31 & 3.19 & 0.67 & 1.04 & 1.09 & \textbf{4.36} & 3.59 \\ & FL & 4.64 & 4.51 & 4.64 & 1.82 & 3.55 & 2.07 & 3.61 & 2.27 \\ \midrule \multicolumn{1}{c}{\multirow{3}[2]{*}{5}} & HC & \textbf{13.08} & 11.82 & 5.79 & \textbf{14.78} & \textbf{4.96} & \textbf{7.14} & \textbf{8.30} & \textbf{8.94} \\ & AC & 9.77 & \textbf{19.91} & \textbf{10.22} & 2.83 & 4.55 & 2.65 & 4.46 & 1.49 \\ & FL & 4.11 & 3.77 & 2.61 & 6.75 & 1.92 & 3.48 & 3.59 & 2.92 \\ \midrule \multicolumn{1}{c}{\multirow{3}[2]{*}{6}} & HC & 2.49 & \textbf{10.18} & \textbf{7.55} & \textbf{7.01} & \textbf{7.83} & \textbf{4.07} & \textbf{20.77} & 9.13 \\ & AC & \textbf{5.69} & 8.57 & 4.22 & 2.27 & 3.73 & 1.20 & 6.06 & 7.24 \\ & FL & 3.79 & 3.97 & 3.02 & 3.60 & 2.90 & 1.90 & 7.97 & \textbf{10.67} \\ \midrule & $\mu$ & 6.63 & \textcolor{blue}{7.75} & \textcolor{blue}{5.22} & 6.36 & 4.56 & \textcolor{blue}{5.08} & \textcolor{blue}{8.01} & \textcolor{blue}{4.42} \\ & \textcolor{black}{$\sigma$} & \textcolor{black}{3.64} & \textcolor{black}{4.16} & \textcolor{black}{2.23} & \textcolor{black}{4.09} & \textcolor{black}{2.01} & \textcolor{black}{3.37} & \textcolor{black}{4.70} & \textcolor{black}{2.87} \\ & max & 13.53 & 19.91 & 10.22 & 14.84 & 9.8 & 12.9 & 20.77 & 10.67 \\ \end{tabular}} \label{tab:ClinicalData}% \end{table}% \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Linde11.pdf} \caption{SFA pressure (a) and extension (b) under increasing working fluid volume \textcolor{black}{for different inflation levels}.} \label{fig:1DOF} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{Linde12.pdf} \caption{Change in transversal stiffness with the direction of the applied force for different extensions (a). Change in stiffness with extension for axial (i) and transversal stiffness (ii) (b). Change in stiffness with bending for axial (i) and transversal stiffness (ii) (c).} \label{fig:Stiffness} \end{figure*} \begin{figure} \centering \includegraphics[width=\linewidth]{Linde13.pdf} \caption{Measured compression force of the SEE $f_{meas}$ at 0\% (a), 50\% (b) and 100\% axial extension with its corresponding linear interpolation $f_{lin}$.
For each configuration the compressed SEE is depicted and the SFA centerlines are highlighted.} \label{fig:Buckling} \end{figure} \subsection{SFA characterization} \label{SFA_characterization} The results of the SFA characterization are shown in Fig. \ref{fig:1DOF}. The hydraulic pressure under SFA inflation and the resulting extension are shown in Fig. \ref{fig:1DOF}a) and \ref{fig:1DOF}b) respectively. \textcolor{black}{Hysteresis is mainly observable in the fluid pressure. The mean deviation from the centerline between loading and unloading is 3.82$\pm$1.63kPa for the pressure and 0.14$\pm$0.05mm for the extension across the different inflation cycles. The maximum hysteresis-induced deviation in pressure occurs when inflating to 100\% (9.28kPa), and in extension when inflating to 50\% (0.44mm)}. The volume-extension curve of the SFA can be separated into two regions, a nonlinear region (0ml to $\approx$1.25ml) and a linear region ($\approx$1.25ml to 5ml). In the linear region, the relationship can be approximated with a first-order polynomial as $\Delta L(\Delta V)=6.61\,\text{mm}/\text{ml}\cdot\Delta V - 5.52\,\text{mm}$. \textcolor{black}{The interpolation is used to determine the relationship between the SFA length change and the input volume change in the form of the effective actuation area $a = {\Delta V}/{\Delta L} = \pi \cdot 6.9^2 \text{mm}^2$}. As the proportion of the nonlinear region compared to the overall extension of the SFA is small, it is ignored for the following investigations. SFAs are therefore assumed to be pre-extended with a volume of 1.25ml. \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{Linde14.pdf} \caption{Workspace of the SEE in position (a-b) and orientation (c-d). \textcolor{black}{The required workspace $\boldsymbol{\delta}_{req}$ without and with consideration of the deflected tip pose $\boldsymbol{\delta}_f$ are indicated. A cross-section view along the dotted lines shows the coupling between position and orientation in the performed bending motions (e), in which the dashed lines indicate iso-volume lines for $V_2=V_3$.}} \label{fig:WSPos} \end{figure*} \subsection{Stiffness} \label{Results_Stiffness} The results of the twist stiffness characterization for each mesh configuration are shown in Table \ref{tab:Twist}, where $\mu$ and $\sigma$ are the mean and standard deviation of the twist stiffness $K_{tw}$ respectively. The application of a nylon mesh significantly stiffens the torsional axis of the system, increasing the twist stiffness to 184\% of its original value. A crimped mesh can further improve the torsional stiffness to 299\% of its original value. \begin{table}[h!] \centering \caption{Twist stiffness} \label{tab:Twist} \begin{tabular}{@{}cccc@{}} \toprule & None & Uncrimped & Crimped\\ \midrule $\mu(K_{tw})$ [Nmm/$\degree$] & 45.94 & 84.68 & 137.37\\ $\sigma(K_{tw})$ [Nmm/$\degree$] & 0.70 & 4.30 & 2.73 \\ \bottomrule \end{tabular} \end{table} The results of the lateral stiffness characterization under inflation of the SEE are shown in Fig. \ref{fig:Stiffness}a) in polar coordinates. The radius indicates the magnitude of the stiffness in the given direction. The axial and averaged lateral stiffness of the SEE under axial extension are presented in Fig. \ref{fig:Stiffness}b). The data are presented alongside their corresponding spline interpolations. Both decrease monotonically, with the axial stiffness starting from a maximum of 34.83N/mm and reaching a minimum of 14.41N/mm at 100\% extension. The transversal stiffness decreases at a comparable rate from 3.21N/mm at 25\% down to 1.51N/mm at 100\% extension.
The stiffness variation under bending of the SEE is shown in Fig. \ref{fig:Stiffness}c) with the visualized trends interpolated by splines. Whilst the transversal stiffness decreases monotonically from 3.15N/mm to 1.77N/mm, the axial stiffness decreases from 21.15N/mm at 0.3$\degree$ tilt to a minimum of 9.99N/mm at 10$\degree$, followed by an increase in stiffness to 18.75N/mm at 13.75$\degree$. The presented data is employed to determine the minimum stiffness of the system throughout the workspace in order to infer possible tip pose deviations caused by external forces. It can be seen that the system reaches a minimum axial stiffness of 14.41N/mm and transversal stiffness of 1.51N/mm, both in a straight and fully extended configuration. Despite high loads along the axial direction of the SEE, no discontinuous buckling behaviour of the SFAs is observable. This is demonstrated in Fig. \ref{fig:Buckling}. The force-displacement relationships and their corresponding linear interpolations are shown for 0\%, 50\% and 100\% extension and depictions of the SEE at the corresponding maximum loads are presented. Whilst a slight increase in the nonlinearity between force and displacement is observable for 100\% extension (the corresponding mean absolute errors between data and linear interpolation are 0.84N, 0.62N and 1.16N for 0\%, 50\% and 100\% extension), no discontinuities are identifiable. The depictions of the deformed SEE show how the forced S-shape bending of the SFAs helps to prevent buckling. An increase in axial force only causes the curvature of the S-bend to increase. \subsection{Workspace} The workspace of the SEE in position and orientation is shown in Fig. \ref{fig:WSPos}. The figures show the tip pose acquired by the EM tracker for any given SFA configuration. The required workspace in position and orientation, $\boldsymbol{\delta}_{req}$ and $\theta_{req}$, obtained in Section \ref{ClinicalData} from clinical data is projected into the center of the SEE workspace. The deflected workspace $\boldsymbol{\delta}_f$ is calculated from the results obtained in Section \ref{Stiffness}. It can be seen that the SEE exhibits a minimum transversal stiffness of $1.51\text{N}/\text{mm}$ and a minimum axial stiffness of $14.41\text{N}/\text{mm}$ at 100\% extension. Taking into account the mean external load applied to the tip, a possible additional deflection of \[ \boldsymbol{\delta}_f = \begin{bmatrix} 14.41& 0\\ 0& 1.51 \end{bmatrix}^{-1} \begin{bmatrix} 8.01\\ 4.42 \end{bmatrix} = \begin{bmatrix} 0.56\\ 2.93 \end{bmatrix} \] is obtained. Thus, the workspace the SEE is required to achieve extends correspondingly to \begin{equation} \boldsymbol{\hat{\delta}}_{req} = \boldsymbol{\delta}_{req} + \boldsymbol{\delta}_f = \begin{bmatrix} 5.78\\ 10.68 \end{bmatrix} \end{equation} Whilst in some instances larger motions have to be achieved, the derived values represent a baseline motion range desirable from the SEE. \textcolor{black}{To quantify whether the SEE is able to reach the desired workspace, the intersections between the requirement and SEE workspace volumes are computed. It can be seen that for the unloaded requirements in translation and rotation, $\boldsymbol{\delta}_{req}$ and $\theta_{req}$, the SEE can accomplish 100\% of the workspace.
For the workspace adapted to account for an external force, $\boldsymbol{\hat{\delta}_{req}}$, the robot achieves 95.18\% of the required workspace.} It is shown that a maximum combined lateral deflection of 19.01mm can be reached along the principal plane of $SFA_3$, which is about 4.5\% lower than the maximum transversal motion \textcolor{black}{observed in manual scanning}. The maximum extension of the SEE of 22.13mm is reached for a full inflation of all SFAs \textcolor{black}{and exceeds the demanded axial translation of 10.22mm as well as the transversal translation of 19.91mm determined from the clinical data}. The maximum tilt of the SEE is reached along the principal plane of $SFA_1$ with 14.02$\degree$\textcolor{black}{, which is $\approx9\%$ greater than the maximum demanded tilt of 12.9$\degree$}. A maximum axial torsion of 1.03$\degree$ occurs. Compared to the tilt ranges in X and Y, the twist is significantly lower and will therefore be ignored in the following investigations. \textcolor{black}{The coupling between translation and rotation (i.e., the bending) of the SEE upon actuator inflation is shown in Fig. \ref{fig:WSPos}e) for a cross-section of the workspace along the central x-z-plane in translation and the corresponding y-axis of the rotational workspace. It can be seen that the rotation of the tip increases with the amount of transversal translation, whilst axial extension has no effect on the rotation.} \begin{table}[h!] \centering \caption{Repeatability} \label{tab:Repeatability} \begin{tabular}{@{}cccc@{}} \toprule Pose & \textcolor{black}{$[V_1, V_2, V_3]$}& $||\delta_e|| [\text{mm}]$ & $||\theta_e|| [\degree]$ \\ \midrule \textcolor{black}{$C_1$}&$[0\%, 0\%, 0\%]$& $0.07 \pm 0.05$& $0.03 \pm 0.02$\\ \textcolor{black}{$C_2$}&$[75\%, 50\%, 75\%]$& $0.10 \pm 0.05$ & $0.05 \pm 0.02$\\ \textcolor{black}{$C_3$}&$[25\%, 0\%, 100\%]$& $0.07 \pm 0.03$ & $0.06 \pm 0.03$\\ \textcolor{black}{$C_4$}&$[50\%, 25\%, 0\%]$& $0.08 \pm 0.06 $& $0.04 \pm 0.02$\\ \textcolor{black}{$C_5$}&$[70\%, 80\%, 25\%]$& $0.09 \pm 0.04$ & $0.06 \pm 0.03$\\ \textcolor{black}{$C_6$}&$[0\%, 20\%, 70\%]$&$ 0.11 \pm 0.05$ & $0.07 \pm 0.03$\\ \midrule \multicolumn{2}{c}{$\mu$ } & 0.09 & 0.05\\ \bottomrule \end{tabular} \end{table} The results of the positioning repeatability evaluation are presented in Table \ref{tab:Repeatability}. The table indicates the mean Euclidean errors in position and orientation, with their respective standard deviations, over the 50 repetitions \textcolor{black}{with respect to the mean pose for the given configuration, $\mu(\boldsymbol{x}(C_j))$}. \textcolor{black}{For a configuration $C_j$, for instance, the Euclidean error $||\delta_e||$ is computed as \begin{equation} ||\delta_e|| = \sum_{i=1}^{n=50}\frac{||\boldsymbol{x}_i-\mu(\boldsymbol{x}(C_j))||}{n} \end{equation} The pose $\boldsymbol{x}_i$ for a given configuration $C_j$ is obtained by averaging the measured static tip pose over a period of 4 seconds. The orientation error $||\theta_e||$ and both corresponding standard deviations are calculated in the same manner. Whilst the measured repeatability of the SEE, $\approx 0.1$mm in position and $0.05\degree$ in orientation, is slightly below the rated accuracy of the EM tracking system (0.48mm and 0.30$\degree$ RMS \cite{NDI2013}), averaging the pose data over 4 seconds reduces noise-related variance in the data.
The samples are normally distributed across the workspace and thus the time-averaged mean is assumed to represent the tip pose sufficiently.} \begin{figure}[t] \centering \includegraphics[width = \linewidth]{Linde15.pdf} \caption{Workspace generated with model in position (a) and orientation (b). The colour indicates the normalized Euclidean error in the given state with respect to the maximum deviation from the model\textcolor{black}{, which is 2.37mm in position and 2.46$\degree$ in orientation}.} \label{fig:ModelValidation} \end{figure} \subsection{Model validation} \label{Model_validation} The results of the model validation are shown in Fig. \ref{fig:ModelValidation} and summarized in Table \ref{tab:ModelValidation}\textcolor{black}{, where $\mu$ refers to the mean error, $\sigma$ to the standard deviation and $max$ to the maximum error}. The estimated workspace of the SEE generated with the kinetostatic model is shown in Fig. \ref{fig:ModelValidation}. The colour of each marker indicates the Euclidean distance between the calculated point and the corresponding measured pose, normalized to the maximum error in position and orientation respectively, namely 2.37mm and 2.46$\degree$. \textcolor{black}{The Young's modulus of the SFA material $E$ and its area moment of inertia $I$ have been manually tuned to minimize the Euclidean error in position and orientation. The obtained values are shown in Table \ref{tab:ModelParameters}.} Overall, with a mean Euclidean error of $1.18\pm0.29$mm in position and $0.92\pm0.47\degree$ in orientation, the model validation shows good results in predicting the tip pose under SFA extension. \begin{table}[h!] \centering \caption{Model validation} \label{tab:ModelValidation} \begin{tabular}{@{}ccccccc@{}} \toprule & \multicolumn{3}{c}{Displacement [mm]} & \multicolumn{3}{c}{Tilt [$\degree$]} \\ \cline{2-7} & $\mu$ & $\sigma $ & max & $\mu$ & $\sigma $ & max \\ \midrule $e_x$ & -0.81 & 0.20 & 1.25 & 0.04 & 0.44 & 1.34 \\ $e_y$ & -0.55 & 0.47 & 1.60 & 0.05 & 0.86 & 2.26 \\ $e_z$ & -0.05 & 0.50 & 1.80 & -0.13 & 0.36 & 1.36 \\ $||e||$ & 1.18 & 0.29 & 2.37 & 0.92 & 0.47 & 2.47 \\ \bottomrule \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Linde16.pdf} \caption{Effect of axial loading on transversal motion} \label{fig:Contact} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Linde17.pdf} \caption{Example of tracked trajectory under external loading. The normal force is represented with a scale of 0.2mm/N} \label{fig:PositionControl} \end{figure} \subsection{Contact experiment} The motion constraint induced by an indentation contact is investigated. Fig. \ref{fig:Contact} shows the constraint of the mean X-displacement and Y-tilt for a given motion over 10 repetitions, normalized to \textcolor{black}{their respective} maximum value\textcolor{black}{s of 12.92mm in position and 8.67$\degree$ in orientation when no external force is present, as well as their corresponding linear interpolations}. The transversal force applied by the SEE is measured with the force-torque sensor. For both the displacement and the tilt, the magnitude declines monotonically. Whereas the displacement reaches a minimum at 27.84\%, the tilt remains less affected by the lateral force with a minimum of 55.35\%. Linearizing the trends yields a decrease of 14.09$\%/\text{N}$ for the displacement and only 8.56$\%/\text{N}$ for the tilt. \begin{table}[t!]
\centering \caption{Position control results} \label{tab:PositionControl} \begin{tabular}{@{}ccccccc@{}} \toprule & \multicolumn{3}{c}{Flat - Unloaded}& \multicolumn{3}{c}{Tilted - Unloaded} \\ \cline{2-7} & $\mu$ & $\sigma$ & max & $\mu$ & $\sigma$ & max\\ \midrule $e_x$ [mm] & 0.20 & 0.19 & 0.98 & 0.19 & 0.19 & 0.82\\ $e_y$ [mm] & 0.25 & 0.20 & 1.03 & 0.27 & 0.20 & 0.88\\ $e_z$ [mm] & 0.11 & 0.10 & 0.52 & 0.13 & 0.07 & 0.35\\ $||e||$ [mm] & 0.34 & 0.29 & 1.51 & 0.36 & 0.28 & 1.25\\ \midrule & \multicolumn{3}{c}{Flat - Loaded}& \multicolumn{3}{c}{Tilted - Loaded}\\ \cline{2-7} & $\mu$ & $\sigma$ & max & $\mu$ & $\sigma$ & max\\ \midrule $e_x$ [mm] & 0.24 & 0.23 & 1.10 & 0.27 & 0.28 & 1.50 \\ $e_y$ [mm] & 0.33 & 0.22 & 1.08 & 0.32 & 0.25 & 1.07 \\ $e_z$ [mm] & 0.13 & 0.10 & 0.56 & 0.17 & 0.10 & 0.65 \\ $||e||$ [mm] & 0.42 & 0.33 & 1.64 & 0.45 & 0.39 & 1.95 \\ \bottomrule \end{tabular} \end{table} \subsection{Position control} An example of a tracked trajectory with external loading is shown in Fig. \ref{fig:PositionControl}. The position controller tracks the desired position accurately, with marginally larger tracking errors around the corners of the triangular path. The quantitative results of the controller evaluation for the three executions are presented in Table \ref{tab:PositionControl} for both the unloaded and loaded trajectories, \textcolor{black}{where, as in Section \ref{Model_validation}, $\mu$ refers to the mean error, $\sigma$ to the standard deviation and $max$ to the maximum error in the respective direction}. The results indicate a lower mean error for the z-direction regardless of the configuration, which is also observable in the visualization above. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Linde18.pdf} \caption{Ultrasound images acquired by sonographer (a-c) and SEE (d-f) for HC (a,d), AC (b,e) and FL (c,f) measurements} \label{fig:USImages} \end{figure} \subsection{Teleoperation and image acquisition} \label{ImageAcquisition} The images obtained through manual ultrasound probe placement and through steering with the SEE are presented in Fig. \ref{fig:USImages}. Anatomical structures of the foetus phantom are clearly visible throughout all images, with minor shadowing on the left side of the FL standard view-plane, outside of the region of interest. In both cases, the regions of interest are centered in the image. Moreover, the contrast in the robot-acquired images is similar to the one in the manually-obtained images. \section{Discussion} In this work we developed a soft robotic ultrasound imaging system \textcolor{black}{to offload sonographers in day-to-day scanning routines. The system addresses the issue of providing a stable contact between the ultrasound probe and patient, which could help improve sonographers’ ergonomics, particularly with respect to work-related musculoskeletal disorders which arise from stresses induced by repeated manual probe handling.} The robot allows for tele-operated scanning and provides a platform for advanced imaging approaches. It is designed in the form of an end-effector which is manually positioned in the area of interest and actively steered towards the desired view-plane. Due to its inherent compliance, the SEE is able to maintain contact while exhibiting sufficient axial stiffness to ensure mechanical coupling for the ultrasound image acquisition\textcolor{black}{, which is verified by acquiring standard views on a foetal ultrasound phantom}.
The system shows, with its high axial and low lateral stiffness, good applicability to foetal ultrasound scanning. Despite the quick decline of stiffness with axial extension, the SEE is, with $14.41\text{N}/\text{mm}$ axial stiffness at full extension, still capable of applying sufficiently high forces to the patient without significant deformation, \textcolor{black}{which is approximately 1.44mm at the maximum axial load of 20.77N}. The lower lateral stiffness allows the system to adapt to the contact surface and to be moved away in case of discomfort in the patient, \textcolor{black}{whilst being sufficiently high to counteract transversal loads occurring during the intervention. It can be seen that for the fully extended SEE the transversal displacement at the maximum occurring load of 10.67N reaches 7.1mm.} \textcolor{black}{The compliance of the system allows for deformation upon external motion when clamped onto a patient. Thus, the resulting contact force is significantly lower compared to a rigid system. The system furthermore exhibits a low mass, which could be beneficial for its dynamic safety \cite{Haddadin2008}.} \textcolor{black}{If the stiffness in the axial direction of the probe needs to be adjusted or the tip force controlled, the system can either be equipped with a force sensor at the base to estimate tip forces or serve as a sensor itself \cite{Lindenroth2017IntrinsicActuation}. While in the first case the tip pose change during the operation needs to be accounted for to accurately determine the external force, either by an accurate model or by pose feedback, the second case can make use of the deformable structure of the robot paired with the kinematic constraints induced by the actuation channels to infer the external force.} We have shown that the integration of a braided nylon mesh, which has previously only been used to avoid ballooning in SFAs, can significantly improve the twist stiffness of the SEE, by up to a factor of three in comparison to the mesh-free system. \textcolor{black}{The use of braided meshes is a highly versatile design approach and shows the potential to become a de facto standard for reinforcing not only soft robotic systems but also continuum robots against unwanted external twists induced by contact wrenches, thus enabling such robots for a wider range of applications.} \textcolor{black}{Without external loading, the workspace achieved by the SEE covers the \textcolor{black}{average} translation and rotation motion ranges required to achieve a desired view, as derived from the clinical data. Loading the probe with the contact forces measured in clinical scans and assuming the lowest possible stiffness of the system reduces the achieved workspace to about 95.18\% of the \textcolor{black}{mean required range}. Whilst\textcolor{black}{, for example,} the maximum translation of the SEE (19.01mm) is significantly higher than the required deflected motion of 10.68mm, the non-homogeneous shape of the SEE workspace dictates the limitations in covering the required translation range.} This limitation could be addressed by adding a linear stage to the base of the SEE to allow for axial translation without sacrificing the softness of the system. \textcolor{black}{Moreover, an axial rotation stage could be added to allow for more complex probe motions.} \textcolor{black}{A high variability in the monitored ultrasound probe motion ranges can be observed across the obtained views and subjects.
Whilst, on average, relatively small maximum deflections are observed, in some instances significantly larger motions occur. This is indicated by the high standard deviations in the motion ranges of the respective axes. Further research needs to be conducted into the exact metrics of the ultrasound probe motions and whether the designed system can satisfy those metrics. Additional considerations such as the coupling between different motion axes then need to be accounted for. Another factor in the feasibility of a desired view is the accuracy of the manual placement of the passive positioning arm. If the accuracy is low and the view is out of reach of the end-effector, the passive arm could either be repositioned manually or additional DOFs could be added to the system. More accurate methods should be employed in evaluating the manual probe motions. The use of a percentile is difficult for the given data due to the high variability in the times required to obtain desired views, as seen for example in the presented time series for the motions of subject 5 in Fig. \ref{fig:ClinicalData}. Thus, a larger-scale and more streamlined data acquisition needs to be conducted.} \textcolor{black}{We showed that the combination of SFAs and hydraulic actuation exhibits good properties for the SEE to be driven in an open-loop configuration. The relationship between SFA length and input volume is highly linear and only shows 0.14$\pm$0.05mm deviation due to hysteresis, thus allowing for an accurate prediction of the kinematic constraints imposed on the SEE. This complements the derived kinetostatic model, which is able to accurately predict the SEE tip motion with an accuracy of 1.18mm in position and 0.92$\degree$ in orientation as a function of the induced working fluid volume. The model deviates more along the boundaries of the workspace, which could be caused by the larger deflection of the SFAs and the resultant nonlinearities caused by the bending of the actuators. This could be addressed by extending the model to a nonlinear approach, as we have for example demonstrated in \cite{Lindenroth2017IntrinsicActuation} for a soft continuum robot.} \textcolor{black}{The repeatability, 0.1mm in position and 0.05$\degree$ in orientation, lies slightly below the rated accuracy of the measurement system. As the obtained measurements are expressed relative to a mean pose, averaged over time and normally distributed, it is assumed that these values still represent the true pose well. The high repeatability should allow for accurate positioning of the SEE in view-plane finding applications.} \textcolor{black}{The system maintains stability and controllability well when in contact with a tissue-like soft silicone rubber patch. We showed that the implemented closed-loop position controller is able to track target trajectories accurately with a mean position error of 0.35mm, with only marginally increased tracking errors of 0.44mm when a contact force is applied. In scenarios where EM tracking is not available, the ultrasound image could be used to provide pose feedback. This could then be employed as a substitute for the position feedback in the closed-loop controller.} The coupling between position and orientation is an obvious limitation in the usability of the design. It can be seen, however, that the mechanical properties of the surface contact greatly affect the coupling behaviour. We have shown that an indenting contact reduces the lateral motion of the ultrasound probe significantly more than the tilt.
It can easily be seen that a very stiff coupling, in combination with the minimal contact friction caused by the application of ultrasound gel, greatly reduces the tilt capabilities of the system while allowing for lateral sliding. It can therefore be assumed that in practice the coupling can be reduced by varying the axial pressure applied to the patient. This is supported by the findings of the tele-operated image acquisition in Section \ref{ImageAcquisition} and will be investigated further in future research. \section{Conclusion} The SEE design proposed in this work \textcolor{black}{presents a novel approach to applying soft robotics technologies in medical ultrasound imaging.} We have shown that under certain conditions the SEE satisfies the requirements imposed by the clinical application. The derived kinetostatic model adequately mimics the behaviour of the physical robot, and the integrated system is capable of tracking target trajectories accurately and obtaining high-quality ultrasound images of a prenatal ultrasound phantom. In our future work, we will make use of the hydraulic actuation to integrate a force-controlled system through intrinsic force sensing, as shown in our previous work \cite{Lindenroth2017IntrinsicActuation}.
\section{Goldstone modes counting without Lorentz invariance} Different low-energy states that can be discriminated from each other must have the same energy if they are related by a symmetry of the free energy. In particular, if a ground state breaks a symmetry of the free energy, all states that can be obtained by recursively applying this symmetry to the ground state are also possible ground states. This is how the comparative study of the symmetries of the states that the system can reach, and those of the free energy, gives insight into the degeneracies of the spectrum of the theory. In the following, we shall restrict ourselves to systems simple enough that more refined arguments about Higgs mechanisms, or other subtle ways to generate gaps \cite{Nicolis13,Nicolis13a}, are not necessary. Then the low-energy spectrum can be directly read from the spontaneous symmetry-breaking pattern. Obviously, not all the symmetries need to be broken by the ground state, i.e., there can be residual symmetries. In the case when the broken symmetries are associated with continuous transformations, it is possible to relate any pair of ground states by a continuous path of other ground states. That is, the spectrum includes massless excitations, which are the \textit{Goldstone modes}. Because of their massless nature, Goldstone modes are not screened at large distances, and therefore they play a crucial role in the infrared physics of the system. In relativistic systems, Lorentz symmetry imposes on these modes a dispersion relation of the form $\omega=q c$; however, this no longer holds for non-relativistic systems. For example, the ferromagnetic magnon has a low-energy dispersion relation of the form $\omega\sim q^2$ \cite{Nielsen76}. Hence for systems without Lorentz invariance, the counting rule should give access not only to the number of Goldstone modes, but also to their type of dispersion relation. The counting of Goldstone modes can be related to the algebra of the symmetry groups of the free energy and the ground states by Goldstone's theorem \cite{Goldstone62}: let $G$ be the symmetry group of the free energy, and let $H$ be the group of symmetries of the ground states, namely the group of residual symmetries, with $H\subset G$ in light of the above. Goldstone's theorem (in its original version) states that the number of Goldstone modes is simply given by dim$(G/H)$. However, we must note the following: \begin{itemize} \item this rule cannot be trivially extended to non-relativistic systems. Indeed, in ferromagnets for example, $G=$ O$(3)$ is broken into $H=$ O$(2)$ (the ground state is still invariant with respect to rotations of axis in the direction of the spontaneous magnetization). The ferromagnetic state thus breaks two generators of $G$, but there is only one magnon \cite{Nielsen76}. \item As Goldstone's theorem is intended only for relativistic systems, it does not give any information about the dispersion relations, which are crucial characteristics of the massless modes in non-relativistic theories. It was first pointed out by Nielsen and Chadha that the infrared behavior of the Goldstone bosons has a direct influence on the number of generated massless modes: to continue with the example of magnets, ferromagnets have one magnon with a quadratic dispersion relation $\omega\sim q^2$, whereas antiferromagnets have two magnons with a linear dispersion relation $\omega\sim q$, while in both cases two rotation generators are broken by the ground state \cite{Nielsen76}.
\item The knowledge of the algebra of broken generators is not sufficient in general. Indeed, two different broken generators can generate the same excitation on a given ground state (see Fig. \ref{figLM} for a simple example), and therefore they are not associated with different Goldstone modes \cite{Low02}. This is all the more important for systems that possess spacetime symmetries, which frequently generate non-independent transformations of the ground states \cite{Kharuk18}. \end{itemize} \begin{figure} \begin{center} \includegraphics[scale=1.35]{LM.png} \end{center} \caption{Left: effect of a rotation on a line. Right: the action, on the same line, of a combination of local translations induces the exact same transformation of the line. It is one of the simplest examples in which mathematically independent transformations can induce similar excitations of a given state. This example is discussed in \cite{Low02}.} \label{figLM} \end{figure} It can be understood in light of the above that the central quantity involved in the counting of Goldstone bosons should include more information than the mere algebra of the broken generators. Let us define the matrix $\rho$ of the \textit{commutators} of \textit{independent} broken generators $\left\{Q_i\right\}$ \textit{evaluated} in the ground state $\left|0\right>$ \cite{Watanabe12}: \begin{equation} \rho_{ab}=\left<[Q_a,Q_b]\right>\ . \end{equation} It is also important to discriminate the generated massless modes according to their dispersion relation: we will call type-A Goldstone bosons those that have a linear dispersion relation, and type-B those whose dispersion relation is quadratic (the rigorous definition is a little bit more subtle, but the difference is irrelevant to our purpose \cite{Watanabe14}). The numbers $n_A$ and $n_B$ of each type of Goldstone modes are then given by the following formulas \cite{Watanabe12}: \begin{equation} \label{eqWata} \left\{\begin{split} &n_A = \text{dim}(G/H)-\text{rank}(\rho)\\ &n_B = \frac{\text{rank}(\rho)}{2}\ . \end{split}\right. \end{equation} Note that in the relativistic case, the existence of a non-zero expectation value of the commutator in the ground state would break Lorentz invariance \cite{Watanabe14}; thus, Lorentz symmetry requires $\rho=0$, and we recover the original Goldstone's theorem with dim$(G/H)$ type-A massless modes. As a less obvious example, consider an ideal crystalline solid in $D$ dimensions. The ground state of the system is given by the periodic lattice that breaks both the translation and rotation symmetries of the free-energy (given in that case by the theory of elasticity \cite{Landau90}). The residual symmetry group $H$ is the discrete subgroup of symmetry of the crystalline lattice, denoted $\mathcal{C}$. The spontaneous symmetry breaking mechanism characterizing the crystalline ground state can be written as follows: \begin{equation} \label{eqM1} \text{\underline{\textbf{Mechanism 1:}}} \quad \text{ISO}(D)\rightarrow \mathcal{C}\ , \end{equation} where ISO$(D)$ is the group of isometries. There are $D$ broken translation generators, as well as $D(D-1)/2$ broken rotation generators. However, the actions of translations and rotations on the ground state do not generate independent excitations \cite{Watanabe13,Beekman17a}, and the group of \textit{independent} broken generators reduces to the broken translations. 
Because translations commute with each other, $\rho=0$ and the counting rule Eq.(\ref{eqWata}) gives $D$ type-A Goldstone bosons, corresponding to the well-known $D$ acoustic phonons of crystals, and not $D+D(D-1)/2=D(D+1)/2$ Goldstone bosons as a naive application of Goldstone's theorem would have given. This simple example sheds light onto two properties of the counting rule Eq.(\ref{eqWata}). First, for non-Lorentz-invariant systems, the total number of Goldstone modes does not need to be equal to the total number of broken generators, even if all of them have a linear dispersion relation (the equality occurs only if we suppose further that the actions of each of the generators on the ground state are independent). Second, a non-trivial algebra of independent broken generators is required for the existence of type-B Goldstone bosons. \section{Spontaneous symmetry breaking in the flat phase} \label{secII} In this section, we study the application of the Goldstone counting rule Eq. (\ref{eqWata}) to the flat phase. Note that we are concerned here with the symmetries of the zero-temperature ground state of the crystalline membrane, which is still a periodic crystal. The influence of thermal fluctuations on the stability of such a state is discussed in the next section. In contrast to usual crystals, a fundamental property of crystalline membranes is their ability to fluctuate in a bigger embedding space, whose dimension will be called $d$. This paves the way to more complex spacetime symmetry breaking patterns, which, as we can already anticipate, will be of paramount importance to explain the quadratic dispersion relation of flexurons (see below). The massless fluctuation spectrum of the flat phase of crystalline membranes is well-known. It includes the following \cite{Nelson04}: \begin{itemize} \item $D$ acoustic phonons, which are type-A Goldstone bosons, as in any crystal. \item $d-D$ flexurons, which are type-B Goldstone bosons, and therefore much stronger than the phonons in the infrared limit. \end{itemize} There is no clear consensus in the literature on the spontaneous symmetry-breaking pattern associated with the flat phase to the best of our knowledge \cite{Aronovitz89,Zanusso14,Guitter89}. Note that most of these patterns were proposed at a time when the Goldstone counting rule was not known. In the following, we will only discuss the most widely used mechanism, which is also the one that best respects the symmetries of the flat phase (see Fig. \ref{figSymPlan}), and argue why we think it is not correct. Consider the following symmetry breaking pattern \cite{Guitter89}: \begin{equation} \label{eqM2} \text{\underline{\textbf{Mechanism 2:}}} \quad \text{ISO}(d)\rightarrow\text{ISO}(D)\times \text{SO}(d-D)\ . \end{equation} The free-energy is invariant under all isometries of the embedding space, forming the group ISO$(d)$, and the flat phase configuration is an infinite plane, which is still invariant under the plane isometries ISO$(D)$ as well as the rotations of SO$(d-D)$, which act only on directions of the embedding space that are all orthogonal to the flat phase plane (see Fig. \ref{figSymPlan} for a picture for the physical dimensions $D=2$, $d=3$). \begin{figure} \begin{center} \includegraphics[scale=1]{SymPlan.png} \end{center} \caption{Symmetries of the infinite plane in three dimensions: the plane is invariant under two translations and one rotation (in blue in the figure), but it breaks the other two rotations and the remaining translation (in red). 
The SO$(d-D)$ group reduces to the discrete $\mathbb{Z}_2$ group corresponding to reversing the up and down sides of the plane.} \label{figSymPlan} \end{figure} The set of broken symmetry generators thus contains the following: \begin{itemize} \item the $d-D$ translations in the directions orthogonal to the flat phase plane, denoted $\left\{P_\alpha\right\}_{\alpha\in[\![D+1;d]\!]}$. In the following, Greek letters denote indices in $[\![1;d]\!]$, whereas Latin ones run only over $[\![1;D]\!]$. \item the $D\times(d-D)$ \textit{mixed} rotations that bring the vector giving the $i^{th}$ direction inside the flat phase plane to the direction given by the $\alpha^{th}$ vector of the embedding space outside the membrane's plane, $\left\{J_{i\alpha}\right\}_{i\in[\![1;D]\!] ,\alpha\in[\![D+1;d]\!]}$. \end{itemize} The commutation relations between these generators are given by the $\text{iso}(d)$ algebra of the isometry group \cite{Schwarz71}: \begin{equation} \label{eqisod} \begin{split} [J_{\mu\nu},J_{\alpha\beta}]&=\delta_{\nu\alpha}J_{\mu\beta}-\delta_{\mu\alpha}J_{\nu\beta} -\delta_{\nu\beta}J_{\mu\alpha}+\delta_{\mu\beta}J_{\nu\alpha}\\[0.2cm] [J_{\mu\nu},P_{\theta}]\ \,&=\delta_{\nu\theta}P_{\mu}-\delta_{\mu\theta}P_{\nu}\\[0.2cm] [P_\mu,P_\nu]\ \ \,&=0\ . \end{split} \end{equation} The next question one should answer is which of these generators act independently on the ground state $\left|0\right>$. Consider the action of a mixed rotation on the ground state: \begin{equation} J_{i\alpha}\left|0\right>=\big(x_iP_\alpha-x_\alpha P_i\big)\left|0\right> =x_i P_\alpha\left|0\right>\ . \end{equation} The second equality follows from the fact that the translations inside the flat phase plane are not broken. Hence, the action of a mixed rotation can always be canceled by a combination of broken translations. The set of independent broken generators thus reduces to $\left\{P_\alpha\right\}_{\alpha\in[\![D+1;d]\!]}$, and the counting rule Eq.(\ref{eqWata}) yields: \begin{equation} \label{eqG2} \left\{\begin{split} &n_A=d-D\\ &n_B=0\ , \end{split}\right. \end{equation} which is not consistent with the expected spectrum for the flat phase of crystalline membranes. This computation is quite enlightening though, since it underlines the necessity of having a non-trivial algebra of broken generators to describe type-B Goldstone bosons such as the flexurons in crystalline membranes. Looking more thoroughly at Eq. (\ref{eqG2}), we notice that the number of generated Goldstone bosons is equal to the number of directions of the embedding space orthogonal to the flat phase plane, which indicates that they must be flexurons (but not with the expected dispersion relation): the mechanism in Eq. (\ref{eqM2}) misses the phonons. Thus, this mechanism seems more appropriate to describe the flat phase of fluid membranes \footnote{Such an ordered phase only exists at $T=0$ however, because of Hohenberg-Mermin-Wagner's theorem.} in which only flexurons are at play. As a matter of fact, the constituents of an incompressible fluid membrane are free to diffuse on its surface, thereby forbidding the definition of any particular reference state by the position of the molecules, and therefore any kind of positional order. Such materials thus do not have acoustic phonons. 
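As a side remark, the counting rule Eq. (\ref{eqWata}) lends itself to a straightforward numerical evaluation once the matrix $\rho$ of a given symmetry-breaking pattern is known. The following minimal sketch (not part of the original analysis; it uses NumPy, and the ground-state magnetization $m$ is a toy parameter) evaluates the rule for the present mechanism and, as a contrast, for the Heisenberg ferromagnet recalled in the previous section:
\begin{verbatim}
import numpy as np

def count_goldstone(dim_coset, rho):
    """Counting rule: n_A = dim(G/H) - rank(rho), n_B = rank(rho)/2."""
    r = np.linalg.matrix_rank(rho)
    return dim_coset - r, r // 2

# Mechanism 2 with d = 3, D = 2: the only independent broken generators
# are the d - D external translations, which commute, so rho vanishes.
d, D = 3, 2
print(count_goldstone(d - D, np.zeros((d - D, d - D))))  # (1, 0): one type-A mode

# Contrast: Heisenberg ferromagnet, <[S_x, S_y]> = i<S_z> with <S_z> = m != 0.
m = 1.0
rho_ferro = 1j * m * np.array([[0.0, 1.0], [-1.0, 0.0]])
print(count_goldstone(2, rho_ferro))                     # (0, 1): one type-B magnon
\end{verbatim}
The first call reproduces the single type-A mode of Eq. (\ref{eqG2}), the second the single quadratic magnon of the ferromagnet.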
The absence of acoustic phonons in fluid membranes can indeed be checked for consistency with usual models: from the Canham-Helfrich free-energy \cite{Canham70,Helfrich73}, the flexuron's propagator $G(q)$ has the following asymptotic behavior in the infrared limit: \begin{equation} \label{eqh} G(q)=\left<h(q)h(-q)\right>\underset{q\rightarrow0}{\sim}\frac{1}{\sigma q^2}\ , \end{equation} where $\sigma$ is the tension of the membrane. Such behavior is typical of type-A Goldstone bosons. One could argue that, according to Eq.(\ref{eqh}), another type of behavior could be expected in tensionless fluid membranes, but this does not hold: even if it is absent at the microscopic scale, $\sigma$ is generated by the renormalization group flow when going towards the infrared regime \cite{Nelson04}. Our Goldstone modes analysis leads to the following complementary argument: the tension term is not protected by the symmetries of the system, and therefore cannot be consistently enforced to be zero. As we have seen in Eq.(\ref{eqM1}), the origin of acoustic phonons in crystals is the breaking of the isometry group by the discrete group of the crystalline lattice $\mathcal{C}$. This must also hold for crystalline membranes in which, although not well preserved by the thermal fluctuations, a crystalline lattice is still present in the flat phase. In light of these arguments, it seems reasonable to take a closer look at the following symmetry-breaking pattern: \begin{equation} \label{eqM3} \text{\underline{\textbf{Mechanism 3:}}} \quad \text{ISO}(d)\rightarrow\mathcal{C}\times \text{SO}(d-D)\ . \end{equation} Note that the imprint of the discrete symmetry group $\mathcal{C}$ is still present even in the continuum theory of crystalline membranes: in \cite{David88} for example, the transition to the flat phase is presented as the generation of a non-zero expectation value for the metric of the flat phase $g_{ij}^0$ which then characterizes the ground state. Because the membrane is assumed to be homogeneous and isotropic, its metric in the ground state has to be proportional to $\delta_{ij}$. The proportionality coefficient $\zeta^2$, called the extension parameter, characterizes the unit of length inside the membrane's plane. The presence of this fixed reference metric is one of the key differences between crystalline membranes and fluid ones. The broken symmetry generators in the mechanism given by Eq.(\ref{eqM3}) are thus as follows: \begin{itemize} \item the $d-D$ external translations $\left\{P_\alpha\right\}_{\alpha\in[\![D+1;d]\!]}$. \item the $D$ internal translations $\left\{P_i\right\}_{i\in[\![1;D]\!]}$. \item the $D\times(d-D)$ mixed rotations, mixing the internal and external spaces $\left\{J_{i\alpha}\right\}_{i\in[\![1;D]\!],\alpha\in[\![D+1;d]\!]}$. \item the $\frac{D(D-1)}{2}$ internal rotations $\left\{J_{ij}\right\}_{(i,j)\in[\![1;D]\!]^2}$. \end{itemize} The actions of internal translations and rotations are not independent, as in usual crystals. This time, however, the action of mixed rotations on the ground state can no longer be canceled by a carefully chosen combination of external translations (see Fig. \ref{figRotMix} for a picture in the one dimensional case). Indeed, whereas rotations always preserve the ground state metric, a combination of translations bringing the system into a similar plane would induce a dilation of the system (as an obvious application of Pythagoras' theorem shows) and therefore a flat phase state with a different extension parameter $\zeta$. 
This is a direct consequence of the presence of a microscopic lattice, or equivalently of the presence of phonons in the system. \begin{figure} \begin{center} \includegraphics[scale=1.35]{RotMix.png} \end{center} \caption{Left: action of a mixed rotation on a one-dimensional lattice. Right: a linear combination of translations can align the lattice on the same line, but the state of the system is different because of the dilation that the translations induce on the lattice due to Pythagoras' theorem.} \label{figRotMix} \end{figure} The internal space isometries can be used to relate the mixed rotations with the same external index $\alpha$, but different internal indices $i$. All in all, the total number of independently acting broken symmetry generators is as follows: $D$ internal translations $+$ $(d-D)$ external translations $+$ $(d-D)$ mixed rotations, so that finally: \begin{equation} \text{dim}(G/H)=2d-D\ . \end{equation} The commutation relations between these generators evaluated in a ground state are given as usual by the algebra of isometries $\text{iso}(d)$: \begin{equation} \label{eqisod2} \begin{split} &\left<[J_{\alpha i},J_{\beta i}]\right> \propto\left<J_{\alpha\beta}\right>=0\\ &\left<[P_{\alpha},P_{\beta}]\right>=0\\ &\left<[J_{\alpha i},P_{\gamma}]\right>\underset{\gamma\neq i,\gamma \neq\alpha}{=}0\\ &\left<[J_{\alpha i},P_{\alpha}]\right>=\left<P_i\right>\neq0\\ &\left<[J_{\alpha i},P_{i}]\right>=\left<P_\alpha\right>\neq0\ . \end{split} \end{equation} It is then possible to write $\rho$ in a basis where the determination of its rank is simple, even without knowing the precise value of the non-zero matrix coefficients, denoted "$*$" below. We choose to write it in a basis in which the $d$ translations are displayed first, followed by the $d-D$ rotations: \begin{equation} \rho=\left[ \begin{array}{ccccc|ccccc} 0 & 0 & 0 & 0 & ... & * & * & 0 & 0 & ... \\ 0 & 0 & 0 & 0 & ... & * & 0 & * & 0 & ... \\ 0 & 0 & 0 & 0 & ... & * & 0 & 0 & * & ... \\ 0 & 0 & 0 & 0 & ... & * & 0 & 0 & 0 & ... \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\ \hline * & * & * & * & ... & 0 & 0 & 0 & 0 & ... \\ * & 0 & 0 & 0 & ... & 0 & 0 & 0 & 0 & ... \\ 0 & * & 0 & 0 & ... & 0 & 0 & 0 & 0 & ... \\ 0 & 0 & * & 0 & ... & 0 & 0 & 0 & 0 & ... \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \end{array}\right]\ , \end{equation} which leads to: \begin{equation} \text{rank}(\rho)=2(d-D)\ , \end{equation} and finally, thanks to the counting rule Eq. (\ref{eqWata}), \begin{equation} \label{eqG3} \left\{\begin{split} &n_A=D\\ &n_B=d-D\ . \end{split}\right. \end{equation} Finally, we get $D$ type-A Goldstone modes with linear dispersion relation, corresponding to the acoustic phonons, and $d-D$ type-B Goldstone bosons with quadratic dispersion relation corresponding to the flexurons, which is exactly the spectrum of crystalline membranes that we recalled at the beginning of the section. The crucial difference between the second and third proposed mechanisms -- Eq. (\ref{eqM2}) and Eq. (\ref{eqM3}) respectively -- is the presence of the crystalline lattice discrete group in the latter, which allows one to keep some independent rotation generators, required to obtain a non-trivial algebra Eq. (\ref{eqisod2}) and thus a non-zero rank for $\rho$, which in turn allows for the existence of the type-B flexurons. The consequences are far-reaching since, among the proposed mechanisms, only the third one is associated with a stable ordered phase in two dimensions: phonons Eq. (\ref{eqM1}) or flexurons Eq. 
(\ref{eqM2}) alone cannot stabilize a long-range order of positional or orientational nature. Already at this stage, we can notice two main differences between mechanisms 1 and 2 on the one hand and mechanism 3 Eq. (\ref{eqM3}) on the other hand (although they are all three built from the same groups), namely that mechanism 3 has two different types of fluctuation modes, and it has type-B Goldstone modes. The rest of this paper is dedicated to the analysis of the consequences of these two differences. \section{Hohenberg-Mermin-Wagner's theorem} The spontaneous symmetry breaking pattern combined with the counting rule Eq.(\ref{eqWata}) gives access to the number of massless modes as well as their dispersion relation in the large distance limit in the ordered phase predicted by mean-field theory, but it is not sufficient to know if such an ordered phase is robust to thermal fluctuations. For that last purpose, the most useful tool is the Hohenberg-Mermin-Wagner theorem \cite{Hohenberg67,Mermin66}. In their original paper, Mermin and Wagner stress an important hypothesis for their theorem to apply: the interaction needs to have a short enough range. Namely, if $J$ is the coupling constant describing the strength of the interaction between the order parameter fields, $J(x)x^2$ must be an integrable function in $D$ dimensions \cite{Mermin66}. Let us test this hypothesis in the case of crystalline membranes. First, we need the action describing the small fluctuations around the flat phase. As stated before, crystalline membranes can be described as an elastic medium fluctuating in an embedding space, and their action thus contains both a curvature term, proportional to the bending rigidity $\kappa$ of the membrane, and an elastic term quadratic in the strain tensor $\varepsilon_{ij}$: \begin{equation} S=\int_x\left[\frac{\kappa}{2}\big(\partial^2\vec{r}\big)^2+\frac{c_{ijab}}{2}\varepsilon_{ij} \varepsilon_{ab}\right]\ , \end{equation} where $\int_x=\int d^Dx$ is an integral over the internal space of the membrane, $\vec{r}$ is the position vector describing the membrane, and $c_{ijab}$ is the elastic tensor, which can be expressed for example in terms of the Lam\'e coefficients as $\lambda \delta_{ij}\delta_{ab}+\mu(\delta_{ia}\delta_{jb}+\delta_{ib}\delta_{ja})$. The strain tensor $\varepsilon_{ij}$ can be expressed as half the difference between the metric in the current state of the membrane $g_{ij}=\partial_i\vec{r}.\partial_j\vec{r}$ and that of the flat phase reference state $g_{ij}^0=\zeta^2\delta_{ij}$. To build a theory of small fluctuations around the flat phase, it is necessary to expand $\vec{r}$ around the equilibrium configuration with extension parameter $\zeta$: \begin{equation} \vec{r}=\zeta x_i\vec{e}_i+\vec{u}+\vec{h}\ , \end{equation} where a basis of the flat phase plane $\{\vec{e}_i\}$ has been introduced. This expansion causes the phonons $\vec{u}$ and the flexurons $\vec{h}$ to appear explicitly. Using the fact that the phonons and flexurons vibrate in orthogonal spaces, the action, restricted to the most relevant terms, reads \cite{Aronovitz88}: \begin{equation} \label{eqFl} \begin{split} S=\int_x\bigg[&\frac{\kappa}{2}\big(\partial^2\vec{h}\big)^2 +\frac{c_{abcd}}{2}u_{ab}u_{cd}+\frac{c_{abcd}}{2}u_{ab}(\partial_c\vec{h}.\partial_d\vec{h})\\ &+\frac{c_{abcd}}{8}(\partial_a\vec{h}.\partial_b\vec{h})(\partial_c\vec{h}.\partial_d\vec{h})\bigg] \ , \end{split} \end{equation} where the symmetric tensor $u_{ab}=(\partial_au_b+\partial_bu_a)/2$ has been introduced. 
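For completeness, the structure of Eq. (\ref{eqFl}) can be recovered schematically (setting $\zeta=1$ for simplicity). With $\vec{r}=x_i\vec{e}_i+\vec{u}+\vec{h}$ and $\vec{h}$ orthogonal to the flat phase plane, the induced metric reads $g_{ij}=\delta_{ij}+\partial_iu_j+\partial_ju_i+\partial_i\vec{u}.\partial_j\vec{u}+\partial_i\vec{h}.\partial_j\vec{h}$, so that \begin{equation} \varepsilon_{ij}=\frac{1}{2}\big(g_{ij}-g^0_{ij}\big)=u_{ij}+\frac{1}{2}\,\partial_i\vec{h}.\partial_j\vec{h}+\frac{1}{2}\,\partial_i\vec{u}.\partial_j\vec{u}\ . \end{equation} Dropping the last term, which is less relevant in the infrared than the flexuron contribution, and inserting the result into the elastic term $\frac{c_{ijab}}{2}\varepsilon_{ij}\varepsilon_{ab}$ reproduces the three elastic terms displayed in Eq. (\ref{eqFl}).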
At first glance, it seems that the action Eq.(\ref{eqFl}) contains only local interactions between phonons and flexurons, which could lead one to conclude that the long-range orientational order is broken by thermal fluctuations. But in the original argument of Mermin and Wagner, there is only one fluctuation mode. We have seen that flexurons are the dominant modes in the infrared limit. Moreover, the action Eq.(\ref{eqFl}) is quadratic in the phonons; it is thus possible to perform an exact integration over the phonons to define an effective action with the flexurons as the only fields \cite{LeDoussal92}. For the sake of simplicity, we give it here in Fourier space, with implicit momentum conservation, and the shorthand notation $\vec{h}(k_i)=\vec{h}_i$: \begin{equation} \label{eqFeff} \begin{split} S_{\rm eff}&=\int_{k_1,k_2}\frac{\kappa}{2}\,k_1^4\big(\vec{h}_1.\vec{h}_2\big)\\ +&\int_{k_1,k_2,k_3,k_4}\bigg[\frac{\mathcal{R}_{abcd}(q)}{4}k_1^ak_2^bk_3^ck_4^d\big(\vec{h}_1.\vec{h}_2\big) \big(\vec{h}_3.\vec{h}_4\big)\bigg]\ , \end{split} \end{equation} with $\int_k=\int\frac{d^Dk}{(2\pi)^D}$ according to our convention for Fourier transforms, and $q=k_1+k_2=-k_3-k_4$. The price for working with one type of field only is that now the interaction vertex $\mathcal{R}$ is non-local. It depends only on two coupling constants, exactly like the elasticity tensor $c$, which can be made explicit by decomposing it onto the following set of orthogonal projectors: \begin{equation} \label{eqDefNM} \begin{split} & N_{abcd}(q) = \frac{1}{D-1}P^T_{ab}(q)P^T_{cd}(q)\\ & M_{abcd}(q) = \frac{1}{2}\big(P^T_{ac}(q)P^T_{bd}(q)+P^T_{ad}(q)P^T_{bc}(q)\big)-N_{abcd}(q)\ , \end{split} \end{equation} where $P^T_{ij}(q)=\delta_{ij}-q_iq_j/q^2$ is the projector in the direction orthogonal to $q$. The effective interaction vertex is then \cite{LeDoussal92}: \begin{equation} \label{eqReff} \mathcal{R}_{abcd}(q)=\frac{\mu(D \lambda+2\mu)}{\lambda+2\mu}N_{abcd}(q)+\mu M_{abcd}(q)\ . \end{equation} Note that in the particular case $D=2$, corresponding to physical membranes, the two projectors Eq.(\ref{eqDefNM}) are equal, and $\mathcal{R}$ depends on only one elastic constant, which turns out to be Young's modulus $\mathcal{Y}$ \cite{Nelson87}. The first proof of the long-range character of the interaction in Eq.(\ref{eqFeff}) was given by Nelson and Peliti \cite{Nelson87}, who showed that in two dimensions the interaction term $S_{int}$ of Eq.(\ref{eqFeff}) can be rewritten as an interaction between the local Gaussian curvatures $s(x)=\text{det}(\partial_i\partial_j h)$: \begin{equation} S_{int}=\frac{\mathcal{Y}}{16\pi}\int_{x,y}\mathcal{G}(x-y)s(x)s(y)\ , \end{equation} where the (non-local) interaction vertex between Gaussian curvatures $\mathcal{G}$ behaves as $\mathcal{G}(x)\simeq x^2 \ln(x/a)$ at large distance ($a$ being the lattice spacing), which is clearly a long-range type of interaction. To make the connection with the original work of Mermin and Wagner \cite{Mermin66}, we must first find the equivalent of the $J(x)$ interaction term. An order parameter associated with the flat phase is given by the extension parameter $\zeta$, which is always non-zero in the flat phase, and equal to zero in a completely disordered crumpled configuration. It can also be expressed as a function of the correlation between the tangent vectors to the surface generated by the membrane \cite{Guitter89}: \begin{equation} \zeta^2=\frac{1}{D}\left<\partial_i\vec{r}\right>.\left<\partial_i\vec{r}\right>\ . 
\end{equation} Finally, taking into account the fact that the action in Eq.(\ref{eqFeff}) is generated after an integration over the phonon fields, the analog of $J(x)$ in our model is the interaction between the $\partial h$ terms, which turns out to be $\mathcal{R}$. To test if the range of $\mathcal{R}$ is short enough for Hohenberg-Mermin-Wagner's theorem to apply, it must be reexpressed in direct space. We do not give here the full expression of $\mathcal{R}(x-y)$, but rather analyze the following elementary block: \begin{equation} P^T_{ab}(q)P_{cd}^T(q)=\delta_{ab}\delta_{cd}-\delta_{ab}\frac{q_cq_d}{q^2}-\delta_{cd}\frac{q_aq_b}{q^2}+\frac{q_aq_bq_cq_d}{q^4}\ . \end{equation} Each term can be expressed in direct space thanks to the following formula for the Fourier transform of power laws (see, for example, Ref. \cite{Kleinert01}): \begin{equation} \frac{1}{\big(p^2\big)^a}=\frac{1}{4^a\pi^\frac{D}{2}}\frac{\Gamma\Big({\small\frac{D}{2}-a}\Big)} {\Gamma(a)}\int_x\frac{e^{ip.x}}{\big(x^2\big)^{\frac{D}{2}-a}}\ , \end{equation} which finally gives: \begin{equation} \label{eq123} \begin{split} & \int_q\delta_{ab}\delta_{cd}e^{i\,q.(x-y)}=\delta_{ab}\delta_{cd} \,\delta^{(D)}(x-y)\\ & \int_q\frac{q_aq_b}{q^2}e^{i\,q.(x-y)}=\frac{\delta_{ab}}{2\pi|x-y|^2} -\frac{(x_a-y_a)(x_b-y_b)}{\pi|x-y|^4}\\ & \int_q\frac{q_aq_bq_cq_d}{q^4}e^{i\,q.(x-y)}=\frac{X_{abcd}}{4\pi|x-y| ^2}-\frac{Y_{abcd}(x-y)}{2\pi|x-y|^4}\\ &\quad+\frac{2(x_a-y_a)(x_b-y_b)(x_c-y_c)(x_d-y_d)}{\pi|x-y|^6}\ , \end{split} \end{equation} with the following tensors being defined: \begin{equation} \label{eqXY} \begin{split} & X_{\mu\nu\rho\sigma}=\delta_{\mu\nu}\delta_{\rho\sigma}+\delta_{\mu\rho}\delta_{\nu\sigma} +\delta_{\mu\sigma}\delta_{\nu\rho}\ ,\\ & Y_{\mu\nu\rho\sigma}(\vec{x})=x_\mu x_\nu\delta_{\rho\sigma}+x_\mu x_\rho\delta_{\nu\sigma} +x_\mu x_\sigma\delta_{\nu\rho}\\ &\qquad\qquad+x_\nu x_\rho\delta_{\mu\sigma}+x_\nu x_\sigma\delta_{\mu\rho} +x_\rho x_\sigma\delta_{\mu\nu}\ . \end{split} \end{equation} Among the different terms of Eq.(\ref{eq123}), only the first one is local. The other ones are not integrable over the membrane's internal space once multiplied by $(x-y)^2$. Hence, Hohenberg-Mermin-Wagner's theorem does not apply, and the orientational order in crystalline membranes can be long-range. In the previous argument, it is the non-local structure of the effective interaction vertex between flexurons that is at the origin of the stabilization of long-range order in two dimensions. In light of our analysis of the spontaneous symmetry-breaking pattern Eq. (\ref{eqM3}), we can add the following: in the flat phase, even if flexurons are the modes that dominate in the infrared limit, they are not the only important fluctuation modes. In particular, acoustic phonons carry a non-local effective interaction between flexurons at various locations in the flat phase's plane. The mechanism in Eq. (\ref{eqM3}) moreover guarantees that the phonons are Goldstone modes, i.e., thanks to the massless nature they possess by symmetry, they are not efficiently screened out at large distances. Thus, the effective interaction they carry is a true long-range one, which explains why Hohenberg-Mermin-Wagner's theorem does not apply, and the flat phase is robust against thermal fluctuations. \section{The origin of long-range order in the flat phase} In the previous sections, we identified two main differences between the third mechanism Eq. (\ref{eqM3}) and the two other ones Eq. (\ref{eqM1}) and Eq. (\ref{eqM2}). 
In order to refine our comprehension of the necessary ingredients to generate a stable ordered phase in the system, we propose to analyze a fourth mechanism. Whenever a crystalline membrane undergoes a strong enough stress, it buckles. This buckling phenomenon can be understood as a (second-order) phase transition \cite{Aronovitz89,Guitter89,Roldan11,Kosmrlj16,Gornyi17}: depending on the type of applied forces, the membrane will either become buckled if compressed -- in this state, the membrane appears as a mosaic of locally flat domains the orientation of which is random \cite{Guitter89} -- or overstretched if sufficiently dilated. Because our main concern is the origin of long-range order in two-dimensional systems, we will focus here on the overstretched phase, in which a strong long-range orientational order is present. In the overstretched phase, the external stress screens the effects of flexurons \cite{Roldan11,Burmistrov18}, which become energetically disfavored. As a result, the massless infrared spectrum of overstretched membranes contains only type-A Goldstone bosons. Macroscopically, the shape of the overstretched membrane is still flat, with weaker height fluctuations compared to the flat phase. Microscopically, the stress dominates the local rearrangement of atoms, and the original lattice positional order is broken. Contrary to the flat phase examined in Sec. \ref{secII}, the ground state in the overstretched phase cannot be uniquely characterized by its metric \cite{David88}. Indeed, the local arrangement of atoms depends both on the intrinsic properties of the material (captured by $\zeta$) and the external stress. Consequently, the previous argument that allowed us to disentangle mixed rotations and external translations does not hold anymore: the situation is analogous to that of Fig. \ref{figLM} rather than Fig. \ref{figRotMix}. To avoid confusion, and to ensure the disentanglement between the actions of translations and mixed rotations on the ground state, the symmetry-breaking mechanism will be written as: \begin{equation} \label{eqM4} \text{\underline{\textbf{Mechanism 4:}}} \quad \text{ISO}(d)\rightarrow\text{SO}(d-D)\ . \end{equation} The set of broken generators is the same as for the third mechanism, but now the unit of length at the surface of the membrane is determined by the applied stress rather than the sole extension parameter $\zeta$. The set of independent broken generators hence reduces to the $d$ translations, so that \begin{equation} \text{dim}(G/H)=d\,,\quad\rho=0\ , \end{equation} and the counting rule Eq. (\ref{eqWata}) leads to: \begin{equation} \label{eqG4} \left\{\begin{split} &n_A=d\\ &n_B=0\ , \end{split}\right. \end{equation} namely $D$ type-A acoustic phonons and $d-D$ type-A flexurons, as expected. Finally, mechanism 4 Eq. (\ref{eqM4}) provides an example in which long-range orientational order can be maintained in two dimensions without requiring the help of type-B Goldstone bosons. Indeed, in the overstretched phase, the previous argument with regard to the Hohenberg-Mermin-Wagner theorem still holds: despite the fact that the flexurons are screened by the stress, they remain Goldstone bosons, and an effective theory of interacting flexurons can still be built, in which the phonons, which are also massless, carry an effective long-range interaction between flexurons. 
Note, however, that in the ground state of overstretched membranes the local pseudo-ordering persists, which causes the breaking of the group of isometries inside the membrane's plane; this is in strong contrast with the second mechanism Eq. (\ref{eqM2}), in which the microscopic constituents are free to move and no phonon is generated (and therefore no long-range interaction can occur and the ordered phase is destroyed by the thermal fluctuations). A remaining question involves the role of the type-B Goldstone bosons. To address it, we must compare the mechanisms 3 Eq. (\ref{eqM3}) and 4 Eq. (\ref{eqM4}), which differ only by the dispersion relation of the flexurons. The most striking difference between the flat phase and the overstretched phase is the presence in the former of a strong anomalous exponent $\eta\simeq0.85$ \cite{Nelson04,Kownacki09,Essafi14,Los16}, at the origin of the highly anharmonic behavior of the thermal fluctuations, which leads to many unusual effects, as well as a modified elasticity theory, which is in total contrast with the overstretched phase in which conventional elasticity is restored and $\eta=0$ (see \cite{Roldan11,Kosmrlj16,Burmistrov18} and \cite{Coquand19} for a comparative study). The role of the type-B Goldstone bosons is thus probably related to the generation of such an anharmonic behavior and unusual scaling relations. To sum up on the Goldstone physics, we have analyzed various non-trivial symmetry breaking patterns related to the physics of crystalline membranes, which have highlighted a number of general features of the physics of Goldstone modes in theories without Lorentz invariance. First, mechanism 1 Eq. (\ref{eqM1}) illustrates the fact that whenever different broken symmetry generators generate linearly dependent transformations of the ground state, the total number of associated Goldstone modes is smaller than the total number of broken generators, which is the main lesson of reference \cite{Low02}. This feature is quite common in theories presenting spacetime symmetry breakings, since the actions of rotations and translations are rarely independent (such a situation is much more difficult to realize if an internal symmetry group is broken). Mechanism 2 Eq. (\ref{eqM2}) teaches us the importance of the underlying microscopics, even in continuum theories. Indeed, taking into account the mere overall shape of the membrane leads to a spontaneous symmetry breaking mechanism without phonons, which is much more sensitive to thermal fluctuations (as such a system cannot sustain long-range order in two dimensions). The comparison between mechanisms 3 Eq. (\ref{eqM3}) and 4 Eq. (\ref{eqM4}) shows that the presence of two different types of interacting Goldstone modes is a sufficient condition to generate a stable ordered phase in two dimensions, as whenever one can reexpress the theory as an effective theory of a single mode, the effective interaction carried by the second Goldstone mode must be long-range, because of its massless character. Having two different types of Goldstone modes, however, requires particular patterns of symmetry breaking. Finally, we also showed the necessity of having a non-trivial algebra of independently acting broken generators to generate type-B Goldstone bosons, which can be seen as an obvious consequence of the Goldstone counting rule Eq. (\ref{eqWata}), but which we have shown is not so easy to achieve in practice. 
We also give hints at the possible link between the presence of such type-B Goldstone bosons and unusual scaling behaviors in the ordered phase, related to the generation of a non-trivial field anomalous dimension $\eta$. \section{Conclusion} To conclude, the correction of the symmetry breaking mechanism at the origin of the flat phase teaches us two main lessons on the physics of crystalline membranes. First, acoustic phonons cannot be overlooked, even though crystalline order is broken by thermal fluctuations and even though the phonons are subdominant at large distances. This is all the more important since the presence of a massless carrier of the effective interaction is required to ensure that long-range orientational order is not destroyed by fluctuations. This is why genuine two-dimensional crystals or fluid membranes, in which only phonons or only flexurons survive at large distances, do not present any stable ordered phase in two dimensions, whereas crystalline membranes, which possess both modes, do exhibit long-range order. Second, the origin of the flat phase anomalous scaling laws can be traced back to a delicate geometrical interplay between the intrinsic properties of the material and its embedding in the three dimensional space. In the presence of a strong enough external stress field -- which drives the system into the overstretched phase -- this subtle balance is broken, and conventional elasticity is restored, probably due to the absence of type-B Goldstone bosons. The key to understanding all these results is the Goldstone counting rule Eq. (\ref{eqWata}). We hope that our work will motivate further studies in the context of condensed matter physics, in which Lorentz invariance is frequently absent, spacetime symmetries are often at play, and therefore such tools are certainly of interest. \section*{Acknowledgements} The author thanks D. Mouhanna for useful discussions and a careful reading of this manuscript.
\section*{Abstract} {\bf Non-equilibrium physics is a particularly fascinating field of current research. Generically, driven systems are gradually heated up so that quantum effects die out. In contrast, we show that a driven central spin model including controlled dissipation in a highly excited state allows us to distill quantum coherent states, indicated by a substantial reduction of entropy; the key resource is the commensurability between the periodicity of the pump pulses and the internal processes. The model is experimentally accessible in purified quantum dots or molecules with unpaired electrons. The potential of preparing and manipulating coherent states by designed driving potentials is pointed out. } \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} \label{sec:introduction} Controlling a quantum mechanical system in a coherent way is one of the long-standing goals in physics. Obviously, coherent control is a major ingredient for handling quantum information. In parallel, non-equilibrium physics of quantum systems is continuing to attract significant interest. A key issue in this field is to manipulate systems in time such that their properties can be tuned and changed at will. Ideally, they display properties qualitatively different from what can be observed in equilibrium systems. These current developments illustrate the interest in understanding the dynamics induced by time-dependent Hamiltonians $H(t)$. The unitary time evolution operator $U(t_2,t_1)$ induced by $H(t)$ is formally given by \begin{equation} U(t_2,t_1) = {\cal T}\exp\left(-i\int_{t_1}^{t_2}H(t)dt\right) \end{equation} where ${\cal T}$ is the time ordering operator. While the explicit calculation of $U(t_2,t_1)$ can be extremely difficult it is obvious that the dynamics induced by a time-dependent Hamiltonian maps quantum states at $t_1$ to quantum states at $t_2$ bijectively and conserves the mutual scalar products. Hence, if initially the system is in a mixed state with high entropy $S>0$ it stays in a mixed state for ever with exactly the same entropy. No coherence can be generated in this way even for a complete and ideal control of $H(t)$ in time. Hence, one has to consider open systems. The standard way to generate a single state is to bring the system of interest into thermal contact with a cold system. Generically, this is an extremely slow process. The targeted quantum states have to be ground states of some given system. Alternatively, optical pumping in general and laser cooling in particular \cite{phill98} are well established techniques to lower the entropy of microscopic systems using resonant pumping and spontaneous decay. Quite recently, engineered dissipation has been recognized as a means to generate targeted entangled quantum states in small \red{\cite{witth08,verst09,vollb11}} and extended systems \cite{kraus08,diehl08}. Experimentally, entanglement has been shown for two quantum bits \cite{lin13,shank13} and for two trapped mesoscopic cesium clouds \cite{kraut11}. In this article, we show that periodic driving can have a quantum system converge to coherent quantum states if an intermediate, highly excited and decaying state is involved. The key aspect is the commensurability of the \red{period of the pump pulses to the time constants of the internal processes, here Larmor precessions}. This distinguishes our proposal from established optical pumping protocols. 
The completely disordered initial mixture can be made almost coherent. The final mixture only has an entropy $S\approx k_\text{B}\ln2$ corresponding to a mixture of two states. An appealing asset is that once the driving is switched off, the Lindbladian decay does not matter anymore and the system is governed by Hamiltonian dynamics only. The focus of the present work is to exemplarily demonstrate the substantial reduction of entropy in a small spin system subject to periodic laser pulses. The choice of system is motivated by experiments on the electronic spin in quantum dots interacting with nuclear spins \cite{greil06a,greil07b,petro12,econo14,beuge16,jasch17,scher18,klein18}. The model studied is also applicable to the electronic spin in molecular radicals \cite{bunce87} or to molecular magnets, see Refs.\ \cite{blund07,ferra17,schna19}. In organic molecules the spin bath is given by the nuclear spins of the hydrogen nuclei in organic ligands. \section{Model} \label{sec:model} The model comprises a central, electronic spin $S=1/2$ which is coupled to nuclear spins \begin{equation} \label{eq:hamil_spin} H_\text{spin} = H_\text{CS} + H_\text{eZ} + H_\text{nZ} \end{equation} where $H_\text{eZ}=h S^x$ is the electronic Zeeman term with $h=g\mu_\text{B} B$ ($\hbar$ is set to unity \red{here and henceforth) with the gyromagnetic factor $g$, the Bohr magneton $\mu_\text{B}$, the external magnetic field $B$ in $x$-direction and the $x$-component $S^x$ of the central spin. The nuclear Zeeman term is given by $H_\text{nZ} = z h \sum_{i=1}^N I^x_i$ where $z$ is the ratio of the nuclear $g$-factor multiplied by the nuclear magneton to their electronic counterparts, $g_\text{nuclear}\mu_\text{nuclear}/(g\mu_\text{B})$. The operator $I^x_i$ is the $x$-component of the nuclear spin $i$. For simplicity we take $I=1/2$ for all nuclear spins.} Due to the large nuclear mass, the factor $z$ is of the order of $10^{-3}$, but in principle other $z$-values can be studied as well, \red{see also below}. In the central spin part $H_\text{CS}=\vec{S}\cdot\vec{A}$ the \red{so-called Overhauser field $\vec{A}$ results from the combined effect of all nuclear spins, each of which is interacting via the hyperfine coupling $J_i$ with} the central spin \begin{equation} \vec{A} = \sum_{i=1}^N J_i \vec{I}_i. \end{equation} \red{If the central spin results from an electron, the hyperfine coupling is a contact interaction at the location of the nucleus stemming from relativistic corrections to the non-relativistic Schr\"odinger equation with a Coulomb potential. It is proportional to the probability of the electron to be at the nucleus, i.e., to the modulus squared of the electronic wave function \cite{schli03,coish04}. Depending on the positions of the nuclei and on the shape of the wave function, various distributions of the $J_i$ are plausible. A Gaussian wave function in one dimension implies a parametrization of the $J_i$ by a Gaussian, while in two dimensions an exponential parametrization is appropriate \cite{farib13a,fause17a}. We will first use a uniform distribution for simplicity and consider the Gaussian and exponential cases afterwards.} Besides the spin system there is an important intermediate state given by a single trion state $\ket{\mathrm{T}}$ \red{consisting of the single fermion providing the central spin bound to an additional exciton. This trion is polarised in $z$-direction at the very high energy $\varepsilon$ ($\approx 1$ eV). 
The other polarisation exists as well, but using circularly polarised light it is not excited. A Larmor precession of the trion is not considered here for simplicity. Then,} the total Hamiltonian reads \begin{equation} H = H_\text{spin} + \varepsilon \ket{\mathrm{T}}\bra{\mathrm{T}}. \end{equation} The laser pulse is taken to be very short, as in experiment, where its duration $\tau$ is of the order of picoseconds. Hence, we describe \red{its effect by a unitary time evolution operator $\exp(-i\tau H_\text{puls})=U_\text{puls}$} which excites the $\ket{\uparrow}$ state of the central spin to the trion state or de-excites it \begin{equation} \label{eq:puls} U_\text{puls} = c^\dag + c +\ket{\downarrow} \bra{\downarrow}. \end{equation} where $c:=\ket{\uparrow}\bra{\mathrm{T}}$ and $c^\dagger:=\ket{\mathrm{T}}\bra{\uparrow}$. \red{This unitary operator happens to be hermitian as well, but this is not an important feature. One easily verifies $U_\text{puls}U_\text{puls}^\dag=\mathbb{1}$.} Such pulses are applied in long periodic trains lasting from seconds to minutes. The repetition time between two consecutive pulses is ${T_\mathrm{rep}}$ of the order of 10 ns. The decay of the trion is described by the Lindblad equation for the density matrix $\rho$ \begin{equation} \label{eq:lind} \partial_t \rho(t) = -i[H,\rho] - \gamma (c^\dag c\rho + \rho c^\dag c- 2c\rho c^\dag) \end{equation} where the prefactor $\gamma>0$ of the dissipator term \cite{breue06} defines the decay rate. The corresponding process with $c$ and $c^\dag$ swapped need not be included because its decay rate is smaller by $\exp(-\beta\varepsilon)$, i.e., it vanishes for all physical purposes. \red{We emphasize that we deal with an open quantum system by virtue of the Lindblad dynamics in \eqref{eq:lind}. Since the decay of the trion generically implies the emission of a photon at high energies, the preconditions for using Lindblad dynamics are perfectly met \cite{breue06}.} \section{Mathematical Properties of Time Evolution} \label{sec:math-proper} The key observation is that the dynamics from just before the $n$th pulse at $t=n{T_\mathrm{rep}} -$ to just before the $n+1$st pulse at $t=(n+1){T_\mathrm{rep}} -$ is a \emph{linear} mapping $M: \rho(n{T_\mathrm{rep}}-) \rightarrow \rho((n+1){T_\mathrm{rep}}-)$ which does not depend on $n$. Since it acts on operators, one may call it a superoperator. Its matrix form is derived explicitly in Appendix \ref{app:matrix}. If no dissipation took place ($\gamma=0$) the mapping $M$ would be unitary. But in the presence of the dissipative trion decay it is a general matrix with the following properties: \begin{enumerate} \item The matrix $M$ has an eigenvalue $1$ which may be degenerate. If the dynamics of the system takes place in $n$ separate subspaces without transitions between them, the degeneracy is at least $n$. \item All eigenoperators to eigenvalues different from 1 are traceless. \item At least one eigenoperator to eigenvalue 1 has a finite trace. \item The absolute values of all eigenvalues of $M$ are not larger than 1. \item If there is a non-real eigenvalue $\lambda$ with eigenoperator $C$, the complex conjugate $\lambda^*$ is also an eigenvalue with eigenoperator $C^\dag$. \item The eigenoperators to eigenvalue 1 can be scaled to be hermitian. 
\end{enumerate} While the above properties can be shown rigorously, see Appendix \ref{app:properties}, for any Lindblad evolution, the following ones are observed numerically in the analysis of the particular model \eqref{eq:lind} under study here: \begin{itemize} \item[(a)] The matrix $M$ is diagonalizable; it does not require a Jordan normal form. \item[(b)] For pairwise different couplings $i\ne j\Rightarrow J_i\ne J_j$ the eigenvalue $1$ is non-degenerate. \item[(c)] The eigenoperators to eigenvalue 1 can be scaled to be hermitian and non-negative. In the generic, non-degenerate case we denote the properly scaled eigenoperator $V_0$ with $\text{Tr}(V_0)=1$. \item[(d)] No eigenvalue $\neq 1$, but with absolute value 1, occurs, i.e., all eigenvalues different from 1 are smaller than 1 in absolute value. \item[(e)] Complex eigenvalues and complex eigenoperators do occur. \end{itemize} The above properties allow us to understand what happens in experiment upon application of long trains of pulses corresponding to $10^{10}$ and more applications of $M$. Then it is safe to conclude that all contributions from eigenoperators to eigenvalues smaller than 1 have died out completely. Only the (generically) single eigenoperator $V_0$ to eigenvalue 1 is left such that \begin{equation} \lim_{n\to\infty} \rho(n{T_\mathrm{rep}}-) = V_0. \end{equation} The quasi-stationary state after long trains of pulses is given by $V_0$ \footnote{We use the term `quasi-stationary' state because it is stationary only if we detect it stroboscopically at the time instants $t=n{T_\mathrm{rep}}-$.}. This observation simplifies the calculation of the long-time limit greatly compared to previous quantum mechanical studies \cite{econo14,beuge16,beuge17,klein18}. One has to compute the eigenoperator of $M$ to the eigenvalue 1. Below this is performed by diagonalization of $M$ which is a reliable approach, but restricted to small systems $N\lessapprox6$. We stress that no complete diagonalization is required to know $V_0$ because only the eigenoperator to the eigenvalue 1 is needed. Hence we are optimistic that further computational improvements are possible. If, however, the speed of convergence is of interest more information on the spectrum and the eigenoperators of $M$ is needed, see also Sect.\ \ref{sec:convergence}. \section{Results on Entropy} \label{sec:entropy} It is known that in pulsed quantum dots nuclear frequency focusing occurs (NFF) \cite{greil06a,greil07b,evers18} which can be explained by a significant change in the distribution of the Overhauser field \cite{petro12,econo14,beuge16,beuge17,scher18,klein18} which is Gaussian initially. This distribution develops a comb structure with equidistant spikes. The difference $\Delta A_x$ between consecutive spikes is such that it corresponds to a full additional revolution of the central spin ${T_\mathrm{rep}} \Delta A_x=2\pi$. A comb-like probability distribution is more structured and contains more information than the initial featureless Gaussian. For instance, the entropy reduction of the Overhauser field distributions computed in Ref.\ \cite{klein18}, Fig.\ 12, relative to the initial Gaussians is $\Delta S =-0.202k_\text{B}$ at $B=0.93$T and $\Delta S =-0.018k_\text{B}$ at $B=3.71$T. Hence, NFF decreases the entropy, but only slightly for large spin baths. This observation inspires us to ask to which extent continued pulsing can reduce entropy and which characteristics the final state has. 
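To illustrate how the limiting state $V_0$ and its entropy can be obtained in practice, the following minimal sketch (not the code used for the results below; it keeps only the three states $\ket{\downarrow}$, $\ket{\uparrow}$, $\ket{\mathrm{T}}$ with toy parameters and no nuclear bath, and drops the trion energy $\varepsilon$ for brevity) constructs the superoperator $M$ by vectorizing the density matrix, applies the pulse of \eqref{eq:puls} followed by the Lindblad evolution of \eqref{eq:lind} over ${T_\mathrm{rep}}$, and extracts the eigenoperator to eigenvalue 1:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Basis: |down>, |up>, |T>; toy parameters, no nuclear bath
down, up, T = np.eye(3)
h, gamma, Trep = 0.52, 1.25, 4 * np.pi

Sx = 0.5 * (np.outer(up, down) + np.outer(down, up))  # acts on the spin subspace only
H = h * Sx                                            # trion energy eps omitted in this toy model
c = np.outer(up, T)                                   # |up><T|, trion decay operator
U = c.conj().T + c + np.outer(down, down)             # pulse operator

# Column-stacking vectorization: vec(A rho B) = (B^T kron A) vec(rho)
def lmul(A):   # superoperator for rho -> A rho
    return np.kron(np.eye(3), A)
def rmul(A):   # superoperator for rho -> rho A
    return np.kron(A.T, np.eye(3))

nn = c.conj().T @ c
lind = -1j * (lmul(H) - rmul(H)) - gamma * (lmul(nn) + rmul(nn) - 2 * np.kron(c.conj(), c))
M = expm(lind * Trep) @ np.kron(U.conj(), U)          # pulse first, then evolution until the next pulse

vals, vecs = np.linalg.eig(M)
V0 = vecs[:, np.argmin(np.abs(vals - 1.0))].reshape(3, 3, order='F')
V0 = V0 / np.trace(V0)                                # scale to unit trace
V0 = 0.5 * (V0 + V0.conj().T)                         # remove numerical anti-hermitian noise

p = np.clip(np.linalg.eigvalsh(V0), 1e-15, 1.0)
print("entropy of V0 in units of k_B:", -np.sum(p * np.log(p)))
\end{verbatim}
For the results discussed below, the same construction is carried out in the full Hilbert space of the central spin, the trion, and the $N$ nuclear spins.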
Inspired by the laser experiments on quantum dots \cite{greil06a,greil07b,evers18}, we choose an (arbitrary) energy unit $J_\text{Q}$ and thus $1/J_\text{Q}$, \red{recalling that we have set $\hbar=1$}, as the time unit, which can be assumed to be of the order of 1ns. The repetition time ${T_\mathrm{rep}}$ is set to $4\pi/J_\text{Q}$, which is on the one hand close to the experimental values where ${T_\mathrm{rep}}=13.2\text{ns}$ and on the other hand makes it easy to recognize resonances, see below. The trion decay rate is set to $2\gamma=2.5 J_\text{Q}$ to reflect a trion life time of $\approx 0.4\red{\text{ns}}$. The bath size is restricted to $N\in\{1,2,\ldots,6\}$, but still allows us to draw fundamental conclusions and to describe electronic spins coupled to hydrogen nuclear spins in small molecules \cite{bunce87,blund07,ferra17,schna19}. The individual couplings $J_i$ are chosen to be distributed according to \begin{equation} \label{eq:equidistant} J_i = J_\text{max}(\sqrt{5}-2)\left(\sqrt{5}+2({i-1})/({N-1})\right), \end{equation} which is a uniform distribution between $J_\text{min}$ and $J_\text{max}$ with $\sqrt{5}$ inserted to avoid accidental commensurabilities \red{of the different couplings $J_i$}. \red{The value $J_\text{min}$ results from $J_i$ for $i=1$. Other parametrizations are motivated by the shape of the electronic wave functions \cite{merku02,schli03,coish04}.} Results for a frequently used exponential parametrization \cite{farib13a} \begin{equation} \label{eq:expo} J_i = J_\text{max}\exp(-\alpha(i-1)/(N-1)) \end{equation} with $\alpha\in\{0.5, 1\}$ and for a Gaussian parametrization, motivated by the electronic wave function in quantum dots \cite{coish04}, \begin{equation} \label{eq:gaus} J_i = J_\text{max}\exp(-\alpha[(i-1)/(N-1)]^2), \end{equation} are given in the next section and in Appendix \ref{app:other}. \red{For both parametrizations the minimum value $J_\text{min}$ occurs for $i=N$ and takes the value $J_\text{min}=J_\text{max}\exp(-\alpha)$.} \begin{figure}[htb] \centering \includegraphics[width=0.60\columnwidth]{fig1a} \includegraphics[width=0.59\columnwidth]{fig1b} \caption{(a) Residual entropy of the limiting density matrix $V_0$ obtained after an infinite number of pulses vs.\ the applied magnetic field for $J_\text{max}=0.02J_\text{Q}$ and $z=1/1000$; 1 Tesla corresponds roughly to $50J_\text{Q}$. Resonances of the electronic spin occur every $\Delta h=0.5J_\text{Q}$; resonances of the nuclear spins occur every $\Delta h = 500J_\text{Q}$. The blue dashed line depicts an offset of $\Delta h=\pm J_\text{max}/(2z)$ from the nuclear resonance. (b) Zooms into intervals of the magnetic field where the lowest entropies are reached. The blue dashed lines depict an offset of $\Delta h =\pm A_\text{max}$ from the electronic resonance.} \label{fig:overview} \end{figure} Figure \ref{fig:overview} displays a generic dependence on the external magnetic field $h=g\mu_\text{B}B_x$ of the entropy of the limiting density matrix $V_0$ obtained after an infinite number of pulses. Two nested resonances of the Larmor precessions are discernible: the central electronic spin resonates for \begin{equation} \label{eq:res-central} h{T_\mathrm{rep}} = 2\pi n, \qquad n\in\mathbb{Z} \end{equation} \red{where $n$ is the number of Larmor revolutions that fit into the interval ${T_\mathrm{rep}}$ between two pulses. 
This means that for an increase of the magnetic field from $h$ to $h+\Delta h$ with $\Delta h=2\pi/{T_\mathrm{rep}}$ the central spin is in the same state before the pulse as it was at $h$.} \red{The other resonance is related to the Larmor precession of the nuclear bath spins which leads to the condition \begin{equation} \label{eq:res-bath} zh{T_\mathrm{rep}}=2\pi n', \qquad n'\in\mathbb{Z} \end{equation} where $n'$ indicates the number of Larmor revolutions of the nuclear spins which fit between two pulses. Upon increasing the magnetic field $h$, the nuclear spins are in the same state before the next pulse if $h$ is changed to $h+\Delta h$ with $\Delta h=2\pi/(z{T_\mathrm{rep}})$.} \red{But the two resonance conditions \eqref{eq:res-central} and \eqref{eq:res-bath} for the central spin and for the bath spins apply precisely as given only without coupling between the spins. The coupled system displays important shifts. The nuclear resonance appears to be shifted by $z\Delta h \approx \pm J_\text{max}/2$, see right panel of Fig.\ \ref{fig:overview}(a). The explanation is that the dynamics of the central spin $S=1/2$ creates an additional magnetic field \red{similar to a Knight shift} acting on each nuclear spin of the order of $J_i/2$ which is estimated by $J_\text{max}/2$. Further support \red{of the validity of this} explanation is given in Appendix \ref{app:shift}.} The electronic resonance is shifted by \begin{equation} \label{eq:over-shift} \Delta h = \pm A_\text{max} \end{equation} where $A_\text{max}$ is the \red{maximum possible value of the} Overhauser field given by $A_\text{max}:=(1/2)\sum_{i=1}^N J_i$ for maximally polarized bath spins. This is shown in the right panel of Fig.\ \ref{fig:overview}(b). \red{Fig.\ \ref{fig:overview} shows that the effect of the periodic driving on the entropy strongly depends on the precise value of the magnetic field. The entropy reduction is largest \emph{close} to the central resonance \eqref{eq:res-central} and to the bath resonance \eqref{eq:res-bath}. This requires that both resonances must be approximately commensurate. In addition, the \emph{precise} position of the maximum entropy reduction depends on the two above shifts, the approximate Knight shift and the shift by the maximum Overhauser field \eqref{eq:over-shift}.} We pose the question to which extent the initial entropy of complete disorder $S_\text{init}=k_\text{B}(N+1)\ln2$ (in the figures and henceforth $k_\text{B}$ is set to unity) can be reduced by commensurate periodic pumping. The results in Fig.\ \ref{fig:overview} clearly show that remarkably low values of entropy can be reached. The residual value of $S\approx 0.5k_\text{B}$ in the minima of the right panel of Fig.\ \ref{fig:overview}(b) corresponds to a contribution of less than two states ($S=\ln2k_\text{B} \approx 0.7k_\text{B}$) while initially 16 states were mixed for $N=3$ so that the initial entropy is $S_\text{init}=4\ln2k_\text{B}\approx2.77k_\text{B}$. This represents a remarkable distillation of coherence. \begin{figure}[htb] \centering \includegraphics[width=0.60\columnwidth]{fig2} \caption{(a) Residual entropy of the limiting density matrix $V_0$ for various bath sizes; other parameters as in Fig.\ \ref{fig:overview}. The dashed lines indicate the shifts of the electronic resonance by $-A_\text{max}$. 
(b) Corresponding normalized polarization of the spin bath in the external field direction, i.e.\ the $x$-direction.} \label{fig:size-mag} \end{figure} Hence, we focus on the minima and in particular on the left minimum. We address the question of whether the distillation of coherence still works for larger systems. Unfortunately, the numerical analysis cannot be extended easily due to the dramatically increasing dimension $D= 2^{2(N+1)}$ because we are dealing with the Hilbert space of density matrices of the spin bath and the central spin. Yet a trend can be deduced from results up to $N=6$ displayed in Fig.\ \ref{fig:size-mag}(a). The entropy reduction per $N+1$ spins is $-0.58k_\text{B}$ for $N=3$, $-0.57k_\text{B}$ for $N=4$, $-0.55k_\text{B}$ for $N=5$, and $-0.52k_\text{B}$ for $N=6$. The reduction is substantial, but slowly decreases with system size. Presently, we cannot know the behavior for $N\to\infty$. The finite value $\approx-0.2k_\text{B}$ found in the semiclassical simulation \cite{scher18,klein18} indicates that the effect persists for large baths. In Appendix \ref{app:other}, results for the couplings defined in \eqref{eq:expo} or in \eqref{eq:gaus} are given, which corroborate our finding. The couplings may be rather close to each other, but not equal. It appears favorable that the spread of couplings is not too large. Which state is reached in the minimum of the residual entropy? The decisive clue is provided by the lower panel, Fig.\ \ref{fig:size-mag}(b), displaying the polarization of the spin bath. It is normalized such that its saturation value is unity. Clearly, the minimum of the residual entropy coincides with the maximum of the polarization. The latter is close to its saturation value, though not quite, with a minute decrease for increasing $N$. This tells us that the limiting density matrix $V_0$ essentially corresponds to the polarized spin bath. The central electronic spin is also almost perfectly polarized (not shown), but in $z$-direction. These observations clarify the state which can be retrieved by long trains of pulses. Additionally, Fig.\ \ref{fig:size-mag}(b) explains the shift of the electronic resonance. The polarized spin bath renormalizes the external magnetic field by (almost) $\pm A_\text{max}$. To the left of the resonance, it enhances the external field ($+A_\text{max}$) while the external field is effectively reduced ($-A_\text{max}$) to the right of the resonance. Note that an analogous direct explanation for the shift of the nuclear resonance in the right panel of Fig.\ \ref{fig:overview} is not valid. The computed polarization of the central spin points in $z$-direction and thus does not shift the external field. \section{Results on Convergence} \label{sec:convergence} In order to assess the speed of convergence of the initially disordered density matrix $\rho_0=\mathbb{1}/Z$ to the limiting density matrix $V_0$, we proceed as follows. Let us assume that the matrices $v_i$ are the eigenmatrices of $M$ and that they are normalized, $||v_i||^2:=\text{Tr}(v_i^\dag v_i)=1$. Since the mapping $M$ is not unitary, orthogonality of the eigenmatrices cannot be assumed. Note that this standard normalization generically implies that $V_0$, which satisfies $\text{Tr}(V_0)=1$, and $v_0$ differ by some factor. The initial density matrix $\rho_0$ can be expanded in the $\{v_i\}$ \begin{equation} \rho_0 = \sum_{j=0}^{D-1} \alpha_j v_j.
\end{equation} After $n$ pulses, the density matrix $\rho_n$ is given by \begin{equation} \rho_n = \sum_{j=0}^{D-1} \alpha_j \lambda_j^n v_j \end{equation} where $\lambda_j$ are the corresponding eigenvalues of $M$ and $\lambda_0=1$ by construction. We aim at $\rho_n$ being close to $V_0$ within $p_\text{thresh}$, i.e., \begin{equation} \label{eq:converged} || \rho_n -V_0 ||\le p_\text{thresh} ||V_0|| \end{equation} should hold for an appropriate $n$. A generic value of the threshold $p_\text{thresh}$ is $1\%$. To this end, the minimum $n$ which fulfills \eqref{eq:converged} has to be estimated. \begin{figure}[htb] \centering \includegraphics[width=0.60\columnwidth]{fig3} \caption{The number of pulses for convergence within $1\%$ ($p_\text{thresh}=0.01$) is plotted for various bath sizes; couplings given by \eqref{eq:equidistant}, other parameters as in Fig.\ \ref{fig:overview}. The corresponding residual entropies and magnetizations are depicted in Fig.\ \ref{fig:size-mag}. The vertical dashed lines indicate the estimates \eqref{eq:over-shift} for the entropy minima as before.} \label{fig:number-N} \end{figure} Such an estimate can be obtained by determining \begin{equation} n_j := 1+\text{trunc}\left[\frac{\ln(|p_\text{thresh}\alpha_0/\alpha_j|)}{\ln(|\lambda_j|)}\right] \end{equation} for $j\in\{1, 2,3, \ldots,D-1 \}$. The estimate of the required number of pulses is the maximum of these numbers, i.e., \begin{equation} n_\text{puls} := \max_{1\le j < D} n_j. \end{equation} We checked for representative cases that the number determined in this way implies that the convergence condition \eqref{eq:converged} is fulfilled. This is not mathematically rigorous because it could be that there are very many slowly decreasing contributions which add up to a significant deviation from $V_0$. But generically, this is not the case. In Fig.\ \ref{fig:number-N} the results are shown for various bath sizes and the parameters for which the data of the previous figures was computed. Since the entropy minima are located at the positions of the vertical dashed lines to good accuracy, one can read off the required number of pulses at the intersections of the solid and the dashed lines. Clearly, about $2\cdot 10^{12}$ pulses are necessary to approach the limiting, relatively pure density matrices $V_0$. Interestingly, the number of required pulses does not depend much on the bath size, at least for the accessible bath sizes. This is a positive message in view of the scaling towards larger baths in experimental setups. \begin{figure}[htb] \centering \includegraphics[width=0.60\columnwidth]{fig4} \caption{Number of pulses for convergence within $1\%$ ($p_\text{thresh}=0.01$) for $N=5$, $J_\text{max}=0.02J_\text{Q}$, and $z=10^{-3}$ for the exponential parametrization in \eqref{eq:expo} (legend ``expo'') and the Gaussian parametrization in \eqref{eq:gaus} (legend ``gaus''). The corresponding residual entropies and magnetizations are depicted in Figs.\ \ref{fig:expo} and \ref{fig:gaus}, respectively. The vertical dashed lines indicate the estimates for the entropy minima which are shifted from the resonances without interactions according to \eqref{eq:over-shift}.} \label{fig:parametrization} \end{figure} Figure \ref{fig:parametrization} depicts the required minimum number of pulses for the two alternative parametrizations of the couplings \eqref{eq:expo} and \eqref{eq:gaus}. Again, the range is about $3\cdot 10^{12}$. Still, there are relevant differences.
The value $n_\text{puls}$ is higher for $\alpha=1$ ($\approx 4\cdot 10^{12}$) than for $\alpha=1/2$ ($\lessapprox 2\cdot 10^{12}$). This indicates that the mechanism of distilling quantum states by commensurability with periodic external pulses works best if the couplings \red{$J_i$ are similar, i.e., if the ratio $J_\text{min}/J_\text{max}=\exp(-\alpha)$ is close to unity so that their spread} is small. The same qualitative result is obtained for the residual entropy, see Appendix \ref{app:other}. Note that this argument also explains why the Gaussian parametrized couplings \eqref{eq:gaus} require slightly fewer pulses than the exponentially parametrized couplings \eqref{eq:expo}. \red{The couplings $J_i$ accumulate near their maximum $J_\text{max}$ in the Gaussian case so that their variance is slightly smaller than that of the exponential parametrization.} One could have thought that the accumulated couplings $J_i \approx J_\text{max}$ in the Gaussian case require longer pulsing in order to achieve a given degree of distillation because mathematically equal couplings $J_i=J_{i'}$ imply degeneracies preventing distillation, see the mathematical properties discussed in Sect.\ \ref{sec:math-proper}. \red{But this appears not to be the case.} \begin{figure}[htb] \centering \includegraphics[width=0.60\columnwidth]{fig5} \caption{Residual entropies (panel a) and number of pulses (panel b) for convergence within $1\%$ ($p_\text{thresh}=0.01$) for $N=3$, $J_\text{max}=0.1J_\text{Q}$, and $z=0.1$ for the equidistant parametrization in \eqref{eq:equidistant} (legend ``equidist''), the exponential parametrization in \eqref{eq:expo} (legend ``expo'') and the Gaussian parametrization in \eqref{eq:gaus} (legend ``gaus''). The vertical dashed lines indicate the estimates for the entropy minima which are shifted from the resonances without interactions according to \eqref{eq:over-shift}.} \label{fig:faster} \end{figure} The total number of pulses is rather high. As many as $2\cdot 10^{12}$ pulses for a repetition time ${T_\mathrm{rep}}\approx 10$ns imply about six hours of pulsing. This can be achieved in the lab, but the risk that so far neglected decoherence mechanisms spoil the process is real. If, however, the pulses can be applied more frequently, for instance with ${T_\mathrm{rep}}=1$ns, the required duration shrinks to about 30 minutes. The question arises why so many pulses are required. While a comprehensive study of this aspect is beyond the scope of the present article, a first clue can be given. It is natural to suspect that the slow dynamics in the bath is responsible for the large number of pulses required for convergence. This idea is corroborated by the results displayed in Fig.\ \ref{fig:faster} where a larger maximum coupling and, importantly, a larger $z$-factor are assumed. Recall that the $z$-factor is the ratio of the Larmor frequency of the bath spins to the Larmor frequency of the central spin. If it is increased, here by a factor of 100, the bath spins precess much faster. Indeed, the range of the required number of pulses is much lower, about $2\cdot 10^7$, which is five orders of magnitude less than for the previous parameters. The former six hours then become fractions of seconds. Of course, the conventional $g$-factors of nuclear and electronic spins do not allow for $z=0.1$. But the central spin model as such, built from a central spin and a bath of spins supplemented by a damped excitation, can also be realized in a different physical system.
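For completeness, the pulse-number estimate $n_\text{puls}$ introduced above can be evaluated in a few lines once the eigenvalues $\lambda_j$ of $M$ and the expansion coefficients $\alpha_j$ of $\rho_0$ have been obtained numerically. The following Python sketch implements exactly this estimate; the function and variable names are illustrative and not part of the code used for the figures.
\begin{verbatim}
import numpy as np

def estimate_n_puls(lambdas, alphas, p_thresh=0.01):
    # lambdas[0] = 1 and alphas[0] belong to the stationary eigenmatrix v_0;
    # every other mode decays as |lambda_j|^n under repeated pulsing.
    n_puls = 0
    for lam, alph in zip(lambdas[1:], alphas[1:]):
        if alph == 0 or abs(lam) >= 1.0:
            continue  # mode absent in rho_0 or non-decaying (degenerate eigenvalue 1)
        n_j = 1 + int(np.trunc(np.log(abs(p_thresh * alphas[0] / alph))
                               / np.log(abs(lam))))
        n_puls = max(n_puls, n_j)
    return n_puls
\end{verbatim}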
\red{Alternatively, optimized pulses can improve the efficiency of the distillation by periodic driving. One may either consider modulated pulses of finite duration \cite{pasin08a} or repeated cycles of several instantaneous pulses applied at optimized time instants \cite{uhrig07,uhrig07err} or combinations of both schemes \cite{uhrig10a}. Thus, further research is called for. The focus of the present work, however, is to establish the fundamental mechanism built upon periodic driving, dissipation and commensurability.} \section{Conclusion} \label{sec:conclusion} Previous work has established dynamic nuclear polarization (DNP); for a review see Ref.\ \cite{maly08}. But it must be stressed that the mechanism of this conventional DNP is fundamentally different from the one described here. Conventionally, the polarization of an electron is transferred to the nuclear spins, i.e., the polarization of the electrons induces polarization of the nuclei in the \emph{same} direction. In contrast, in the setup studied here, the electron is polarized in $z$-direction while the nuclear spins are eventually polarized perpendicularly in $x$-direction. Hence, the mechanism is fundamentally different: it is NFF stemming essentially from commensurability. This is also the distinguishing feature compared to standard optical pumping. States in the initial mixture which do not allow for a time evolution commensurate with the repetition time ${T_\mathrm{rep}}$ of the pulses are gradually suppressed \red{while those whose time evolution is commensurate are enhanced. This means that the weight of the former in the density matrix is reduced upon periodic application of the pulses while the weight of the latter is enhanced. Note that the trace of the density matrix is conserved so that the suppression of the weight of some states implies that the weight of other states is increased. The effect of the pulses on other norms of the density matrix is not obvious since the dynamics is not unitary, but dissipative.} \red{For particular magnetic fields, there may be only one particular state allowing for a dynamics commensurate with ${T_\mathrm{rep}}$. This case leads to the maximum entropy reduction. Such a} mechanism can also be used for completely different physical systems, e.g., in ensembles of oscillators. The studied case of coupled spins extends the experimental and theoretical observations of NFF for large spin baths \cite{greil06a,greil07b,petro12,econo14,beuge16,jasch17,scher18,klein18}, where many values of the polarization of the Overhauser field can lead to commensurate dynamics. Hence, only a partial reduction of entropy occurred. The DNP by NFF established above offers the potential for a novel experimental technique for state preparation: laser pulses instead of microwave pulses as in standard NMR can be employed to prepare coherent states which can be used for further processing, either to perform certain quantum protocols or to analyze the systems under study. The combination of optical and radio frequency pulsing appears promising because it enlarges the possibilities of experimental manipulations. Another interesting perspective is to apply the concept of state distillation by commensurability to physical systems other than localized spins, for instance to spin waves in quantum magnets. First experimental observations of commensurability effects for spin waves in ferromagnets have already been carried out \cite{jackl17}.
\red{Studies on how to enhance the efficiency of the mechanism by optimization of the shape and distribution of the pulses constitute an interesting route for further research.} In summary, we showed that dissipative dynamics of a highly excited state is sufficient to modify the dynamics of energetically low-lying spin degrees of freedom away from unitarity. The resulting dynamic map acts like a contraction converging towards a single density matrix upon iterated application. The crucial additional ingredient is \emph{commensurability} \red{between the external periodic driving and the internal dynamic processes, for instance Larmor precessions. If commensurability is possible, a substantial entropy reduction can be induced}, almost to a single pure state. This has been explicitly shown for an exemplary small central spin model including the electronic and nuclear Zeeman effects. This model served as a proof-of-principle model to establish the mechanism of distillation by commensurability. Such a model describes the electronic spin in quantum dots with a diluted nuclear spin bath or the spin of unpaired electrons in molecules, hyperfine coupled to nuclear hydrogen spins. We stress that the mechanism of commensurability can also be put to use in other systems with periodic internal processes. The fascinating potential to create and to manipulate coherent quantum states by such approaches deserves further investigation. \section*{Acknowledgements} The author thanks A.\ Greilich, J.\ Schnack, and O.~P.\ Sushkov for useful discussions and the School of Physics of the University of New South Wales for its hospitality. \paragraph{Funding information} This work was supported by the Deutsche Forschungsgemeinschaft (DFG) and the Russian Foundation for Basic Research in TRR 160, by the DFG in project no.\ UH 90-13/1, and by the Heinrich-Hertz Foundation of North Rhine-Westphalia. \begin{appendix} \section{Derivation of the Linear Mapping} \label{app:matrix} The goal is to solve the time evolution of $\rho(t)$ from just before a pulse until just before the next pulse. Since the pulse leads to a unitary time evolution which is linear \begin{equation} \rho(n{T_\mathrm{rep}}-)\to\rho(n{T_\mathrm{rep}}+)=U_\text{puls}\rho(n{T_\mathrm{rep}}-)U_\text{puls}^\dag \end{equation} with $U_\text{puls}$ from (5) and the subsequent Lindblad dynamics defined by the linear differential equation (6) is linear as well, the total propagation in time is given by a linear mapping $M: \rho(n{T_\mathrm{rep}}-) \rightarrow \rho((n+1){T_\mathrm{rep}}-)$. This mapping is derived here by an extension of the approach used in Ref.\ \cite{klein18}. The total density matrix acts on the Hilbert space given by the direct product of the Hilbert space of the central spin comprising three states ($\ket{\uparrow},\ket{\downarrow}, \ket{\text{T}}$) and the Hilbert space of the spin bath. We focus on $\rho_\text{TT}:=\bra{\text{T}}\rho\ket{\text{T}}$ which is a $2^N\times2^N$-dimensional density matrix for the spin bath alone because the central degree of freedom is traced out. By $\rho_\text{S}$ we denote the $d\times d$-dimensional density matrix of the spin bath and the central spin, i.e., $d=2^{N+1}$ since no trion is present: $\rho_\text{S}\ket{\text{T}}=0$. The number of entries in the density matrix is $D=d^2$, i.e., the mapping we are looking for can be represented by a $D\times D$ matrix.
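For readers who wish to set up the mapping numerically, it is useful to recall that any linear map of the form $\rho \to A\rho B$ acts on the vectorized density matrix as a $D\times D$ matrix built from a Kronecker product. The following NumPy sketch, with arbitrary test matrices rather than the actual pulse and Lindblad terms, merely illustrates this standard identity and the dimensions involved.
\begin{verbatim}
import numpy as np

N = 3                  # number of bath spins (illustrative)
d = 2 ** (N + 1)       # Hilbert space dimension of bath plus central spin
D = d ** 2             # number of entries of a density matrix

rng = np.random.default_rng(1)
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = rng.standard_normal((d, d))
rho = rho @ rho.T      # an (unnormalized) positive test matrix

# The map rho -> A rho B corresponds to the D x D matrix kron(A, B^T)
# acting on the row-major vectorization of rho.
M_super = np.kron(A, B.T)
assert np.allclose((A @ rho @ B).flatten(), M_super @ rho.flatten())
\end{verbatim}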
The time interval ${T_\mathrm{rep}}$ between two consecutive pulses is sufficiently long so that all excited trions have decayed before the next pulse arrives. In numbers, this means $2\gamma{T_\mathrm{rep}}\gg 1$ and implies that $\rho(n{T_\mathrm{rep}}-)=\rho_\text{S}(n{T_\mathrm{rep}}-)$ and hence inserting the unitary of the pulse (5) yields \begin{subequations} \label{eq:initial} \begin{align} \rho(n{T_\mathrm{rep}}+) &= U_\text{puls}\rho_\text{S}(n{T_\mathrm{rep}}-)U_\text{puls}^\dag \\ \label{eq:initial-TT} \rho_\text{TT}(n{T_\mathrm{rep}}+) &=\bra{\uparrow} \rho_\text{S}(n{T_\mathrm{rep}}-) \ket{\uparrow} \\ \rho_\text{S}(n{T_\mathrm{rep}}+) &=\ket{\downarrow}\bra{\downarrow} \rho_\text{S}(n{T_\mathrm{rep}}-) \ket{\downarrow}\bra{\downarrow} \ =\ S^-S^+ \rho_\text{S}(n{T_\mathrm{rep}}-) S^-S^+ \label{eq:initial-S} \end{align} \end{subequations} where we used the standard ladder operators $S^\pm$ of the central spin to express the projection $\ket{\downarrow}\bra{\downarrow}$. The equations \eqref{eq:initial} set the initial values for the subsequent Lindbladian dynamics, which we derive next. For completeness, we point out that there are also non-diagonal contributions of the type $\bra{\text{T}}\rho\ket{\uparrow}$, but they do not matter for $M$. Inserting $\rho_\text{TT}$ into the Lindblad equation (6) yields \begin{equation} \label{eq:TT} \partial_t \rho_\text{TT}(t) = -i [H_\text{nZ},\rho_\text{TT}(t)] -2\gamma \rho_\text{TT}(t). \end{equation} No other parts contribute. The solution of \eqref{eq:TT} reads \begin{equation} \label{eq:TT_solution} \rho_\text{TT}(t) = e^{-2\gamma t} e^{-iH_\text{nZ}t} \rho_\text{TT}(0+) e^{iH_\text{nZ}t}. \end{equation} By the argument $0+$ we denote that the initial density matrix for the Lindbladian dynamics is the one just after the pulse. For $\rho_\text{S}$, the Lindblad equation (6) implies \begin{equation} \partial_t \rho_\text{S}(t) = -i[H_\text{spin},\rho_\text{S}(t)] +2\gamma \ket{\uparrow} \rho_\text{TT}(t) \bra{\uparrow}. \end{equation} Since we already know the last term from its solution in \eqref{eq:TT_solution}, we can treat it as a given inhomogeneity in the otherwise homogeneous differential equation. With the definition $U_\text{S}(t):= \exp(-iH_\text{spin}t)$ we can write \begin{equation} \partial_t \left(U_\text{S}^\dag(t) \rho_\text{S}(t) U_\text{S}(t) \right) = 2\gamma U_\text{S}^\dag(t) \ket{\uparrow}\rho_\text{TT}(t)\bra{\uparrow} U_\text{S}(t). \end{equation} Integration leads to the explicit solution \begin{equation} \rho_\text{S}(t) = U_\text{S}(t) \rho_\text{S}(0+) U_\text{S}^\dag(t) +2\gamma\int_0^t U_\text{S}^\dag(t-t') \ket{\uparrow}\rho_\text{TT}(t')\bra{\uparrow} U_\text{S}(t-t')dt'. \label{eq:S_solut} \end{equation} If we insert \eqref{eq:TT_solution} into the above equation, we encounter the expression \begin{equation} \ket{\uparrow} \exp(-iH_\text{nZ} t) = \exp(-iH_\text{nZ} t)\ket{\uparrow} \ =\ \exp(-izh I^x_\text{tot} t) \exp(izh S^x t) \ket{\uparrow} \end{equation} where $I^x_\text{tot} :=S^x+\sum_{i=1}^NI^x_i$ is the total spin component in $x$-direction. It is a conserved quantity commuting with $H_\text{spin}$ so that a joint eigenbasis with eigenvalues $m_\alpha$ and $E_\alpha$ exists. We determine such a basis $\{\ket{\alpha}\}$ by diagonalization in the $d$-dimensional Hilbert space ($d=2^{N+1}$) of central spin and spin bath and express \eqref{eq:S_solut} in terms of the matrix elements of the operators involved. For brevity, we write $\rho_{\alpha\beta}$ for the matrix elements of $\rho_\text{S}$.
\begin{align} \rho_{\alpha\beta}(t) &= e^{-i(E_\alpha-E_\beta)t}\Big\{\rho_{\alpha\beta}(0+) \nonumber \\ &+2\gamma \int_0^t e^{i(E_\alpha-E_\beta-zh(m_\alpha-m_\beta))t'} \bra{\alpha} e^{izhS^xt'}\ket{\uparrow} \rho_\text{TT}(0+) \bra{\uparrow} e^{izhS^xt'}\ket{\beta}dt'\Big\}. \label{eq:matrix1} \end{align} Elementary quantum mechanics tells us that \begin{equation} \label{eq:spin-precess} e^{izhS^xt'}\ket{\uparrow} = \frac{1}{2}e^{ia}(\ket{\uparrow}+\ket{\downarrow}) + \frac{1}{2}e^{-ia}(\ket{\uparrow}-\ket{\downarrow}) \end{equation} with $a:=zht'/2$, which we need for the last line of equation \eqref{eq:matrix1}. Replacing $\rho_\text{TT}(0+)$ by $\bra{\uparrow} \rho_\text{S}(n{T_\mathrm{rep}}-) \ket{\uparrow}$ according to \eqref{eq:initial-TT} and inserting \eqref{eq:spin-precess} we obtain \begin{subequations} \begin{align} \bra{\alpha} e^{izhS^xt'}\ket{\uparrow} \rho_\text{TT}(0+) \bra{\uparrow} e^{izhS^xt'}\ket{\beta} & =\bra{\alpha} e^{izhS^xt'}\ket{\uparrow} \bra{\uparrow} \rho_\text{S}(0-) \ket{\uparrow} \bra{\uparrow} e^{izhS^xt'}\ket{\beta} \\ & = \frac{1}{2} \left( R^{(0)} + e^{izht'} R^{(1)} + e^{-izht'} R^{(-1)} \right)_{\alpha\beta} \label{eq:spin2} \end{align} \end{subequations} with the three $d\times d$ matrices \begin{subequations} \begin{align} R^{(0)} &:= S^+S^- \rho_\text{S}(0-) S^+S^- + S^- \rho_\text{S}(0-) S^+ \\ R^{(1)} &:= \frac{1}{2}(S^++\mathbb{1}_d)S^- \rho_\text{S}(0-) S^+(S^--\mathbb{1}_d) \\ R^{(-1)} &:= \frac{1}{2}(S^+-\mathbb{1}_d)S^- \rho_\text{S}(0-) S^+(S^-+\mathbb{1}_d). \end{align} \end{subequations} In this derivation, we expressed ket-bra combinations by the spin ladder operators according to \begin{equation} \ket{\uparrow} \bra{\uparrow} = S^+S^- \qquad \ket{\uparrow} \bra{\downarrow} = S^+ \qquad \ket{\downarrow} \bra{\uparrow} = S^-. \end{equation} The final step consists in inserting \eqref{eq:spin2} into \eqref{eq:matrix1} and integrating the exponential time dependence straightforwardly from 0 to ${T_\mathrm{rep}}$. Since we assume that $2\gamma{T_\mathrm{rep}}\gg1$ so that no trions are present once the next pulse arrives, the upper integration limit ${T_\mathrm{rep}}$ can safely and consistently be replaced by $\infty$. This makes the expressions \begin{equation} G_{\alpha\beta}(\tau) := \frac{\gamma}{2\gamma-i[E_\alpha-E_\beta+zh(m_\beta-m_\alpha+\tau)]} \end{equation} appear, where $\tau\in\{-1,0,1\}$. Finally, we use \eqref{eq:initial-S} and summarize \begin{equation} \rho_{\alpha\beta}(t) = e^{-i(E_\alpha-E_\beta)t} \Big\{ (S^-S^+ \rho_\text{S}(0-) S^-S^+ )_{\alpha\beta} +\sum_{\tau=-1}^1 G_{\alpha\beta}(\tau) R^{(\tau)}_{\alpha\beta} \Big\}. \label{eq:complete} \end{equation} This provides the complete solution for the dynamics of the $d\times d$ matrix $\rho_\text{S}$ from just before a pulse ($t=0-$) until just before the next pulse, for which we set $t={T_\mathrm{rep}}$ in \eqref{eq:complete}. In order to set up the linear mapping $M$ as a $D\times D$-dimensional matrix with $D=d^2$, we denote its matrix elements by $M_{\mu'\mu}$, where $\mu$ is a combined index for the index pair $\alpha\beta$ and $\mu'$ for $\alpha'\beta'$ with $\alpha,\beta,\alpha',\beta'\in\{1,2\ldots,d\}$. For brevity, we introduce \begin{equation} P_{\alpha\beta} := [(S^++\mathbb{1}_d)S^-]_{\alpha\beta} \qquad Q_{\alpha\beta} := [(S^+-\mathbb{1}_d)S^-]_{\alpha\beta}.
\end{equation} Then, \eqref{eq:complete} implies \begin{align} M_{\mu'\mu} &= \frac{1}{2}e^{-i(E_{\alpha'}-E_{\beta'}){T_\mathrm{rep}}}\Big\{ 2(S^-S^+)_{\alpha'\alpha} (S^-S^+)_{\beta\beta'} \nonumber \\ &+2G_{\alpha'\beta'}(0) \left[(S^+S^-)_{\alpha'\alpha} (S^+S^-)_{\beta\beta'} + S^-_{\alpha'\alpha} S^+_{\beta\beta'}\right] \nonumber \\ &+\left[ G_{\alpha'\beta'}(1) P_{\alpha'\alpha} Q^*_{\beta'\beta} + G_{\alpha'\beta'}(-1) Q_{\alpha'\alpha} P^*_{\beta'\beta} \right]\Big\}. \label{eq:matrix2} \end{align} This concludes the explicit derivation of the matrix elements of $M$. Note that they are relatively simple in the sense that no sums over matrix indices are required on the right hand side of \eqref{eq:matrix2}. This relative simplicity is achieved because we chose to work in the eigenbasis of $H_\text{spin}$. Other choices of basis are possible, but render the explicit representation significantly more complicated. \section{Properties of the Time Evolution} \label{app:properties} \paragraph{Preliminaries} Here we state several mathematical properties of the mapping $M$ which hold for any Lindblad dynamics over a given time interval that can be iterated arbitrarily many times. We assume that the underlying Hilbert space is $d$-dimensional so that $M$ acts on the $D=d^2$-dimensional Hilbert space of $d\times d$ matrices, i.e., $M$ can be seen as a $D\times D$ matrix. We denote the standard scalar product in the space of operators by \begin{equation} (A|B):=\text{Tr}(A^\dag B) \end{equation} where the trace refers to the $d\times d$ matrices $A$ and $B$. Since no state of the physical system vanishes in its temporal evolution, $M$ conserves the trace of any density matrix \begin{equation} \text{Tr}(M\rho)=\text{Tr}(\rho). \end{equation} This implies that $M$ conserves the trace of \emph{any} operator $C$. This can be seen by writing $C=(C+C^\dag)/2+ (C-C^\dag)/2=R+iG$ where $R$ and $G$ are hermitian operators. They can be diagonalized and split into their positive and their negative parts, $R=p_1-p_2$ and $G=p_3-p_4$. Hence, each $p_i$ is a density matrix up to some real, positive scaling, and we have \begin{equation} \label{eq:C-darst} C = p_1-p_2+i(p_3-p_4). \end{equation} Then we conclude \begin{subequations} \begin{align} \text{Tr}(MC) &= \text{Tr}(Mp_1)-\text{Tr}(Mp_2)+ i(\text{Tr}(Mp_3)-\text{Tr}(Mp_4)) \\ & = \text{Tr}(p_1)-\text{Tr}(p_2)+i(\text{Tr}(p_3)-\text{Tr}(p_4)) \ =\ \text{Tr}(C). \end{align} \end{subequations} \paragraph{Property 1.} The conservation of the trace for any $C$ implies \begin{equation} \label{eq:gen_trace_conserv} \text{Tr}(C)=(\mathbb{1}_d|C)=(\mathbb{1}_d|MC)=(M^\dag\mathbb{1}_d|C) \end{equation} where $\mathbb{1}_d$ is the $d\times d$-dimensional identity matrix and $M^\dag$ is the $D\times D$ hermitian conjugate of $M$. From \eqref{eq:gen_trace_conserv} we conclude \begin{equation} M^\dag \mathbb{1}_d = \mathbb{1}_d \end{equation} which means that $\mathbb{1}_d$ is an eigenoperator of $M^\dag$ with eigenvalue 1. Since the characteristic polynomial of $M$ is the same as the one of $M^\dag$ up to complex conjugation, we immediately see that $1$ is also an eigenvalue of $M$. If the dynamics of the system takes place in $n$ independent subspaces without transitions between them, the $n$ different traces over these subspaces are conserved separately. Such a separation occurs in case conserved symmetries split the Hilbert space; for instance, the total spin is conserved in the dynamics given by (6) if all couplings are equal.
Then, the above argument implies the existence of $n$ different eigenoperators with eigenvalue 1. Hence the degeneracy is (at least) $n$, which proves property 1 in the main text. \paragraph{Properties 2. and 3.} As for property 2, we consider an eigenoperator $C$ of $M$ with eigenvalue $\lambda\neq1$ so that $MC=\lambda C$. Then \begin{equation} \text{Tr}(C) = \text{Tr}(MC) \ = \ \lambda \text{Tr}(C) \end{equation} implies $\text{Tr}(C)=0$, i.e., tracelessness as stated. Since all density matrices can be written as linear combinations of eigenoperators, there must be at least one eigenoperator with finite trace. In view of property 2, this needs to be an eigenoperator with eigenvalue 1, proving property 3. The latter conclusion holds true even if $M$ cannot be diagonalized, but only has a Jordan normal form. If $d_\text{J}$ is the dimension of the largest Jordan block, the density matrix $M^{d_\text{J}-1}\rho$ will be a linear combination of eigenoperators while still having the trace 1. \paragraph{Property 4.} Next, we show that no eigenvalue $\lambda$ can be larger than 1 in absolute value. The idea of the derivation is that the iterated application of $M$ to the eigenoperator belonging to $|\lambda|>1$ would make this term grow exponentially $\propto |\lambda|^n$ beyond any bound, which cannot be true. The formal proof is a bit intricate. First, we state that for any two density matrices $\rho$ and $\rho'$ their scalar product is non-negative, $0\le (\rho|\rho')$, because it can be viewed as the expectation value of one of them with respect to the other and both are positive operators. In addition, the Cauchy-Schwarz inequality implies \begin{equation} \label{eq:rhorho} 0 \le (\rho|\rho') \le \sqrt{(\rho|\rho)(\rho'|\rho')} \ =\ \sqrt{\text{Tr}(\rho^2)\text{Tr}((\rho')^2)} \ \le\ 1. \end{equation} Let $C$ be the eigenoperator of $M^\dag$ belonging to $\lambda$; it may be represented as in \eqref{eq:C-darst} and scaled such that the maximum of the traces of the $p_i$ is 1. Without loss of generality, this is the case for $p_1$, i.e., $\text{Tr}(p_1)=1$. Otherwise, $C$ is simply rescaled: by $C\to-C$ to switch $p_2$ to $p_1$, by $C\to-iC$ to switch $p_3$ to $p_1$, or by $C\to iC$ to switch $p_4$ to $p_1$. On the one hand, we have for any density matrix $\rho_n$ \begin{equation} |(C|\rho_n)| \le |\Re(C|\rho_n)| + |\Im(C|\rho_n)| \le 2 \end{equation} where the last inequality results from \eqref{eq:rhorho}. On the other hand, we set $\rho_n:= M^np_1$ and obtain \begin{subequations} \begin{align} 2 &\ge |(C|\rho_n)| = |((M^\dag)^nC|p_1)| = |\lambda^*|^n |(C|p_1)| =|\lambda|^n \sqrt{(\Re(C|p_1))^2+ (\Im(C|p_1))^2} \\ &\ge |\lambda|^n|\Re(C|p_1)| = |\lambda|^n(p_1|p_1) \end{align} \end{subequations} where we used $(p_1|p_2)=0$ in the last step; this holds because $p_1$ and $p_2$ result from the same diagonalization, but refer to eigenspaces with eigenvalues of different sign. In essence, we derived \begin{equation} 2 \ge |\lambda|^n(p_1|p_1) \end{equation} which clearly implies a contradiction for $n \to \infty$ because the right hand side increases to infinity for $|\lambda|>1$. Hence there cannot be eigenvalues with modulus larger than 1. \paragraph{Property 5.} The matrix $M$ can be represented with respect to a basis of the Krylov space spanned by the operators \begin{equation} \label{eq:krylov} \rho_n:=M^n\rho_0 \end{equation} where $\rho_0$ is an arbitrary initial density matrix which should contain contributions from all eigenspaces of $M$.
For instance, a Gram-Schmidt algorithm applied to the Krylov basis generates an orthonormal basis $\tilde \rho_n$. Due to the fact that all the operators $\rho_n$ from \eqref{eq:krylov} are hermitian density matrices, $\rho_n = \rho_n^\dag$, we know that all overlaps $(\rho_m|\rho_n)$ are real and hence the constructed orthonormal basis $\tilde \rho_n$ consists of hermitian operators. Also, all matrix elements $(\rho_m|M\rho_n)=(\rho_m|\rho_{n+1})$ are real, so that the resulting representation $\tilde M$ is a matrix with real coefficients, whence \begin{subequations} \begin{equation} \tilde M c = \lambda c \end{equation} implies \begin{equation} \tilde M c^* = \lambda^* c^* \end{equation} \end{subequations} by complex conjugation. Here $c$ is a vector of complex numbers $c_n$ which define the corresponding eigenoperators by \begin{equation} \label{eq:cC} C = \sum_{n=1}^D c_n \tilde \rho_n. \end{equation} Thus, $c$ and $c^*$ define $C$ and $C^\dag$, respectively. \paragraph{Property 6.} In view of the real representation $\tilde M$ of $M$ with respect to an orthonormal basis of hermitian operators derived in the previous paragraph, the determination of the eigenoperators with eigenvalue 1 requires the computation of the kernel of $\tilde M-\mathbb{1}_D$. This is a linear algebra problem in $\mathbb{R}^D$ with real solutions, which correspond to hermitian operators by means of \eqref{eq:cC}. This shows the stated property 6. \begin{figure}[hbt] \centering \includegraphics[width=0.49\columnwidth]{fig6a} \includegraphics[width=0.49\columnwidth]{fig6b} \includegraphics[width=0.49\columnwidth]{fig6c} \caption{(a) Residual entropy as a function of the applied magnetic field for $N=3$, $J_\text{max}=0.02J_\text{Q}$, and $z=1/1000$ to show the position of the nuclear magnetic resonance at $h=2\pi/(z{T_\mathrm{rep}})$ and its shift, dashed line at $\approx 500J_\text{Q} + J_\text{max}/(2z)$. (b) Same as (a) for $z=1/500$. (c) Same as (a) for $z=1/250$.} \label{fig:z-depend} \end{figure} \section{Shift of the Nuclear Resonance} \label{app:shift} \begin{figure}[hbt] \centering \includegraphics[width=0.5\columnwidth]{fig7} \caption{Residual entropy as a function of the applied magnetic field for $N=3$, $z=1/1000$, and various values of $J_\text{max}$. The shifts indicated by the dashed lines correspond to the estimate \eqref{eq:nuclear-shift2}. \label{fig:jm-depend}} \end{figure} In the main text, the shift of the nuclear resonance due to the coupling of the nuclear spins to the central, electronic spin was shown in the right panel of Fig.\ 1(a). The effect can be estimated by \begin{equation} \label{eq:nuclear-shift2} z\Delta h \approx \pm J_\text{max}/2. \end{equation} This relation is highly plausible, but it cannot be derived analytically because no indication of a polarization of the central, electronic spin in $x$-direction was found. Yet, the numerical data corroborates the validity of \eqref{eq:nuclear-shift2}. In Fig.\ \ref{fig:z-depend}, we show that the nuclear resonance without shift occurs for \begin{equation} zh{T_\mathrm{rep}}=2\pi n' \end{equation} where $n'\in\mathbb{Z}$. But it is obvious that an additional shift occurs, which is indeed captured by \eqref{eq:nuclear-shift2}. In order to support \eqref{eq:nuclear-shift2} further, we also study various values of $J_\text{max}$ in Fig.\ \ref{fig:jm-depend}.
The estimate \eqref{eq:nuclear-shift2} captures the main trend of the data, but it is not completely quantitative because the position of the dashed lines relative to the minimum of the envelope of the resonances varies slightly for different values of $J_\text{max}$. Hence, a more quantitative explanation is still called for. \section{Entropy Reduction for Other Distributions of Couplings} \label{app:other} In the main text, we analyzed a uniform distribution of couplings, see Eq.\ \eqref{eq:equidistant}. In order to underline that our results are generic and not linked to a special distribution, we provide additional results for two distributions which are often considered in the literature, namely an exponential parametrization as defined in \eqref{eq:expo} and a Gaussian parametrization as defined in \eqref{eq:gaus}. \begin{figure}[htb] \centering \includegraphics[width=0.49\columnwidth]{fig8a} \includegraphics[width=0.48\columnwidth]{fig8b} \caption{Residual entropy as a function of the applied magnetic field for various bath sizes $N$ for the exponentially distributed couplings given by \eqref{eq:expo}; panel (a) for $\alpha=1$ and panel (b) for $\alpha=0.5$ and hence a larger ratio $J_\text{min}/J_\text{max}$, i.e., a smaller spread of the couplings. \label{fig:expo}} \end{figure} The key difference between both parametrizations \eqref{eq:expo} and \eqref{eq:gaus} is that, due to the quadratic argument in \eqref{eq:gaus}, the large couplings in this parametrization are very close to each other, in particular for increasing $N$. Hence, one can study whether this feature is favorable or unfavorable for entropy reduction. \begin{figure}[htb] \centering \includegraphics[width=0.49\columnwidth]{fig9a} \includegraphics[width=0.48\columnwidth]{fig9b} \caption{Residual entropy as a function of the applied magnetic field for various bath sizes $N$ for the Gaussian distributed couplings given by \eqref{eq:gaus}; panel (a) for $\alpha=1$ and panel (b) for $\alpha=0.5$ and hence a larger ratio $J_\text{min}/J_\text{max}$, i.e., a smaller spread of the couplings. \label{fig:gaus}} \end{figure} Additionally, the difference between $\alpha=0.5$ and $\alpha=1$ lies in a different spread of the couplings. For $\alpha=1$, one has $J_\text{min}/J_\text{max}=1/e$ in both parametrizations while one has $J_\text{min}/J_\text{max}=1/\sqrt{e}$ for $\alpha=0.5$, i.e., the spread is smaller. Figure \ref{fig:expo} displays the results for the exponential parametrization \eqref{eq:expo} while Fig.\ \ref{fig:gaus} depicts the results for the Gaussian parametrization \eqref{eq:gaus}. Comparing both figures shows that the precise distribution of the couplings does not matter much. The exponential and the Gaussian parametrization lead to very similar results. They also strongly resemble the results shown in Fig.\ \ref{fig:size-mag}(a) in the main text for a uniform distribution of couplings. This is quite remarkable since the Gaussian parametrization leads to couplings which are very close to each other and to the maximum coupling. This effect does not appear to influence the achievable entropy reduction. The ratio $J_\text{min}/J_\text{max}$ of the smallest to the largest coupling appears to have an impact. If it is closer to unity, here for $\alpha=0.5$, the reduction of entropy works even better than for smaller ratios. \end{appendix}
\section{Introduction} Deep neural networks have been achieving remarkable success in a wide range of classification tasks in recent years. Alongside increasingly accurate prediction of the classification probability, it is of equal importance to quantify the uncertainty of the classification probability produced by deep neural networks. Without a careful characterization of such uncertainty, the prediction of deep neural networks can be questionable or unusable, and in the extreme case can incur considerable loss \cite{wang2016deep}. For example, deep reinforcement learning suffers from a strikingly low reproducibility due to the high uncertainty of the predictions \cite{henderson2017deep}. Uncertainty quantification can be challenging though; for instance, \cite{guo2017calibration} argued that modern neural network architectures are poor at producing well-calibrated probabilities in binary classification. Recognizing such challenges, there have been recent proposals to estimate and quantify the uncertainty of the output from deep neural networks, and we review those methods in Section \ref{sec:related}. Despite the progress, however, uncertainty quantification of deep neural networks remains relatively underdeveloped \cite{kendall2017uncertainties}. In this paper, we propose deep Dirichlet mixture networks to produce, in addition to a point estimator of the classification probabilities, an associated credible interval (region) that covers the true probabilities at a desired level. We begin with the binary classification problem and employ the Beta mixture model to approximate the probability distribution of the true but \textit{random} probability. We then extend to the general multi-class classification using the Dirichlet mixture model. Our key idea is to view the classification probability as a random quantity, rather than a deterministic value in $[0,1]$. We seek to estimate the distribution of this random quantity using the Beta or the Dirichlet mixture, which we show is flexible enough to model any continuous distribution on $[0,1]$. We achieve the estimation by adding an extra layer to a typical deep neural network architecture, without having to substantially modify the overall structure of the network. Then, based on the estimated distribution, we produce both a point estimate and a credible interval for the classification probability. This credible interval provides an explicit quantification of the classification variability, and can greatly facilitate our decision making. For instance, a point estimate indicating a high probability of having a disease may be regarded with little confidence if the corresponding credible interval is wide. By contrast, a point estimate with a narrow credible interval may be seen as a more convincing diagnosis. The feasibility of our proposal is built upon a crucial observation that, in many classification applications such as medical diagnosis, there exists more than one class label. For instance, a patient's computed tomography image may be evaluated by two doctors, each giving a binary diagnosis of the existence of cancer. In Section \ref{sec:realdata}, we illustrate with an example of diagnosis of Alzheimer's disease (AD) using patients' anatomical magnetic resonance imaging. For each patient, there is a binary diagnosis status as AD or healthy control, along with additional cognitive scores that are strongly correlated with and carry crucial information about one's AD status.
We thus consider the dichotomized version of the cognitive scores, combine them with the diagnosis status, and feed them together into our deep Dirichlet mixture networks to obtain a credible interval of the classification probability. We remark that the existence of multiple labels is common rather than exceptional in a variety of real-world applications. Our proposal provides a useful addition to the essential yet still growing inferential machinery for deep neural network learning. Our method is simple, fast, effective, and can be coupled with any existing deep neural network structure. In particular, it adopts a frequentist inference perspective, but produces a Bayesian-style outcome of credible intervals. \subsection{Related Work} \label{sec:related} Uncertainty quantification for artificial neural networks has been under development for over two decades. Early examples include the delta method \cite{hwang1997prediction} and the Bootstrap methods \cite{efron1994introduction,heskes1997practical,carney1999confidence}. However, the former requires computing the Hessian matrix and is computationally expensive, whereas the latter hinges on an unbiased prediction. When the prediction is biased, the total variance is underestimated, which would in turn result in a narrower credible interval. Another important line of research is Bayesian neural networks \cite{mackay1992evidence, mackay1992practical}, which treat model parameters as distributions, and thus can produce an explicit uncertainty quantification in addition to a point estimate. The main drawback is the prohibitive computational cost of running MCMC algorithms. There have been some recent proposals aiming to address this issue, most notably \cite{gal2016dropout, li2017dropout}, which used dropout tricks. Our proposal, however, is a frequentist solution, and thus we have chosen not to numerically compare with those Bayesian approaches. Another widely used uncertainty quantification method is the mean variance estimation (MVE) approach \cite{nix1994estimating}. It models the data noise using a normal distribution, and employs a neural network to output the mean and variance. The optimization is done by minimizing the negative log-likelihood function. It has mainly been designed for regression tasks, and is less suitable for classification. There are some more recent proposals of uncertainty quantification. One is the lower and upper bound estimation (LUBE) \cite{khosravi2011lower,quan2014short}. LUBE has been proven successful in numerous applications. However, its loss function is non-differentiable and gradient descent cannot be applied for optimization. The quality-driven prediction interval method (QD) has recently been proposed to improve LUBE \cite{pearce2018high}. It is a distribution-free method that outputs the upper and lower bounds of the prediction. The uncertainty can be estimated by measuring the distance between the two bounds. Unlike LUBE, the objective function of QD can be optimized by gradient descent. But similar to MVE, it is designed for regression tasks. The confidence network is another method that estimates confidence by adding an output node next to the softmax probabilities \cite{devries2018confidence}. This method is suitable for classification. Although its original goal was out-of-distribution detection, its confidence score can be used to represent the intrinsic uncertainty. Later in Section \ref{sec:compare}, we numerically compare our method with MVE, QD, and the confidence network.
We also clarify that our proposed framework is different from the mixture density network \cite{bishop1994mixture}. The latter trains a neural network to model the distribution of the outcome using a mixture distribution. By contrast, we aim to learn the distribution of the classification probabilities and to quantify their variations. \section{Dirichlet Mixture Networks} In this section, we describe our proposed Dirichlet mixture networks. We begin with the case of binary classification, where the Dirichlet mixture models reduce to the simpler Beta mixture models. Although simpler, the binary classification case is sufficient to capture all the key ingredients of our general approach and thus loses no generality. At the end of this section, we discuss the extension to the multi-class case. \subsection{Loss Function} We begin with a description of the key idea of our proposal. Let $\{1, 2\}$ denote the two classes. Given an observational unit $\x$, e.g., an image, we view the probability $p_{\x}$ that $\x$ belongs to class 1 as a random variable, instead of a deterministic value in $[0, 1]$. We then seek to estimate the probability density function $f(p; \x)$ of $p_{\x}$. This function encodes the \textit{intrinsic} uncertainty of the classification problem. A point estimate of the classification probability only focuses on the mean, $\int_0^1 p f(p; \x) \dx p$, which is not sufficient for informed decision making without an explicit quantification of its variability. For example, it can happen that, for two observational units $\x$ and $\x'$, their mean probabilities, and thus their point estimates of the classification probability, are the same. However, the densities can be far apart from each other, leading to completely different variabilities, and different interpretations of the classification results. Figure \ref{fig:illlustration} shows an illustration. Our proposal then seeks to estimate the density function $f(p; \x)$ for each $\x$. \begin{figure}[t!] \centering \includegraphics[width=8.3cm]{images/introduction.PNG} \caption{Illustration of the setting. Two or more labels are generated with the \textit{same} probability $p_{\x}$, which is \textit{randomly} drawn from a distribution that we wish to estimate.} \label{fig:illlustration} \end{figure} A difficulty arising from this estimation problem is that $f$ in general can be any density function on $[0, 1]$. To address this, we propose to simplify the problem by restricting to the case where $f$ is a Beta mixture; i.e., \begin{equation}\label{eq:beta_density_mix} f(p; \x) = \sum_{k=1}^K w^k \frac{p^{\alpha^k_1 -1} (1-p)^{\alpha^k_2 -1}}{\B(\alpha^k_1, \alpha^k_2)}, \end{equation} where $\B(\cdot, \cdot)$ is the Beta function, and the parameters $w^k, \bm\alpha^k = (\alpha^k_1, \alpha^k_2)$ are smooth functions of $\x$, $k = 1, \ldots, K$. The weights $w^k$ satisfy $w^1 + \cdots + w^K = 1$. Later we show that this Beta mixture distribution is flexible enough to adequately model almost any distribution $f$ on $[0,1]$. With the form of density function \eqref{eq:beta_density_mix} in place, our goal then turns to estimating the positive parameters $\alpha_1^k, \alpha_2^k$, and $w^k$. To do so, we derive the loss function that is to be minimized by deep neural networks. We employ the negative log-likelihood function from \eqref{eq:beta_density_mix} as the loss function.
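For concreteness, the Beta mixture density \eqref{eq:beta_density_mix} is straightforward to evaluate numerically; the following small SciPy sketch uses illustrative, not fitted, parameter values.
\begin{verbatim}
import numpy as np
from scipy.stats import beta

def beta_mixture_pdf(p, w, a1, a2):
    # Beta mixture density: sum_k w^k * Beta(p; alpha_1^k, alpha_2^k)
    p = np.asarray(p, dtype=float)
    return sum(wk * beta.pdf(p, x, y) for wk, x, y in zip(w, a1, a2))

# Illustrative parameter values for K = 2 components
w, a1, a2 = [0.3, 0.7], [2.0, 8.0], [5.0, 3.0]
grid = np.linspace(0.0, 1.0, 101)
density = beta_mixture_pdf(grid, w, a1, a2)
\end{verbatim}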
For the $j$th observational unit of the training data, $j = 1, \ldots, n$, let $\x_j$ denote the input, e.g., the subject's image scan, and $\y_j = \left( y_j^{(1)}, \ldots, y_j^{(m_j)} \right)$ denote the vector of labels taking values from $\{1, 2\}$. Here we assume $m_j \geq 2$, reflecting that there is more than one class label for each observational unit. Write $\w = (w^1, \ldots, w^K)$ and $\bm \alpha = (\bm\alpha^1, \ldots, \bm\alpha^K)$. By integrating out $p$, the likelihood function for the observed pair $(\x_j, \y_j)$ is \[ \begin{aligned} &L_j(\w, \bm\alpha; \x_j, \y_j) \\ &= \int_0^1 p^{\sum_{l=1}^{m_j} \bm{1}(y_j^{(l)}= 1)} (1-p)^{\sum_{l=1}^{m_j} \bm{1}(y_j^{(l)}=2)} f(p; \x_j) \dx p. \end{aligned} \] Write $S_{ij} = \sum_{l=1}^{m_j} \bm{1} \left( y_j^{(l)} = i \right)$, where $\bm{1}(\cdot)$ is the indicator function, $i = 1, 2$, $j = 1, \ldots, n$, and this term quantifies the number of times $\x_j$ is labeled $i$. Plugging \eqref{eq:beta_density_mix} into $L_j$, we get \[ \begin{aligned} & L_j(\w, \bm\alpha; \x_j, \y_j) \\ &= \int_0^1 p^{S_{1j}} (1-p)^{S_{2j}} \sum_{k=1}^K \frac{w^k p^{\alpha^k_1 -1} (1-p)^{\alpha^k_2 -1}}{\B(\alpha^k_1, \alpha^k_2)} \dx p\\ &= \sum_{k=1}^K \int_0^1 \frac{w^k}{\B(\alpha^k_1, \alpha^k_2)} p^{\alpha^k_1 -1 + S_{1j}} (1-p)^{\alpha^k_2 -1 + S_{2j}} \dx p. \end{aligned} \] By a basic property of Beta functions, we further get \[ L_j(\w, \bm\alpha; \x_j, \y_j) = \sum_{k=1}^K \frac{w^k\B(\alpha^k_1 + S_{1j}, \alpha^k_2 + S_{2j})}{\B(\alpha^k_1, \alpha^k_2)}. \] Aggregating all $n$ observational units, we obtain the full negative log-likelihood function, \begin{equation}\label{eq:full_like} \begin{aligned} & -\ell(\w, \bm\alpha; \x_1, \y_1, \ldots, \x_n, \y_n) \\ & = -\sum_{j=1}^n \log \left[\sum_{k=1}^K \frac{w^k\B(\alpha^k_1 + S_{1j}, \alpha^k_2 + S_{2j})}{\B(\alpha^k_1, \alpha^k_2)} \right]. \end{aligned} \end{equation} We then propose to employ a deep neural network learner to estimate $\w$ and $\bm \alpha$. \subsection{Credible Intervals} \label{sec:credible-interval} To train our model, we simply replace the existing loss function of a deep neural network, e.g., the cross-entropy, with the negative log-likelihood function given in \eqref{eq:full_like}. Therefore, we can take advantage of current deep learning frameworks such as PyTorch for automatic gradient calculation. Then we use mini-batch gradient descent to optimize the entire neural network's weights. Once the training is finished, we obtain the estimates of the parameters of the mixture distribution, $\{\bm{w}, \bm{\alpha}\}$. One implementation detail to note is that the Beta function has no closed-form derivative. To address this issue, we used the fast log-gamma algorithm, which is available in PyTorch, to obtain an approximation of the Beta function. Also, we applied the softmax function to the weights of the mixtures to ensure that $w^1 + \cdots + w^K = 1$, and took the exponential of $\bm{\alpha}^1, \ldots, \bm{\alpha}^K$ to ensure that these parameters remain positive as required. Given the estimated parameters $\widehat{\w}, \widehat{\bm \alpha}$ from the deep mixture networks, we next construct the credible interval for explicit uncertainty quantification.
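Before turning to the interval construction, we note that the training step described above admits a compact implementation. The following PyTorch sketch of the negative log-likelihood \eqref{eq:full_like} is only illustrative: the assumption that the backbone network emits $3K$ unconstrained outputs per observation, as well as all variable names, are ours and not the exact code used in the experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

def log_beta(a, b):
    # log B(a, b) via the log-gamma function available in PyTorch
    return torch.lgamma(a) + torch.lgamma(b) - torch.lgamma(a + b)

def beta_mixture_nll(raw, s1, s2):
    # raw: (batch, 3K) unconstrained network outputs
    # s1, s2: (batch,) label counts S_{1j} and S_{2j}
    K = raw.shape[1] // 3
    logits, a1_raw, a2_raw = raw.split(K, dim=1)
    log_w = F.log_softmax(logits, dim=1)           # softmax: weights sum to one
    a1, a2 = torch.exp(a1_raw), torch.exp(a2_raw)  # exponential: alphas stay positive
    s1, s2 = s1.unsqueeze(1), s2.unsqueeze(1)
    log_comp = log_beta(a1 + s1, a2 + s2) - log_beta(a1, a2)
    return -torch.logsumexp(log_w + log_comp, dim=1).sum()
\end{verbatim}
This loss can be minimized by mini-batch gradient descent with any standard optimizer, exactly as one would minimize the cross-entropy.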
For a new observation $\x_0$, the estimated distribution of the classification probability $p_{\x_0}$ takes the form \[ \hat f(p; \x_0) = \sum_{k=1}^K \hat w^k(\x_0) \frac{p^{\hat\alpha^k_1(\x_0) -1} (1-p)^{\hat\alpha^k_2(\x_0) -1}}{\B(\hat\alpha^k_1(\x_0), \hat\alpha^k_2(\x_0))}, \] where we write $\hat{w}^k, \hat{\alpha}^k_1, \hat{\alpha}^k_2$ in the form of explicit functions of $\x_0$. The mean of this estimated density, $\int_0^1 p \hat f(p; \x_0) \dx p$, is an approximately unbiased estimator of $p_{\x_0}$. Meanwhile, we can construct the two-sided credible interval of $p_{\x_0}$ with nominal level $1 - \alpha$, $\alpha \in (0,1)$, as \[ \left[ \widehat Q_{\frac{\alpha}{2}}, \widehat Q_{1 - \frac{\alpha}{2}} \right], \] where $\widehat Q_{\frac{\alpha}{2}}$ and $\widehat Q_{1 - \frac{\alpha}{2}}$ are the $\alpha/2$ and $1 - \alpha/2$ quantiles of the estimated density $\hat{f}(p; \x_0)$. Similarly, we can construct the upper and lower credible intervals as \[ \left[0, \widehat Q_{1 - \alpha}\right], \text{ and } \left[\widehat Q_{\alpha}, 1\right], \] respectively, where $\widehat Q_{\alpha}$ and $\widehat Q_{1 - \alpha}$ are the $\alpha$ and $1 - \alpha$ quantiles of the estimated density $\hat{f}(p; \x_0)$. Next we justify our choice of the Beta mixture for the distribution of the classification probability, by showing that any density function under certain regularity conditions can be approximated well by a Beta mixture. Specifically, denote by $\mathcal P$ the set of all probability density functions $f$ on $[0, 1]$ with at most countably many discontinuities that satisfy \begin{align}\nonumber \int_0^1 f(p) \left|\log f(p)\right| \dx p < \infty. \end{align} It is shown in \cite{article} that any $f \in \mathcal{P}$ can be approximated arbitrarily well by a sequence of Beta mixtures. That is, for any $f \in \mathcal{P}$ and any $\epsilon > 0$, there exists a Beta mixture distribution $f_{\B}$ such that \begin{align}\nonumber \mathrm{D}_{\mathrm{KL}} \left(f \| f_{\B} \right) \leq \epsilon, \end{align} where $\mathrm{D}_{\mathrm{KL}} (\cdot \| \cdot)$ denotes the Kullback-Leibler divergence. This result establishes the validity of approximating a general distribution function using a Beta mixture. The proof of this result starts by recognizing that $f$ can be accurately approximated by piecewise constant functions on $[0,1]$ owing to its countable number of discontinuities. Next, each constant piece is a limit of a sequence of Bernstein polynomials, which are infinite Beta mixtures with integer parameters \cite{verdinelli1998bayesian, petrone2002consistency}. \subsection{Multiple-class Classification} \label{sec:extens-mult-labels} We next extend our method to the general case of multi-class classification. It follows seamlessly from the prior development except that now the labels $\y_j = \left( y_j^{(1)}, \ldots, y_j^{(m_j)} \right)$ take values from $\{1, 2, \ldots, d\}$, where $d$ is the total number of classes. Given an observation $\x$, the multinomial distribution over $\{1, 2, \ldots, d\}$ is represented by $\p = (p_1, \ldots, p_d)$, which, as a point in the simplex $\Delta = \{(c_1, \ldots, c_d): c_i \ge 0, c_1 + \cdots + c_d = 1\}$, is assumed to follow a Dirichlet mixture \[ f(\p; \x) = \sum_{k=1}^K w^k \frac1{\B(\bm\alpha^k)} \prod_{i=1}^d p_i^{\alpha^k_i -1}, \] where the generalized Beta function takes the form \[ \B(\bm\alpha) = \frac{\prod_{i=1}^d \Gamma(\alpha_i)}{\Gamma(\alpha_1 + \cdots + \alpha_d)}.
\] The likelihood of the $j$th observation is \[ L_j = \int_{\Delta} \left( \prod_{i=1}^d p_i^{S_{ij}} \right) \sum_{k=1}^K w^k \frac1{\B (\bm\alpha^k)} \prod_{i=1}^d p_i^{\alpha^k_i -1} \dx \bm p, \] where $S_{ij} = \sum_{l=1}^{m_j} \bm{1}\left( y_j^{(l)}= i \right)$. Accordingly, the negative log-likelihood function is \[ \footnotesize \begin{aligned} &-\ell(\w, \bm\alpha; \x_1, \y_1, \ldots, \x_n, \y_n) \\ &= -\sum_{j=1}^n \log \left[\sum_{k=1}^K \frac{w^k\B\left( \alpha^k_1 + S_{1j}, \ldots, \alpha^k_d + S_{dj} \right)}{\B(\bm\alpha^k)} \right]. \end{aligned} \normalsize \] This is the loss function to be minimized in the Dirichlet mixture networks. \section{Simulations} \label{sec:simulations} \subsection{Simulations on Coverage Proportion} We first investigate the empirical coverage of the proposed credible interval. We used the MNIST handwritten digits data, and converted the ten outcomes (0-9) first to two classes (0-4 as Class 1, and 5-9 as Class 2), then to three classes (0-2 as Class 1, 3-6 as Class 2, and 7-9 as Class 3). In order to create multiple labels for each image, we trained a LeNet-5 \cite{lecun1998gradient} to output the classification probability $p_i$, then sampled multiple labels for the same input image based on a binomial or multinomial distribution with $p_i$ as the parameter. We further divided the simulated data into training and testing sets. We calculated the empirical coverage as the proportion of the testing set for which the corresponding $p_i$ falls in the constructed credible interval. We assessed the coverage performance by examining how close the empirical coverage is to the nominal coverage over the range of nominal levels from 75\% to 95\%. Ideally, the empirical coverage should be the same as the nominal level. \begin{figure}[h] \centering \subfloat[Two labels]{{\includegraphics[width=4cm]{images/sim_2o2o.png}}} \, \subfloat[Three labels]{{\includegraphics[width=4cm]{images/sim_2o3o.png}}} \caption{Empirical coverage of the estimated credible interval for a two-class classification task, with the two-label setting shown in (a), and the three-label setting in (b). The blue line represents the empirical coverage of the estimated credible interval. The orange 45-degree line represents the ideal estimation. The closer the two lines, the better the estimation.} \label{sim2} \end{figure} Figure \ref{sim2} reports the simulation results for the two-class classification task, where panel (a) shows the case with two labels available for each input, and panel (b) the case with three labels available. The orange 45-degree line represents the ideal coverage. The blue line represents the empirical coverage of the credible interval produced by our method. It is seen that our constructed credible interval covers 98.19\% of the truth with the 95\% nominal level for the two-label scenario, and 98.17\% for the three-label scenario. In general, the empirical coverage is close to or slightly larger than the nominal value, suggesting that the credible interval is reasonably accurate. Moreover, the interval becomes more accurate with more labels on each input.
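The empirical coverage reported above is computed by numerically inverting the fitted mixture distribution function for each test image. The following SciPy sketch, with illustrative function names, shows one way to obtain the interval endpoints and the coverage indicator for a single test point; the empirical coverage is then the average of this indicator over the testing set.
\begin{verbatim}
import numpy as np
from scipy.stats import beta
from scipy.optimize import brentq

def mixture_quantile(q, w, a1, a2):
    # Invert the Beta mixture distribution function by root finding
    cdf = lambda p: sum(wk * beta.cdf(p, x, y)
                        for wk, x, y in zip(w, a1, a2)) - q
    return brentq(cdf, 1e-12, 1.0 - 1e-12)

def covers(p_true, w, a1, a2, level=0.95):
    # Two-sided credible interval with nominal level `level`
    lo = mixture_quantile((1.0 - level) / 2.0, w, a1, a2)
    hi = mixture_quantile(1.0 - (1.0 - level) / 2.0, w, a1, a2)
    return lo <= p_true <= hi
\end{verbatim}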
\begin{figure}[h] \centering \subfloat[Class 1 with two labels]{{\includegraphics[width=4cm]{images/sim_3c2o_p0.png}}} \, \subfloat[Class 2 with two labels]{{\includegraphics[width=4cm]{images/sim_3c2o_p1.png}}} \, \subfloat[Class 1 with three labels]{{\includegraphics[width=4cm]{images/sim_3c3o_p0.png}}} \, \subfloat[Class 2 with three labels]{{\includegraphics[width=4cm]{images/sim_3c3o_p1.png}}} \caption{Empirical coverage of the estimated credible interval for a three-class classification task, with the two-label setting shown in (a) and (b), and the three-label setting in (c) and (d). The blue line represents the empirical coverage of the estimated credible interval. The orange 45-degree line represents the ideal estimation. The closer the two lines, the better the estimation. For each graph, the probability is calculated in the one-vs-all fashion; e.g., (a) represents the credible interval of Class 1 versus Classes 2 and 3 combined.} \label{sim3} \end{figure} Figure \ref{sim3} reports the simulation results for the three-class classification task, where panels (a) and (b) are when there are two labels available, and panels (c) and (d) are when there are three labels available. A similar qualitative pattern is observed in Figure \ref{sim3} as in Figure \ref{sim2}, indicating that our method works well for the three-class classification problem. \subsection{Comparison with Alternative Methods} \label{sec:compare} We next compare our method with three alternatives that serve as the baselines, the confidence network \cite{devries2018confidence}, the mean variance estimation (MVE) \cite{nix1994estimating}, and the quality-driven prediction interval method (QD) \cite{pearce2018high}. We have chosen those methods as baselines, as they also targeted to quantify the intrinsic variability and represented the most recent state-of-the-art solutions to this problem. \begin{figure}[h] \centering \subfloat[Plot for $f_1=\frac{\psi_1}{\psi_2} + 1$]{{\includegraphics[width=4.1cm]{images/function0.png}}} \subfloat[Plot for $f_2=\frac{\psi_2}{\psi_1} + 1$]{{\includegraphics[width=4.1cm]{images/function1.png}}} \, \subfloat[Scatter Plot for 1000 samples]{{\includegraphics[width=6cm]{images/simSamples.png}}} \caption{Data is generated from a Bernoulli distribution whose parameter is sampled from a $\B$ distribution with parameter($f_1$, $f_2$). (a) and (b) show the 3D landscapes. (c) shows 1,000 samples from this distribution with two labels for each data point. Green means all labels are 1. Red means all labels are 2. Yellow means that labels are a mix of 1 and 2.} \label{samples} \end{figure} To facilitate graphical presentation of the results, we simulated the input data $\x$ from two-dimensional Gaussian mixtures. Specifically, we first sampled $\x$ from a mixture of two Gaussians with means at $(-2, 2)$ and $(2, -2)$, and denote its probability density function as $\psi_1$. We then sampled $\x$ from another mixture of two Gaussians with means at $(2, 2)$ and $(-2, -2)$, and denote its probability density function as $\psi_2$. For each Gaussian component, the variance is set at 0.7. We then sampled the probability $p$ of belonging to Class 1 from a Beta distribution with the parameters $\psi_1 / \psi_2 + 1$ and $\psi_2 / \psi_1 + 1$. Finally, we sampled the class labels from a Bernoulli distribution with the probability of success $p$. At each input sample $\x$, we sampled two class labels. For a fair comparison, we duplicate the data for the baseline methods that only use one class label. 
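The data-generating process just described can be summarized in a few lines of code. The sketch below is for illustration only; the equal split among the four Gaussian components and the random seed are assumptions not specified above.
\begin{verbatim}
# Sketch of the simulation design: 2-D inputs from four Gaussian components,
# class-1 probability p ~ Beta(psi1/psi2 + 1, psi2/psi1 + 1), two labels each.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, var = 1000, 0.7
means1 = [(-2.0, 2.0), (2.0, -2.0)]        # components of psi_1
means2 = [(2.0, 2.0), (-2.0, -2.0)]        # components of psi_2

def mixture_pdf(x, means):
    return 0.5 * sum(multivariate_normal(m, var * np.eye(2)).pdf(x) for m in means)

centers = np.array(means1 + means2)[rng.integers(0, 4, size=n)]
x = centers + rng.normal(scale=np.sqrt(var), size=(n, 2))
psi1, psi2 = mixture_pdf(x, means1), mixture_pdf(x, means2)
p = rng.beta(psi1 / psi2 + 1.0, psi2 / psi1 + 1.0)     # P(class 1) at each x
labels = rng.binomial(1, p[:, None], size=(n, 2))      # two labels per input
\end{verbatim}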
Figure \ref{samples} (c) shows a scatter plot of 1,000 samples, each with two labels. The green dots correspond to the samples whose class labels are 1 in both replications, the red dots are those whose labels are 2 in both replications, and the yellow dots are those samples whose class labels differ between the two replications. Most of the yellow dots are located along the two axes that separate the four quadrants.
\begin{figure}[h] \centering \subfloat[Ideal]{{\includegraphics[width=4.5cm]{images/Ideal.png}}} \hspace{3cm} \subfloat[Our Approach]{{\includegraphics[width=4cm]{images/Beta.png}}} \, \subfloat[MVE]{{\includegraphics[width=4cm]{images/MVE.png}}} \, \subfloat[QD]{{\includegraphics[width=4cm]{images/QD.png}}} \, \subfloat[Confidence network]{{\includegraphics[width=4cm]{images/confidence.png}}} \, \caption{Variance contour plots of our approach and the baselines. (a) shows the ideal variance plot. (b) is the result of our approach. (c), (d), (e) are the results of the baselines. Blue means low data noise, and yellow means high data noise. Among the results, (b), our approach, looks most similar to the ideal.} \label{baselines} \end{figure}
Figure \ref{baselines} reports the contours of the estimated variance. Panel (a) is the true variance contour for the simulated data, obtained numerically from the data-generating process. It shows that the largest variance occurs along the two axes that separate the four quadrants. Panel (b) is the result of our approach. We used ten mixture components here. The predicted mean and variance were calculated using the laws of total expectation and total variance. Our method achieved a 98.4\% classification accuracy. More importantly, it successfully captured the variability of the classification probability and produced a variance contour that looks similar to (a). Panel (c) is the result of the mean variance estimation \cite{nix1994estimating}. It also achieved a 98.4\% classification accuracy, but it failed to correctly characterize the variability, partly because it models the variability as Gaussian. Panel (d) is the result of the quality-driven prediction interval method \cite{pearce2018high}. It only obtained an 89.1\% classification accuracy. As a distribution-free method, it predicted a higher variability in the center, but ignored other highly variable regions. Panel (e) is the result of the confidence network \cite{devries2018confidence}. It achieved a 98.1\% classification accuracy and a reasonably good estimate of the variability. Overall, our method achieved the best characterization of the variability while maintaining a high classification accuracy.
\begin{figure}[h] \centering \subfloat[Point (0,0)]{{\includegraphics[width=4cm]{images/point0_0.png}}} \, \subfloat[Point (1,1)]{{\includegraphics[width=4cm]{images/point1_1.png}}} \caption{Beta mixture density functions output by the neural network. (a) is the result at point (0,0). (b) is the result at point (1,1). Point (0,0) clearly has a higher variance.} \label{PDFplots} \end{figure}
Figure \ref{PDFplots} shows the density functions of the output distributions at two representative points. At point (0,0), the distribution indeed has a higher variance. \section{Real Data Analysis} \label{sec:realdata} \subsection{Data Description} We illustrate our proposed method on a medical imaging diagnosis application. We remark that, although the example dataset is small in size, with only thousands of image scans, our method is equally applicable to both small and large datasets.
Alzheimer's Disease (AD) is the leading form of dementia in elderly subjects, and is characterized by progressive and irreversible impairment of cognitive and memory functions. With the aging of the worldwide population, it has become an international imperative to understand, diagnose, and treat this disorder. The goal of the analysis is to diagnose patients with AD based on their anatomical magnetic resonance imaging (MRI) scans. Being able to provide an explicit uncertainty quantification for this classification task, which is potentially challenging and of a high-risk, is especially meaningful. The dataset we analyzed was obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI). For each patient, in addition to his or her diagnosis status as AD or normal control, two cognitive scores were also recorded. One is the Mini-Mental State Examination (MMSE) score, which examines orientation to time and place, immediate and delayed recall of three words, attention and calculation, language and vision-constructional functions. The other is the Global Clinical Dementia Rating (CDR-global) score, which is a combination of assessments of six domains, including memory, orientation, judgment and problem solving, community affairs, home and hobbies, and personal care. Although MMSE and CDR-global are not used directly for diagnosis, their values are strongly correlated with and carry crucial information about one's AD status. Therefore, we took the dichotomized cognitive scores, and used them as labels in addition to the diagnosis status. We used the ADNI 1-Year 1.5T dataset, with totally 1,660 images. We resized all the images to the dimension $96\times96\times80$. The diagnosis contains three classes: normal control (NC), mild cognitive impairment (MCI), and Alzheimer disease (AD). Among them, MCI is a prodromal stage of AD. Since the main motivation is to identify patients with AD, we combined NC and MCI as one class, referred as NC+MCI, and AD as the other class, and formulated the problem as a binary classification task. We used three types of assessments to obtain the three classification labels: the doctor's diagnostic assessment, the CDR-global score, and the MMSE score. For the CDR-global score, we used 0 for NC, 0.5 for MCI, and 1 for AD. For the MMSE score, we used 28-30 as NC, 24-27 as MCI, and 0-23 as AD. Table \ref{data} summarizes the number of patients in each class with respect to the three different assessments. \begin{table}[t] \begin{center} \begin{tabular}{l*{6}{c}r} & Diagnosis & CDR-global & MMSE \\ \hline NC & 500 & 664 & 785 \\ MCI & 822 & 830 & 570 \\ AD & 338 & 166 & 305 \\ \hline Total & 1660 & 1660 & 1660\\ \end{tabular} \caption{Detailed patient statistics} \label{data} \end{center} \end{table} \subsection{Classifier and Results} Figure~\ref{arch} describes the architecture of our neural network based classifier. We used two consecutive 3D convolutional filters followed by max pooling layers. The input is $96\times96\times80$ image. The first convolutional kernel size is $ 5\times5\times5$, and the max pooling kernel is $5\times5\times5$. The second convolutional kernel is $3\times3\times3$, and the following max pooling kernel is $2\times2\times2$. We chose sixteen as the batch size, and $1e-6$ as the learning rate. We chose a $K=3$-component Beta mixture. \begin{figure}[t!] 
\centering \includegraphics[width=8.2cm]{images/Diagram.png} \caption{Architecture of the neural network used in the real data experiment.} \label{arch} \end{figure} We randomly selected 90\% of the data for training and the remaining 10\% for testing. We plotted the credible interval of all the 166 testing subjects with respect to their predicted probability of having AD or not in Figure \ref{real}(a). We then separated the testing data into three groups: the subjects with their assessments unanimously labeled as NC+MCI (green dots), the subjects with their assessments unanimously labeled as AD (red dots), and the subjects with their assessments with a mix of NC+MCI and AD (blue dots, and referred as MIX). \begin{figure}[t!] \centering \subfloat[Credible interval for 166 subjects]{{\includegraphics[width=4cm]{images/real_2c3o_scatter.png}}} \, \subfloat[Patients with all NC+MCI label]{{\includegraphics[width=4cm]{images/2c3o_CI_NCMCI.png}}} \, \subfloat[Patients with all AD label]{{\includegraphics[width=4cm]{images/2c3o_CI_AD.png}}} \, \subfloat[Patients with mixed AD and NC+MCI label]{{\includegraphics[width=4cm]{images/2c3o_CI_MIX.png}}} \caption{Credible intervals constructed in the real data experiment.} \label{real} \end{figure} We observe that, for the patients in the NC+MCI category, 95\% of them were estimated to have a smaller than 0.1 probability of being AD, and a tight credible interval with the width smaller than 0.15. We further randomly selected five patients in the NC+MCI category and plotted their credible intervals in Figure \ref{real}(b). Each has a close to 0 probability of having AD, and each with a tight credible interval. For patients in the AD category, most exhibit the same pattern of having a tight credible interval, with a few potential outliers. For the patients in the MIX category, we randomly selected five patients and plotted their predicted classification probability with the associated credible interval in Figure~\ref{real}(c). We see that Subject 4 was classified as AD with only 0.45 probability but has a large credible interval of width 0.3. We took a closer look at this subject, and found that the wide interval may be due to inaccurate labeling. The threshold value we applied to dichotomize the MMSE score was 23, in that a subject with the MMSE below or equal to 23 is classified as AD. Subject 4 happens to be on the boundary line of 23. This explains why the classifier produced a wide credible interval. In Figure \ref{real}(a), we also observe that the classifier is less confident in classifying the patients in the MIX category, in that almost all the blue dots are above the 0.15 credible interval. We again randomly selected five patients in the MIX category and plotted their predicted classification probabilities with the corresponding credible intervals in Figure~\ref{real}(d). Comparing to Figure~\ref{real}(b), the credible intervals for patients in the MIX category are much wider than those in the unanimous NC+MCI category. \section{Conclusion} We present a new approach, deep Dirichlet mixture networks, to explicitly quantify the uncertainty of the classification probability produced by deep neural networks. Our approach, simple but effective, takes advantage of the availability of multiple class labels for the same input sample, which is common in numerous scientific applications. It provides a useful addition to the inferential machinery for deep neural networks based learning. There remains several open questions for future investigation. 
Methodologically, we currently assume that the multiple class labels for each observational sample are of the same quality. In practice, different sources of information may have different levels of accuracy. It is warranted to investigate how to take this into account in our approach. Theoretically, Petrone and Wasserman \cite{petrone2002consistency} obtained the convergence rate of the Bernstein polynomials. Our Dirichlet mixture distribution should have at least a comparable convergence rate. This rate can guide us theoretically on how many components the mixture should contain. We leave these problems for future research.
\newpage \section{Dirichlet Mixture Networks} In this section, we introduce the Dirichlet mixture networks, highlighting their flexibility in dealing with a wide range of uncertainty quantification tasks. We begin with the case of binary classification, where Dirichlet mixtures reduce to the simpler Beta mixtures. Although perhaps the simplest example, the binary case is sufficient to capture all ingredients of our general approach and, thus, no generality is lost. For completeness, extensions to the multi-class case are considered in Section \ref{sec:extens-mult-labels}.
\begin{figure}[h] \centering \includegraphics[width=8cm]{images/introduction.PNG} \caption{Illustration of the setting. Two or more labels are generated with the \textit{same} probability $p_x$, which is \textit{randomly} drawn from a distribution that we wish to estimate.} \label{fig:cat} \end{figure}
Let $\{1, 2\}$ denote the two classes. Given an observational unit $x$ (e.g., the image in Figure~\ref{fig:cat}), our model assumes that the probability $p_x$ that $x$ belongs to the first class is a \textit{random variable}, instead of a deterministic value in $[0, 1]$. Denote by $f(p; x)$ defined on $[0, 1]$ the probability density function of $p_x$, which encodes the \textit{intrinsic} uncertainty of the classification problem. In particular, merely knowing the mean probability $\int_0^1 p\, f(p; x) \dx p$ is not sufficient for informed decision-making. For example, it can happen that, for two units $x$ and $x'$, the mean probabilities are the same but the densities are far apart from each other. Our Dirichlet mixture networks aim to estimate the density function $f(p;x)$ for each $x$. We begin by introducing the loss function to be minimized via deep learning. \subsection{Loss function}\label{sec:likelihood} A difficulty arising from this estimation problem is that $f$ in general can be any density function on $[0, 1]$. Recognizing this difficulty, we propose to simplify the problem by restricting our focus to the case where $f$ is a Beta mixture: \begin{equation}\label{eq:beta_density_mix} f(p; x) = \sum_{k=1}^K w^k \frac{p^{\alpha^k_1 -1} (1-p)^{\alpha^k_2 -1}}{\B(\alpha^k_1, \alpha^k_2)}, \end{equation} where $\B(\cdot, \cdot)$ is the Beta function, and the parameters $w^k, \bm\alpha^k = (\alpha^k_1, \alpha^k_2)$ are (smooth) functions of $x$, that is, $w^k = w^k(x), \alpha^k_1 = \alpha^k_1(x)$, and $\alpha^k_2 = \alpha^k_2(x)$. The weights $w^k$ satisfy $w^1 + \cdots + w^K = 1$. With this density form in place, our goal is to estimate the parameters $\alpha_1^k, \alpha_2^k > 0$, and $w^k$ for $1 \le k \le K$. Before proceeding to the next step, it is worth pointing out that the restriction on the density function class is nearly harmless due to the universal approximation of Beta mixtures. Section~\ref{sec:universaility} will discuss this in detail.
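Since $w^k(x)$ and $\bm\alpha^k(x)$ are outputs of a network constrained only to lie on the simplex and to be positive, one convenient (though by no means unique) parameterization maps the last-layer features through a softmax and a softplus. The sketch below is an illustrative PyTorch-style head; the choice of the softplus transform and of the small positive offset are assumptions made for illustration rather than part of the method.
\begin{verbatim}
# Sketch: a network head producing valid Beta-mixture parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaMixtureHead(nn.Module):
    def __init__(self, in_features, K):
        super().__init__()
        self.K = K
        self.fc = nn.Linear(in_features, 3 * K)   # K weights + 2K shape parameters

    def forward(self, h):                          # h: (batch, in_features)
        out = self.fc(h)
        w = F.softmax(out[:, : self.K], dim=-1)    # w^1 + ... + w^K = 1
        alpha = F.softplus(out[:, self.K :]) + 1e-4   # strictly positive
        return w, alpha[:, : self.K], alpha[:, self.K :]   # w, alpha_1, alpha_2
\end{verbatim}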
The next step is to obtain the negative log-likelihood function, which serves as the the loss function used in the mixture networks. Let $x_1, \ldots, x_n$ be the features of $n$ observational units, each of which are associated with labels $y_j^{(1)}, \ldots, y_j^{(m_j)}$ taking values from $\{1, 2\}$ for $1 \le j \le n$. Through integrating out $p$, the likelihood of $w^k, \bm\alpha^k$ for the observed pair $x_j, \bm y_j = (y_j^{(1)}, \ldots, y_j^{(m_j)})$ is \[ \begin{aligned} &L_j(\bm w, \bm\alpha^1, \ldots, \bm\alpha^K; x_j, \bm y_j) \\ &= \int_0^1 p^{\sum_{l=1}^{m_j} \bm{1}(y_j^{(l)}= 1)} (1-p)^{\sum_{l=1}^{m_j} \bm{1}(y_j^{(l)}=2)} f(p; x_j) \dx p. \end{aligned} \] Write $S_{ij} = \sum_{l=1}^{m_j} \bm{1}(y_j^{(l)} = i)$ for $i = 1, 2$ and $j = 1, \ldots, n$. Note that $S_{ij}$ is the number of times $x_j$ is labeled $i$. Plugging \eqref{eq:beta_density_mix} into the latest display gives \[ \begin{aligned} &L_j(\bm w, \bm\alpha^1, \ldots, \bm\alpha^K; x_j, \bm y_j)\\ &= \int_0^1 p^{S_{1j}} (1-p)^{S_{2j}} \sum_{k=1}^K \frac{w^k p^{\alpha^k_1 -1} (1-p)^{\alpha^k_2 -1}}{\B(\alpha^k_1, \alpha^k_2)} \dx p\\ &= \sum_{k=1}^K \int_0^1 \frac{w^k}{\B(\alpha^k_1, \alpha^k_2)} p^{\alpha^k_1 -1 + S_{1j}} (1-p)^{\alpha^k_2 -1 + S_{2j}} \dx p. \end{aligned} \] Making use of a basic property of Beta functions, we get \[ L_j(\bm w, \bm\alpha^1, \ldots, \bm\alpha^K; x_j, \bm y_j) = \sum_{k=1}^K \frac{w^k\B(\alpha^k_1 + S_{1j}, \alpha^k_2 + S_{2j})}{\B(\alpha^k_1, \alpha^k_2)}. \] Aggregating all the $n$ observed units, we finally obtain the full negative log-likelihood given as \begin{equation}\label{eq:full_like} \begin{aligned} &-\ell(\bm w, \bm\alpha^1, \ldots, \bm\alpha^K; x_1, \bm y_1, \ldots, x_n, \bm y_n) = \\ &-\sum_{j=1}^n \log \left[\sum_{k=1}^K \frac{w^k(x_j)\B(\alpha^k_1(x_j) + S_{1j}, \alpha^k_2(x_j) + S_{2j})}{\B(\alpha^k_1(x_j), \alpha^k_2(x_j))} \right]. \end{aligned} \end{equation} Moving back to our neural network architecture, the negative log-likelihood $-\ell$ serves as the loss function. In short, our aim is to learn $\bm w(x), \bm\alpha^1(x), \ldots, \bm\alpha^K(x)$ as functions of $x$ by minimizing \eqref{eq:full_like} using the powerful machinery of deep learning. \subsection{Credible Intervals} \label{sec:credible-interval} In this section, we discuss how to explicitly quantify the uncertainty for a freshly collected unit, say $x$. Let $\hat{\bm w}(x), \hat{\bm\alpha}^1(x), \ldots, \hat{\bm\alpha}^K(x)$ be the functions learned by the deep mixture networks. For the unit $x$, the estimated distribution of the probability $p(x)$ takes the form \[ \hat f(p; x) = \sum_{k=1}^K \hat w^k(x) \frac{p^{\hat\alpha^k_1(x) -1} (1-p)^{\hat\alpha^k_2(x) -1}}{\B(\hat\alpha^k_1(x), \hat\alpha^k_2(x))}. \] Intuitively, if the density function $\hat f(p; x)$ is widely spread in the unit interval $[0, 1]$, much uncertainty is contained in the classification problem and, as a consequence, it is crucial to account for the uncertainty in interpreting the predictions of the neural networks. To be concrete, in addition to giving a point estimator of the random probability $p_x$, $\hat f(p; x)$ can be used to construct one-sided and two-sided credible intervals for $p_x$. Intuitively, the expectation of this estimated density $\int_0^1 \hat f(p; x)\dx p$ is an unbiased estimator of $p_x$. Next, to construct credible intervals, let $\alpha \in (0, 1)$ be the nominal level. 
For a two-sided interval, let $\widehat Q_{\frac{\alpha}{2}}$ and $\widehat Q_{1 - \frac{\alpha}{2}}$ be the $\alpha/2$ and $1 - \alpha/2$ quantiles of the estimated density $\hat f$, respectively. Using the quantiles, the two-sided credible interval for $p_x$ at level $1 - \alpha$ takes the form \[ \left[ \widehat Q_{\frac{\alpha}{2}}, \widehat Q_{1 - \frac{\alpha}{2}} \right]. \] Likewise, the upper and lower credible intervals both at level $1 - \alpha$ are given as \[ \left[0, \widehat Q_{1 - \alpha}\right], \text{ and } \left[\widehat Q_{\alpha}, 1\right], \] respectively. \subsection{Theoretical Analysis}\label{sec:universaility} The uncertainty assessment carried out in Sections~\ref{sec:likelihood} and \ref{sec:credible-interval} considers the case where the density function is a Beta mixture. In this section, we show that this relaxation is mild in the sense that any density functions under certain regularity conditions can be approximated well by Beta mixtures. Explicitly, denote by $\mathcal P$ the set of all probability density functions $f$ on $[0, 1]$ with at most countable discontinuities that satisfy \begin{align}\nonumber \int_0^1 f(p) \left|\log f(p)\right| \dx p < \infty. \end{align} In \cite{article}, it is shown that any $f \in \mathcal{P}$ can be approximated arbitrarily close by a sequence of Beta mixtures. Precisely, we have the following theorem. \begin{theorem}[\cite{article}]\label{thm1} For any $f \in \mathcal{P}$ and any $\epsilon > 0$, there exists a Beta mixture distribution $f_{\B}$ such that \begin{align}\nonumber \mathrm{D}_{\mathrm{KL}} \left(f \| f_{\B} \right) \leq \epsilon, \end{align} where $\mathrm{D}_{\mathrm{KL}} (\cdot \| \cdot)$ denotes the Kullback-Leibler divergence. \end{theorem} This theorem shows the validity of approximating general distribution functions using Beta mixtures. In particular, Pinsker's inequality implies that the density can be approximated arbitrarily close in the total variance distance. In short, the proof of this result starts by recognizing that $f$ can be accurately approximated by piecewise constant functions on $[0,1]$ due to a countable number of discontinuities. Next, as shown in \cite{verdinelli1998bayesian,petrone2002consistency}, each constant piece is a limit of a sequence of Bernstein polynomials, which are infinite Beta mixtures with integer parameters. For completeness, we remark it is unclear how to approximate $f(p; x)$ uniformly over $x$ using a sequence of Beta mixtures whose parameters are functions of $x$. It is likely that the approximation remains if the function $f(p, x)$ is smooth in $x$ in a certain sense, for example, $\int^1_0 |f(p; x) - f(p; x')| \dx p \le c \|x - x'\|$ for some $c > 0$. Provided that this is true, the universal approximation theorem ensures that the functional $f$ can be arbitrarily approximated by Beta mixtures with parameters taking form of feed-forward networks. We leave this challenging problem for future investigation. \subsection{Extension to Multiple Classes} \label{sec:extens-mult-labels} To conclude, we extend our method to the general case of multi-class labels. The new setting follows seamlessly from the prior development except that now the labels $y_j^{(1)}, \ldots, y_j^{(m_j)}$ take values from $\{1, 2, \ldots, d \}$, where $d$ is the number of classes. 
Given a unit $x$, the multinomial distribution over $1, 2, \ldots, d$ is represented by $(p_1, \ldots, p_d)$, which, as a point in the simplex $\Delta = \{(c_1, \ldots, c_d): c_i \ge 0, c_1 + \cdots + c_d = 1\}$, is assumed to follow a Dirichlet mixture \[ f(\bm p; x) = \sum_{k=1}^K w^k \frac1{\B(\bm\alpha^k)} \prod_{i=1}^d p_i^{\alpha^k_i -1}. \] Above, the generalized Beta function takes the form \[ \B(\bm\alpha) = \frac{\prod_{i=1}^d \Gamma(\alpha_i)}{\Gamma(\alpha_1 + \cdots + \alpha_d)}. \] The likelihood of the observations $y_j^{(1)}, \ldots, y_j^{(m_j)}$ is \[ L_j = \int_{\Delta} \left( \prod_{i=1}^d p_i^{S_{ij}} \right) \sum_{k=1}^K w^k(x_j) \frac1{\B (\bm\alpha^k(x_j))} \prod_{i=1}^d p_i^{\alpha^k_i(x_j) -1} \dx \bm p. \] Recall that $S_{ij} = \sum_{l=1}^{m_j} \bm{1}(y_j^{(l)}= i)$. As such, the (full) negative log-likelihood is \[ \footnotesize \begin{aligned} &-\ell(\bm w, \bm\alpha^1, \ldots, \bm\alpha^K; x_1, \bm y_1, \ldots, x_n, \bm y_n) \\ &= -\sum_{j=1}^n \log \left[\sum_{k=1}^K \frac{w^k(x_j)\B(\alpha^k_1(x_j) + S_{1j}, \ldots, \alpha^k_d(x_j) + S_{dj})}{\B(\bm\alpha^k(x_j))} \right]. \end{aligned} \normalsize \] This is the loss function to be minimized in the Dirichlet mixture networks.
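In practice, this negative log-likelihood is conveniently evaluated with log-Gamma functions and a log-sum-exp over the mixture components, which avoids overflow in the generalized Beta functions. The minimal sketch below is an illustration under assumed tensor shapes, not a prescribed implementation.
\begin{verbatim}
# Sketch: Dirichlet-mixture negative log-likelihood.
# S     : (n, d) label counts S_{ij}
# w     : (n, K) mixture weights; alpha : (n, K, d) Dirichlet parameters.
import torch

def log_beta(alpha):
    # log of the generalized Beta function along the last dimension
    return torch.lgamma(alpha).sum(-1) - torch.lgamma(alpha.sum(-1))

def dirichlet_mixture_nll(w, alpha, S):
    log_lik_k = log_beta(alpha + S.unsqueeze(1)) - log_beta(alpha)  # (n, K)
    log_lik = torch.logsumexp(torch.log(w) + log_lik_k, dim=1)      # (n,)
    return -log_lik.sum()
\end{verbatim}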
\section{Introduction} During the last few years the van der Waals and Casimir interactions have attracted widespread interest due to important role they play in many physical phenomena \cite{1,2}. In most cases, however, the emphasis has been made on the forces acting between two closely spaced bodies, be it two atoms or molecules, an atom or a molecule and a macroscopic surface, or two macroscopic surfaces. It is common knowledge that the van der Waals and Casimir forces are caused by the zero-point and thermal fluctuations of the electromagnetic field and are described by the Lifshitz theory of dispersion forces \cite{3}. At the moment these forces are actively investigated not only theoretically, but also experimentally (see Refs. \cite{4,5} for a review) and are used in technological applications \cite{6,7,8}. Another important role of dispersion interactions is that they contribute to the free energy of free-standing material films and films deposited on some material plates. The formulation of this problem goes back to Derjaguin who took into account the dispersion-force contribution in studies of stability of thin films and introduced the concept of disjoining pressure (see Refs. \cite{9,10} for a review). During a few decades this contribution to the free energy, which depends on the film thickness, was estimated using the power-type force law and the Hamaker constant. In the present state of the art, the question of the Casimir energy for a free-standing or sandwiched between two dielectric plates metallic film was raised in Ref. \cite{11}. Then, the Casimir energy of a free-standing in vacuum metallic film was considered in Refs. \cite{11a,11b}. In doing so, the dielectric properties of metal were described by either the Drude or the plasma model. When employing the plasma model, the Lifshitz theory at nonzero temperature has been used in calculations. However, all calculations employing the Drude model have been performed at zero temperature. This did not allow to reveal significant differences in theoretical results for the free energy of metallic films predicted by the Lifshitz theory combined with either the Drude or the plasma model. Full investigation of the Casimir free energy and pressure for metallic films in the framework of the Lifshitz theory at nonzero temperature was performed in Refs. \cite{12,13,14}. The cases of a free-standing or sandwiched between two dielectric plates \cite{12}, deposited on a metal plate \cite{13} or made of magnetic metal \cite{14} metallic films have been considered. The dielectric properties of metals were described by using the optical data for the complex index of refraction extrapolated to zero frequency by the Drude or plasma models. It was shown that magnitudes of the free energy of metallic films of less than 150 nm thickness differ by up to a factor of 1000 depending on the calculation approach used \cite{12,13,14}. So great difference is explained by the fact that the Casimir free energy of metallic films drops to zero exponentially fast when the plasma model is used for extrapolation and goes to the classical limit when the optical data are extrapolated by the Drude model \cite{12,13,14}. This limit is already reached for the film of 150 nm thickness. 
Here we note that although it is routinely considered quite natural to use the Drude model for extrapolation of the optical data to lower frequencies, because it takes into account the relaxation properties of conduction electrons, there are also strong reasons for using the lossless plasma model for this purpose in the case of fluctuating fields. The point is that the measurement data of all precise experiments on measuring the Casimir interaction between two material bodies separated by a vacuum gap exclude the theoretical predictions of the Lifshitz theory combined with the Drude model and are consistent with the predictions of the same theory using the plasma model \cite{15,16,17,18,19,20,21}. For the gap widths below 1 $\mu$m used in these experiments, the variation in the theoretical predictions of both approaches is below a few percent. Recently, however, a differential force measurement scheme has been proposed \cite{22,23,24}, where this variation is up to a factor of 1000. The results of one of these experiments, already performed \cite{25,26}, exclude with certainty the predictions of the Drude model and are consistent with the plasma model. Based on this, it was hypothesized that the reaction of a physical system to real and fluctuating electromagnetic fields (having nonzero and zero expectation values, respectively) might be different \cite{14,27}. On the theoretical side, it was shown \cite{28,29} that for two metallic plates separated by more than 6 $\mu$m distance, classical statistical physics predicts the same Casimir force as does the Lifshitz theory combined with the plasma model. By contrast, for metals with perfect crystal lattices the Lifshitz theory was shown to violate the third law of thermodynamics (the Nernst heat theorem) when the Drude model is used \cite{30,31,32,33,34}. In this respect, one may guess that even at separations exceeding 6 $\mu$m, where the major contribution to the Casimir force between two parallel plates becomes classical, the quantum effects still remain important and make the classical treatment inapplicable. In view of the above problem, which is often called ``the Casimir puzzle'', it is desirable to present additional arguments regarding the applicability of the Drude and plasma models in calculations of the Casimir free energy of metallic films. Here, the calculation results differ greatly, and the subject is not of only academic character because the obtained values should be taken into account in the conditions of film stability. In this paper, we derive the asymptotic expressions at low temperature for the thermal corrections to the Casimir free energy and pressure of metallic films described by the plasma model. The asymptotic behavior of the Casimir entropy is also obtained. Unlike the familiar case of two parallel plates separated by a gap, all these quantities decrease exponentially fast with increasing film thickness and do not have a classical limit, as they depend on $\hbar$ at arbitrarily large film thicknesses. It is shown that the Casimir entropy of a film remains positive and, in the limiting case of zero temperature, goes to zero. Thus, it is proved that the Casimir entropy of metallic films described by the plasma model satisfies the Nernst heat theorem, i.e., the Lifshitz theory is thermodynamically consistent. Then, the low-temperature behavior of the Casimir free energy and entropy for metallic films described by the Drude model is considered.
We show that in the limiting case of zero temperature the Casimir entropy goes to a positive value depending on the parameters of a film. Therefore, the Nernst heat theorem is violated \cite{35,36}. Furthermore, it is demonstrated that in this case the Casimir free energy does not go to zero in the limiting case of ideal metal film, which is in contradiction to the fact that electromagnetic oscillations cannot penetrate in an interior of ideal metal. Thus, the description of a film metal by the Drude model in the Lifshitz theory results in violation of basic thermodynamic demands. Because of this, the dispersion-force contribution to the free energy of metallic films might need a reconsideration taking into account that the low-frequency behavior of the film metal is described by the plasma model. The paper is organized as follows. In Sec. II, we present general formalism and derive the low-temperature behavior of the Casimir free energy, pressure and entropy for metallic films described by the plasma model. In Sec. III, we consider the low-temperature behavior of the Casimir free energy and entropy of metallic films with perfect crystal lattices described by the Drude model and demonstrate violation of the Nernst heat theorem. Section IV contains our conclusions and discussion. In Appendix, some details of the mathematical derivations are presented. \section{Metals described by the plasma model} The free energy per unit area of a free-standing metallic film of thickness $a$ in vacuum at temperature $T$ in thermal equilibrium with an environment is given by the Lifshitz formula \cite{2,3} \begin{eqnarray} && {\cal F}(a,T)=\frac{k_BT}{2\pi}\sum_{l=0}^{\infty}{\vphantom{\sum}}^{\prime} \int_{0}^{\infty}k_{\bot}dk_{\bot} \label{eq1}\\ &&~~~~~~~~~ \times\sum_{\alpha}\ln\left[1- r_{\alpha}^2(i\xi_l,k_{\bot})e^{-2ak(i\xi_l,k_{\bot})}\right]. \nonumber \end{eqnarray} \noindent Here, $k_B$ is the Boltzmann constant, $k_{\bot}$ is the magnitude of the projection of the wave vector on the film plane, $\xi_l=2\pi k_BTl/\hbar$, $l=0,\,1,\,2,\,\ldots$ are the Matsubara frequencies, the prime on the summation sign multiplies the term with $l=0$ by 1/2, and \begin{equation} k(i\xi_l,k_{\bot})=\sqrt{k_{\bot}^2+\varepsilon_l\frac{\xi_l^2}{c^2}}, \label{eq2} \end{equation} \noindent where $\varepsilon_l\equiv\varepsilon(i\xi_l)$ is the frequency-dependent dielectric permittivity of film metal calculated at the pure imaginary Matsubara frequencies. The reflection coefficients for two independent polarizations of the electromagnetic field, transverse magnetic ($\alpha={\rm TM}$) and transverse electric ($\alpha={\rm TE}$), are given by \begin{eqnarray} && r_{\rm TM}(i\xi_l,k_{\bot})=\frac{k(i\xi_l,k_{\bot})-\varepsilon_l q(i\xi_l,k_{\bot})}{k(i\xi_l,k_{\bot})+\varepsilon_l q(i\xi_l,k_{\bot})}, \nonumber \\ && r_{\rm TE}(i\xi_l,k_{\bot})=\frac{k(i\xi_l,k_{\bot})- q(i\xi_l,k_{\bot})}{k(i\xi_l,k_{\bot})+q(i\xi_l,k_{\bot})}, \label{eq3} \end{eqnarray} \noindent where \begin{equation} q(i\xi_l,k_{\bot})=\sqrt{k_{\bot}^2+\frac{\xi_l^2}{c^2}}. \label{eq4} \end{equation} Equation (\ref{eq1}) is obtained \cite{12} from the standard Lifshitz formula for a three-layer system \cite{37,38,39}, where the metallic plate is sandwiched between two vacuum semispaces. Note that the reflection coefficients (\ref{eq3}) have the opposite sign, as compared to the case of two plates separated by the vacuum gap \cite{2}. 
The reason is that here an incident wave inside the film material goes to its boundary plane with a vacuum, and not from the vacuum gap to the material boundary. Another distinctive feature of Eq.~(\ref{eq1}) from the standard Lifshitz formula is that here the dielectric permittivity of metal enters the power of the exponent [in the standard case this exponent contains the quantity $q$ defined in Eq.~(\ref{eq4})]. This makes the properties of the free energy (\ref{eq1}) quite different from those in the case of two parallel plates separated by a vacuum gap. It is convenient to introduce the dimensionless integration variable \begin{equation} y=2aq(i\xi_l,k_{\bot}). \label{eq5} \end{equation} \noindent Using the characteristic frequency $\omega_c\equiv c/(2a)$, we also pass on the dimensionless Matsubara frequencies \begin{equation} \zeta_l=\frac{\xi_l}{\omega_c}=4\pi\frac{k_BTa}{\hbar c}l\equiv\tau l. \label{eq6} \end{equation} \noindent Then, the Casimir free energy (\ref{eq1}) takes the form \begin{eqnarray} && {\cal F}(a,T)=\frac{k_BT}{8\pi a^2}\sum_{l=0}^{\infty}{\vphantom{\sum}}^{\prime} \int_{\zeta_l}^{\infty}y\,dy \label{eq7}\\ &&~~~~~~~~~ \times\sum_{\alpha}\ln\left[1- r_{\alpha}^2(i\zeta_l,y)e^{-\sqrt{y^2+(\varepsilon_l-1)\zeta_l^2}}\right]. \nonumber \end{eqnarray} \noindent In terms of the quantities (\ref{eq5}) and (\ref{eq6}), the reflection coefficients (\ref{eq3}) are given by \begin{eqnarray} && r_{\rm TM}(i\zeta_l,y)=\frac{\sqrt{y^2+(\varepsilon_l-1)\zeta_l^2}-\varepsilon_l y}{\sqrt{y^2+(\varepsilon_l-1)\zeta_l^2}+\varepsilon_l y}, \nonumber \\ && r_{\rm TE}(i\zeta_l,y)=\frac{\sqrt{y^2+(\varepsilon_l-1)\zeta_l^2}- y}{\sqrt{y^2+(\varepsilon_l-1)\zeta_l^2}+ y}. \label{eq8} \end{eqnarray} Now we assume that at the imaginary Matsubara frequencies the film metal is described by the lossless plasma model \begin{equation} \varepsilon_{l,p}=1+\frac{\omega_p^2}{\xi_l^2}, \label{eq9} \end{equation} \noindent where $\omega_p$ is the plasma frequency. In terms of dimensionless frequencies (\ref{eq6}), the dielectric permittivity (\ref{eq9}) takes the form \begin{equation} \varepsilon_{l,p}=1+\frac{\tilde{\omega}_p^2}{\zeta_l^2}, \quad \tilde{\omega}_p\equiv\frac{\omega_p}{\omega_c}=\frac{2a\omega_p}{c}. \label{eq10} \end{equation} Substituting Eq.~(\ref{eq10}) in Eq.~(\ref{eq8}), one obtains the reflection coefficients in the case when the plasma model is used \begin{eqnarray} && r_{{\rm TM},p}(i\zeta_l,y)=\frac{\zeta_l^2(\sqrt{y^2+\tilde{\omega}_p^2}-y)-\tilde{\omega}_p^2 y}{\zeta_l^2(\sqrt{y^2+\tilde{\omega}_p^2}+y)+\tilde{\omega}_p^2y}, \nonumber \\ && r_{{\rm TE},p}(i\zeta_l,y)=r_{{\rm TE},p}(y)= \frac{\sqrt{y^2+\tilde{\omega}_p^2}-y}{\sqrt{y^2+\tilde{\omega}_p^2}+y}. \label{eq11} \end{eqnarray} For the film described by the plasma model, it is convenient to rewrite the Casimir free energy (\ref{eq7}) as \begin{equation} {\cal F}_p(a,T)=\frac{k_BT}{8\pi a^2}\sum_{l=0}^{\infty}{\vphantom{\sum}}^{\prime} \Phi(\zeta_l)=\frac{k_BT}{8\pi a^2}\sum_{l=0}^{\infty}{\vphantom{\sum}}^{\prime} [\Phi_{\rm TM}(\zeta_l)+\Phi_{\rm TE}(\zeta_l)], \label{eq12} \end{equation} \noindent where \begin{equation} \Phi_{\rm TM(TE)}(x)= \int_{x}^{\infty}\!\!\!\!y\,dy \ln\left[1- r_{{\rm TM(TE)},p}^2(ix,y)e^{-\sqrt{y^2+\tilde{\omega}_p^2}}\right]. 
\label{eq13} \end{equation} It is well known that the Casimir free energy can be presented in the form \begin{equation} {\cal F}_p(a,T)=E_p(a,T)+\Delta_T{\cal F}_p(a,T), \label{eq14} \end{equation} \noindent where the Casimir energy per unit area at zero temperature is given by \cite{2,3} \begin{equation} E_p(a,T)=\frac{\hbar c}{32\pi^2a^3}\int_{0}^{\infty}\!\!d\zeta\Phi(\zeta) \label{eq15} \end{equation} \noindent and $\Delta_T{\cal F}_p$ is the thermal correction to it. Applying the Abel-Plana formula to Eq.~(\ref{eq12}) and taking into account that $\zeta_l=\tau l$, one arrives at \begin{equation} \Delta_T{\cal F}_p(a,T)=i\frac{k_BT}{8\pi a^2}\int_{0}^{\infty}\!\!dt \frac{\Phi(i\tau t)-\Phi(-i\tau t)}{e^{2\pi t}-1}. \label{eq16} \end{equation} \noindent It is evident that the low-temperature behavior of the Casimir free energy of thin metallic films can be found from the perturbation expansion of Eq.~(\ref{eq16}) under the condition $\tau t\ll 1$. In doing so, it is convenient to consider the contributions of the TM and TE modes to Eq.~(\ref{eq16}) separately taking into account Eq.~(\ref{eq12}). Note that for two media with a gap in-between the low temperature expansion in the Lifshitz formula was performed in Refs.{\ }\cite{30,31,32,33,34}. These results were systemized and partly extended in Ref.{\ }\cite{34a}. We start from the TE mode because in this case the function under the integral in Eq.~(\ref{eq13}) does not depend on $x$ due to the second equality in Eq.~(\ref{eq11}). This mean that the total dependence of $\Phi_{\rm TE}(x)$ on $x$ is determined by only the lower integration limit in Eq.~(\ref{eq13}). Now we expand the function $\Phi_{\rm TE}(x)$ in a series in powers of $x$. The first term in this series is \begin{equation} \Phi_{\rm TE}(0)=\int_{0}^{\infty}\!\!\!ydy\ln[1-r_{{\rm TE},p}^2(y) e^{-\sqrt{y^2+\tilde{\omega}_p^2}}]. \label{eq17} \end{equation} \noindent This is a converging integral, which does not contribute to the difference \begin{equation} \Delta\Phi_{\rm TE}\equiv\Phi_{\rm TE}(i\tau t)-\Phi_{\rm TE}(-i\tau t), \label{eq18} \end{equation} \noindent entering Eq.~(\ref{eq16}). Then, calculating the first and second derivatives of Eq.~(\ref{eq13}), one finds \begin{equation} \Phi_{\rm TE}^{\prime}(0)=0,\qquad \Phi_{\rm TE}^{\prime\prime}(0)=-\ln(1-e^{-\tilde{\omega}_p}). \label{eq19} \end{equation} \noindent The respective terms of the power series again do not contribute to the difference (\ref{eq18}). Finally, we find \begin{equation} \Phi_{\rm TE}^{\prime\prime\prime}(0)=-\frac{8}{\tilde{\omega}_p}\, \frac{1}{e^{\tilde{\omega}_p}-1} \label{eq20} \end{equation} \noindent and, thus, \begin{equation} \Phi_{\rm TE}(x)=\Phi_{\rm TE}(0)-\frac{x^2}{2}\ln(1-e^{-\tilde{\omega}_p}) -\frac{4}{3\tilde{\omega}_p}\, \frac{x^3}{e^{\tilde{\omega}_p}-1}+O(x^4), \label{eq21} \end{equation} \noindent where $\Phi_{\rm TE}(0)$ is defined in Eq.~(\ref{eq17}). Restricting ourselves by the third perturbation order, Eqs.~(\ref{eq18}) and (\ref{eq21}) result in \begin{equation} \Delta\Phi_{\rm TE}\approx i\frac{8}{3\tilde{\omega}_p}\, \frac{\tau^3t^3}{e^{\tilde{\omega}_p}-1}. \label{eq22} \end{equation} We are coming now to the contribution of the TM mode to the quantity (\ref{eq16}). This case is more complicated because both the lower integration limit and the function under the integral in Eq.~(\ref{eq13}) depend on $x$. 
By calculating several first derivatives of Eq.~(\ref{eq13}), where the reflection coefficient is defined by the first equality in Eq.~(\ref{eq11}), one finds \begin{eqnarray} && \Phi_{\rm TM}(0)=\int_{0}^{\infty}\!\!\!ydy\ln(1- e^{-\sqrt{y^2+\tilde{\omega}_p^2}}), \nonumber \\ && \Phi_{\rm TM}^{\prime}(0)=0, \label{eq23}\\ && \Phi_{\rm TM}^{\prime\prime}(0)=\frac{8}{\tilde{\omega}_p^2} \int_{0}^{\infty}\!\!\!dy \frac{\sqrt{y^2+\tilde{\omega}_p^2}}{e^{\sqrt{y^2+\tilde{\omega}_p^2}}-1} -\ln(1-e^{-\tilde{\omega}_p}), \nonumber \\ && \Phi_{\rm TM}^{\prime\prime\prime}(0)=-\frac{16}{\tilde{\omega}_p}\, \frac{1}{e^{\tilde{\omega}_p}-1}. \nonumber \end{eqnarray} It is evident that the first two terms in the power series, defined by Eq.~(\ref{eq23}), \begin{equation} \Phi_{\rm TM}(x)=\Phi_{\rm TM}(0)+\frac{\Phi_{\rm TM}^{\prime\prime}(0)x^2}{2} -\frac{8}{3\tilde{\omega}_p}\, \frac{x^3}{e^{\tilde{\omega}_p}-1}+O(x^4), \label{eq24} \end{equation} \noindent do not contribute to the quantity \begin{equation} \Delta\Phi_{\rm TM}\equiv\Phi_{\rm TM}(i\tau t)-\Phi_{\rm TM}(-i\tau t). \label{eq25} \end{equation} Then, restricting ourselves by the third perturbation order, we arrive at \begin{equation} \Delta\Phi_{\rm TM}\approx i\frac{16}{3\tilde{\omega}_p}\, \frac{\tau^3t^3}{e^{\tilde{\omega}_p}-1}. \label{eq26} \end{equation} By summing up Eqs.~(\ref{eq22}) and (\ref{eq26}), one obtains \begin{equation} \Phi(i\tau t)-\Phi(-i\tau t)= \Delta\Phi_{\rm TM}+\Delta\Phi_{\rm TM}\approx i\frac{8}{\tilde{\omega}_p}\, \frac{\tau^3t^3}{e^{\tilde{\omega}_p}-1}. \label{eq27} \end{equation} Substituting this result in Eq.~(\ref{eq16}), integrating with respect to $t$ and returning to the dimensional variables, we find the behavior of the thermal correction to the Casimir energy of metallic film at low temperature \begin{equation} \Delta_T{\cal F}_p(a,T)=-\frac{2\pi^2(k_BT)^4}{15\hbar^3c^2\omega_p (e^{2a\omega_p/c}-1)}. \label{eq28} \end{equation} The respective thermal correction to the Casimir pressure of a free-standing metallic film at low $T$ takes the form \begin{equation} \Delta_TP_p(a,T)=-\frac{\partial{\cal F}_p(a,T)}{\partial a} =-\frac{4\pi^2(k_BT)^4}{15\hbar^3c^3} \frac{e^{2a\omega_p/c}}{(e^{2a\omega_p/c}-1)^2}. \label{eq29} \end{equation} An interesting feature of Eqs.~(\ref{eq28}) and (\ref{eq29}) is that the thermal corrections to the Casimir energy and pressure of metallic film, calculated using the plasma model, go to zero exponentially fast with increasing film thickness $a$. Thus, there is no classical limit in this case. Another important point is that for fixed film thickness the Casimir free energy and pressure of the film go to zero in the limiting case $\omega_p\to\infty$. This is true for both the thermal corrections (\ref{eq28}) and (\ref{eq29}) and for the zero-temperature quantities $E(a)$ and $P(a)$. Note that for $\omega_p\to\infty$ the magnitudes of both the TM and TE reflection coefficients (\ref{eq11}) go to unity, i.e., the film becomes perfectly reflecting. One can conclude that when the plasma model is used in calculations an ideal metal film is characterized by the zero Casimir energy and pressure, as it should be because the electromagnetic fluctuations cannot penetrate in an interior of ideal metal. {}From Eq.~(\ref{eq28}) one can also obtain the low-temperature behavior of the Casimir entropy of metallic film \begin{equation} S_p(a,T)=- \frac{\partial{\cal F}_p(a,T)}{\partial T}= \frac{8\pi^2k_B(k_BT)^3}{15\hbar^3c^2\omega_p (e^{2a\omega_p/c}-1)}. 
\label{eq30} \end{equation} \noindent It is seen that the Casimir entropy of a film is positive. When the temperature vanishes, one has from Eq.~(\ref{eq30}) \begin{equation} S_p(a,T)\to 0, \label{eq31} \end{equation} \noindent i.e., the Casimir entropy of metallic film calculated using the plasma model satisfies the Nernst heat theorem. In the end of this section, we discuss the application region of asymptotic Eqs.~(\ref{eq28})--(\ref{eq30}), which were derived under a condition $x\ll 1$, i.e., $\tau t\ll 1$. Taking into account that the dominant contribution to the integral (\ref{eq16}) is given by $t\sim 1/(2\pi)$ and considering the definition of $\tau$ in Eq.~(\ref{eq6}), one rewrites the application condition in the form \begin{equation} k_BT\ll\frac{\hbar c}{2a}=\hbar\omega_c. \label{eq32} \end{equation} \noindent For a typical film thickness $a=100\,$nm, this inequality results in $T\ll 11400\,$K, i.e., Eqs.~(\ref{eq28})--(\ref{eq30}) are well applicable under a condition $T\leqslant 1000\,$K. With increasing film thickness the application region of Eqs.~(\ref{eq28})--(\ref{eq30}) becomes more narrow, For example, for $a=1\,\mu$m these equations are applicable at $T\leqslant 100\,$K. \section{Metals described by the Drude model} Now we describe metal of the film by the Drude model which takes into account the relaxation properties of conduction electrons. At the pure imaginary Matsubara frequencies the dielectric permittivity of the Drude metal takes the form \begin{equation} \varepsilon_{l,D}=1+\frac{\omega_p^2}{\xi_l[\xi_l+\gamma(T)]}, \label{eq33} \end{equation} \noindent where $\gamma(T)$ is the temperature-dependent relaxation parameter. Using the dimensionless variables (\ref{eq6}) and (\ref{eq10}) and introducing the dimensionless relaxation parameter, \begin{equation} \tilde{\gamma}(T)=\frac{\gamma(T)}{\omega_c}, \label{eq34} \end{equation} \noindent Eq.~(\ref{eq33}) can be rewritten as \begin{equation} \varepsilon_{l,D}=1+\frac{\tilde{\omega}_p^2}{\zeta_l[\zeta_l+\tilde{\gamma}(T)]}. \label{eq35} \end{equation} It is convenient also to introduce one more dimensionless parameter \begin{equation} \delta_l(T)=\frac{\tilde{\gamma}(T)}{\zeta_l}=\frac{\gamma(T)}{\xi_l}= \frac{\hbar\gamma(T)}{2\pi k_BT}\,\frac{1}{l}, \label{eq36} \end{equation} \noindent where $l\geqslant 1$. It is easily seen that for metals with perfect crystal lattices this parameter satisfies a condition \begin{equation} \delta_l(T)\ll 1, \label{eq37} \end{equation} \noindent and becomes progressively smaller with decreasing temperature. Thus, at $T=300\,$K for good metals we have $\gamma\sim 10^{13}\,$rad/s (for Au $\gamma=5.3\times 10^{13}\,$rad/s), whereas $\xi_1=2.5\times 10^{14}\,$rad/s. In the temperature region $T_D/4<T<300\,$K, where $T_D$ is the Debye temperature (for Au we have $T_D=165\,$K \cite{40}), it holds $\gamma(T)\sim T$, i.e., the value of $\delta_l$ remains unchanged. In the region from $T_D/4$ down to liquid helium temperature $\gamma(T)\sim T^5$ in accordance to the Bloch-Gr\"{u}neisen law \cite{41} and at lower temperatures $\gamma(T)\sim T^2$ for metals with perfect crystal lattices \cite{40}. As a result, even the quantity $\delta_1(T)$ and, all the more, $\delta_l(T)$ go to zero when $T$ vanishes. For example, for Au at $T=30$ and 10\,K one has $\delta_1\approx 5\times 10^{-2}$ and $2\times 10^{-3}$, respectively. 
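As a rough numerical illustration of the magnitude of $\delta_1$, one can evaluate the first Matsubara frequency directly. The sketch below uses the room-temperature value of $\gamma$ for Au quoted above; the low-temperature values of $\delta_1$ require the measured $\gamma(T)$ and are not recomputed here.
\begin{verbatim}
# Sketch: first Matsubara frequency and delta_1 for Au at room temperature.
import math

hbar = 1.054571817e-34      # J s
k_B  = 1.380649e-23         # J/K
gamma_Au = 5.3e13           # rad/s, value quoted in the text

def xi_1(T):
    """First Matsubara frequency 2*pi*k_B*T/hbar in rad/s."""
    return 2.0 * math.pi * k_B * T / hbar

print(xi_1(300.0))               # ~2.5e14 rad/s
print(gamma_Au / xi_1(300.0))    # delta_1(300 K) ~ 0.2
\end{verbatim}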
Now we express the permittivity (\ref{eq35}) in terms of the small parameter (\ref{eq37}) \begin{equation} \varepsilon_{l,D}=1+\frac{\tilde{\omega}_p^2}{\zeta_l^2[1+\delta_l(T)]} \label{eq38} \end{equation} \noindent and, in the first perturbation order in this parameter, obtain \begin{equation} \varepsilon_{l,D}\approx\varepsilon_{l,p}- \frac{\tilde{\omega}_p^2}{\zeta_l^2}\,\delta_l(T). \label{eq39} \end{equation} We next use the following identical representation for the Casimir free energy of metallic film calculated using the Drude model: \begin{equation} {\cal F}_D(a,T)={\cal F}_p(a,T)+{\cal F}_D^{(0)}(a,T)-{\cal F}_p^{(0)}(a,T) +{\cal F}^{(\gamma)}(a,T). \label{eq40} \end{equation} \noindent Here, ${\cal F}_p$ is the free energy (\ref{eq12}) calculated using the plasma model and ${\cal F}_p^{(0)}$ is its zero-frequency term \begin{eqnarray} && {\cal F}_p^{(0)}(a,T)=\frac{k_BT}{16\pi a^2}\int_{0}^{\infty}\!\!\!ydy \left\{\vphantom{\ln\left[r_{{\rm TE},p}^2e^{-\sqrt{y^2+\tilde{\omega}_p^2}}\right]} \ln\left(1-e^{-\sqrt{y^2+\tilde{\omega}_p^2}}\right)\right. \nonumber \\ &&~~~~ \left.+\ln\left[1-r_{{\rm TE},p}^2(y) e^{-\sqrt{y^2+\tilde{\omega}_p^2}}\right]\right\}, \label{eq41} \end{eqnarray} \noindent where the reflection coefficient $r_{{\rm TE},p}$ is defined in the second line of Eq.~(\ref{eq11}). The quantity ${\cal F}_D^{(0)}$ in Eq.~(\ref{eq40}) is the zero-frequency term in the Casimir free energy of a film when the Drude model is used in calculations. {}From Eqs.~(\ref{eq7}) and (\ref{eq8}) one obtains \begin{eqnarray} && {\cal F}_D^{(0)}(a,T)=\frac{k_BT}{16\pi a^2}\int_{0}^{\infty}\!\!\!ydy \ln\left(1-e^{-y}\right) \nonumber \\ &&~~~~~~~~ =-\frac{k_BT}{16\pi a^2}\,\zeta(3), \label{eq42} \end{eqnarray} \noindent where $\zeta(z)$ is the Riemann zeta function. Finally, the quantity ${\cal F}^{(\gamma)}$ in Eq.~(\ref{eq40}) is the difference of all nonzero-frequency Matsubara terms in the Casimir free energy (\ref{eq7}) calculated using the Drude and plasma models \begin{eqnarray} && {\cal F}^{(\gamma)}(a,T)=\frac{k_BT}{8\pi a^2}\sum_{l=1}^{\infty} \int_{\zeta_l}^{\infty}\!\!\!ydy \label{eq43} \\ &&~~\times \left\{\ln\left[1-r_{{\rm TM},D}^2(i\zeta_l,y)e^{-\sqrt{y^2+\tilde{\omega}_p^2(1-\delta_l)}}\right] \right. \nonumber\\ &&~~~~~~ +\ln\left[1-r_{{\rm TE},D}^2(i\zeta_l,y)e^{-\sqrt{y^2+\tilde{\omega}_p^2(1-\delta_l)}}\right] \nonumber \\ &&~~~~~~ -\ln\left[1-r_{{\rm TM},p}^2(i\zeta_l,y)e^{-\sqrt{y^2+\tilde{\omega}_p^2}}\right] \nonumber \\ &&~~~~~~ \left.-\ln\left[1-r_{{\rm TE},p}^2(i\zeta_l,y) e^{-\sqrt{y^2+\tilde{\omega}_p^2}}\right]\right\}, \nonumber \end{eqnarray} As shown in Appendix, \begin{equation} \lim_{T\to 0}{\cal F}^{(\gamma)}(a,T)=0,\quad \lim_{T\to 0}\frac{\partial{\cal F}^{(\gamma)}(a,T)}{\partial T}=0. \label{eq44} \end{equation} \noindent Because of this, we concentrate our attention on the other contributions to the right-hand side of Eq.~(\ref{eq40}). The quantity ${\cal F}_p$ is already found in Eqs.~(\ref{eq14}) and (\ref{eq28}), and the quantity ${\cal F}_D^{(0)}$ is presented in Eq.~(\ref{eq42}). Here, we calculate the quantity ${\cal F}_p^{(0)}$ defined in Eq.~(\ref{eq41}). Let us start with the integral \begin{equation} I_1(\tilde{\omega}_p)\equiv\int_{0}^{\infty}\!\!\!y\,dy \ln\left(1-e^{-\sqrt{y^2+\tilde{\omega}_p^2}}\right). 
\label{eq45} \end{equation} Expanding the logarithm in power series and introducing the new integration variable \begin{equation} t=n\sqrt{y^2+\tilde{\omega}_p^2}, \label{eq46} \end{equation} \noindent one obtains from Eq.~(\ref{eq45}) \begin{eqnarray} && I_1(\tilde{\omega}_p)=-\sum_{n=1}^{\infty}\frac{1}{n^3} \int_{n\tilde{\omega}_p}^{\infty}tdte^t \label{eq47} \\ && \phantom{I_1(\tilde{\omega}_p)} =-\sum_{n=1}^{\infty}\frac{1}{n^3} (1+n\tilde{\omega}_p)e^{-n\tilde{\omega}_p}. \nonumber \end{eqnarray} \noindent After a summation, Eq.~(\ref{eq47}) results in \begin{equation} I_1(\tilde{\omega}_p)=-\left[{\rm Li}_3(e^{-\tilde{\omega}_p})+ \tilde{\omega}_p{\rm Li}_2(e^{-\tilde{\omega}_p})\right], \label{eq48} \end{equation} where ${\rm Li}_k(z)$ is the polylogarithm function. Now we consider the second integral entering Eq.~(\ref{eq41}), i.e., \begin{equation} I_2(\tilde{\omega}_p)\equiv\int_{0}^{\infty}\!\!\!y\,dy \ln\left[1-r_{{\rm TE},p}^2(y)e^{-\sqrt{y^2+\tilde{\omega}_p^2}}\right], \label{eq49} \end{equation} \noindent where the reflection coefficient $r_{{\rm TE},p}$ is defined in Eq.~(\ref{eq11}). Note that for physical values of $\tilde{\omega}_p$ the quantity subtracted from unity under the logarithm in Eq.~(\ref{eq49}) is much smaller than unity. The reason is that if $\tilde{\omega}_p$ is not large the squared reflection coefficient $r_{{\rm TE},p}^2$ is rather small. Then, one can expand the logarithm up to the first power of this parameter and obtain \begin{equation} I_2(\tilde{\omega}_p)\approx-\int_{0}^{\infty}\!\!\!y\,dy\, r_{{\rm TE},p}^2(y)e^{-\sqrt{y^2+\tilde{\omega}_p^2}}. \label{eq50} \end{equation} Numerical computations show that Eqs.~(\ref{eq49}) and (\ref{eq50}) lead to nearly coincident results for $\tilde{\omega}_p\geqslant 0.5$. Taking into account the definition of $\tilde{\omega}_p$ in Eq.~(\ref{eq10}), this results in the condition $a\geqslant 5.4\,$nm for a thickness of Au film with $\omega_p=1.37\times 10^{16}\,$rad/s. This is quite sufficient for our purposes because here we consider metallic films of more than 7\,nm thickness, which can be described by the isotropic dielectric permittivity \cite{42} (for thinner Au films the effect of anisotropy should be taken into account \cite{43}). Now we introduce the variable $t=y/\tilde{\omega}_p$ and, using Eq.~(\ref{eq11}), identically represent the quantity $r_{{\rm TE},p}^2$ in the form \begin{equation} r_{{\rm TE},p}^2(y)=1+8t^2+8t^4-4t\sqrt{1+t^2}-8t^2\sqrt{1+t^2}. \label{51} \end{equation} \noindent Introducing the integration variable $t$ in Eq.~(\ref{eq50}), one finds \begin{eqnarray} && I_2(\tilde{\omega}_p)\approx -\tilde{\omega}_p^2\int_{0}^{\infty}\!\!\!t\,dt e^{-\tilde{\omega}_p\sqrt{1+t^2}} \label{eq52}\\ &&~~~ \times(1+8t^2+8t^4-4t\sqrt{1+t^2}-8t^2\sqrt{1+t^2}). \nonumber \end{eqnarray} Calculating all the five integrals in Eq.~(\ref{eq52}) \cite{44}, we arrive at \begin{eqnarray} && I_2(\tilde{\omega}_p)\approx -\left(\tilde{\omega}_p+17+\frac{112}{\tilde{\omega}_p} +\frac{432}{\tilde{\omega}_p^2} +\frac{960}{\tilde{\omega}_p^3} +\frac{960}{\tilde{\omega}_p^4}\right) e^{-\tilde{\omega}_p} \nonumber \\ &&~~~ +4\left[\tilde{\omega}_pK_1(\tilde{\omega}_p)+9K_2(\tilde{\omega}_p)+ \frac{30}{\tilde{\omega}_p}K_3(\tilde{\omega}_p)\right]. 
\label{eq53} \end{eqnarray} As a result, the Casimir free energy (\ref{eq40}), calculated using the Drude model, can be rewritten in the form \begin{eqnarray} && {\cal F}_D(a,T)={\cal F}_p(a,T)+{\cal F}^{(\gamma)}(a,T) \label{eq54}\\ &&~~ -\frac{k_BT}{16\pi a^2}\left[ \zeta(3)+I_1\left(\frac{2a\omega_p}{c}\right) +I_2\left(\frac{2a\omega_p}{c}\right)\right], \nonumber \end{eqnarray} \noindent where ${\cal F}_p$ and ${\cal F}^{(\gamma)}$ are presented in Eqs.~(\ref{eq14}), (\ref{eq28}), (\ref{eq43}), and $I_1$ and $I_2$ are found in Eqs.~(\ref{eq48}), (\ref{eq53}). Now we calculate the negative derivative of Eq.~(\ref{eq54}) with respect to $T$ and find the limiting value of this derivative when $T$ goes to zero using Eqs.~(\ref{eq28}) and (\ref{eq44}). The result is \begin{equation} S_D(a,0)=\frac{k_B}{16\pi a^2}\left[ \zeta(3)+I_1\left(\frac{2a\omega_p}{c}\right) +I_2\left(\frac{2a\omega_p}{c}\right)\right]. \label{eq55} \end{equation} As is seen in Eq.~(\ref{eq55}), the Casimir entropy of metallic film at zero temperature, calculated using the Drude model, is not equal to zero and depends on the parameters of a film (the thickness $a$ and the plasma frequency $\omega_p$). Thus, in this case the Nernst heat theorem is violated \cite{35,36}. Calculations using Eqs.~(\ref{eq48}) and (\ref{eq53}) show that \begin{equation} S_D(a,0)>0. \label{eq56} \end{equation} \noindent Thus, for $\tilde{\omega}_p=1$ (i.e., for a Au film of approximately 11\,nm thickness) one has $I_1=-0.79575$, $I_2=-0.02456$, which leads to the number in square brackets in Eq.~(\ref{eq55}) $C=0.38175$. For $\tilde{\omega}_p=5$ ($a=55\,$nm) the respective results are: $I_1=-0.04049$, $I_2=-0.006684$, and $C=1.15489$. Finally, for $\tilde{\omega}_p=15$ ($a=165\,$nm) $I_1=-4.894\times 10^{-6}$, $I_2=-1.5966\times 10^{-6}$, and $C=1.20205$. We see that with increasing film thickness the magnitudes of the quantities $I_1$ and $I_2$ become negligibly small, as compared with $\zeta(3)$. \section{Conclusions and discussion} In the foregoing, we have considered the low-temperature behavior of the Casimir free energy, entropy and pressure of metallic films in vacuum. It was shown that the calculation results are quite different depending on whether the plasma or the Drude model is used to describe the dielectric response of a film metal. If the lossless plasma model is used, as is suggested by the results of several precise experiments on measuring the Casimir force, we have obtained explicit analytic expressions for the thermal corrections to the Casimir energy and pressure and for the Casimir entropy of a film, which are applicable over the wide temperature region down to zero temperature. These expressions do not have a classical limit and go to zero when the film material becomes perfectly reflecting. The Casimir entropy is shown to be positive and satisfying the Nernst heat theorem, i.e., it goes to zero in the limiting case of zero temperature. If the film metal is described by the Drude model taking into account the relaxation properties of conduction electrons at low frequencies, the calculation results are quite different, both qualitatively and quantitatively. In accordance to what was shown in previous work \cite{12,13,14}, the Casimir free energy and pressure reach the classical limit for rather thin metallic films of approximately 150 nm thickness. However, in contradiction to physical intuition, the Casimir free energy does not go to zero in the limiting case of ideal metal film. 
We have found analytically the Casimir entropy of metallic films with perfect crystal lattices, described by the Drude model, at zero temperature. It is demonstrated that this quantity takes a positive value depending on the parameters of the film, i.e., the Nernst heat theorem is violated. Thus, the case of a free-standing film is different from the case of two nonmagnetic metal plates described by the Drude model interacting through a vacuum gap. In the latter case the Nernst heat theorem is also violated if the Drude model is used in calculations, but the Casimir entropy takes a negative value at $T=0$ \cite{30,31,32}. The obtained results raise the question of what is the proper way to calculate the dispersion-force contribution to the free energy of metallic films. As discussed in Sec.~I, the resolution of this problem is important for investigations of the stability of thin films. Previous precise experiments measuring the Casimir force between metallic test bodies \cite{15,16,17,18,19,20,21,25,26} were always found to be in agreement with the theoretical predictions of the thermodynamically consistent approach using the plasma model and excluded the theoretical predictions obtained using the Drude model. Recently it was shown \cite{45} that the theoretical description of the Casimir interaction in graphene systems by means of the polarization tensor, which is in agreement \cite{46} with the experimental data \cite{47}, also satisfies the Nernst heat theorem. Thus, there is good reason to suppose that the contribution of dispersion forces to the free energy of metallic films should also be calculated in a thermodynamically consistent way, i.e., using the plasma model. An experimental confirmation of this hypothesis might be expected within the next few years. \section*{Acknowledgments} The work of V.M.M. was partially supported by the Russian Government Program of Competitive Growth of Kazan Federal University.
\section{Introduction} Financial engineering has become widespread. Hedging derivatives nowadays accounts for a large portion of financial practice. Ironically, the spread of financial engineering has broken its premise that the underlying asset price of derivatives is not affected by hedging activities. Suppose a large number of put options are sold and the buyers commit themselves to delta hedging, which amounts to buying the underlying asset when its price goes down and selling it when its price goes up. This hedging demand is strong and so restrains the underlying asset price movement. Eventually the volatility of the underlying asset becomes smaller than before, which results in a loss for the buyers due to the overestimation of the volatility at the time of their purchase. According to Bookstaber~\cite{Book}, this is exactly what happened when Salomon Brothers suffered a huge loss in the Japanese market in the late 1990s. Many market crashes, such as Black Monday, are attributed to the feedback effect of hedging strategies on markets. Market illiquidity has always been a keyword in explaining financial crises. This paper addresses a hedging problem under a tractable model which captures endogenously such phenomena as nonlinearity in liquidation, permanent market impact and market crashes due to illiquidity observed in actual markets. A crash is a rare event; an exogenous statistical modeling of liquidity costs is therefore not sufficient for our purpose. An economic consideration is required for a deeper understanding of liquidity risk. This paper provides a utility-based asset pricing model with an analytically tractable structure. The effect of derivative contracts on an equilibrium price was studied by Brennan and Schwartz~\cite{BS}, where a derivative contract affects the equilibrium via a modification of the representative agent's terminal wealth. Frey and Stremme~\cite{FS} studied the feedback effect of dynamic hedging under an equilibrium model with supply and demand functions given exogenously. Frey~\cite{Fr} treated a perfect hedging problem under such an equilibrium model. Cvitani\'c and Ma~\cite{CM} formulated a hedging problem with a feedback effect in terms of a backward stochastic differential equation (BSDE). In the special case of a Markovian setting, BSDEs are closely connected to semi-linear PDEs. On the one hand, these studies succeeded in explaining some qualitative phenomena such as the enlargement of the underlying asset volatility by hedging convex options. On the other hand, they are not very useful for quantitative analysis or financial practice due to difficulties in specifying model parameters and in computing prices and strategies. We start by modeling nonlinear market prices from an economic point of view. In a standard limit order market, the roles of suppliers and demanders of liquidity are not symmetric. A liquidity supplier submits a limit order that quotes a price for a specified volume of an asset. Liquidity suppliers can trade with each other to maximize their own utilities. Once an equilibrium is achieved, no more trades would occur among them until new information comes in. However, each liquidity supplier still has an incentive to submit a limit order as long as the corresponding transaction improves her utility. The remaining limit orders form a price curve which is a nonlinear function of volume.
Taking a Bertrand-type competition among liquidity suppliers into account, it would then be reasonable to begin by modeling the price curve as the utility indifference curve of a representative liquidity supplier. When the utility functions of the liquidity suppliers are of von Neumann-Morgenstern type, the existence of the representative liquidity supplier (or market maker) was proved by Bank and Kramkov~\cite{BK1, BK2}. In this paper, as the suppliers' utility functions, we adopt $g$-expectations introduced by Peng~\cite{Peng}. Exponential utilities are in the intersection of these two frameworks. An advantage of a $g$-expectation from an economic point of view is that ambiguity aversion is taken into account. An advantage from a technical point of view is that it admits analytic manipulation by means of stochastic calculus. The existence of the representative agent under such utility functions follows from Horst et al.~\cite{Horst}. In the present paper, we simply assume there is a representative liquidity supplier, called the Market, who quotes a price for each volume based on the utility indifference principle and whose utility is a $g$-expectation with a cash-invariance property. If the driver of the $g$-expectation is a linear function, then the price curve becomes linear in volume and we recover the standard framework of financial engineering. If the Market is risk-neutral, then the utility indifference price of an asset coincides with the expected value of the future cash-flow associated with the asset. In particular, the price curve is linear in volume. This simplest framework was adopted in many studies, such as Glosten and Milgrom~\cite{GM}. Our approach differs from the classical works including Garman~\cite{Gar1}, Amihud and Mendelson~\cite{AM}, Ho and Stoll~\cite{HS}, Ohara and Oldfield~\cite{OO}, where a price quote is a solution of a utility maximization problem for a market maker with an exogenously given order-flow. Here, we consider a hedging problem and so an order-flow is endogenously determined. Bank and Kramkov~\cite{BK1,BK2} analyzed the market impact of a large trade and formulated a nonlinear stochastic integral as the profit and loss associated with a given strategy of a large trader. When the Market's utility is a $g$-expectation with the cash invariance property, we show in this paper that the nonlinear stochastic integral has an expression in terms of the solutions of a family of BSDEs. Then, we show that the existence of a perfect hedging strategy follows from that of the solution of a BSDE. The model represents a permanent market impact which is endogenously determined, while exogenously modeled instantaneous or temporary market impacts have been extensively considered in the literature. See e.g. Cetin et al.~\cite{Cetin}, Fukasawa~\cite{F}, Gu\'eant~\cite{Gueant} and the references therein. A linear permanent market impact model is studied in Gu\'eant and Pu~\cite{GP}. In Section~2, we describe the model of nonlinear prices. In Section~3, we introduce several conditions under which the P\&L of a large trader admits a BSDE representation and the perfect hedging of derivatives is possible. In Section~4, we consider a class of models which admits more explicit computations and verifies the conditions in Section~3. In Section~5, we consider the hedging of European options and discuss how the model captures illiquidity phenomena. \section{Model of permanent market impact} We assume zero risk-free rates. Let $T>0$ be the end of an accounting period.
Each agent evaluates her utility based on her wealth at $T$. Consider a security whose value at $T$ is exogenously determined. We denote the value by $S$ and regard it as an $\mathcal{F}_T$ measurable random variable defined on a filtered probability space $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\})$ satisfying the usual conditions. The price of this security at $T$ is trivially $S$, but the price at $t < T$ should be $\mathcal{F}_t$ measurable and will be endogenously determined by a utility-based mechanism. There are two agents in our model: A Large trader and a Market. The Market quotes a price for each volume of the security where we have a limit order book in mind. She can be risk-averse and so her quotes can be nonlinear in volume and depend on her inventory of this security. The Large trader refers to the quotes and makes a decision. She cannot avoid affecting the quotes by her trading due to the inventory consideration of the Market, and seeks an optimal strategy under this endogenous market impact. As the pricing rule of the Market, our model adopts the utility indifference principle. As the utility evaluation of the Market, we consider a family $\{(\Pi_\tau, \mathcal{D}_\tau)\}_\tau$ of functionals $\Pi_\tau : \mathcal{D}_T \to \mathcal{D}_\tau$ with the following properties, where $\tau$ is a $[0,T]$-valued stopping time and $\mathcal{D}_\tau$ is a linear space of $\mathcal{F}_\tau$-measurable random variables : For any $X, Y \in \mathcal{D}_T$, \begin{enumerate} \item $\Pi_\tau(0) = 0$, \item $\Pi_\tau(X+Y) = \Pi_\tau(X) + Y$ if $Y \in \mathcal{D}_\tau$, \item $\Pi_\tau(\lambda X + (1-\lambda)Y) \geq 0 $ for all $\lambda \in [0,1]$ if $\Pi_\tau(X) \geq 0$ and $\Pi_\tau(Y)\geq 0$, \item $\Pi_\tau(X) \geq \Pi_\tau(Y)$ if there exists $\sigma \geq \tau$ such that $\Pi_\sigma(X) \geq \Pi_\sigma(Y)$. \end{enumerate} Comments on this axiomatic approach follow in order: \begin{enumerate} \item The simplest example is \begin{equation}\label{rn} \Pi_t(X) = E[X|\mathcal{F}_t] \end{equation} with $ \mathcal{D}_t = L^p(\Omega, \mathcal{F}_t,P)$ with $p \geq 1$. When $p=2$, this evaluation can be interpreted as the orthogonal projection of future cash-flows. \item A more interesting example is an exponential utility : \begin{equation}\label{exp} \Pi_t(X) = -\frac{1}{\gamma} \log E[ \exp\{-\gamma X\} | \mathcal{F}_t] \end{equation} with $ \mathcal{D}_t = L^\infty(\Omega, \mathcal{F}_t,P)$, where $\gamma > 0$ is a parameter of risk-aversion. By letting $\gamma \to 0$, we recover the previous example. By letting $\gamma \to \infty$, we have \begin{equation*} \Pi_t(X) = \inf\left\{E^Q[X|\mathcal{F}_t] ; Q \sim P, Q=P \text{ on } \mathcal{F}_t\right\}. \end{equation*} which essentially represents the infimum value of $X$ under the conditional probability given $\mathcal{F}_t$. By Kupper and Schachermayer~\cite{KS}, no other utility of von Neumann-Morgenstern type is equivalent to an evaluation satisfying the four axioms. \item More generally, $\Pi_t(X) = -\rho_t(X)$ satisfies the four axioms, if $\{\rho_t\}$ is a dynamic convex risk measure, see e.g., Barrieu and El Karoui~\cite{BE}, Riedel~\cite{Riedel}, Delbaen~\cite{Delbaen2006}, Delbaen et al.~\cite{DPR}, Kl\"oppel and Schweizer~\cite{KlSc}, Cheridito, Delbaen and Kupper~\cite{CDK}, Rusczcy\'nsky and Shapiro~\cite{RS}, Detlefsen and Scandolo~\cite{DS}, and Cherny and Madan~\cite{CM09}. Convex risk measures play an important role for the risk managements in banks or insurance companies. 
\item When $ \mathcal{D}_t = L^\infty(\Omega, \mathcal{F}_t,P)$, under an additional assumption of the so-called Fatou property, $\Pi$ admits a representation \begin{equation*} \Pi_t(X) = \mathrm{ess.inf}\left\{E^Q[X|\mathcal{F}_t] + c_t(Q) ; Q \sim P, Q=P \text{ on } \mathcal{F}_t\right\}, \end{equation*} where \begin{equation*} c_t(Q) = \mathrm{ess.sup}\left\{\Pi_t(X) -E^Q[X|\mathcal{F}_t] ; X \in \mathcal{D}_t\right\}. \end{equation*} Based on this representation an agent who uses $\Pi$ as her utility evaluation can be interpreted as being ambiguity averse in the spirit of the multiple priors decision theory of Gilboa and Schmeidler~\cite{GS} and the variational preferences of Maccheroni et al.~\cite{MMR}, see also Cerreia-Vioglio et al.~\cite{CMMM}. In the case of multiple priors $c_t(Q)$ can only take the values zero or infinity while variational preferences allow for general penalty functions $c$. $c_t(Q)$ can be seen as attaching a certain plausibility to the model $Q$ at time $t$ with $c_t(Q)=\infty$ meaning that the model is fully unreliable and is effectively excluded from the analysis. For sufficient and necessary conditions under which such evaluations are time-consistent see for instance \cite{DPR}. Robust expectations of the form above are also known in robust statistics, see Huber~\cite{Huber} or the earlier Wald~\cite{Wald}. \item In the theory of no-arbitrage pricing, attempts have been made to narrow the no-arbitrage bounds by restricting the set of pricing kernels considered. One of these approaches is the good-deal bounds ansatz introduced in Cochrane and Sa\'{a}-Requejo~\cite{CS00} which corresponds to excluding pricing kernels which induce a too high Sharpe ratio. The intuition is that these deals are ``too good to be true'' and will be eliminated in a competitive market. Using the Hansen-Jagannathan bound it is shown in \cite{CS00} that this corresponds to only considering pricing kernels which are close to the physical measure in the sense that their variance or in a continuous-time setting their volatility is bounded, see also Bj\"ork and Slinko \cite{TBIS}. Hence, the penalty function for a good-deal bound evaluation in a Brownian setting is zero for local martingale measures whose volatility is bounded by a constant $\Lambda>0$ (which depends on the highest possible Sharpe ratio) and infinity else. So if we let $M$ be the set of local equivalent martingale measures and identify each measure $\mathbb{Q} \ll \mathbb{P}$ with a Radon-Nikodym derivative $\frac{d \mathbb{Q}}{d \mathbb{P}} = \mathcal{E}\left((q\cdot W)_T\right)$, with $\mathcal E$ denoting the stochastic exponential, we can define $\mathcal{A}^n := \{\mathbb{Q} \ll \mathbb{P} \big| |q|^2 \leq \Lambda\}$. Then the good-deal bound evaluation is given by \begin{equation}\label{good_deal} \Pi_t(X) = \mathrm{ess.sup}_{\mathbb{Q} \in M \cap \mathcal{A}^n} \mathbb{E}_{\mathbb{Q}}[X|\mathcal{F}_t]. \end{equation} That this evaluation is time-consistent follows for instance from \cite{Delbaen2006}. \end{enumerate} We assume $S \in \mathcal{D}_T$ in the sequel. Suppose that the Market is initially endowed with a risky asset which yields a cash-flow at time $T$, represented by $H_{\mathrm{M}} \in \mathcal{D}_T$. If the Market at time $t \in [0, T]$ is holding $z$ units of the security in question and the inventory $H_{\mathrm{M}}$, then her utility is measured as $\Pi_t(H_{\mathrm{M}} + zS)$. 
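To make the inventory dependence of this evaluation concrete, the following minimal Python sketch estimates $\Pi_0(H_{\mathrm{M}}+zS)$ by Monte Carlo for the exponential utility (\ref{exp}). The payoff specifications, the risk-aversion $\gamma$ and the sample size are arbitrary illustrative choices and are not part of the model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
gamma = 2.0      # Market's risk aversion (illustrative value)
n = 200_000      # Monte Carlo sample size

# Illustrative terminal payoffs: S Gaussian, H_M a short put on S
# (arbitrary choices used only to exercise the formula).
W_T = rng.standard_normal(n)
S = 1.0 + 0.3 * W_T                 # security payoff S at time T
H_M = -np.maximum(0.9 - S, 0.0)     # Market's initial inventory payoff

def pi0(X, gamma):
    """Exponential-utility evaluation Pi_0(X) = -(1/gamma) log E[exp(-gamma X)]."""
    return -np.log(np.mean(np.exp(-gamma * X))) / gamma

for z in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"z = {z:+.1f}  Pi_0(H_M + z*S) = {pi0(H_M + z * S, gamma):+.5f}")
\end{verbatim}
The evaluation is concave, rather than linear, in the inventory level $z$; the next paragraph turns differences of such evaluations into the inventory-dependent price quotes that generate the permanent market impact.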
According to the utility indifference principle, the Market quotes a selling price for $y$ units of the security by \begin{equation}\label{pzy} \begin{split} P_t(z,y) :=& \inf\{ p \in \mathbb{R} ; \Pi_t(H_{\mathrm{M}}+zS-yS + p) \geq \Pi_t(H_{\mathrm{M}}+zS)\} \\ =& \Pi_t(H_{\mathrm{M}} + zS) - \Pi_t(H_{\mathrm{M}} + (z-y)S). \end{split} \end{equation} For the equality we have used the second axiom of $\Pi$ (cash invariance). Note that in the risk-neutral case (\ref{rn}), $P_t(z,y) = y E[S|\mathcal{F}_t]$. In general, the price depends on the inventory $z$ of the security, which describes permanent market impact. In the literature on modeling permanent market impacts, the absence of price manipulation has been a key issue; see e.g., Gu\'eant~\cite{Gueant} and references therein. Our model does not allow any price manipulation in the sense that a round-trip cost is always $0$: \begin{equation*} P_t(z,y) + P_t(z-y,-y) = 0. \end{equation*} For all $t$ and $z$, $P_t(z,y)$ is a convex function of $y$ with $P_t(z,0) = 0$ by the third axiom of $\Pi$ (concavity). This implies in particular that \begin{equation*} -P_t(z,-y) \leq P_t(z,y) \end{equation*} for any $y$ and $z$, which means that the selling price for an amount is higher than or equal to the buying price for the same amount. This represents a bid-ask spread, which is a measure of market liquidity. Let $\mathcal{S}_0$ be the set of simple predictable processes $Y$ with $Y_{0} = 0$. The Large trader is allowed to take any element $Y \in \mathcal{S}_0$ as her trading strategy. The price for $y$ units of the security at time $t$ is $P_t(-Y_t,y)$. This is because the Market holds $-Y_t$ units of the security due to the preceding trades with the Large trader. Then the profit and loss at time $T$ associated with $Y \in \mathcal{S}_0$ (i.e., the terminal wealth corresponding to the self-financing strategy $Y$) is given by \begin{equation*} \mathcal{I}(Y) := Y_TS - \sum_{0 \leq t < T} P_t(-Y_t, \Delta Y_t). \end{equation*} Due to (\ref{pzy}), $\mathcal{I}(Y)$ has the form of a nonlinear stochastic integral studied in Kunita~\cite{Kunita}; see (\ref{iy}) below. Note that in the risk-neutral case (\ref{rn}), \begin{equation*} \mathcal{I}(Y) = Y_TS_T - \sum_{0 \leq t < T} S_t \Delta Y_t = \int_0^T Y_t \mathrm{d}S_t \end{equation*} by integration-by-parts, where \begin{equation*} S_t = E[S|\mathcal{F}_t]. \end{equation*} In Section~3, we show that $\mathcal{I}(Y)$ admits a representation in terms of BSDEs when $\Pi$ is a $g$-expectation, which enables us to extend the domain $\mathcal{S}_0$ to a larger set $\mathcal{S}$ of predictable processes. Now, suppose that the Large trader has an option contract which amounts to paying $-H_\mathrm{L} \in \mathcal{D}_T$ at $T$. The hedging problem is then to find a unique element $(a, Y) \in \mathbb{R}\times \mathcal{S}$ such that \begin{equation*} -H_\mathrm{L} = a + \mathcal{I}(Y). \end{equation*} \section{Hedging in a market with $g$-expectation} In a continuous-time setting where the filtration is generated by a Brownian motion, it is well known that $\Pi$ satisfying our axioms is essentially equivalent to $\Pi$ being a so-called $g$-expectation. $g$-expectations also give a convenient representation of $\mathcal{I}(Y)$. More precisely, we work under the following condition on the utility function $(\Pi_t, \mathcal{D}_t)$: \begin{cond}\label{cond1} The filtration $\{\mathcal{F}_t\}$ is the augmentation of the one generated by a standard Brownian motion $W$.
Let $g = \{g_t(z)\}: \Omega \times [0,T] \times \mathbb{R} \to \mathbb{R}$ be a $\mathcal{P}\otimes \mathcal{B}(\mathbb{R})$ measurable function, where $\mathcal{P}$ is the progressively measurable $\sigma$ field, such that $z \mapsto g_t(z)(\omega)$ is a convex function with $g_t(0)(\omega)=0$ for each $(\omega ,t) \in \Omega \times [0,T]$ . For each $X \in \mathcal{D}_T$, \begin{equation*} \sup_{0 \leq t \leq T}|\Pi_t(X)| \in \mathcal{D}_T, \end{equation*} and there exists a progressively measurable process $Z(X)$ such that \begin{equation*} E[\int_0^T|Z_t(X)|^2 \mathrm{d}t] < \infty, \end{equation*} and \begin{equation}\label{bsdeX} X = \Pi_t(X) + \int_t^T g_s(Z_s(X))\mathrm{d}s - \int_t^T Z_s(X) \mathrm{d}W_s, \end{equation} for all $t \geq 0$. \end{cond} \begin{exa} \upshape Let $\mathcal{D}_t = L^2(\Omega,\mathcal{F}_t,P)$ and $G$ be a progressively measurable process such that \begin{equation*} E\left[\exp\left\{\frac{1}{2}\int_0^T G_t^2 \mathrm{d}t\right\}\right] < \infty. \end{equation*} If $\Pi(X)$ follows (\ref{bsdeX}) with $g_s(z) = G_sz$, then \begin{equation}\label{lin} X = \Pi_t(X) - \int_t^T Z_s(X) \mathrm{d}W^G_s \end{equation} and $W^G$ is a standard Brownian motion under $Q$, where \begin{equation*} W^G_t = W_t - \int_0^t G_s \mathrm{d}s, \ \ \frac{\mathrm{d}Q}{\mathrm{d}P}= \exp \left\{ \int_0^T G_t \mathrm{d}W_t - \frac{1}{2}\int_0^T G_t^2 \mathrm{d}t \right\}. \end{equation*} Therefore, \begin{equation*} \Pi_t(X) = E^Q[X|\mathcal{F}_t]. \end{equation*} Conversely, if $\Pi(X)$ is defined as the conditional expectation w.r.t. $Q$, then by the martingale representation theorem, there exists $Z(X)$ such that (\ref{lin}) holds, which is equivalent to (\ref{bsdeX}) with $g_s(z) = G_sz$. \end{exa} \begin{exa} \label{exa2} \upshape Let $\gamma > 0$ and $\mathcal{D}_t = \{ X \in L^0(\Omega,\mathcal{F}_t,P) ; E[\exp\{a|X|\}] < \infty \text{ for all } a > 0\}$, which is an Orlicz space. If $\Pi(X)$ follows (\ref{bsdeX}) with $g_s(z) = \gamma z^2/2$, then \begin{equation} \label{expM} \mathrm{d}M_t = \gamma M_t Z_t(X) \mathrm{d}W_t, \end{equation} where $M_t = \exp\{-\gamma \Pi_t(X)\}$. This implies \begin{equation*} E[\exp\{-\gamma X\}|\mathcal{F}_t] = \exp\{-\gamma \Pi_t(X)\}, \end{equation*} which is equivalent to (\ref{exp}). Conversely if $\Pi(X)$ is given by (\ref{exp}), then again by the martingale representation theorem, there exists $Z(X)$ such that (\ref{expM}) holds, which implies (\ref{bsdeX}). \end{exa} \begin{exa} \upshape In the good-deal bound example, suppose that we have a $d$-dimensional Brownian motion $W$ generating the economic noise and that the dynamics of the stock process is given by \begin{equation*} \frac{dS^i_t}{S_t^i}=\mu^i dt+\sigma^i dW_t ,\,\,\,\,\,\,\,\,i=1,\ldots,k. \end{equation*} We further suppose that the interest rate of the bond is zero. Let $A:=(\sigma^1,\ldots,\sigma^k)$ and $b:=-\mu^\intercal=-(\mu^1,\ldots,\mu^k)^\intercal$. Let $P_B(0)$ be the projection of $0$ onto the set $B:=\{x|Ax=b\}$ in the Euclidean $|\cdot|$ norm, and define $P_{\text{Kernel}(A)}(z)$ accordingly as the projection of $z$ in the $|\cdot|$ norm onto the space given by the kernel of the matrix $A$. One can prove (see \cite{KLL15}) that the evaluation $\Pi$ in (\ref{good_deal}) is given by a $g$-expectation following (\ref{bsdeX}) with driver function \begin{align*} g(t,z)=-\sqrt{\Lambda-|P_B(0)|^2}\Big|P_{\text{Kernel}(A)}(z) \Big|+zP_B(0). \end{align*} This concludes our examples. 
\end{exa} We remark that if $g_t(z)$ is Lipschitz in $z$ uniformly in $(\omega,t ) \in \Omega \times [0,T]$, then there exists a unique solution $(\Pi(X),Z(X))$ to (\ref{bsdeX}) for $X \in L^2(\Omega, \mathcal{F}_T,P)$ and $\Pi_t(X) \in L^2(\Omega,\mathcal{F}_t,P)$ and the four axioms of $\Pi$ are automatically satisfied. As mentioned above it is worthwhile to note that when the filtration is generated by a standard Brownian motion, under additional compactness or domination assumptions, every evaluation $\Pi$ satisfying our axioms corresponds to a $g$-expectation in the sense that there exists $g$ such that $\Pi$ satisfies (\ref{bsdeX}). For these and other related results, see Jiang~\cite{J08}, Barrieu and El Karoui~\cite{BE2}, Coquet et al.~\cite{CHMP}, Briand and Hu~\cite{BH1,BH2}, Hu et al.~\cite{HMPY} and the references therein.\\ \noindent Let $\Pi^y = \Pi(H_\mathrm{M} -yS)$ and $Z^y = Z(H_\mathrm{M}-yS)$ for $y \in \mathbb{R}$. We pose the following technical condition: \begin{cond}\label{cond2} There exist $\Omega_0 \in \mathcal{F}$ with $P(\Omega_0) = 1$ and a $\mathcal{P}\otimes \mathcal{B}(\mathbb{R})$ measurable function $$Z : \Omega \times [0,T] \times \mathbb{R}\to \mathbb{R}$$ such that $Z(\omega, t,y) = Z^y_t(\omega)$ for all $(\omega, t , y) \in \Omega_0 \times [0,T] \times \mathbb{R}$. \end{cond} We will see this condition is always satisfied for Markov models considered in Section~4. Even in non-Markov cases, it follows for instance from Ankirchner et al.~\cite{AID} that if $g_t(z)$ and its derivative in $z$ are globally Lipschitz and $H_M$ and $S$ are bounded, then $Z^y_t(\omega)$ is continuous in $t$ and differentiable in $y$ for almost all $\omega$, which in particular verifies Condition~\ref{cond2}. \begin{lem} \label{repni} Under Conditions \ref{cond1} and \ref{cond2}, \begin{equation*} \mathcal{I}(Y) = H_\mathrm{M} - \Pi_0(H_\mathrm{M}) - \int_0^T g_t(Z^Y_t)\mathrm{d}t + \int_0^T Z^Y_t\mathrm{d}W_t \end{equation*} for $Y \in \mathcal{S}_0$, where $Z^Y_t(\omega) = Z(\omega, t, Y_t(\omega))$. \end{lem} {\it Proof : } Denote the discontinuity points of $Y \in \mathcal{S}_0$ by \begin{equation*} 0 \leq \tau_1 < \tau_2 < \cdots. \end{equation*} Let $n$ be the number of the discontinuity points, $\tau_0 = 0$ and $\tau_k = T$ for $k \geq n+1$. By definition, \begin{equation}\label{iy} \begin{split} \mathcal{I}(Y) = & Y_TS - \sum_{0 \leq t < T}(\Pi_t(H_\mathrm{M} -Y_tS ) - \Pi_t(H_\mathrm{M}-Y_{t+}S)) \\ =& Y_TS - \sum_{j=1}^n (\Pi_{\tau_j}(H_\mathrm{M} -Y_{\tau_j}S ) - \Pi_{\tau_j}(H_\mathrm{M}-Y_{\tau_{j+1}}S)) \\ =& H_\mathrm{M} - \Pi_0(H_\mathrm{M}) - \sum_{j=0}^n (\Pi_{\tau_{j+1}} (H_\mathrm{M} -Y_{\tau_{j+1}}S ) - \Pi_{\tau_j}(H_\mathrm{M}-Y_{\tau_{j+1}}S)). \end{split} \end{equation} Here we have used that $\Pi_T(H_\mathrm{M}-Y_TS) = H_\mathrm{M}-Y_TS$ and $Y_0 = 0$. Again by definition, \begin{equation*} \Pi_{\tau_{j+1}}(H_\mathrm{M}-yS) - \Pi_{\tau_j}(H_\mathrm{M}-yS) = \int_{\tau_j}^{\tau_{j+1}} g_s(Z^y_s)\mathrm{d}s - \int_{\tau_j}^{\tau_{j+1}}Z^y_s\mathrm{d}W_s. \end{equation*} Since $Y$ is a simple predictable process, $Y_{\tau_{j+1}}$ is $\mathcal{F}_{\tau_j}$ measurable and so, we can substitute $y = Y_{\tau_{j+1}}$ to obtain \begin{equation*} \mathcal{I}(Y) = H_\mathrm{M} - \Pi_0(H_\mathrm{M}) - \sum_{j=0}^n \left\{\int_{\tau_j}^{\tau_{j+1}} g_s(Z^{Y}_s)\mathrm{d}s - \int_{\tau_j}^{\tau_{j+1}}Z^Y_s\mathrm{d}W_s \right\}, \end{equation*} which implies the result. 
\hfill////\\ \noindent By this lemma, we naturally extend the domain of $\mathcal{I}(Y)$ to \begin{equation*} \mathcal{S} := \left\{ Y: \Omega\times [0,T] \to \mathbb{R} ; \text{ predictable with } \int_0^T |Z^Y_t|^2 \mathrm{d}t < \infty \right\}. \end{equation*} Now we are ready to give the main result of the paper in an abstract framework. \begin{cond}\label{cond3} There exist $\Omega_0 \in \mathcal{F}$ with $P(\Omega_0) = 1$ and a $\mathcal{P}\otimes \mathcal{B}(\mathbb{R})$ measurable function $$Z^- : \Omega \times [0,T] \times \mathbb{R}\to \mathbb{R}$$ such that $Z(\omega, t,Z^-(\omega,t,z)) = z$ for all $(\omega, t , z) \in \Omega_0 \times [0,T] \times \mathbb{R}$. \end{cond} \begin{thm}\label{thm1} Under Conditions~\ref{cond1}, \ref{cond2} and \ref{cond3}, for any $H_\mathrm{L} \in \mathcal{D}_T$, we have \begin{equation*} - H_\mathrm{L} = \Pi_0(H_\mathrm{M}) - \Pi_0(H_\mathrm{M} + H_\mathrm{L})+ \mathcal{I}(Y^\ast), \end{equation*} where $Y^\ast$ is defined by $Y^\ast_t(\omega) = Z^-(\omega,t,Z_t(H_\mathrm{M} + H_\mathrm{L})(\omega))$. \end{thm} {\it Proof : } By Condition~\ref{cond1}, there exists $Z^\ast := Z(H_\mathrm{M} + H_\mathrm{L})$ such that \begin{equation*} H_\mathrm{M} + H_\mathrm{L} = \Pi_0(H_\mathrm{M} + H_\mathrm{L}) + \int_0^T g_s(Z^\ast_s)\mathrm{d}s - \int_0^T Z^\ast_s \mathrm{d}W_s. \end{equation*} Define $Y^\ast$ as $Y^\ast_t(\omega) = Z^-(\omega,t,Z^\ast_t(\omega))$. Then, \begin{equation*} Z^{Y^\ast}_t(\omega) = Z(\omega,t,Y^\ast_t(\omega)) = Z^\ast_t(\omega). \end{equation*} Therefore, by Lemma~\ref{repni} \begin{equation*} \mathcal{I}(Y^\ast) = H_\mathrm{M} - \Pi_0(H_\mathrm{M}) - \int_0^Tg_t(Z^\ast_t)\mathrm{d}t + \int_0^T Z^\ast_t\mathrm{d}W_t, \end{equation*} which implies the result. \hfill////\\ \noindent This theorem means that any option payoff $-H_\mathrm{L}$ can be perfectly replicated by a self-financing dynamic trading strategy of the security with initial capital $$\Pi_0(H_\mathrm{M}) - \Pi_0(H_\mathrm{M} + H_\mathrm{L}).$$ This is an increasing and convex function of $-H_\mathrm{L}$, which reflects a diversification effect of risk. In Section~4, we study even more tractable models and see that Conditions~\ref{cond1}, \ref{cond2} and \ref{cond3} are satisfied under reasonable assumptions. \section{Markov markets} Here we verify Conditions~\ref{cond2} and \ref{cond3} and characterize the hedging strategy in terms of solutions of semi-linear PDEs under Markov models. More precisely, in addition to Condition~\ref{cond1}, we suppose $g_t(z) = g(z,t)$, $S= s(F_T)$ and $H_\mathrm{M}= h_\mathrm{M}(F_T)$, where $g : \mathbb{R}\times [0,T] \to \mathbb{R}$, $s : \mathbb{R} \to \mathbb{R}$ and $h_\mathrm{M} : \mathbb{R} \to \mathbb{R}$ are Borel functions and $F$ is the solution of the SDE \begin{equation*} \mathrm{d}F_t = \mu(F_t,t)\mathrm{d}t + \sigma(F_t,t) \mathrm{d}W_t, \end{equation*} where $\mu: \mathbb{R} \times [0,T] \to \mathbb{R}$ and $\sigma: \mathbb{R} \times [0,T] \to \mathbb{R}^+$ are Lipschitz functions in the following sense: there exists $L > 0$ such that \begin{enumerate} \item $|\mu(x,t)-\mu(y,t)| + |\sigma(x,t)-\sigma(y,t)| \leq L|x-y|$ and \item $|\mu(x,t)| + |\sigma (x,t)| \leq L(1 + |x|)$ \end{enumerate} for all $x,y\in \mathbb{R}$ and $t \in [0,T]$. The Markov process $F$ should be understood as an economic factor. As in Section~3, the filtration $\{\mathcal{F}_t\}$ is supposed to be generated by the standard Brownian motion $W$. 
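Before introducing the associated semi-linear PDE, we note that the factor dynamics above can be simulated in a completely standard way. The following minimal Python sketch uses an Euler--Maruyama discretization; the drift and volatility functions are arbitrary illustrative choices satisfying the above Lipschitz and growth conditions, and the sketch is intended only as a numerical companion to the Markov setting, e.g., for Monte Carlo exploration of the quantities $\Pi^y_t$ and $Z^y_t$ introduced below.
\begin{verbatim}
import numpy as np

def simulate_factor(mu, sigma, F0, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama scheme for dF_t = mu(F_t,t) dt + sigma(F_t,t) dW_t."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    F = np.full(n_paths, float(F0))
    for i in range(n_steps):
        t = i * dt
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        F = F + mu(F, t) * dt + sigma(F, t) * dW
    return F

# Illustrative mean-reverting coefficients (arbitrary choices).
F_T = simulate_factor(mu=lambda f, t: -0.5 * f,
                      sigma=lambda f, t: 0.2 + 0.0 * f,
                      F0=0.0, T=1.0, n_steps=250, n_paths=100_000)
print("sample mean and std of F_T:", F_T.mean(), F_T.std())
\end{verbatim}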
Let $p : \mathbb{R} \times [0,T] \times \mathbb{R} \to \mathbb{R}$ be a $C^{2,1,0}$ solution of the PDE \begin{equation}\label{pde} \partial_tp(x,t,y) + \mu(x,t) \partial_xp(x,t,y) + \frac{1}{2}\sigma^2(x,t)\partial_x^2p(x,t,y) = g(-{ \sigma(x,t)}\partial_xp(x,t,y), t) \end{equation} on $\mathbb{R}\times (0,T) \times \mathbb{R}$ with $p(x,T,y) = h_\mathrm{M}(x)-y s(x)$ for all $(x,y) \in \mathbb{R}^2$. Here its existence is assumed. Then, it is well-known, and easy to check, that $(\Pi^y,Z^y)$ defined by \begin{equation*} \Pi^y_t = p(F_t,t,y), \ \ Z^y_t = -\sigma(F_t,t)\partial_xp(F_t,t,y) \end{equation*} is a solution of the BSDE (\ref{bsdeX}) with $X = H_\mathrm{M}-yS$ for each $y \in \mathbb{R}$. In the following two subsections, we separately deal with the cases that the driver $g$ is Lipschitz and that $g$ is a quadratic function, or equivalently that $\Pi$ is an exponential utility. \subsection{Lipschitz drivers} \begin{thm}\label{theorem2} Let $\mathcal{D}_t = L^2(\Omega,\mathcal{F}_t,P)$ for $t \in [0,T]$ and assume \begin{enumerate} \item $h_\mathrm{M}$ and $s$ are in $C^1(\mathbb{R})$ with $s^\prime \geq 0$, $s^\prime(F_T) \in \mathcal{D}_T$, \item $\mu$, $\sigma$, and $g$ are in $C^{1,0}(\mathbb{R}\times [0,T])$ and $\partial_zg$ is bounded, \item $p$ is in $C^{3,1,0}(\mathbb{R}\times [0,T] \times \mathbb{R})$ and satisfies (\ref{pde}), \item $h_\mathrm{M}^\prime$ is of exponential growth and $\sigma$ and $\mu$ are bounded, or $h_\mathrm{M}^\prime$ is of polynomial growth, \end{enumerate} and that either one of the following conditions holds, \begin{itemize} \item [a)] $\inf_{x \in \mathbb{R}}s^\prime(x) > 0$. \item [b)] $1/\sigma$ is bounded and for all $t \in [0,T)$, there exists $M \in \mathbb{R}$ such that \begin{equation*} \inf_{x \in [M,\infty)}s^\prime(x) > 0 \end{equation*} and the support of $f + F_T - F_t$ under $P(\cdot|F_t = f)$ includes $[M,\infty)$ for any $f$ in the support of $F_t$. \item [c)] $1/\sigma$ is bounded for all $t \in [0,T)$, there exists $M \in \mathbb{R}$ such that \begin{equation*} \inf_{x \in (-\infty,M]}s^\prime(x) > 0 \end{equation*} and the support of $f + F_T - F_t$ under $P(\cdot|F_t = f)$ includes $(-\infty,M]$ for any $f$ in the support of $F_t$. \item [d)] $\sigma(x,t) = \sigma(t)$ is independent of $x$, $\mu$ is bounded, and for all $t \in [0,T)$, there exists an interval $[a,b]$ such that $b-a > 2(\|\mu\|_\infty + \|\partial_x\sigma\|_\infty + \|\partial_zg\|_\infty)T$, \begin{equation*} \inf_{x \in [a,b]}s^\prime(x) > 0, \end{equation*} and the support of $f+F_T - F_t$ under $P(\cdot|F_t = f)$ includes $[a,b]$ for any $f$ in the support of $F_t$. \end{itemize} Then, Conditions~\ref{cond2} and \ref{cond3} hold with \begin{equation*} Z(\omega,t,y) = -\sigma(F_t(\omega),t)\partial_xp(F_t(\omega),t,y), \ \ Z^-(\omega,t,z) = \inf\{ y \in \mathbb{R} ; Z(\omega,t,y) \geq z\}. \end{equation*} In particular, for any $H_\mathrm{L} \in \mathcal{D}_T$, \begin{equation*} -H_\mathrm{L} = p(F_0,0,0) - \Pi^\ast_0 + \mathcal{I}(Y^\ast), \ \ Y^\ast_t(\omega) = Z^-(\omega,t,Z^\ast_t), \end{equation*} where $(\Pi^\ast,Z^\ast)$ is the unique solution of the BSDE \begin{equation}\label{eq:opt} H_\mathrm{L} + h_\mathrm{M}(F_T) = \Pi^\ast_t + \int_t^T g(Z^\ast_s,s)\mathrm{d}s - \int_t^T Z^\ast_s\mathrm{d}W_s. \end{equation} \end{thm} \begin{rmk} The conditions on $F$ in the cases b)-d) are satisfied if the increments $F_T-F_t$ have full support in $\mathbb{R}$ under every initial condition $F_t = f$. 
\end{rmk} \begin{rmk} Theorem~\ref{theorem2} remains true if $s^\prime$ is replaced with $-s^\prime$ in the assumptions. \end{rmk} \noindent {\it Proof : } The unique existence of $(\Pi^\ast,Z^\ast)$ follows from the fact that $g$ is Lipschitz as mentioned before. Condition~\ref{cond2} follows from the PDE (\ref{pde}) and the continuity of $\partial_xp$. To verify Condition~3, we are going to show \begin{equation}\label{surj} \lim_{y \to \pm \infty} Z^y_t = \pm \infty. \end{equation} Let $q(x,t,y) = -\partial_xp(x,t,y)$ and differentiate the PDE (\ref{pde}) to obtain, \begin{equation*} \begin{split} \partial_tq(x,t,y) &+ \mu(x,t)\partial_xq(x,t,y) + \frac{1}{2} \sigma^2(x,t)\partial_x^2 q(x,t,y) \\ = & -{(\partial_x\mu(x,t) + \partial_zg(\sigma(x,t)q(x,t,y),t) \partial_x\sigma(x,t))}q(x,t,y) \\ &-\left(\partial_zg({\sigma(x,t)}q(x,t,y),t)+ \partial_x\sigma(x,t)\right){\sigma(x,t)}\partial_xq(x,t,y). \end{split} \end{equation*} Applying It$\hat{\text{o}}$'s formula to $V^y_t=q(F_t,t,y)\bigg(=\sigma^{-1}(F_t,t)Z_t^y\bigg)$, \begin{equation*} \begin{split} V^y_T =& V^y_t + \int_t^T -\bigg[{\left(\partial_x\mu(F_s,s) + \partial_zg(Z^y_s,s)\partial_x\sigma(F_s,s)\right)}V^y_s \\ &+\left(\partial_zg({Z^y_s},s)+\partial_x\sigma(F_s,s)\right){ \sigma(F_t,t)}\hat{Z}^y_s \bigg]\mathrm{d}s - \int_t^T\sigma(F_t,t) \hat{Z}^y_s \mathrm{d}W_s \\ =& V^y_t + \int_t^T -{\left(\partial_x\mu(F_s,s) + \partial_zg(Z^y_s,s)\partial_x\sigma(F_s,s)\right)}V^y_s\mathrm{d}s- \int_t^T\sigma(F_t,t) \hat{Z}^y_s \mathrm{d}W^Q_s, \end{split} \end{equation*} where \begin{equation*} \hat{Z}^y_t = -\partial_x q(F_t,t,y), \ \ W^Q_t = W_t + \int_0^t ({\partial_zg(Z^y_s,s)}+\partial_x\sigma(F_s,s))\mathrm{d}s. \end{equation*} Define a probability measure $Q$ (which depends on $y$) by \begin{equation*} \frac{\mathrm{d}Q}{\mathrm{d}P} = \exp\left\{- \int_0^T ({\partial_zg(Z^y_s,s)}+\partial_x\sigma(F_t,t))\mathrm{d}W_t - \frac{1}{2} \int_0^T({\partial_zg(Z^y_s,s)}+\partial_x\sigma(F_t,t))^2\mathrm{d}t \right\}. \end{equation*} Then \begin{equation*} \begin{split} V^y_t =& E^Q\left[ \exp\left\{\int_t^T{(\partial_x\mu(F_s,s) + \partial_zg(Z^y_s,s)\partial_x\sigma(F_s,s))}\mathrm{d}s\right\}V^y_T \bigg| \mathcal{F}_t \right] \\ =& - E^Q\left[ \exp\left\{\int_t^T{(\partial_x\mu(F_s,s) + \partial_zg(Z^y_s,s)\partial_x\sigma(F_s,s))}\mathrm{d}s\right\}h_\mathrm{M}^\prime(F_T) \bigg| \mathcal{F}_t \right] \\ &+ yE^Q\left[ \exp\left\{\int_t^T{(\partial_x\mu(F_s,s) + \partial_zg(Z^y_s,s)\partial_x\sigma(F_s,s))}\mathrm{d}s\right\}s^\prime(F_T) \bigg| \mathcal{F}_t \right]. \end{split} \end{equation*} Note that $Q$ depends on $y$. Under $Q$, \begin{equation*} \mathrm{d}F_t = (\mu(F_t,t) - {\partial_zg(\sigma(F_t,t)q(F_t,t,y),t)}-\partial_x\sigma(F_t,t))\mathrm{d}t + \sigma(F_t,t)\mathrm{d}W^Q_t \end{equation*} and in particular, $F$ is Markov. Note that $F$ under every $Q$ has a different distribution. Since $\|\partial_x\mu\|_\infty + \|\partial_zg\|_\infty\|\partial_x\sigma\|_\infty < \infty$ and $s^\prime \geq 0$, it is sufficient to show \begin{equation}\label{eq9} \sup_{y \in \mathbb{R}} E^Q\left[ |h_\mathrm{M}^\prime(F_T)| | F_t = f \right]< \infty \end{equation} and \begin{equation}\label{eq8} \inf_{y \in \mathbb{R}} E^Q\left[ s^\prime(F_T) | F_t = f \right] > 0. \end{equation} Let fix $t \in [0,T)$ and $f$ in the support of $F_t$. 
Define $\underbar{F}^Q_u$ and $\bar{F}^Q_u$, $u\geq t$ by \begin{equation*} \mathrm{d}\underbar{F}^Q_u = (\mu(\underbar{F}^Q_u,u)-K)\mathrm{d}u +\sigma(\underbar{F}^Q_u,u) \mathrm{d}W^Q_u, \ \ \underbar{F}^Q_t = f, \end{equation*} \begin{equation*} \mathrm{d}\bar{F}^Q_u = (\mu(\bar{F}^Q_u,u)+K)\mathrm{d}u +\sigma(\bar{F}^Q_u,u) \mathrm{d}W^Q_u, \ \ \bar{F}^Q_t = f, \end{equation*} where $K = \|\partial_zg\|_\infty + \|\partial_x\sigma\|_\infty$. By Proposition~2.18 in Section~5 of \cite{BMSC} we have $\underbar{F}^Q_u \leq {F}_u \leq \bar{F}^Q_u$ for all $u \in [t,T]$. To check Equation~(\ref{eq9}) note that if $h_\mathrm{M}^\prime$ grows at most exponentially and $\mu$ and $\sigma$ are bounded, we have \begin{align*} \sup_{y \in \mathbb{R}} & E^Q\left[ |h_\mathrm{M}^\prime(F_T)| | F_t = f \right]\\ \leq &L\sup_{y \in \mathbb{R}}E^Q\left[ \exp(C\bar{F}^Q_T)| \bar{F}^Q_t = f \right] + L\sup_{y \in \mathbb{R}}E^Q\left[ \exp(-C\underbar{F}^Q_T)| \underbar{F}^Q_t = f \right]\\ \leq & \tilde{L} \sup_{y \in \mathbb{R}}E^Q\left[ \exp\left\{C\int_t^T\sigma(\bar{F}^Q_s,s)\mathrm{d}W_s-\frac{C^2}{2}\int_t^T\sigma^2(\bar{F}^Q_s,s)\mathrm{d}s\right\}| \bar{F}^Q_t = f \right]\\ & + \tilde{L} \sup_{y \in \mathbb{R}}E^Q\left[ \exp\left\{-C\int_t^T\sigma(\underbar{F}^Q_s,s)\mathrm{d}W_s-\frac{C^2}{2}\int_t^T\sigma^2(\underbar{F}^Q_s,s)\mathrm{d}s\right\}| \underbar{F}^Q_t = f \right] <\infty \end{align*} for some constants $L, C, \tilde{L} > 0$, where the last inequality holds by Novikov's criterion. A similar argument holds as well for the case that $h_\mathrm{M}^\prime$ is of polynomial growth without the boundedness of $\mu$ and $\sigma$, where we use that \begin{equation*} E^Q[|\bar{F}^Q_T|^m | \bar{F}^Q_t = f] < \infty, \ \ E^Q[|\underbar{F}^Q_T|^m | \underbar{F}^Q_t = f] < \infty \end{equation*} for any $m \in \mathbb{N}$. Note that the left hand sides do not depend on $Q$ and so, also not on $y$. To check (\ref{eq8}), we consider the four cases in order. \textbf{Case a)}: In this case (\ref{eq8}) is clear. \textbf{Case b)}: Suppose that $s'(x) \geq \epsilon > 0$ for all $x \in [M,\infty)$. Clearly, $\underbar{F}^Q$ under every $Q$ has the same distribution and the same holds for $\bar{F}^Q$. Hence, \begin{equation*} E^Q[s^\prime(F_T)| F_t = f] \geq E^Q[s^\prime(F_T)1_{[M,\infty)}(\underbar{F}^Q_{T})| F_t = f] \geq \epsilon Q(\underbar{F}^Q_T \geq M)> 0, \end{equation*} where the last strict inequality holds as $\underbar{F}^Q_T$ has the same distribution under $P'$ given by \begin{equation*} \frac{\mathrm{d} P'}{\mathrm{d}Q} = \exp\left\{ K\int_t^T \frac{\mathrm{d}W^Q_u}{\sigma(\underbar{F}^Q_u,u)} - \frac{K^2}{2} \int_t^T \frac{\mathrm{d}u}{\sigma(\underbar{F}^Q_u,u)^2} \right\} \end{equation*} as $F_T$ under $P(\cdot | F_t = f)$. The probability measure $P'$ is well-defined because of the boundedness assumption on $1/\sigma$. The rest follows from Theorem~\ref{thm1}. \textbf{Case c)}: Is treated similarly to Case~b). \textbf{Case d)}: Define $\hat{F}^Q_u$, $u \geq t$ by \begin{equation*} \mathrm{d}\hat{F}^Q_u= \mu(\hat{F}^Q_u,u)\mathrm{d}u +\sigma(\hat{F}^Q_u,u) \mathrm{d}W^Q_u= \mu(\hat{F}^Q_u,u)\mathrm{d}u +\sigma(u) \mathrm{d}W^Q_u, \ \ \hat{F}^Q_t = f. \end{equation*} Clearly, $\hat{F}^Q_T$ under every $Q$ has the same distribution as $F$ under $P(\cdot | F_t = f)$. As $\sigma$ does not depend on $x$, \begin{equation*} \|\hat{F}^Q_T-F_T\|_\infty = \|\hat{F}^Q_T-\hat{F}^Q_t - (F_T-F_t)\|_\infty \leq (K+\|\mu\|_\infty)(T-t). 
\end{equation*} Put \begin{equation*} B = \left\{ x \in \mathbb{R}; \left|x -\frac{a+b}{2}\right| < \frac{b-a}{2}-(K+\|\mu\|_\infty)T \right\}. \end{equation*} Then, \begin{equation*} \begin{split} E^Q[s^\prime(F_T)| F_t = f] & \geq E^Q[s^\prime(F_T)1_B(F_t + \hat{F}^Q_{T}-\hat{F}^Q_t)| F_t = f] \\ &\geq \epsilon Q(f+\hat{F}^Q_T-\hat{F}^Q_t\in B)=\epsilon P(f+F_T-F_t\in B | F_t = f) > 0, \end{split} \end{equation*} where the second inequality holds since necessarily $f+F_T-F_t\in [a,b]$ if $f+\hat{F}^Q_T-\hat{F}^Q_t \in B$. The rest follows from Theorem~\ref{thm1}. \hfill////\\ \subsection{Exponential utilities} \begin{thm} Let $\mathcal{D}_t = \{ X \in L^0(\Omega,\mathcal{F}_t,P) ; E[\exp\{a|X|\}] < \infty \text{ for all } a > 0\}$, $\mu(x,t) = b(t)$, $\sigma(x,t)=\sigma(t)$, and $g(z,t) = \beta(t) z + \gamma z^2/2$, where $b \in L^2([0,T])$, $\beta$ satisfies $\mathbb{E}\left[\exp\left\{ \frac{1}{2}\int_0^T \beta(t)^2\mathrm{d}t \right\} \right] < \infty $ and $\gamma > 0$. If $s$ and $h_\mathrm{M}$ are of linear growth and $s$ is strictly monotone on $\mathbb{R}$, then Conditions~\ref{cond1}, \ref{cond2} and \ref{cond3} hold. \end{thm} {\it Proof : } Extending Example~\ref{exa2}, we have \begin{equation*} \Pi_t(X) = -\frac{1}{\gamma} \log E^Q[\exp\{-\gamma X\}|\mathcal{F}_t], \ \ \frac{\mathrm{d}Q}{\mathrm{d}P} = \exp\left\{ \int_0^T \beta(t)\mathrm{d}W_t - \frac{1}{2}\int_0^T \beta(t)^2\mathrm{d}t \right\} \end{equation*} for $X \in \mathcal{D}_T$. In particular, Condition~\ref{cond1} holds and \begin{equation*} \Pi^y_t = -\frac{1}{\gamma}\log E^Q[\exp\{-\gamma(h_\mathrm{M}(F_T)-ys(F_T))\}|\mathcal{F}_t] \end{equation*} with $F_T = \hat{\sigma}_{0,T}W^Q_T + B(T)$, where $W^Q$ is a standard Brownian motion under $Q$ and \begin{equation*} B(t) = \int_0^t (b(s) + \beta(s))\mathrm{d}s \quad \text{and} \quad \hat{\sigma}_{t,T}=\sqrt{\int_t^T\sigma^2(s)\mathrm{d}s}. \end{equation*} By a straightforward computation, we see, \begin{equation*} \begin{split} & p(x,t,y) \\ &= -\frac{1}{\gamma} \log \int \exp\left\{ -\gamma (h_\mathrm{M}(u) - ys(u))- \frac{(u-x+B(t)-B(T))^2}{2\hat{\sigma}^2_{t,T}} \right\} \frac{\mathrm{d}u}{ \sqrt{2\pi \hat{\sigma}^2_{t,T}}} \end{split} \end{equation*} and so, \begin{equation*} \begin{split} & -\partial_xp(x,t,y) \gamma \exp(-\gamma p(x,t,y)) \\ &= \int \frac{(u-x+B(t)-B(T)) }{ \sqrt{2\pi }\hat{\sigma}^3_{t,T}} \exp\left\{ -\gamma (h_\mathrm{M}(u) - ys(u))- \frac{(u-x+B(t)-B(T))^2}{2\hat{\sigma}^2_{t,T}} \right\}\mathrm{d}u. \end{split} \end{equation*} Therefore $Z^y = -\partial_xp(F_t,t,y)$ is continuous in $y$ and in particular, Condition~\ref{cond2} holds. Denote by $(l,r)$ the interval $s(\mathbb{R})$. Fix $t \in [0,T)$ and define $\varphi : [l,r]\to [-\infty,\infty]$ by \begin{equation*} \varphi(v) = \frac{s^{-1}(v)-x + B(t) -B(T)}{\hat{\sigma}^3_{t,T}}. \end{equation*} Further, define a measure $\mu$ on $(l,r)$ by \begin{equation*} \mu(\mathrm{d}v) = \exp\left\{ -\gamma h_\mathrm{M}(s^{-1}(v)) - \frac{\left(s^{-1}(v)-x + B(t)+B(T)\right)^2}{2\hat{\sigma}^2_{t,T}} \right\} s^{-1}(\mathrm{d}v). \end{equation*} Then, by applying Lemma~\ref{ess} in Appendix, we have $\lim_{y \to \pm \infty} |\partial_xp(x,t,y)| = \infty$, which implies Condition~3. \hfill////\\ The following proposition shows that the strict monotonicity of $s$ is essential for Condition~\ref{cond3} to hold under exponential utilities. This is in contrast to the case of Lipschitz drivers. 
\begin{prop} Let $\mu = 0$, $\sigma =1$, $g(z,t) = \gamma z^2/2$, $h_\mathrm{M}=0$ and $s(x) = (x -k)_+$, where $k \in \mathbb{R}$. Then, for any $t \in [0,T)$, \begin{equation*} \lim_{y \to \infty} Z^y_t = \infty, \ \ \lim_{y \to - \infty} Z^y_t = - \frac{\phi\left( \frac{k-W_t}{\sqrt{T-t}} \right)}{\gamma \sqrt{T-t}\Phi\left( \frac{k-W_t}{\sqrt{T-t}} \right)}, \end{equation*} where $\phi$ and $\Phi$ are the standard normal density and distribution functions respectively. \end{prop} \noindent {\it Proof: } Since \begin{equation*} p(w,t,y) = -\frac{1}{\gamma} \log E[ \exp(y\gamma (W_T-k)_+) | W_t = w], \end{equation*} we obtain \begin{equation*} \begin{split} & \exp(- \gamma p(w,t,y)) \\ &= \exp\left(\frac{T-t}{2}\gamma^2y^2 + \gamma y (w-k)\right)\left(1-\Phi\left(\frac{k-w}{\sqrt{T-t}}-\sqrt{T-t}\gamma y\right)\right) + \Phi\left( \frac{k-w}{\sqrt{T-t}} \right) \end{split} \end{equation*} and so, \begin{equation*} \begin{split} Z^y_t = & - \frac{\partial p}{\partial w}(W_t,t,y) \\ = &\frac{ y \exp\left(\frac{T-t}{2}\gamma^2y^2 + \gamma y (W_t-k)\right)\left(1-\Phi\left(\frac{k-W_t}{\sqrt{T-t}}-\sqrt{T-t}\gamma y\right)\right)} {\exp\left(\frac{T-t}{2}\gamma^2y^2 + \gamma y (W_t-k)\right)\left(1-\Phi\left(\frac{k-W_t}{\sqrt{T-t}}-\sqrt{T-t}\gamma y\right)\right) + \Phi\left(\frac{k-W_t}{\sqrt{T-t}} \right)}. \end{split} \end{equation*} Here we have used the identity \begin{equation*} \exp\left(\frac{T-t}{2}\gamma^2y^2 + \gamma y (w-k)\right) \phi\left( \frac{k-w}{\sqrt{T-t}}-\sqrt{T-t}\gamma y \right) = \phi\left(\frac{k-w}{\sqrt{T-t}} \right). \end{equation*} Since $\Phi(-\infty) = 0$, we have $\lim_{y \to \infty} Z^y_t = \infty$. Since \begin{equation*} \lim_{x \to \infty} \frac{x(1-\Phi(x))}{\phi(x)} = 1, \end{equation*} we have \begin{equation*} \lim_{y \to -\infty} Z^y_t = \lim_{y \to -\infty} \frac{y\phi\left(\frac{k-W_t}{\sqrt{T-t}} \right)}{\phi\left(\frac{k-W_t}{\sqrt{T-t}} \right) + \left( \frac{k-W_t}{\sqrt{T-t}}-\sqrt{T-t}\gamma y \right)\Phi\left(\frac{k-W_t}{\sqrt{T-t}} \right)} = - \frac{\phi\left( \frac{k-W_t}{\sqrt{T-t}} \right)}{\gamma \sqrt{T-t}\Phi\left( \frac{k-W_t}{\sqrt{T-t}} \right)}. \end{equation*} \hfill//// \section{Explicit computations for European options} Here we consider the case $H_L = h_L(S)$ with a Borel function $h_L : \mathbb{R} \to \mathbb{R}$ under the Markov framework of the previous section. This corresponds to the situation where the Large trader has to hedge an European option $-h_L(S)$ written on $S$. Then, the solution $(\Pi^\ast,Z^\ast)$ of the BSDE (\ref{eq:opt}) is given by \begin{equation*} \Pi^\ast_t = v(F_t,t), \ \ Z^\ast_t = -\sigma(F_t,t)\partial_xv(F_t,t), \end{equation*} where \begin{equation*} \begin{split} &\partial_t v(x,t) + \mu(x,t)\partial_xv(x,t) + \frac{1}{2}\sigma^2(x,t)\partial_x^2 v(x,t) = g(-\sigma(x,t)\partial_xv(x,t),t), \\ & v(x,T) = h_M(x) + h_L(s(x)). \end{split} \end{equation*} Now, let us consider a specific model to discuss how our consideration of market impacts affects hedging strategies. Let $F_t = W_t$, $H_M = a S$ with $a \in \mathbb{R}$, $S = b + c W_T$ with $b \in \mathbb{R}$, $c > 0$ and $g(z,t) = \gamma z^2/2$ with $\gamma \geq 0$. Then, when $\gamma > 0$, \begin{equation*} \begin{split} \Pi^y_t &= p(W_t,t,y) = -\frac{1}{\gamma}\log E[\exp\left\{-\gamma(a-y)(b + c W_T) \right\}|W_t] \\ & = (a-y)(b + c W_t) - \frac{T-t}{2} \gamma (a-y)^2c^2. 
\end{split} \end{equation*} This can be also seen from the fact that \begin{equation*} p(x,t,y) = (a-y)(b+c x) - \frac{T-t}{2} \gamma (a-y)^2c^2 \end{equation*} solves \begin{equation*} \partial_t p(x,t,y) + \frac{1}{2}\partial_x^2 p(x,t,y) = \frac{\gamma}{2} |\partial_x p(x,t,y)|^2, \ \ p(x,T,y) = (a-y)(b+cx). \end{equation*} (This remains true when $\gamma = 0$ as well.) It is then easy to see that \begin{equation*} Z^y_t = -(a-y)c,\ \ Z^-(\omega,t,z) = a + \frac{z}{c} \end{equation*} and so, the hedging strategy for $-h_L(S)$ is \begin{equation*} Y^\ast_t = a -\frac{1}{c} \partial_x v(W_t,t), \end{equation*} where $v$ is the solution of \begin{equation*} \partial_t v(x,t) + \frac{1}{2}\partial_x^2v(x,t) = \frac{\gamma}{2} |\partial_x v(x,t)|^2, \ \ v(x,T) = a(b+cx) + h_L(b+cx). \end{equation*} Note that this is a backward Kardar-Parisi-Zhang equation and the derivative $u = \partial_x v$ solves a backward Burgers' equation: \begin{equation} \label{Burg} \partial_t u(x,t) + \frac{1}{2}\partial_x^2u(x,t) = \gamma u(x,t)\partial_x u(x,t), \ \ u(x,T) = ac + ch_L^\prime(b+cx). \end{equation} We also have an integral representation; when $\gamma > 0$, \begin{equation*} \begin{split} v(x,t) &= -\frac{1}{\gamma} \log E[\exp\left\{-\gamma (a(b+cW_T) + h_L(b+cW_T))\right\} | W_t = x]\\ &= -\frac{1}{\gamma} \log \int \exp\left\{ -\gamma (ay + h_L(y))\right\} \frac{1}{\sqrt{2\pi c^2(T-t)}}\exp\left\{-\frac{(y-b-cx)^2}{2c^2(T-t)}\right\}\mathrm{d}y. \end{split} \end{equation*} When $\gamma = 0$, \begin{equation*} v(x,t) = a(b+cx) + E[h_L(b+cW_T)|W_t=x], \end{equation*} which corresponds to hedging under the Bachelier model. There are some cases where we can be more explicit. It is known and easily checked that if $u$ is a solution of a Burgers' equation, then $u_\lambda(x,t) = \lambda u(\lambda x, \lambda^2 t)$ is also a solution of a Burgers' equation. Moreover, some non-trivial explicit solutions are available; for example, \begin{equation*} u(x,t) = 1- \tanh( \gamma x + \gamma^2 t + \delta) \end{equation*} with $\delta \in \mathbb{R}$ and $ 1- \tanh( \gamma x + \gamma^2 T + \delta)$ being the terminal condition. Suppose $\gamma > 0$, $a = 0$ and the Large trader has to hedge a huge amount of put options ($K \in \mathbb{R}$, $\lambda >>1$) \begin{equation*} 2\lambda (K-S)_+ \approx \lambda \left( K-S + \frac{1}{\lambda \gamma}\log \cosh (-\lambda \gamma(K-S)) \right) =: -h_L(S). \end{equation*} Since \begin{equation*} h^\prime_L(s) = \lambda (1- \tanh(-\lambda \gamma (K-s))), \end{equation*} the solution $u$ of (\ref{Burg}) is given by \begin{equation}\label{Burgsol} u(x,t) = \lambda c (1-\tanh(\gamma\lambda c x + \gamma^2\lambda^2 c^2 t + \delta)), \end{equation} where $\delta = \lambda \gamma (b-K)- \gamma^2\lambda^2 c^2T$. Hence, the hedging strategy is \begin{equation}\label{YG} Y^\ast_t = -\lambda (1- \tanh(\gamma\lambda (b + c W_t -K) - \gamma^2\lambda^2 c^2(T-t))). 
\end{equation} It also follows that \begin{equation*} \begin{split} v(x,t) = & \lambda\Biggl\{ b + cx - K -\lambda\gamma c^2(T-t) \\ &- \frac{1}{\lambda \gamma}\log \cosh \left( \lambda \gamma(b + cx - K)-\lambda^2\gamma^2c^2(T-t)\right) \Biggr\} \end{split} \end{equation*} and so, by Theorem~\ref{thm1}, the replication cost at time $0$ is computed as \begin{equation*} \begin{split} & p(W_0,0,0) - v(W_0,0) \\& = \lambda\left\{ K-S_0 +\lambda\gamma c^2 T+ \frac{1}{\lambda \gamma}\log \cosh \left( -\lambda \gamma (K-S_0) - \lambda^2 \gamma^2 c^2T \right) \right\}\\ & \approx 2\lambda(K -S_0+ \lambda \gamma c^2 T)_+, \end{split} \end{equation*} where $S_0 = b + cW_0$. Here nonlinearity in $\lambda$ is clearly seen. On the other hand, when $\gamma = 0$, we are under the Bachelier model and so, the hedging of put options is standard; putting $S_t = E[S|\mathcal{F}_t] = b + cW_t$, \begin{equation*} E[2 \lambda(K-S)_+|\mathcal{F}_t] = 2\lambda\left( (K-S_t)\Phi\left(\frac{K-S_t}{c\sqrt{T-t}}\right) + c\sqrt{T-t} \phi\left(\frac{K-S_t}{c\sqrt{T-t}}\right) \right) \end{equation*} and so the hedging strategy is \begin{equation}\label{YB} Y^\ast_t = -2 \lambda \Phi\left(\frac{K-S_t}{c\sqrt{T-t}}\right) = -2\lambda \Phi\left(\frac{K-b-cW_t}{c\sqrt{T-t}}\right). \end{equation} Both of (\ref{YG}) and (\ref{YB}) are $(-2\lambda,0)$-valued increasing functions of $S_t$. The striking difference is in their dependence on $T-t$. While the strategy becomes flatter as $T-t$ increases under the Bachelier model (\ref{YB}), $T-t$ is only a location parameter and does not change the functional shape under (\ref{YG}). The function (\ref{Burgsol}) is interpreted as a shockwave propagated from the terminal condition $h^\prime_L$.
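The contrast between (\ref{YG}) and (\ref{YB}) is easy to inspect numerically. The following minimal Python sketch evaluates both hedging strategies as functions of the price level $S_t=b+cW_t$ for two times to maturity; all parameter values are arbitrary and purely illustrative, and SciPy is used only for the normal distribution function.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Illustrative parameters (arbitrary choices): strike K, position size lam,
# Market risk aversion gamma, volatility scale c of S = b + c W_T, horizon T.
K, lam, gamma, c, T = 1.0, 5.0, 0.5, 0.2, 1.0

def Y_impact(S_t, t):
    """Hedge (YG) obtained under the quadratic driver (market impact)."""
    return -lam * (1.0 - np.tanh(gamma * lam * (S_t - K)
                                 - gamma**2 * lam**2 * c**2 * (T - t)))

def Y_bachelier(S_t, t):
    """Hedge (YB) under the risk-neutral evaluation (Bachelier model)."""
    return -2.0 * lam * norm.cdf((K - S_t) / (c * np.sqrt(T - t)))

S_grid = np.linspace(0.6, 1.4, 5)
for t in (0.0, 0.9):
    print("t =", t)
    print("  Y_impact   :", np.round(Y_impact(S_grid, t), 3))
    print("  Y_bachelier:", np.round(Y_bachelier(S_grid, t), 3))
\end{verbatim}
Both functions take values in $(-2\lambda,0)$ and are increasing in $S_t$, but varying $t$ in the sketch only translates the transition point of (\ref{YG}), whereas it changes the steepness of the Bachelier hedge (\ref{YB}), in line with the discussion above.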
\section{Introduction} Big Data platforms have evolved over the last decade to address the unique challenges posed by the ability to collect data at vast scales, and the need to process them rapidly. These platforms have also leveraged the availability of distributed computing resources, such as commodity clusters and Clouds, to allow the application to scale out as data sizes grow. In particular, platforms like Apache Hadoop and Spark have allowed massive data volumes to be processed with high throughput, and NoSQL databases like Hive and HBase support low latency queries over semi-structured data at large scales. However, much of the research and innovation in Big Data platforms has skewed toward the \emph{volume} rather than the \emph{velocity} dimension~\cite{laney:gartner:2001}. On the other hand, the growing prevalence of the Internet of Things (IoT) is contributing to the deployment of physical and virtual sensors that monitor and respond to infrastructure, nature and human activity, which is leading to a rapid influx of streaming data~\cite{buyya:fgcs:2013}. These emerging applications complement the existing needs of micro-blogs like Twitter that already contend with the need to rapidly process tweet streams for detecting trends or malicious activity~\cite{kulkarni:twitter:2015}. Such streaming applications require low-latency processing and analysis of data streams to make decisions that control the physical or digital eco-system they observe. A \emph{Distributed Stream Processing System (DSPS)} is a Big Data platform designed for online processing of such data streams~\cite{balazinska:cidr:2003}. While early stream processing systems date back to applications on wireless sensor networks~\cite{chandrasekaran:cidr:2003}, contemporary DSPSs such as Apache Storm from Twitter, Flink and Spark Streaming are designed to execute complex dataflows over tuple streams using commodity clusters~\cite{toshniwal:sigmod:2014,flink:tcde:2015,spark:usenix:2012}. These dataflows are typically designed as Directed Acyclic Graphs (DAGs), where user tasks are vertices and streams are edges. They can leverage data parallelism across tuples in the stream using multiple threads of execution per task, in addition to pipelined and task-parallel execution of the DAG, and have been shown to process thousands of tuples per second~\cite{toshniwal:sigmod:2014,shukla:tpctc:2016}. A DSPS executes streaming dataflow applications on distributed resources such as commodity clusters and Cloud Virtual Machines (VMs). In order to meet the required performance for these applications, the DSPS needs to schedule these dataflows efficiently over the resources. Scheduling for a DSPS has two parts: \emph{resource allocation} and \emph{resource mapping}. The former determines the appropriate degrees of parallelism per task (e.g., threads of execution) and quanta of computing resources (e.g., number and type of VMs) for the given dataflow. Here, care has to be taken to avoid both over-allocation, which can have monetary costs when using Cloud VMs, and under-allocation, which can impact performance. Resource mapping decides the specific assignment of the threads to the VMs to ensure that the expected performance behavior and resource utilization are achieved. Despite their growing use, resource scheduling for DSPSs tends to be done in an \emph{ad hoc} manner, favoring empirical and reactive approaches, rather than a model-driven and analytical approach.
Such empirical approaches may arrive at an approximate resource allocation for the DSPS based on a linear extrapolation of the resource needs and performance of dataflow tasks, and hand-tune these to meet the Quality of Service (QoS)~\cite{schneider:ipdps:2009,xu:icdcs:2014}. Mapping of tasks to resources may be round-robin or consider data locality and resource capacities~\cite{khandekar:middleware:2009,rychly:cicic:2014}. More sophisticated research techniques support dynamic scheduling by monitoring the queue waiting times or tuple latencies to incrementally increase/decrease the degrees of parallelism for individual tasks or the number of VMs they run on~\cite{kumbhare:ccgrid:2014,fu:icdcs:2015}. While these dynamic techniques have the advantage of working for arbitrary dataflows and stream rates, such schedules can lead to local optima for individual tasks without regard to the global efficiency of the dataflow, introduce latency and cost overheads due to constant changes to the mapping, or offer weaker guarantees for the QoS. In this article, we propose a model-driven approach for scheduling streaming applications that effectively utilizes \emph{a priori} knowledge of the applications to provide predictable scheduling behavior. Specifically, we leverage our observation that dataflow tasks have diverse performance behavior as the degree of parallelism increases, and use performance models to offer reliable estimates of the resource allocation required. Further, this intuition also drives resource mapping to mitigate the impact of multi-tenancy of different tasks on the same resource, and helps narrow the gap between the estimated and actual dataflow performance and resource utilization. Together, this model-driven scheduling approach gives predictable application performance and resource utilization behavior for executing a given DSPS application at a target input stream rate on distributed resources. Often, importance is given to lower latency and resource usage rather than predictable behavior. But in stream processing, which can be latency sensitive, it may be more important to offer tighter bounds rather than lower bounds. We limit this article to static scheduling of the dataflow on distributed resources, before the application starts running. This is complementary to dynamic scheduling algorithms that can react to changes in the stream rates~\cite{xu:ic2e:2016}, application composition~\cite{kumbhare:ccgrid:2014} and make use of Cloud elasticity~\cite{esc:cloud:2011}. However, our work can be extended and applied to a dynamic context as well. Rather than incrementally increasing or decreasing the resource allocation and the mapping until the QoS stabilizes, a dynamic algorithm can make use of our model to converge to a stable configuration more rapidly. Our work is of particular use for enterprises and service providers who have a large class of infrastructure applications that are run frequently~\cite{jha:cep:2016,simmhan:wbdb:2015}, or who reuse a library of common tasks when composing their applications~\cite{fb:sigmod:2016,filgueira:escience:2015,biem:sigmod:2010}, as is common in the scientific workflow community~\cite{myexperiment}. This amortizes the cost of building task-level performance models.
Our approach also resembles scheduling in HPC centers, which typically have a captive set of scientific applications that can benefit from such a model-driven approach~\cite{quasar:sigplan:2014,weidendorfer:hipeac:2016}.
Specifically, we make the following key contributions in this article:
\begin{enumerate}
\item We highlight the gap between the ideal and actual performance of dataflow tasks using performance models, which causes many existing DSPS scheduling algorithms to fail and motivates our model-based approach for reliable scheduling.
\item We propose an allocation and a mapping algorithm that leverage these performance models to schedule DSPS dataflows for a fixed input rate, minimizing the distributed resources used and offering predictable performance behavior.
\item We offer detailed experimental results and analysis evaluating our scheduling algorithm using the Apache Storm DSPS, and compare it against state-of-the-art scheduling approaches, for micro and application dataflows.
\end{enumerate}
The rest of the article is organized as follows: \S~\ref{sec:bg} introduces the problem \emph{motivation} and \S~\ref{sec:problem} formalizes the scheduling \emph{problem}; \S~\ref{sec:approach} offers a high-level intuition of the analytical \emph{approach} taken to solving the problem; \S~\ref{sec:bm} offers evidence on the diversity of tasks' behavior using \emph{performance models}, leveraged in the solution; \S~\ref{sec:allocation} proposes a novel \emph{Model Based Allocation (MBA) algorithm} using these models, and also describes a Linear Scaling Allocation (LSA) used as a contemporary baseline; \S~\ref{sec:mapping} presents our \emph{Slot-Aware Mapping (SAM) algorithm} that leverages thread-locality in a resource slot, and lists two existing algorithms from the literature and practice used as comparison; we offer detailed experimental \emph{results and analysis} of these allocation and mapping algorithms in \S~\ref{sec:results}; contrast our work against \emph{related literature} in \S~\ref{sec:related}; and lastly, present our \emph{conclusions} in \S~\ref{sec:conclusions}.
\section{Background and Motivation}
\label{sec:bg}
In this section, we offer an overview of the generic composition and execution model favored by contemporary DSPSs such as Apache Storm, Apache Spark Streaming, Apache Flink and IBM InfoSphere Streams. We further use this to motivate the specific research challenges and technical problems we address in this article; a formal definition follows in the subsequent section, \S~\ref{sec:problem}. While we use features and limitations of the popular Apache Storm as a representative DSPS here, similar features and shortcomings of other DSPSs are discussed in the related work, \S~\ref{sec:related}.
Streaming applications are composed as a dataflow in a DSPS, represented as a \textit{directed acyclic graph (DAG)}, where \emph{tasks} form the vertices and \textit{tuple streams} are the edges connecting the output of one task to the input of its downstream task. Contents of the \emph{tuples} (also called \emph{events} or \emph{messages}) are generally opaque to the DSPS, except for special fields like IDs and keys that may be present for recovery and routing. \emph{Source tasks} in the DAG contain user logic responsible for acquiring and generating the initial input stream to the DAG, say by subscribing to a message broker or pulling events over the network from sensors.
For the other tasks, their logic is executed once for each input tuple arriving at that task, and may produce zero or more output tuples for each invocation. These output tuples are passed to downstream tasks in the DAG, and so on until the \emph{Sink tasks} are reached. These sinks do not emit any output stream but may store the tuples or notify external services. Apache Storm uses the terms \emph{topology} and \emph{component} for a DAG and a task, and more specifically \emph{spout} and \emph{bolt} for source tasks and non-source tasks, respectively.
Multiple outgoing edges connecting one task to downstream tasks may indicate different \emph{routing semantics} for the output tuples on those edges, based on the application definition -- tuples may be \emph{duplicated} to all downstream tasks, passed in a \emph{round-robin} manner to the tasks, or \emph{mapped} based on an output key in the tuple. Likewise, multiple input streams incident on a task typically have their tuples \emph{interleaved} into a single logical stream, though semantics such as \emph{joins} across tuples from different input streams are possible as well. Further, the \emph{selectivity} of an outgoing edge of a task is the average number of tuples generated on that output stream for each input tuple consumed by the task.
Streaming applications are designed to process tuples with low latency. The \emph{end-to-end latency} for processing an input tuple from the source to the sink task(s) is typically a measure of the \emph{Quality of Service (QoS)} expected. This QoS depends on both the \emph{input stream rate} at the source task(s) and the resource allocation to the tasks in the DSPS. A key requirement is that the execution performance of the streaming application remains \emph{stable}, i.e., the end-to-end latency is maintained within a narrow range over time and the queue size at each task does not grow. Otherwise, an unstable application can lead to an exponential growth in the latency and the queue size, causing hosts to run out of memory.
The execution model of a DSPS can naturally leverage \emph{pipelining} and \emph{task parallelism} due to the composition of linear and concurrent tasks in the DAG, respectively. These benefits are bounded by the length of the DAG and the number of tasks in it. In addition, DSPSs also actively make use of \emph{data-parallel} execution for a single task by assigning multiple threads of execution that can each operate on an independent tuple in the input stream. This data parallelism is typically limited to stateless tasks, where threads of execution for a task do not share a global variable or state, such as a sum and a count for aggregation; stateful tasks require more involved distributed coordination~\cite{wu:icdes:2012}. Stateless tasks are common in streaming dataflows, allowing the DSPS to make use of this \emph{important dimension of parallelism that is not limited by the dataflow size but rather by the stream rate and resource availability.}
In operational DSPSs such as Apache Storm, Yahoo S4~\cite{neumeyer:icdmw:2010}, Twitter Heron~\cite{kulkarni:twitter:2015} and IBM InfoSphere Streams~\cite{biem:sigmod:2010}, the platform expects the application developer to provide the \emph{number of threads}, or degree of data parallelism, that should be exploited for a single task. As we show in \S~\ref{sec:bm}, general rules of thumb are inadequate for deciding this, and both over- and under-allocation of threads can have a negative effect on performance.
This value may also change with the input rate of the stream. Thread allocation is one of the challenges we tackle in this paper.
In addition, the user is responsible for deciding the \emph{number of compute resources} to be allocated to the DAG. As with many Big Data platforms, each host or Virtual Machine (VM) in the DSPS cluster exposes multiple \emph{resource slots}, and that many \emph{worker processes} can be run on the host. Typically, there are as many workers as the number of CPU cores in that host. Each worker process can internally run multiple threads for one or more tasks in the DAG. The user specifies the number of hosts or slots in the DSPS cluster to be used for the given DAG when it is submitted for execution. For example, in Apache Storm, the threads for the dataflow can use all available resource slots in the DSPS cluster by default. In practice, deciding the resource allocation for the DAG again ends up being an empirical, trial-and-error process for the user or the system, and the decision changes with each DAG and the input rate it needs to support. Ascertaining the Cloud VM resource allocation for a given DAG and input rate is another problem that we address in this article, and this is equally applicable to commodity hosts in a cluster as well.
The DSPS, on receiving a DAG for scheduling, is responsible for deploying the dataflow on the cluster and coordinating its execution. As part of this deployment, it needs to decide the mapping from the threads of the tasks to the slots in the workers. There has been prior work on making this mapping efficient for stream processing platforms~\cite{eidenbenz:infocom:2016,khandekar:middleware:2009}. For example, the native scheduler of Apache Storm uses a round-robin technique for assigning threads from different tasks to all available slots for the DAG. Its intuition is to avoid contention for the same resource by threads of the same task, and also to balance the workload among the available workers. More recently, Storm has incorporated the R-Storm scheduler~\cite{peng:middleware:2015}, which is both resource and network-topology aware, and this offers better efficiency by considering the resource needs of an individual task thread. In InfoSphere Streams~\cite{biem:sigmod:2010} and our own prior work~\cite{kumbhare:sc:2013}, the mapping decision is dynamic and relies on the current load and previous resource utilization for the tasks.
This placement decision is important, as an optimal resource allocation for a given DAG and input rate may still under-perform if the task thread to worker mapping is not effective. This inefficiency can be due to additional costs for resource contention between threads of a task or of different tasks on a VM, excess context switching between threads in a core, and movement of tuples between distributed tasks over the network, among other reasons. The inefficiency manifests as additional latency for the messages to be processed, or as a lower input rate that can be supported in a stable manner for the DAG with the given resources and mapping. In this article, we enhance the mapping strategy for the DSPS by using a model-driven approach that goes beyond a resource-aware approach, such as the one used in R-Storm.
In summary, the aim of this paper is to \emph{use a model-driven approach to perform predictable and efficient resource scheduling for a given DAG and input event rate}.
The specific goals are to determine:
\begin{itemize}
\item the thread allocation per task,
\item the VM allocation for the entire DAG, and
\item the resource mapping from a task thread to a resource slot.
\end{itemize}
The outcome of this schedule will be to improve efficiency and reduce contention for VM resources, reduce the number of VMs used and hence the \emph{monetary cost} of executing the DAG, and ensure a stable (and preferably low) latency for execution. The \emph{predictable performance} of the proposed schedule is also important as it reduces uncertainty and trial and error. Further, when using this scheduling approach for handling dynamism in the workload or resources, say when the input rate or the DAG structure changes, this predictability allows us to update the schedule and pay the overhead cost for the rebalancing once, rather than constantly tweak the schedule purely based on monitoring of the execution. While our article does not directly address dynamism, such as changing input rates, non-deterministic VM performance or modifications to the DAG, the approach we propose offers a valuable methodology for supporting it. Likewise, we limit our work here to scheduling a single streaming application on an exclusive cluster, which is common in on-demand Cloud environments, rather than multiple applications sharing the same cluster in a multi-tenant manner. Our algorithm in fact helps determine the smallest number of VMs and their sizes required to meet the application's needs.
\section{Problem Definition}
\label{sec:problem}
\begin{figure}[t] \centering \parbox{1.0\textwidth}{ \footnotesize \[\begin{array}{cl} \hline \textsc{Notation} & \textsc{Description}\\ \hline \hline \mathcal{G} = \langle \mathbb {T},\mathbb {E} \rangle & \text{DAG to be scheduled}\\ \mathbb{T}=\{t_1,...,t_n\} & \text{Set of $n$ task vertices in the DAG}\\ \mathbb {E} =\{e_{ij} | e_{ij}= \langle t_i,t_j \rangle\} & \text{Set of stream edges in the DAG}\\ \Omega & \text{Input rate (tuples/sec) to be supported for DAG}\\ \hline v_j \in \mathbb{V} & \text{VMs available} \\ p_j & \text{Number of resource slots available in VM $v_j$}\\ \hline q_i & \text{Number of threads allocated to task $t_i$}\\ r_i^k \in R & \text{Set of task threads allocated to tasks in DAG}\\ \rho & \text{Sum of the resource slots allocated to the DAG}\\ \hline v_j \in V & \text{VMs allocated to the DAG} \\ s_j^l \in S & \text{Resource slots in VMs allocated to DAG}\\ \mathcal{M} : R \rightarrow S & \text{Mapping function from a thread to its slot}\\ \hline \mathcal{P}_i : \tau \rightarrow \langle \omega, c, m \rangle & \text{Performance model for task $t_i$.
Maps from $\tau$ threads to the peak input rate} \\ & \text{supported $\omega$, CPU\% $c$ and Memory\% $m$}\\ \hline \mathcal{C}_i(q),~~\mathcal{M}_i(q) & \text{Incremental CPU\%, memory\% used by task $t_i$ with $q$ threads on a single resource slot} \\ \bar{c_i} = \mathcal{C}_i(1),~~\bar{m_i} = \mathcal{M}_i(1) & \text{CPU\% and memory\% required by 1 thread of the task } t_i \text{ on a single slot}\\ \mathcal{I}_i(q) & \text{Peak input rate supported by the task $t_i$ with $q$ threads on a single slot} \\ \mathcal{T}_i(\omega) & \text{Smallest thread count $q$ needed to satisfy the input rate $\omega$ for task $t_i$ on a single slot} \\ \bar{\omega_i} & \text{Peak rate sustained by 1 thread of task $t_i$ running in 1 slot} \\ \widehat{\omega_i} & \text{Peak rate sustained across any number of threads of task $t_i$ running in 1 slot} \\ \hline C_j,~~M_j & \text{Available CPU\%, memory\% on a VM}\\ M_j^l & \text{Available memory\% on single slot}\\ \tau_i & \text{Number of threads allocated to task $t_i$} \\ \widehat{v} & \text{Reference VM, VM on which last task thread was mapped }\\ \hline C_j^l, M_j^l & \text{Available CPU\%, memory\% on single slot}\\ \widehat{\tau_i} & \text{Number of threads needed to support peak rate $\widehat{\omega_{i}}$ for task $t_{i}$ on 1 slot}\\ \hline \end{array} \] } \caption{Summary of notations used in article} \label{fig:terms} \end{figure} Let the input \emph{DAG} be given as $\mathcal{G} = \langle \mathbb {T},\mathbb {E} \rangle$, where $\mathbb{T}=\{t_1,...,t_n\}$ is the set of $n$ \textit{tasks} that form the vertices of the DAG, and $\mathbb {E} =\{e_{ij} | e_{ij}= \langle t_i,t_j \rangle,~~t_i,t_j \in \mathbb{T}\}$ is the set of \textit{tuple stream} edges connecting the output of a task $t_i$ to the input of its downstream task $t_j$. Let $\sigma_{ij}$ be the \emph{selectivity} for the edge $e_{ij}$. We assume interleave semantics on the input streams and duplicate semantics on the output streams to and from a task, respectively. Further, we are given a set of VMs, $v_j \in \mathbb{V}$, with each VM $v_j$ having multiple identical resource slots, $s_j^1 .. s_j^p$. Each slot is homogeneous in resource capacity and corresponds to a single CPU core of a rated speed and a specific quanta of memory allocated to that core. Let $p_j$ be the number of processing slots present in VM $v_j$. The VMs themselves may be heterogeneous in terms of the number of slots that they have, even as we assume for simplicity that the slots themselves are homogeneous. This is consistent with the ``container'' or ``slot'' model followed by Big Data platforms like Apache Hadoop~\cite{vavilapalli:cc:2013} and Storm, though it can be extended to allow heterogeneous slots as well. \emph{Task threads}, $r_i^1 .. r_i^q$, are responsible for executing the logic for a task $t_i$ on a tuple arriving on the input stream for the task. Each task has one or more threads allocated to it, and each thread is mapped to a resource slot. Different threads can execute different tuples on the input stream, in a data-parallel manner. The order of execution does not matter. Each resource slot can run multiple threads from one or more tasks concurrently. We are given an input stream rate of $\Omega$~tuples/sec (or events/sec) that should be supported for the DAG $\mathcal{G}$. Our goal is to schedule the tasks of the DAG on \emph{VMs with the least number of cumulative resource slots, to support a stable execution of the DAG} for the given input rate. 
This problem can be divided into two phases:
\begin{enumerate}
\item \emph{Resource Allocation:} Find the minimum number $q_i$ of task threads required per task $t_i$, and the cumulative resource slots $\rho$ required to meet the input rate to the DAG. Minimizing the slots translates to a minimization of costs for these on-demand VM resources from Cloud providers, since the pricing for VMs is typically proportional to the number of slots they offer.
\item \emph{Resource Mapping:} Given the set of task threads $r_i^k \in R$ for all tasks in the DAG, and the number of resource slots $\rho$ allocated to the DAG, determine the set of VMs $V$ such that they have an adequate number of slots, $\rho \le \sum_{v_j \in V}{p_j}$. Further, for the resource slots $s_j^l \in S$ present in the VMs $v_j \in V$, find an optimal mapping function $\mathcal{M} : R \rightarrow S$ for the allocated task threads onto the available slots, to match the resources needed to support the required input rate $\Omega$ in a stable manner.
\end{enumerate}
There are several qualitative and quantitative measures we use to evaluate the solution to this problem.
\begin{enumerate}
\item The number of resource slots and VMs estimated by the allocation algorithm should be minimized.
\item The actual number of resource slots and VMs required by the mapping algorithm to successfully place the threads should be minimized. This is the actual cost paid for acquiring and using the VMs. Closeness of this value to the estimate from above indicates a reliable estimate.
\item The actual stable input rate that is supported for the DAG by this allocation and mapping at runtime should be greater than or equal to the required input rate $\Omega$. A value close to $\Omega$ indicates better predictability.
\item The expected resource usage as estimated by the scheduling algorithm and the actual resource usage for each slot should be close. The closer these two values, the better the predictability of the dataflow's performance and the scheduling algorithm's robustness under dynamic conditions.
\end{enumerate}
\section{Solution Approach}
\label{sec:approach}
We propose a model-based approach to solving the two sub-problems of resource allocation and resource mapping, in order to arrive at an efficient and predictable schedule for the DAG to meet the required input rate. The intuition for this is as follows. The stable input rate that can be supported by a task depends on the number of concurrent threads for that task that can execute over tuples on the input stream in a data-parallel manner. The number of threads for a task in turn determines the resources required by the task. Traditional scheduling approaches for DSPSs assume that both these relationships -- between thread count and rate supported, and between thread count and resources required -- are linear functions. That is, if we double the number of threads for a task, we can achieve twice the input rate and require twice the computing resources. However, as we show later in \S~\ref{sec:bm} and Fig.~\ref{fig:bm}, this is not true. Instead, we observe that depending on the task, both these relationships may range anywhere from a flat line to a bell curve to a linear function with a positive or a negative slope. The reason for this is understandable. As the number of threads increases in a single VM or resource slot, there is more contention for those specific resources among the threads of a task, which can degrade their performance. This can also affect the actual resources used by the threads.
For simplicity, we limit our analysis to CPU and memory resources, though it can be extended to disk IOPS and network bandwidth as well. As a result of this real-world behavior of tasks, scheduling that assumes a linear behavior can under-perform. In our approach, we embrace this non-uniformity in the task performance and incorporate the empirical model of the performance into our schedule planning.
First, we use micro-benchmarks on a single resource slot to develop a \emph{performance model} function $\mathcal{P}_i$ for each task $t_i$ which, given a certain number of threads $\tau$ for the task on a single resource slot, provides the peak input rate $\omega$ supported, and the CPU and memory utilization, $c$ and $m$, at that rate. This is discussed in \S~\ref{sec:bm}.
Second, we use these performance models to determine the number of threads $q_i$ for each task $t_i$ in the DAG that is required to support a given input rate, and the cumulative number of resource slots $\rho$ for all threads in the DAG. This \emph{Model Based Allocation (MBA)}, described in \S~\ref{sec:allocation}, offers an accurate estimate of the resource needs and task performance, for individual tasks and for the entire DAG. We also discuss a commonly used baseline approach, \emph{Linear Scaling Allocation (LSA)}, in that section. As mentioned before, it assumes that the performance of a single thread on a resource slot can be linearly extrapolated to multiple threads on that slot, as long as the resource capacity of the slot is not exhausted. It under-performs, as we show later.
This resource allocation can subsequently be used by different resource mapping algorithms that exist, such as the round-robin algorithm used by default in Apache Storm~\cite{toshniwal:sigmod:2014}, which we refer to as the \emph{Default Storm Mapping (DSM)}, or a resource-aware mapping proposed in R-Storm~\cite{peng:middleware:2015} and included in the latest Apache Storm distribution, which we call \emph{R-Storm Mapping (RSM)}. However, these mapping algorithms do not provide adequate co-location of threads onto the same slot to exploit the intuition of the model based allocation. We propose a novel \emph{Slot Aware Mapping (SAM)} algorithm that attempts to map threads from the same task as a group to individual slots, as a form of \emph{gang scheduling}~\cite{ousterhout:icdcs:1982}. Here, our goal is to maximize the peak event rate that can be exploited from that slot, minimize the interference between threads from different tasks, and ensure predictable performance. These mapping strategies are explored in \S~\ref{sec:mapping}.
\subsection{Illustration}
\begin{figure}[t]
\centering
\includegraphics[width=0.80\textwidth]{flow-diagram.pdf}
\caption{Illustration of \emph{Modeling}, \emph{Allocation} and \emph{Mapping} phases performed when scheduling a sample DAG }
\label{fig:flow}
\end{figure}
We provide a high-level visual overview of the schedule planning in Fig.~\ref{fig:flow} for a given DAG $\mathcal{G}$ with four tasks, \texttt{Blue}, \texttt{Orange}, \texttt{Yellow} and \texttt{Green}, with a required input rate of $\Omega$. The procedure has three phases. In the \textit{Modeling} phase, we build a performance model $\mathcal{P}_i$ for tasks in the DAG that do not already have a model available. This gives a profile of the peak input tuple rates supported by a task with different numbers of threads, and their corresponding CPU and memory utilization, using a single resource slot.
For example, the performance models for the four tasks in the DAG are given by $\mathcal{P}_B, \mathcal{P}_O, \mathcal{P}_Y$ and $\mathcal{P}_G$ in Fig.~\ref{fig:flow}. In the \textit{Allocation} phase, we use the above performance model to decide the number of threads $q_i$ for each task required to sustain the tuple rate that is incident on it. The input rate to the task is itself calculated based on the DAG's input rate and the selectivity of upstream tasks. We use the Linear Scaling Allocation (LSA) and our Model Based Allocation (MBA) approaches for this. While LSA only uses the performance model for a single task thread, our MBA uses the full profile that is available. These algorithms also return the CPU\% and memory\% for the threads of each task, which are summed up to get the total number of resource slots for the DAG. Fig.~\ref{fig:flow} shows the table for each of the four tasks after allocation, with the number of threads $q_B, q_O, q_Y$ and $q_G$, and their estimated resource usages $c$ and $m$ that are summed up to calculate the total resource slot needs $\rho$ for the DAG. Lastly, the \textit{Mapping} phase decides the number and types of VMs required to meet the resource needs for the DAG. It then maps the set of $r_i^k$ threads allocated for the DAG to the slots $s_j^l$ in the VMs, and the total number of these slots can be greater than the estimated $\rho$, depending on the algorithm. Here, we use the Default Storm Mapping (DSM), R-Storm Mapping (RSM) and our proposed Slot Aware Mapping (SAM) algorithms as alternatives. As shown in the figure, DSM is not resource aware and only uses the information on the number of threads $q_i$ and the number of slots $\rho$ for its round-robin mapping. RSM and SAM use the task dependencies in the DAG along with the allocation table. RSM uses the performance of a single thread while SAM uses all values in the performance model for its mapping.
\subsection{Discussion}
As with any approach that relies on prior profiling of tasks, our approach has the shortcoming of requiring effort to empirically build the performance model for each task before it can be used. However, this is mitigated in two respects. First, as has been seen for scientific workflows, enterprise workloads and even HPC applications~\cite{de:fgcs:2009,scheidegger:sigmod:2008,medeiros:sigmod:2005}, many domains have common tasks that are reused in compositional frameworks by users in that domain. Similarly, for DSPS applications in domains like social network analytics, IoT or even Enterprise ETL (Extract, Transform and Load), there are common task categories and tasks such as parsing, aggregation, analytics, file I/O and Cloud service operations~\cite{filgueira:escience:2015,shukla:tpctc:2016,lu:ucc:2014}. Identifying and developing performance models for such common tasks for a given user-base -- even if all tasks are not exhaustively profiled -- can help leverage the benefits of more efficient and predictable schedules for streaming applications. Second, the effort in profiling a task is small and can be fully automated, as we describe in the next section. It also does not require access to the eventual DAG that will be executed. This ensures that as long as we can get access to the individual task, some minimal characteristics of the input tuple streams, and VMs with single resource slots comparable to slots in the eventual deployment, the time, costs and management overheads for building the performance model are mitigated.
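To make the structure of such a performance model concrete, the sketch below shows one possible representation of $\mathcal{P}_i$ as a lookup table from thread count to the peak stable rate and resource usage on a single slot, mirroring the functions summarized in Fig.~\ref{fig:terms}. This is purely an illustrative sketch; the class and method names are our own and not part of any DSPS API.
\begin{verbatim}
import java.util.*;

// Illustrative sketch of a per-task performance model P_i: a lookup table
// from thread count (tau) to the peak stable rate and resource usage
// measured on one resource slot.
class PerfModel {
    static class Point {
        final double peakRate, cpuPct, memPct;
        Point(double r, double c, double m) { peakRate = r; cpuPct = c; memPct = m; }
    }
    private final TreeMap<Integer, Point> byThreads = new TreeMap<>();

    // Record one profiled configuration: tau threads -> (peak rate, CPU%, memory%)
    void put(int tau, double peakRate, double cpuPct, double memPct) {
        byThreads.put(tau, new Point(peakRate, cpuPct, memPct));
    }
    double peakRate(int tau) { return byThreads.get(tau).peakRate; } // I_i(q)
    double cpuPct(int tau)   { return byThreads.get(tau).cpuPct; }   // C_i(q)
    double memPct(int tau)   { return byThreads.get(tau).memPct; }   // M_i(q)

    // Smallest profiled thread count whose peak rate covers omega (T_i),
    // or -1 if no single-slot configuration supports that rate.
    int threadsFor(double omega) {
        for (Map.Entry<Integer, Point> e : byThreads.entrySet())
            if (e.getValue().peakRate >= omega) return e.getKey();
        return -1;
    }
}
\end{verbatim}
Such a table is what the modeling phase produces for each task, and the allocation and mapping phases only read from it.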
\section{Performance Modeling of Tasks}
\label{sec:bm}
\subsection{Approach}
Performance modeling for a task builds a profile of the peak input tuple rate supported by the task, and its corresponding CPU and memory usage, using a single resource slot. It does this by performing a constrained parameter sweep of the number of threads and different input rates as a series of micro-benchmark trials.
Algorithm~\ref{alg:bm} gives the pseudo-code for building such a performance model. For a given task $t$, we initialize the number of threads $\tau$ and the input rate $\omega$ to $1$, and iteratively increase the number of threads (lines~\ref{alg:bm:thread-start}--\ref{alg:bm:thread-end}), and for each thread count, the input rate (lines~\ref{alg:bm:rate-start}--\ref{alg:bm:rate-end}). The steps at which the thread count and rates are increased, $\Delta_\tau$ and $\Delta_\omega$, can either be fixed, or be a function of the iteration or the prior performance. This determines the granularity of the model -- while a finer granularity of thread and rate increments offers better modeling accuracy, it requires more experiments, and costs time and money for VMs.
\begin{algorithm}
\footnotesize
\caption{Performance Modeling of a Task}\label{alg:bm}
\begin{algorithmic}[1]
\Procedure{PerfModel(Task $t$)}{}
\State $\mathcal{P} \gets new~Map(~)$ \Comment{\emph{Holds the mapping from threads to input rate, resource usage}}
\State $\lambda_\omega = 0$ \Comment{\emph{Slope of the last window of peak stable rates}}
\State $\tau = 1$ \Comment{\emph{Number of threads being tested}}\\
\Comment{\emph{Increase the number of threads until $\tau_{max}$, or slope of peak supported rate remains flat or drops}}
\While{$\tau < \tau_{max}$ and $\lambda_\omega \ge \lambda_\omega^{min}$} \label{alg:bm:thread-start}\\
\Comment{\emph{For each value of $\tau$, increase input rate in steps of $\Delta_\omega$ until trial is unstable, or max rate $\omega_{max}$ reached}}
\For{$\omega \gets 1$ \textbf{ to } $\omega_{max}$ \textbf{ step } $\Delta_\omega$} \label{alg:bm:rate-start}\\
\Comment{\emph{Run DSPS with task. Check if rate is supported. Get CPU and memory\%.}}
\State $\langle c, m, isStable \rangle \gets$ \textsc{RunTaskTrial($t,\tau,\omega$)} \label{alg:bm:trial}
\If{$isStable = false$} \Comment{\emph{If rate not supported, break}}
\State \textbf{break}\label{alg:bm:unstable}
\EndIf
\State $\mathcal{P}.put(\tau \rightarrow \langle \omega, c, m \rangle)$ \Comment{\emph{Add or update mapping from $\tau$ to peak rate, resource usage}}
\EndFor \label{alg:bm:rate-end}
\State $\lambda_\omega \gets $\textsc{Slope($\mathcal{P}, \omega$)} \Comment{\emph{Get slope of the last window of peak stable rates before $\omega$}}
\State $\tau \gets \tau + \Delta_\tau$ \Comment{\emph{Increment thread count}}\label{alg:bm:thread-end}
\EndWhile\\
\Return $\mathcal{P}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
For each combination of $\tau$ and $\omega$, we run a trial of the task (line~\ref{alg:bm:trial}) that creates a sequential 3-task DAG, with one source task that generates input tuples at the rate $\omega$, the task $t$ in the middle with its task threads set to $\tau$, and a sink task to collect statistics. The threads for task $t$ are assigned one independent resource slot on one VM, while on a second VM, the source and sink tasks run on one or more separate resource slots so that they are not resource-constrained.
This trial is run for a certain interval of time that goes past the ``warm-up'' period where initial scheduling and caching effects are overcome, and the DAG executes in a uniform manner. For example, in our experiments, the warm-up period was seen to be $\le 5~mins$. During the trial, a profiler collects statistics on the CPU and memory usage for that resource slot, and the source and sink collect the latency times for the tuples. Running the source and sink tasks on the same VM avoids clock skews. At the end of the trial, these statistics are summarized and returned to the algorithm.
In running these experiments, two key termination conditions need to be determined for automated execution and generation of the model. For a given number of threads, we need to decide when we should stop increasing the input rate given to the task. This is done by checking the latency times for the tuples processed in the trial. Under stable conditions, the latency times for tuples are tightly bounded and fall within a narrow margin beyond the warm-up period. However, if the task is resource constrained, then it will be unable to keep its processing rate up with the input rate, causing queuing of input tuples. As the queue size keeps increasing, there will be an exponential growth in the end-to-end latency for successive tuples. To decide if the task execution is stable, we calculate the slope $\lambda_L$ of the tuple latency values for the trial period past the warm-up and check if it is constant or less than a small positive value, $\lambda_L^{max}$. If $\lambda_L \le \lambda_L^{max}$, this execution configuration is stable, and if not, it is unstable. Once we reach an input rate that is not stable, we stop the trials for this number of threads for the task, and move to a higher number of threads (line~\ref{alg:bm:unstable}). In our experiments, using a tight slope value of $\lambda_L^{max} \approx 0.001$ was possible, and none of the experiments ran for over $12~mins$.
The second termination condition decides when to stop increasing the number of threads. Here, the expectation is that as the thread count increases there is an improvement, if any, in the peak rate supported until a point at which it either stabilizes or starts dropping. We maintain the peak rate supported for previous thread counts in the $\mathcal{P}$ hashmap object. As before, we take the slope $\lambda_\omega$ of the rates for the trailing window of thread counts to determine if the slope remains flat at $0$ or turns negative. Once the rate drops or remains flat for the window, we do not expect to see an improvement in performance by increasing the thread count, and we terminate the experiments. In our experiments, we set $\lambda_\omega^{min} \approx -0.001$.
\subsection{Performance Modeling Setup}
\label{sec:bm:setup}
We identify $5$ representative tasks, shown in Table~\ref{tbl:bm}, to profile and build performance models for. They also empirically motivate the need for fine-grained control over thread and resource allocation. These tasks have been chosen to be diverse, and representative of the categories of tasks often used in DSPS domains such as social media analytics, IoT and ETL pipelines~\cite{gedik:tpds:2014,wang:bigdatabench:2014}.
\begin{itemize}
\item \emph{Parse XML.} It parses an array of in-memory XML strings for every incoming tuple. Parsing is often required for initial tasks in the DAG that receive text or binary encoded messages from external sources, and need to translate them to a form used by downstream tasks in the DAG.
XML was used here due to its popularity, though other formats like JSON or Google Protocol Buffers are possible as well. Our XML parsing implementation uses the Java SAX parser that allows serial single-pass parsing even over large messages at a fast rate. Parsing XML is CPU intensive and requires high memory due to numerous string operations (Table~\ref{tbl:bm}).
\item \emph{Pi Computation.} This task numerically evaluates the approximate value of $\pi$ using an infinite series proposed by \textit{Viete}~\cite{dence:pi:1993}. Rather than running it non-deterministically until convergence, we evaluate the series for a fixed number of iterations, which we set to 15. This is a CPU intensive floating-point task, and is analogous to tasks that may perform statistical and predictive analytics, or computational modeling.
\item \emph{Batch File Write.} It is an accumulator task that resembles both window operations like aggregation or join, and disk I/O intensive tasks like archiving data. The implementation buffers a $100~byte$ string in-memory for every incoming tuple for a window size of $10,000$ tuples, and then writes the batched strings to a local file on an HDD attached to the VM. The number of disk operations per second is proportional to the input message rate.
\item \emph{Azure Blob Download.} Streaming applications may download metadata annotations, accumulated time-series data, or trained models from Cloud storage services to use in their execution. The Microsoft Azure Blob service stores unstructured data as files in the Cloud. This task downloads a file with a given name from the Azure Blob storage service using the native Azure Java SDK. In our implementation, a $2~MB$ file is downloaded and stored in-memory for each input tuple, making it both memory and network intensive.
\item \emph{Azure Table Query.} Some streaming applications need to access historical data stored in databases, say, for aggregation and comparison. The Microsoft Azure Table service is a NoSQL columnar database in the Cloud. Our task implementation queries a table containing $2,000,000$ records, each with $20$ columns and a record size of $\approx 200$~bytes~\cite{data:taxi}, using the native Azure Java SDK. The query looks up a single record by passing a randomly generated record ID corresponding to a unique row key in the table.
\end{itemize}
As can be seen, these tasks cover a wide range of operations. They span from text parsing and floating-point operations to both local and Cloud-based file and table operations. There is also diversity in these tasks with respect to the resources they consume, as shown in Table~\ref{tbl:bm}, be they memory, CPU, disk or network, and some rely on external services with their own Service Level Agreement (SLA).
\begin{table}[h]
\caption{Characteristics of representative tasks for which performance modeling is performed}
\label{tbl:bm}
\small
\centering
\begin{tabular}{ p{3.0cm}||p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm}}
\hline
\textbf{Task}& \textbf{CPU bound?}& \textbf{Memory bound?}&\textbf{N/W bound?}&\textbf{Disk I/O bound?}&\textbf{External Service?} \\ \hline \hline
Parse XML &\checkmark & \checkmark & &&\\ \hline
Pi Computation &\checkmark & & &&\\ \hline
Batched File Write & & & &\checkmark&\\ \hline
Azure Blob Download & & \checkmark&\checkmark &&\checkmark\\ \hline
Azure Table Query & & & \checkmark & &\checkmark\\ \hline
\end{tabular}
\end{table}
We wrap each of these tasks as a \emph{bolt} in the Apache Storm DSPS.
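To illustrate how such a task is wrapped as a bolt, a minimal sketch of the Pi Computation bolt is shown below, evaluating Viete's product for a fixed 15 iterations. This is a simplified sketch rather than our exact implementation, and it assumes the input tuple carries a single \texttt{message-id} field.
\begin{verbatim}
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Sketch of the Pi Computation task as a Storm bolt. For each input tuple it
// evaluates Viete's infinite product for pi, truncated to 15 terms, and emits
// exactly one output tuple (selectivity 1:1).
public class PiBolt extends BaseBasicBolt {
    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        double a = Math.sqrt(2.0);
        double prod = a / 2.0;
        for (int k = 1; k < 15; k++) {      // remaining 14 terms of the product
            a = Math.sqrt(2.0 + a);
            prod *= a / 2.0;
        }
        double pi = 2.0 / prod;
        // Pass the message id through, along with the computed value
        collector.emit(new Values(input.getStringByField("message-id"), pi));
    }
    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("message-id", "pi"));
    }
}
\end{verbatim}
The other tasks are wrapped in the same manner, with their respective task logic placed inside the execute method.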
We compose a \emph{topology} DAG consisting of 1 \emph{spout} that generates synthetic tuples with 1 field (\texttt{message-id}) at a given constant rate determined by that trial, 1 bolt with the task that is being modeled, and 1 \emph{sink} bolt that accepts the response from the predecessor. For each input tuple, the task bolt emits one output tuple after executing its application logic, keeping the selectivity $\sigma = 1:1$.
Apache Storm\footnote{Apache Storm v1.0.1, released on 6 May 2016} is deployed on a set of standard \emph{D type VMs} running Ubuntu 14.04 in Microsoft Azure's Infrastructure as a Service (IaaS) Cloud, in the Southeast Asia data center. Each VM has $2^{i-1}$ resource slots, where $i$ corresponds to the VM size, $D_i \in \{D_1,D_2,D_3,D_4\}$. Each slot has one Intel Xeon E5-2673~v3 core at $2.40$~GHz with hyper-threading disabled\footnote{Azure A-SERIES, D-SERIES and G-SERIES: Consistent Performances and Size Change Considerations, \url{https://goo.gl/0X6yT2}}, $3.5$~GB of memory and $50$~GB of SSD storage; for example, the D3 VM has $4$~cores, $14$~GB RAM and $200$~GB SSD. A separate $50$~GB HDD is present for I/O tasks like Batch File Write.
For the performance modeling, we deploy the spout and the sink on separate slots of a single D2 size VM, and the task bolt being evaluated on one D1 size VM. The spout and sink have $2$ threads each to ensure they are not the bottleneck, while the number of threads for the task bolt and the input rate to this topology are determined by Algorithm~\ref{alg:bm}. Each trial is run for $12~mins$, to be conservative. We measure the CPU\% and memory\% using the Linux \texttt{top} command, and record the peak stable rate supported for each of these task bolts for specific numbers of threads. The experiments are run multiple times, and the representative measurements are reported.
\subsection{Performance Modeling Results}
\begin{figure*}[t]
\centering
\subfloat[Parse XML]{
\includegraphics[width=0.30\textwidth]{figures/maxratexmlParse-crop.pdf}
\label{fig:bm:xml}
}
\subfloat[Pi Computation]{
\includegraphics[width=0.30\textwidth]{figures/maxratefloatpi-crop.pdf}
\label{fig:bm:pi}
}\\
\subfloat[Batched File Write]{
\includegraphics[width=0.30\textwidth]{figures/maxratebatchedFileWriter-crop.pdf}
\label{fig:bm:file}
}
\subfloat[Azure Blob Download]{
\includegraphics[width=0.30\textwidth]{figures/maxrateblob-crop.pdf}
\label{fig:bm:blob}
}
\subfloat[Azure Table Query]{
\includegraphics[width=0.33\textwidth]{figures/maxratetable-crop.pdf}
\label{fig:bm:table}
}
\caption{ \emph{Peak input rate supported} (primary Y Axis) on one resource slot for the specified task bolt, with an increase in the \emph{number of threads} (X Axis). The \emph{\% incremental CPU and Memory usage} for these numbers of threads at the peak rate is shown on the secondary Y Axis. }
\label{fig:bm}
\end{figure*}
The goal of these experiments is to collect the performance model data used by the scheduling algorithms. However, we also supplement it with some observations on the task behavior. Fig.~\ref{fig:bm} shows the performance model plots for the $5$ tasks we evaluate on a single resource slot. On the primary Y Axis (left), the plots show the peak stable rate supported (tuples/sec), and the corresponding CPU and memory utilization on the secondary Y Axis (right), as the number of threads for the task increases along the X Axis.
The CPU\% and memory\% are given as a fraction of the utilization above the base load on the machine when no application topology is running, but when the OS and the stream processing platform are running. So a $0\%$ CPU or memory usage in the plots indicates that only the base load is present, and $100\%$ indicates that the full available resources of the slot are used.
We see from Fig.~\ref{fig:bm:xml} that the \textit{Parse XML} task is able to support a peak input rate of about $310~tuples/sec$ with just 1 thread, and increasing the number of threads actually reduces the input throughput supported, down to about $255~tuples/sec$ with 7 threads. The CPU usage even for 1 thread is high at about $85\%$. Here, we surmise that since a single thread is able to utilize the CPU efficiently, increasing the threads causes additional overhead for context-switching and the performance deteriorates linearly. We also see that the Parse XML task uses about $35\%$ of memory due to Java's memory allocation for string manipulation, which is higher than for other tasks.
\textit{Pi Computation} is a floating-point heavy task and uses nearly $90\%$ CPU at the peak rate of about $105~tuples/sec$ using a single thread. However, unlike Parse XML, where the peak rate linearly reduces with an increase in threads, we see Pi manage to modestly increase the peak rate with 2 threads, to about $110~tuples/sec$, with a similar increase in CPU usage. However, beyond 2 threads, the performance drops and remains flat, with the CPU usage being flat as well. This behavior was consistent across multiple trials, and is likely due to the Intel Haswell architecture's execution pipeline characteristics\footnote{Intel's Haswell Architecture Analyzed: Building a New PC and a New Intel, Anand Lal Shimpi, Oct 2012, \url{http://www.anandtech.com/show/6355/intels-haswell-architecture/8}}. The memory usage is minimal at between $2-10\%$.
\textit{Batch File Write} is an aggregation task that is disk I/O bound. It supports a high peak rate of about $60,000~tuples/sec$ with 1 thread, which translates to writing $6~files/sec$, each $1~MB$ in size. This peak rate decreases with an increase in the number of threads, but is non-linear. There is a sharp drop in the peak rate to $45,000~tuples/sec$ with 3~threads, but this increases and stabilizes at $50,000~tuples/sec$ with more threads. The initial drop can be attributed to the disk I/O contention, hence the drop in CPU usage as well, but beyond that the thread contention may dominate, causing an increase in CPU usage even as the supported rate stays stable.
The \emph{Azure Blob} and \emph{Azure Table} tasks rely on an external Cloud service to support their execution. As such, the throughput of these tasks is dependent on the SLA offered for these services by the Cloud provider, in addition to the local resource constraints of the slot. We see the benefit of having multiple threads clearly in these cases. The peak rate supported by both increases gradually until a threshold, beyond which the peak rate flattens and drops. Their CPU and memory utilization follow a similar trend as well. Blob's rate grows from about $2~tuples/sec$ with 1 thread to $30~tuples/sec$ with 50 threads, while Table's increases from $3~tuples/sec$ to $60~tuples/sec$ when scaling from 1 to 60 threads.
This closely correlates with the SLA of the Blob service, which is $60~MB/sec$, and matches the $30~files/sec$ of $\approx 2~MB$ each that are cumulatively downloaded\footnote{\url{https://docs.microsoft.com/en-us/azure/storage/storage-scalability-targets}}. Both these tasks are also network intensive as they download data from the Cloud services.
\emph{Summary.} The first three tasks show a flat or decreasing peak rate performance with some deviations, but with differing CPU and memory resource behavior. The last two exhibit a bell-curve in their peak rates as the threads increase. These highlight the distinct characteristics of specific tasks (and even of the specific CPU architectures and Cloud services they wrap) that necessitate such performance models to support scheduling. Simple rules of thumb that assume static or linear scaling are inadequate and, as we see later, can cause performance degradation and resource wastage.
\section{Resource Allocation}
\label{sec:allocation}
Resource allocation determines the number of resource slots $\rho$ to be allocated for a DAG $\mathcal{G}:\langle \mathbb{T},\mathbb{E} \rangle$ for a given input rate $\Omega$, along with the number of threads $q_j$ required for each task $t_j \in \mathbb{T}$. In doing so, the allocation algorithm needs to be aware of the input rate to each task, which will inform it of the resource needs and data parallelism for that task. We can define this \emph{input rate} $\omega_j$ for a task $t_j$ based on the input rate to the DAG, the connectivity of the DAG, and the selectivity of each input stream to a task, using a recurrence relation as follows:
\[ \omega_{j}=\begin{cases} \Omega & \text{if } \not\exists e_{ij} \in \mathbb{E} \\ \sum\limits_{e_{ij} \in \mathbb{E}} \big( \omega_{i} \times \sigma_{ij} \big) & \text{otherwise} \end{cases} \]
In other words, if task $t_j$ is a source task without any incoming edges, its input rate is implicitly the rate to the DAG, $\Omega$. Otherwise, the input rate to a downstream task is the sum of the tuple rates on the out edges of the tasks $t_i$ immediately preceding it. This output rate is itself given by the product of those predecessor tasks' input rates $\omega_i$ and their selectivities $\sigma_{ij}$ on the out edge connecting them to task $t_j$. This recurrence relationship can be calculated in the topological order of the DAG starting from the source task(s). Let the procedure $\textsc{GetRate}(\mathcal{G}, t_j, \Omega)$ evaluate this for a given task $t_j$.
Next, the allocation algorithm will need to determine the threads and resources needed by each task $t_j$ to meet its input rate $\omega_j$. Algorithms can use prior knowledge on resource usage estimates for the task, which may be limited to the CPU\% and memory\% for a single thread of the task, irrespective of the input rate, or approaches like ours that use a more detailed performance model. Say the following functions are available as a result of the performance modeling algorithm, Alg.~\ref{alg:bm}, or some other means. $\mathcal{C}_i(q)$ and $\mathcal{M}_i(q)$ respectively return the incremental CPU\% and memory\% used by task $t_i$ when running on a single resource slot with $q$ threads. Further, let $\mathcal{I}_i(q)$ provide the peak input rate that is supported by the task $t_i$ on a single slot with $q$ threads.
Lastly, let $\mathcal{T}_i(\omega)$ be the inverse function of $\mathcal{I}_i(q)$ such that it gives the smallest number of threads $q$ adequate to satisfy the given input rate $\omega$ on a single resource slot. Since the $\omega$ values returned by $\mathcal{I}_i(q)$ for integer values of $q$ would be at coarse increments, $\mathcal{T}_i$ may offer an over-estimate depending on the granularity of $\Delta_\omega$ and $\Delta_\tau$ used in Alg.~\ref{alg:bm}.
Next, we describe two allocation algorithms -- a baseline which uses simple estimates of resource usage for tasks, and another we propose that leverages the more detailed performance model available for the tasks in the DAG.
\subsection{Linear Scaling Allocation (LSA)}
The Linear Scaling Allocation (LSA) approach uses a simplifying assumption that the behavior of a single thread of a task will linearly scale to additional threads of the task. This scaling assumption is made both for the input rate supported by the thread, and for the CPU\% and memory\% used by the thread. For example, the R-Storm~\cite{peng:middleware:2015} scheduler assumes this additive behavior of resource needs for a single task thread as more threads are added, though it leaves the selection of the number of threads to the user. Other DSPS schedulers make this assumption as well~\cite{schneider:ipdps:2009,cardellini:hpcs:2016}.
Algorithm~\ref{alg:lsa} shows the behavior of such a linear allocation strategy. It first estimates the input rate $\omega_i$ incident at each task $t_i$ using the $\textsc{GetRate}$ procedure discussed before. It then uses information on the peak rate $\bar{\omega_i}$ sustained by a single thread of a task $t_i$ running in one resource slot, and its CPU\% and memory\% at that rate, $\mathcal{C}_i(1)$ and $\mathcal{M}_i(1)$, as a baseline. Using this, it tries to incrementally add more threads until the input rate required, in multiples of the peak rate, is satisfied (line~\ref{alg:lsa:greater}). When the remaining input rate to be satisfied is below the peak rate (line~\ref{alg:lsa:smaller}), we linearly scale down the CPU and memory utilization, proportional to the required rate relative to the peak rate. The result of this LSA algorithm is the thread count $\tau_i$ per task $t_i \in \mathbb{T}$. In addition, the sum of the CPU and memory allocations for all tasks, rounded up to the nearest integer, gives the nominal lower bound on the resource slots $\rho$ required to support this DAG at the given rate.
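For concreteness, the sketch below shows one way the \textsc{GetRate} recurrence could be evaluated over the whole DAG in topological order; it is an illustrative sketch, and the class and variable names are our own rather than part of any DSPS API.
\begin{verbatim}
import java.util.*;

// Illustrative sketch: evaluate the input-rate recurrence for every task of a
// DAG in topological order, given the DAG input rate Omega and edge selectivities.
class RateEstimator {
    static class Edge {
        final int src, dst; final double selectivity;
        Edge(int src, int dst, double selectivity) {
            this.src = src; this.dst = dst; this.selectivity = selectivity;
        }
    }

    /** Tasks are numbered 0..n-1; returns omega_i for each task. */
    static double[] getRates(int n, List<Edge> edges, double Omega) {
        double[] omega = new double[n];
        int[] indegree = new int[n];
        for (Edge e : edges) indegree[e.dst]++;
        Deque<Integer> ready = new ArrayDeque<>();
        for (int t = 0; t < n; t++) {
            if (indegree[t] == 0) { omega[t] = Omega; ready.add(t); } // source tasks
        }
        while (!ready.isEmpty()) {                    // Kahn's topological traversal
            int i = ready.poll();
            for (Edge e : edges) {
                if (e.src != i) continue;
                omega[e.dst] += omega[i] * e.selectivity;  // omega_j += omega_i * sigma_ij
                if (--indegree[e.dst] == 0) ready.add(e.dst);
            }
        }
        return omega;
    }
}
\end{verbatim}
The per-task rates returned by such a routine are what LSA and MBA consume before deciding the thread counts.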
\begin{algorithm}[t]
\footnotesize
\caption{Linear Scaling Allocation (LSA)}\label{alg:lsa}
\begin{algorithmic}[1]
\Procedure{AllocateLSA} {$\mathcal{G} : \langle \mathbb {T},\mathbb {E} \rangle,~\Omega$}
\For{$t_{i} \in \mathbb {T}$} \Comment{\emph{For each task in DAG...}}
\State $\omega_{i} = \textsc{GetRate}(\mathcal{G}, t_i, \Omega)$ \Comment{\emph{Returns input rate on task $t_i$ if DAG input rate is $\Omega$}}
\State $\bar{\omega_{i}} = \mathcal{I}_i(1)$ \Comment{\emph{Peak rate supported by task $t_{i}$ with 1 thread }}
\State $\tau_{i} \gets 0$ \Comment{\emph{Allocated thread count for task $t_i$}}
\State$c_{i} \gets 0$ \Comment{\emph{Estimated CPU\% for $\tau_i$ threads of task $t_i$}}
\State$m_{i} \gets 0$ \Comment{\emph{Estimated Memory\% for $\tau_i$ threads of task $t_i$}}
\While{$\omega_{i} \ge \bar{\omega_{i}}$} \label{alg:lsa:greater}\\
\Comment{\emph{One additional thread added for $t_i$, with increase in cumulative rate supported and resources used}}
\State $\tau_i \gets \tau_i + 1$
\State $\omega_{i} \gets \omega_{i} - \bar{\omega_{i}}$
\State $c_{i} \gets c_{i} + \mathcal{C}_i(1)$
\State $m_{i} \gets m_{i} + \mathcal{M}_i(1)$
\EndWhile
\If{$\omega_{i} > 0$} \Comment{\emph{Trailing input rate below $\bar{\omega_{i}}$. Add thread but scale down the resources needed.}} \label{alg:lsa:smaller}
\State $\tau_i \gets \tau_i + 1$
\State $c_{i} \gets c_{i}+\mathcal{C}_i(1) \times \cfrac{\omega_i}{\bar{\omega_{i}}}$
\State $m_{i} \gets m_{i}+\mathcal{M}_i(1) \times \cfrac{\omega_i}{\bar{\omega_{i}}}$
\State $\omega_{i} \gets 0 $
\EndIf
\EndFor\\
\Return{$\langle \tau_{i}, c_{i}, m_{i} \rangle~~\forall t_{i}\in \mathbb{T}$} \Comment{\emph{Return number of threads, CPU\% and Memory\% allocated to each task}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Model-based Allocation (MBA)}
While the LSA approach is simple and appears intuitive, it suffers from two key problems that make it unsuitable for many tasks.
\emph{First, the assumption that the input rate supported will linearly increase with the number of threads is not valid.} Based on our observations of the performance models from Fig.~\ref{fig:bm}, we can see that for some tasks like Azure Blob and Table, there is a loose correlation between the number of threads and the input rate supported. But even this evens out at a certain number of threads. Others like Parse XML, Pi Computation and Batch File Write see their peak input rate supported remain flat or decrease as the threads increase \emph{on a single resource slot} due to contention. One could expect a linear behavior if the threads run on different slots or VMs, but that would increase the slots required (and the corresponding cost).
\emph{Second, the assumption that the resource usage linearly scales with the number of threads, relative to the resources for 1 thread, is incorrect.} This again follows from the performance models, and in fact the CPU and memory usage behaviors themselves vary for a task. For example, in Fig.~\ref{fig:bm:pi} for Pi, the CPU usage remains flat while the memory usage increases even as the rate supported decreases with the number of threads. For the Azure Table query in Fig.~\ref{fig:bm:table}, the CPU and memory increase with the threads but with very different slopes.
Our Model-based Allocation algorithm, shown in Algorithm~\ref{alg:mba}, avoids such inaccurate assumptions and instead uses the performance models measured for the tasks to drive its thread and resource allocation.
Here, the intuition is to select the sweet spot for the number of threads such that the peak rate $\widehat{\omega_i}$ among all thread counts (for which the model is available) is the highest for task $t_i$ (lines~\ref{alg:mba:greater:start}--\ref{alg:mba:greater:end}). This ensures that we maximize the input rate that we can support from a single resource slot for that task. At the same time, when we allocate these many threads to saturate a slot, we also disregard the actual CPU\% and memory\% and instead treat the whole slot as being allocated ($100\%$ usage). This is because that particular task cannot make use of additional CPU or memory resources available in that slot due to resource contention, and we do not wish additional threads on this slot to exacerbate this problem. When the residual input rate to be supported for a task falls below this maximum peak rate (line~\ref{alg:mba:smaller}), we instead select the smallest number of threads that is adequate to support this rate, and use the corresponding CPU\% and memory\%. If a single thread is adequate (line~\ref{alg:mba:1}), just as for LSA, we scale down the resources needed in proportion to the residual input rate relative to the peak rate using 1 thread.
\begin{algorithm}[t]
\footnotesize
\caption{Model Based Allocation (MBA)}\label{alg:mba}
\begin{algorithmic}[1]
\Procedure{AllocateMBA} {$\mathcal{G} : \langle \mathbb {T},\mathbb {E} \rangle,~\Omega$}
\For{$t_{i} \in \mathbb {T}$} \Comment{\emph{For each task in DAG...}}
\State $\omega_{i} = \textsc{GetRate}(\mathcal{G}, t_i, \Omega)$ \Comment{\emph{Returns input rate on task $t_i$ if DAG input rate is $\Omega$}}
\State $\widehat{\omega_{i}} = \max_{j}\big\{\mathcal{I}_i(j)\big\}$ \Comment{\emph{Maximum of peak rates supported by task $t_{i}$ with any number of threads }}
\State $\widehat{\tau_i} = \mathcal{T}_i(\widehat{\omega_{i}})$ \Comment{\emph{Number of threads needed to support rate $\widehat{\omega_{i}}$ for task $t_{i}$}}
\State $\tau_{i} \gets 0$ \Comment{\emph{Allocated thread count for task $t_i$}}
\State$c_{i} \gets 0$ \Comment{\emph{Estimated CPU\% for $\tau_i$ threads of task $t_i$}}
\State$m_{i} \gets 0$ \Comment{\emph{Estimated Memory\% for $\tau_i$ threads of task $t_i$}}
\While{$\omega_{i} \ge \widehat{\omega_{i}}$} \label{alg:mba:greater:start}\\
\Comment{\emph{Add threads for $t_i$ corresponding to maximum peak rate. Increase cumulative rate and resources.}}
\State $\tau_i \gets \tau_i + \widehat{\tau_i}$
\State $\omega_{i} \gets \omega_{i} - \widehat{\omega_{i}}$
\State $c_{i} \gets c_{i} + 1.00$ \Comment{\emph{Assign $100\%$ of resource slot to these threads}}
\State $m_{i} \gets m_{i} + 1.00$
\EndWhile \label{alg:mba:greater:end}
\If{$\omega_{i} > 0$} \Comment{\emph{Trailing input rate below $\widehat{\omega_{i}}$ to be processed for $t_i$}} \label{alg:mba:smaller}
\State $\tau_i' \gets \mathcal{T}_i(\omega_{i})$
\State $\tau_i \gets \tau_{i} + \tau_i'$
\If{$\tau_i' > 1$}
\State $c_{i} \gets c_{i}+\mathcal{C}_i(\tau_i')$
\State $m_{i} \gets m_{i}+\mathcal{M}_i(\tau_i')$
\Else \Comment{\emph{One thread adequate.
Scale down resources needed}}\label{alg:mba:1}
\State $c_{i} \gets c_{i}+\mathcal{C}_i(1) \times \cfrac{\omega_i}{\mathcal{I}_i(1)}$
\State $m_{i} \gets m_{i}+\mathcal{M}_i(1) \times \cfrac{\omega_i}{\mathcal{I}_i(1)}$
\EndIf
\State $\omega_{i} \gets 0 $
\EndIf
\EndFor\\
\Return{ $\langle \tau_{i},c_{i},m_{i} \rangle~~\forall t_{i}\in \mathbb {T}$} \Comment{\emph{Return number of threads, CPU\% and Memory\% allocated to each task}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
The result of running MBA is also a list of thread counts $\tau_i$ per task in the DAG, which will be used by the mapping algorithm. It also gives the CPU\% and memory\% per task which, as before, helps estimate the slots for the DAG:
\[ \rho = \max\Big(\Big\lceil\sum_{t_i \in \mathbb{T}}(c_i) \Big\rceil, \Big\lceil \sum_{t_i \in \mathbb{T}}(m_i) \Big\rceil \Big) \]
\section{Resource Mapping}
\label{sec:mapping}
As we saw, the allocation algorithm returns two pieces of information: $\tau_i$, the number of threads per task, and $\rho$, the number of resource slots allocated. The goal of resource mapping is to assign these task threads, $r_i^k \in R$ where $|r_i^k| = \tau_i$, to specific resource slots to meet the input rate requirements for the DAG.
\subsection{Resource Acquisition}
\label{sec:acquire}
The first step is to identify and acquire an adequate number and sizes of VMs that together offer the estimated number of slots. This is straightforward. Most IaaS Cloud providers offer on-demand VM sizes with slots that are in powers of $2$, and with pricing that is a multiple of the number of slots. For example, the Microsoft Azure D-series VMs in the Southeast Asia data center have $1$, $2$, $4$ and $8$ cores for sizes D1--D4, with $3.5$~GB RAM per core, and cost $\$0.098$, $\$0.196$, $\$0.392$ and $\$0.784$ per hour, respectively\footnote{Microsoft Azure Linux Virtual Machines Pricing, \url{https://azure.microsoft.com/en-in/pricing/details/virtual-machines/linux/}}. Amazon AWS and Google Compute Engine IaaS Cloud services have similar VM sizes and pricing as well. So the total price for a given slot-count $\rho$ does not change based on the mix of VM sizes used, and one can use a simple packing algorithm to select VMs $v_j \in V$ with sizes such that $\sum_{v_j \in V}{p_j} = \rho$, where $p_j$ is the number of slots per VM $v_j$. At the same time, having more VMs means higher network transfer latency, and the end-to-end latency for the DAG will increase. Hence, minimizing the number of distinct VMs, rather than using many small VMs, is useful as well.
One approach that we take to balance pricing with communication latency is to acquire as many VMs, say $n$, of the largest available size, with $\widehat{p}$ slots each, such that they cumulatively have $(n \times \widehat{p}) \le \rho$ slots. Then, for the remaining slots, we acquire the smallest possible VM whose number of slots is $\ge (\rho - n \times \widehat{p})$. This may include more slots than required, but the excess is bounded by $(2^{\widehat{p} - 1}-1)$ if slots per VM are in powers of $2$, as is common. Other approaches are possible as well, based on the trade-off between network latency costs and slot pricing. For the set of VMs $v_j \in V$ thus acquired, let $s_j^l \in S$ be the set of slots in these VMs, where $|s_j^l| = p_j$ and $p_j \le \widehat{p}$, such that $\Big(\sum_{v_j \in V} p_j\Big) \ge \rho$.
The second step, which we emphasize in this article, is to map the threads for each task to one of the slots we have acquired, and determine the mapping function $\mathcal{M} : R \rightarrow S$.
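As an illustration, if the allocation estimates sum to, say, $\sum c_i = 3.2$ and $\sum m_i = 2.1$, then $\rho = \max(\lceil 3.2 \rceil, \lceil 2.1 \rceil) = 4$ slots, which the heuristic above would satisfy with a single 4-slot VM. A minimal sketch of this acquisition heuristic is shown below, assuming VM sizes come in powers of two slots; the names are illustrative.
\begin{verbatim}
import java.util.*;

// Illustrative sketch of the VM acquisition heuristic: use as many of the
// largest VMs as fully fit within rho slots, then one small VM for the rest.
class VmAcquirer {
    /** rho = slots needed; vmSizes = slots per offered VM size, e.g. {1, 2, 4, 8}. */
    static List<Integer> acquire(int rho, int[] vmSizes) {
        int[] sizes = vmSizes.clone();
        Arrays.sort(sizes);
        int largest = sizes[sizes.length - 1];
        List<Integer> acquired = new ArrayList<>();
        int n = rho / largest;                       // largest VMs that fit fully
        for (int i = 0; i < n; i++) acquired.add(largest);
        int remainder = rho - n * largest;
        if (remainder > 0) {
            for (int size : sizes) {                 // smallest size covering the remainder
                if (size >= remainder) { acquired.add(size); break; }
            }
        }
        return acquired;                             // slot counts of the VMs to acquire
    }
}
\end{verbatim}
The mapping algorithms discussed next then place the allocated task threads onto the slots of these acquired VMs.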
Next, we discuss two mapping approaches available in the literature, which we term DSM and RSM and use for comparison, and propose our novel mapping, SAM, as the third. While DSM is not ``resource aware'', i.e., it does not consider the resources required by each thread in performing the mapping, RSM and our SAM are resource aware, and use the output from the performance models developed earlier.

\begin{figure}[t] \centering \includegraphics[width=0.67\textwidth]{figures/mapping.pdf} \caption{Mapping of a sample DAG to VMs and resource slots using DSM, RSM and SAM } \label{fig:mapping} \end{figure}

\subsection{Default Storm Mapping (DSM)} DSM is the default mapping algorithm used in Apache Storm, and uses a simple round-robin algorithm to assign the threads to slots. All threads are assumed to have a similar resource requirement, and all slots are assumed to have homogeneous capacity. Under such an assumption, this na\"{i}ve algorithm will balance the number of threads across all slots. Algorithm~\ref{alg:dsm} describes its behavior. The task threads and resource slots available are provided as sets to the algorithm. The slots are considered as a list in some random order. The algorithm iterates through the set of threads in arbitrary order. For each thread, it picks the next slot in the list and repeats this in a round-robin fashion for each pending thread (line \ref{alg:dsm:rr}), wrapping around the slot list when its end is reached.

Fig.~\ref{fig:mapping} illustrates the different mapping algorithms for a sample DAG with four tasks, \texttt{\underline{B}lue, \underline{O}range, \underline{Y}ellow} and \texttt{\underline{G}reen}, and say $5, 4, 3$ and $5$ threads allocated to them, respectively, by some allocation algorithm. Let the resources estimated for them be $6$ slots that are acquired across three VMs with $2$ slots each. Given this, the DSM algorithm first gets the list of threads and slots in some random order. For this example, let them be ordered as $B^1,..., B^5, O^1,..., Y^1,..., G^1,..., G^5$, and $s_1^1, s_1^2, s_2^1, ..., s_3^2$. First, the five Blue threads are distributed sequentially across the first $5$ slots, $s_1^1 - s_3^1$. Then, the distribution of the Orange threads resumes with the sixth slot $s_3^2$, and wraps around to the first slot to end at $s_2^1$. The three Yellow threads map to slots $s_2^2 - s_3^2$, and lastly, the five Green threads wrap around and go to the first $5$ slots. As we see, DSM distributes the threads evenly across all the acquired slots irrespective of the resources available on them or required by the threads, with only the trailing slots having fewer threads. This can also help distribute threads of the same task to different slots to avoid them contending for the same type of resources. However, this is optimistic: as we have seen from the performance models, resource usage varies sharply not just across tasks but also with the number of threads of a task present in a slot.
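For reference, the round-robin assignment performed by DSM (Algorithm~\ref{alg:dsm}) can be expressed as a few lines of Java. This is a sketch only; the thread and slot identifiers are placeholders that follow the example in Fig.~\ref{fig:mapping}.

\begin{verbatim}
import java.util.*;

/** Sketch of DSM's round-robin mapping of task threads to resource slots. */
public class RoundRobinMapping {
    /** Maps each thread ID to a slot ID, cycling through the slot list. */
    static Map<String, String> mapDSM(List<String> threads, List<String> slots) {
        Map<String, String> mapping = new LinkedHashMap<>();
        int n = 0;
        for (String thread : threads) {
            mapping.put(thread, slots.get(n % slots.size()));  // wrap around the slot list
            n++;
        }
        return mapping;
    }

    public static void main(String[] args) {
        List<String> threads = Arrays.asList("B1", "B2", "B3", "B4", "B5", "O1", "O2");
        List<String> slots = Arrays.asList("s1-1", "s1-2", "s2-1", "s2-2", "s3-1", "s3-2");
        mapDSM(threads, slots).forEach((t, s) -> System.out.println(t + " -> " + s));
    }
}
\end{verbatim}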
\begin{algorithm}[t] \footnotesize \caption{Default Storm Mapping (DSM)}\label{alg:dsm} \begin{algorithmic}[1] \Procedure{MapDSM} {$R,~S$}\Comment{\emph{Map each task thread in set $R$ to a slot in set $S$}} \State $\mathcal{M} \gets new~Map(~)$ \State $S'[~] \gets \textsc{SetToList}(S)$ \Comment{\emph{Returns the slots in the set as a list, in some arbitrary order}} \State $n \gets 1$ \For{\textbf{each} $r \in R$} \Comment{\emph{Round Robin mapping of threads to slots}} \label{alg:dsm:rr} \State $m \gets n~\textbf{mod}~|S|$ \State $s = S'[m]$ \State $\mathcal{M}.put(r \rightarrow ~s)$ \Comment{\emph{Assign $n^{th}$ task thread to $m^{th}$ resource slot}} \State $n \gets n+1$ \EndFor\\ \Return{$\mathcal{M}$} \EndProcedure \end{algorithmic} \end{algorithm} \subsection{R-Storm Mapping (RSM)} The R-Storm Mapping (RSM) algorithm~\cite{peng:middleware:2015} was proposed earlier as a resource-aware scheduling approach to address the deficiencies of the default Storm scheduler. It has subsequently been included as an alternative scheduler for Apache Storm, as of v1.0.1. RSM offers three features that improve upon DSM. First, in contrast to DSM that balances the thread count across all available slots, RSM instead maximizes the resource usage in a slot and thus minimizes the number of slots required for the threads of the DAG. For this, it makes use of the resource usage for \emph{single threads} that we collect from the performance model ($\bar{c}_i, \bar{m}_i$), and the resource capacities of slots and VMs. A second side-effect of this is that it prevents slots from being over-allocated, which can cause an unstable DAG mapping at runtime. For e.g., DSM could place threads in a slot such that their total CPU\% or memory\% is greater than $100\%$. This is avoided by RSM. Lastly, RSM is aware of the network topology of the VMs, and it places the threads on slots such that the communication latency between adjacent tasks in the dataflow graph is reduced. At the heart of the RSM algorithm is a \textit{distance function} based on the available and required resources, and a network latency measure. This Euclidean distance between a given task thread $r_i^k \in R$ and a VM $v_j \in V$ is defined as: \[ d = w_M \times (M_j - \bar{m_i})^2 + w_C \times (C_j - \bar{c_i})^2 + w_N \times \textsc{NWDist}(\widehat{v},v_j) \] where $\bar{c_i} = \mathcal{C}_i(1)$ and $\bar{m_i} = \mathcal{M}_i(1)$ are the incremental CPU\% and memory\% required by a single thread of the task $t_i$ on one slot, and $C_j$ and $M_j$ are the CPU\% and memory\% not yet mapped across all slots of VM $v_j$. A network latency multiplier, from a \emph{reference VM}, $\widehat{v}$, to the candidate VM $v_j$, is also defined using the \textsc{NWDist} function. This reference VM is the last VM on which a task thread was mapped, and the network latency multiplier is set to $0.0$ if the candidate VM is the reference VM, $0.5$ if the VM is in the same rack as the reference, and $1.0$ if on a different rack. Lastly, the weights $w_C, w_M$ and $w_N$ are coefficients to tune this distance function as appropriated. Given this, the RSM Algorithm, given in Alg.~\ref{alg:rsm}, works as follows. It initializes the CPU\% and memory\% resources available for the candidate VMs to $100\%$ of their number of slots, the memory available per slot to $100\%$, and the number of threads to be mapped per task (lines~\ref{alg:rsm:init:start}--\ref{alg:rsm:init:end}). The initial reference VM is set to some VM, say $v_1 \in V$. 
Then, it performs one sweep where one thread of each task, in topological order rooted at the source task(s), is mapped to a slot (lines~\ref{alg:rsm:map:start}--\ref{alg:rsm:map:end}). This mapping in the order of BFS traversal increases the chance that threads of adjacent tasks in the DAG are placed on the same VM to reduce network latency. During the sweep, we first check if the task has any pending threads to map, and if so, we test the VMs in the ascending order of their distance function, returned by the function \textsc{GetSortedVMs}, to see if they have adequate resources for that task thread (lines~\ref{alg:rsm:dist}--\ref{alg:rsm:map:loop}). Two checks are performed: the first to see if the VM has adequate CPU\% available for the thread, and the second to see if any slot in that VM has enough memory\% to accommodate that thread. This differentiated check is because in Storm, the memory allocation per slot is tightly bound, while the CPU\% available across slots is assumed to be usable by any thread placed in that VM. If no available slot meets the resource needs of a thread, then RSM fails. As we show later, this is not uncommon, depending on the allocation algorithm. If a valid slot, $s_j'$, is found, the task thread is mapped to this slot, and the thread count and resource availability are updated (lines~\ref{alg:rsm:post:start}--\ref{alg:rsm:post:end}). The reference VM is also set to the current VM having that slot. Then the next threads in the sweep are mapped, and this process is repeated until all task threads are mapped.

Fig.~\ref{fig:mapping} shows the RSM algorithm applied to the sample DAG. Mapping of the threads to slots is done in BFS ordering of tasks, $B, O, Y$ and $G$. For each thread of the task in this order, a slot on the VM with the minimum distance and available resources is chosen. Say in the first sweep, the threads $B^1, O^1$ and $Y^1$ are mapped to the same slot $s_1^1$, and the next thread $G^1$ is mapped to a new slot $s_1^2$ due to a resource constraint on $s_1^1$ for this thread. The new slot $s_1^2$ is picked on the same VM as it has the least distance among all VMs. In the second sweep, threads $B^2, O^2, Y^2$ and $G^2$ are mapped to slots $s_1^2$ and $s_2^1$, and likewise for the third sweep. In the fourth sweep, there are no threads for the Yellow task pending. Also, we see that thread $B^4$ is not mapped to slot $s_2^2$ due to lack of resources, instead going to $s_3^1$. However, slot $s_2^2$ does have resources for a subsequent thread $O^4$, and the distance to $s_2^2$ is less than to $s_3^1$.
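The distance computation that drives this VM ordering in \textsc{GetSortedVMs} can be sketched as below (Java). The weights and the sample values are placeholders; only the structure follows the distance function defined earlier.

\begin{verbatim}
/** Sketch of RSM's resource- and network-aware distance between a task thread and a VM. */
public class RsmDistance {
    // Tunable coefficients for the memory, CPU and network terms (illustrative values).
    static final double W_M = 1.0, W_C = 1.0, W_N = 1.0;

    /** Network multiplier: 0.0 if same VM as the reference, 0.5 if same rack, 1.0 otherwise. */
    static double nwDist(String refVm, String candVm, String refRack, String candRack) {
        if (refVm.equals(candVm)) return 0.0;
        return refRack.equals(candRack) ? 0.5 : 1.0;
    }

    /**
     * Distance of placing one thread with single-thread needs (cBar, mBar) on a VM
     * whose unmapped capacities are (availCpu, availMem), given a network multiplier nw.
     */
    static double distance(double availCpu, double availMem, double cBar, double mBar, double nw) {
        return W_M * Math.pow(availMem - mBar, 2)
             + W_C * Math.pow(availCpu - cBar, 2)
             + W_N * nw;
    }

    public static void main(String[] args) {
        double nw = nwDist("vm1", "vm2", "rackA", "rackA");        // 0.5: different VM, same rack
        System.out.println(distance(2.0, 2.0, 0.10, 0.24, nw));    // candidate VMs are sorted ascending
    }
}
\end{verbatim}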
Thus, RSM tries to perform a network-aware best-fit packing to minimize the number of slots used.

\begin{algorithm}[t] \footnotesize \caption{R-Storm Mapping (RSM)}\label{alg:rsm} \begin{algorithmic}[1]
\Procedure{MapRSM} {$\mathcal{G}:\langle \mathbb {T},\mathbb {E} \rangle,~R,~V,~S$}
\State $C_j = p_j \times 1.00,~~~ M_j = p_j \times 1.00,~~\forall v_j \in V$ \Comment{\emph{Initialize available CPU\%, Memory\% for all VMs}} \label{alg:rsm:init:start}
\State $M_j^l = 1.00,~~\forall s_j^l \in S$ \Comment{\emph{Initialize available Memory\% for all slots of VMs}}
\State $\tau_i = |r_i^k|~~\forall r_i^k \in R$ \Comment{\emph{Initialize number of task threads to map for task $t_i$}} \label{alg:rsm:init:end}
\State $\mathcal{M} \gets new~Map(~)$ \Comment{\emph{Initialize mapping function}}
\State $\widehat{v} \gets v_1$ \Comment {\emph{Initialise the reference VM to the first VM in set}}
\While{$\sum_{t_i \in \mathbb{T}}\tau_i > 0$} \Comment{\emph{Repeat while tasks have unmapped threads}}
\For{\textbf{each} $t_{i} \in \textsc{TasksTopoOrder}(\mathcal{G})$} \label{alg:rsm:map:start}
\If{$\tau_i \neq 0$}
\State \hskip-3em \Comment{\emph{Get list of VMs sorted by distance, based on their available resources for task $t_i$}}
\State $V'[~] \gets \textsc{GetSortedVMs}(V, t_i, \widehat{v})$ \label{alg:rsm:dist}
\State $s_j' \gets \varnothing$
\For{$v_j' \in V'[~] ~\&~ s_j' == \varnothing$}
\State $s_j' \gets s_j^l \in S ~|~ C_j \geq \bar{c}_i ~\&~ M_j^l \geq \bar{m}_i$ \Comment{\emph{Does VM have CPU\%, some slot in it have mem\% for 1 thread?}}
\EndFor \label{alg:rsm:map:loop}
\If{$s_j' == \varnothing$} \Return{``Error: Insufficient resources for task $t_i$''}
\EndIf
\State $r_i' \gets r_i^k \in R ~|~ \not\exists \mathcal{M}(r_i^k)$ \Comment{\emph{Pick one unmapped thread for task $t_i$}}
\State $\mathcal{M}.put(r_i' \rightarrow s_j')$ \Comment{\emph{Assign the thread to the selected slot with available resources}} \label{alg:rsm:map}
\State $C_j \gets (C_j - \bar{c}_i),~~M_j \gets (M_j - \bar{m}_i),~~M_j' \gets (M_j' - \bar{m}_i)$ \Comment {\emph{Reduce available resources by 1 thread}} \label{alg:rsm:post:start}
\State $\tau_i \gets \tau_i-1$ \label{alg:rsm:post:end}
\State $\widehat{v} \gets v_j'$ \Comment{\emph{Update reference node to be the current mapped VM}}
\EndIf
\EndFor \label{alg:rsm:map:end}
\EndWhile\\
\Return{$\mathcal{M}$}
\EndProcedure %
\end{algorithmic} \end{algorithm}

\subsection{Slot Aware Mapping (SAM)} While the RSM algorithm is resource aware, it linearly extrapolates the resource usage for multiple threads of a task in a VM or slot based on the behavior of a single thread. As we saw earlier in \S~\ref{sec:bm}, this assumption does not hold, and as we show later in \S~\ref{sec:results}, it causes inefficient mapping, non-deterministic performance, and over-allocation of resources. Our Slot Aware Mapping (SAM) addresses these deficiencies by fully utilizing the performance model and the strategy used by our model based allocation. The key idea is to map a \emph{bundle of threads} of the same task exclusively to a single slot such that the stream throughput is maximized for that task on that slot based on its performance model, and the interference from threads of other tasks on that slot is reduced. In Algorithm~\ref{alg:sam}, as for RSM, we initialize the resources for the VMs and slots. Further, in addition to the total slots $\rho$ required by the DAG, we also have the quantity of CPU\% and memory\% required by all the threads of each task available as $c_i$ and $m_i$.
Recollect that the MBA algorithm returns this information based on the performance model. As for RSM, we iterate through the tasks in topological order (line~\ref{alg:sam:topo}). However, rather than map one thread of each task, we first check if the number of pending threads forms a \emph{full bundle}, which we define as $\widehat{\tau_i}$ threads, the number of threads needed to support the peak rate $\widehat{\omega_i}$ for the task on a single slot (line~\ref{alg:sam:full}). If so, we select an empty slot in the last mapped VM, or if none exist, in its neighboring one (line~\ref{alg:sam:fullslot}). We pick $\widehat{\tau_i}$ unmapped threads for this task and assign this whole bundle of threads to this exclusive slot, i.e., $100\%$ of its CPU and memory (line~\ref{alg:sam:full:map}). The resource needs of the task are reduced concomitantly, and this slot is fully mapped. It is possible that the task has a \emph{partial bundle} of unmapped threads, having fewer than $\widehat{\tau_i}$ threads (line~\ref{alg:sam:partial}). In this case, we find the best-fit slot as the one whose sum of available CPU\% and memory\% is the smallest, while being adequate for the CPU\% and memory\% required by this partial bundle (line~\ref{alg:sam:bfslot}). We assign this partial bundle of threads to this slot and reduce the resources available for this slot by $c_i$ and $m_i$. At this point, all threads of this task will be assigned (line~\ref{alg:sam:partial:done}). Notice that slots co-locate threads from different tasks only for the last partial bundle of each task. So the number of slots with mixed thread types is bounded by $|\mathbb{T}|$. Since the performance models offer information only on the behavior of threads of the same type on a slot, this limits the interference between threads of different types, which is not captured by the model. In practice, as we show in the experiments, most slots have threads of a single task type. As a result, SAM has a more predictable resource usage and behavior for the mapped DAG. It is possible that even in SAM, the resources allocated may not be adequate for the mapping (lines~\ref{alg:sam:full:na}, \ref{alg:sam:partial:na}), though the chances of this happening are smaller than for RSM since SAM uses a strategy similar to the allocation algorithm, MBA. This is a side-effect of the binning, when the resources available in partly used slots are not adequate to fully fit a partial bundle. Also, while we do not explicitly consider network distance, unlike RSM, the mapping of tasks in topological order combined with picking a bundle at a time achieves a degree of network proximity between threads of adjacent tasks in the DAG.
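The grouping of a task's threads into full and partial bundles can be sketched as below (Java). The value of $\widehat{\tau_i}$ comes from the performance model; the numbers in the example follow the Blue task in Fig.~\ref{fig:mapping} and are illustrative.

\begin{verbatim}
import java.util.*;

/** Sketch: split a task's allocated threads into full bundles of size tauHat plus one partial bundle. */
public class BundleSplit {
    /** Returns bundle sizes; full bundles go to exclusive slots, the partial one to a best-fit slot. */
    static List<Integer> bundles(int totalThreads, int tauHat) {
        List<Integer> result = new ArrayList<>();
        int remaining = totalThreads;
        while (remaining >= tauHat) {               // full bundle: peak-rate thread count for one slot
            result.add(tauHat);
            remaining -= tauHat;
        }
        if (remaining > 0) result.add(remaining);   // partial bundle, if any threads are left over
        return result;
    }

    public static void main(String[] args) {
        // E.g., the Blue task in the mapping example has 5 threads and a full bundle size of 2
        System.out.println(bundles(5, 2));          // prints [2, 2, 1]
    }
}
\end{verbatim}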
\begin{algorithm}[H] \footnotesize \caption{Slot Aware Mapping (SAM)}\label{alg:sam} \begin{algorithmic}[1] \Procedure{MapSAM} {$\mathcal{G}:\langle \mathbb {T},\mathbb {E} \rangle,~R,~V,~S$} \Comment{\emph{$c_{i}$ and $m_{i}$ are CPU\% and memory\% required by task $t_{i}$ from MBA}} \State $\tau_i = |r_i^k|~~\forall r_i^k \in R$ \Comment{\emph{Initialize number of task threads to map for task $t_i$}} \State $\widehat{\tau_i} = \mathcal{T}(\widehat{\omega_{i}})$ \Comment{\emph{Number of threads needed to support peak rate $\widehat{\omega_{i}}$ for task $t_{i}$ on 1 slot}} \State $C_j^l = 1.00,~~~ M_j^l = 1.00,~~\forall s_j^l \in S$ \Comment{\emph{Initialize available CPU\%, Memory\% for all slots of VMs}} \State $\mathcal{M} \gets new~Map(~)$ \Comment{\emph{Initialize mapping function}} \While{$\sum_{t_i \in \mathbb{T}}\tau_i > 0$} \Comment{\emph{Repeat while tasks have unmapped threads}} \For{\textbf{each} $t_{i} \in \textsc{TasksTopoOrder}(\mathcal{G})$} \label{alg:sam:topo} \If{$\tau_i \geq \widehat{\tau_i}$} \Comment {\emph{At least 1 full bundle of threads remains for task $t_i$}} \label{alg:sam:full} \State $s_j' \gets \textsc{GetNextFullSlot(V)}$ \Comment{\emph{Returns next full slot in current or next VM}} \label{alg:sam:fullslot} \If{$s_j' == \varnothing$} \Return{``Error: Insufficient resources for task $t_i$''} \label{alg:sam:full:na} \EndIf \State $r_i'[~] \gets \{ r_i^k \} \in R ~|~ \not\exists \mathcal{M}(r_i^k),~~|r_i'| = \widehat{\tau_i}$ \Comment{\emph{Pick unmapped full bundle of $\widehat{\tau_i}$ threads for task $t_i$}} \State $\mathcal{M}.putAll(r_i'[~] \rightarrow s_j')$ \Comment{\emph{Assign threads in bundle to the selected slot}} \label{alg:sam:full:map} \State $\tau_i \gets \tau_i - \widehat{\tau_i},~~~c_{i} \gets c_{i} - 1.00,~~~ m_i \gets m_i - 1.00$ \Comment{\emph{Reduce resource needs for bundle}} \State $C_j' \gets 0.00,~~~M_j' \gets 0.00$ \Comment {\emph{Used all resources in slot}} \label{alg:sam:full:end} \Else \If{$\tau_i > 0$} \Comment {\emph{Partial bundle of threads remains for task $t_i$}} \label{alg:sam:partial} \State $s_j' \gets \textsc{GetBestFitSlot}(V, c_i, m_i)$ \Comment{\emph{Find smallest slot with sufficient resources for partial bundle}} \label{alg:sam:bfslot} \If{$s_j' == \varnothing$} \Return{``Error: Insufficient resources for task $t_i$''} \label{alg:sam:partial:na} \EndIf \State $r_i'[~] \gets \{ r_i^k \} \in R ~|~ \not\exists \mathcal{M}(r_i^k)$ \Comment{\emph{Pick all remaining threads for task $t_i$}} \State $\mathcal{M}.putAll(r_i'[~] \rightarrow s_j')$ \State $C_j' \gets C_j' - c_i,~~~M_j' \gets M_j' - m_i$ \Comment {\emph{Reduce resources in slot by partial bundle}} \State $\tau_i \gets 0,~~~c_{i} \gets 0.00,~~~m_i \gets 0.00$ \Comment{\emph{All done for this task}} \label{alg:sam:partial:done} \EndIf \EndIf \EndFor \EndWhile\\ \Return{$\mathcal{M}$} \EndProcedure \end{algorithmic} \end{algorithm} Fig.~\ref{fig:mapping} shows the SAM algorithm applied to the sample DAG. Say a full bundle of the four tasks, $B, O, Y$ and $G$, have 2, 3, 3 and 4 threads, respectively. We iteratively consider each task in the BFS order of the DAG, similar to RSM, and attempt to assign a full bundle from their remaining threads to an exclusive slot. For e.g., in the first sweep, the full bundles $B^1..B^2, O^1..O^3, Y^1..Y^3, G^1..G^4$ are mapped to the four slots, $s_1^1, s_1^2, s_2^1, s_2^2$, respectively, occupying 2 VMs. 
In the next sweep, we still have a full bundle for the Blue task, $B^3..B^4$, that takes an independent slot $s_3^1$, but the Orange and Green tasks have only a partial bundle consisting of one thread each. $O^4$ is mapped to a new slot $s_3^2$ as there are no partial slots, and $G^5$ is mapped to the same slot as it is the best-fit partial slot. All threads of the Yellow task are already mapped. In the last sweep, the only remaining partial bundle of the Blue task, $B^5$, is mapped to the partial slot $s_3^2$ as the best fit.

\section{Results and Analysis} \label{sec:results} \subsection{Implementation} We validate our proposed allocation and mapping techniques, MBA and SAM, on the popular Apache Storm DSPS, open-sourced by Twitter. Streaming applications in Storm, also called \emph{topologies}, are composed in Java as a DAG, and the resource allocation -- the number of threads per task (\emph{parallelism hint}) and the resource slots for the topology (\emph{workers}) -- is provided by the user as part of the application. Here, we implement our MBA algorithm within a script that takes the DAG and the performance model for the tasks as input, and returns the number of threads and slots required. We manually embed this information in the application, though this can be automated in future. We take a similar approach and implement the LSA algorithm as well, which is used as a baseline. A Storm cluster has multiple hosts or VMs, one of which is the master and the rest are compute VMs having one or more resource slots. When the application is submitted to the Storm cluster, the master VM runs the \emph{Nimbus scheduling service} responsible for mapping the threads of the application's tasks to worker slots. A \emph{supervisor service} on each compute VM receives the mapping from Nimbus and assigns threads of the DAG for execution. While it is possible to run multiple topologies concurrently in a cluster, our goal is to run each application on an exclusive on-demand Storm cluster with the exact number of required VMs and slots, determined based on the allocation algorithm. For example, one scenario is to acquire a Storm cluster on-demand from Azure's HDInsight Platform as a Service (PaaS)~\footnote{Apache Storm for HDInsight, \url{https://azure.microsoft.com/en-in/services/hdinsight/apache-storm/}}. Nimbus natively implements the default round-robin scheduler (DSM) and, recently, the scheduling algorithm of R-Storm (RSM) using the \texttt{DefaultScheduler} and \texttt{ResourceAwareScheduler} classes, respectively. We implement our SAM algorithm as a custom scheduler in Nimbus, \texttt{SlotAwareScheduler}. It implements the \texttt{schedule} method of the \texttt{IScheduler} interface, which is periodically invoked by the Nimbus service with the pending list of threads to be scheduled. When a thread for the DAG first arrives for mapping, the SAM scheduler generates and caches a mapping for all the threads in the given DAG to slots available in the cluster. The host IDs and slot IDs available in the cluster are retrieved using methods in Storm's \texttt{Cluster} class. Then, the algorithm groups the threads by their slot ID as Storm requires all threads for a slot to be mapped at once. The actual mapping is enacted by calling the \texttt{assign} method of the \texttt{Cluster} class that takes the slot ID and the list of threads mapped to it.
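A minimal skeleton of such a custom scheduler is sketched below to show the overall structure. It is illustrative: the class and method names follow the Storm $1.x$ scheduler API as we recall it, exact signatures may differ slightly across versions, and the SAM logic itself is elided into a hypothetical \texttt{computeSlotAwareMapping} helper.

\begin{verbatim}
import java.util.*;
import org.apache.storm.scheduler.*;

/** Sketch of a custom Nimbus scheduler; the SAM mapping itself is elided into a helper. */
public class SlotAwareScheduler implements IScheduler {

    @Override
    public void prepare(Map conf) { /* load the task performance models, if needed */ }

    @Override
    public void schedule(Topologies topologies, Cluster cluster) {
        for (TopologyDetails topology : topologies.getTopologies()) {
            if (!cluster.needsScheduling(topology)) continue;      // already fully scheduled

            List<WorkerSlot> slots = cluster.getAvailableSlots();
            Collection<ExecutorDetails> pending = cluster.getUnassignedExecutors(topology);

            // Hypothetical helper: run the SAM algorithm and group executors (threads) by slot.
            Map<WorkerSlot, List<ExecutorDetails>> mapping =
                    computeSlotAwareMapping(topology, pending, slots);

            // Storm requires all executors for a slot to be assigned together.
            for (Map.Entry<WorkerSlot, List<ExecutorDetails>> e : mapping.entrySet()) {
                cluster.assign(e.getKey(), topology.getId(), e.getValue());
            }
        }
    }

    private Map<WorkerSlot, List<ExecutorDetails>> computeSlotAwareMapping(
            TopologyDetails topology, Collection<ExecutorDetails> pending, List<WorkerSlot> slots) {
        return new HashMap<>();   // placeholder for the bundle-based SAM mapping
    }
}
\end{verbatim}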
\subsection{Experiment Setup} \label{sec:setup} The setup for validating and comparing the allocation and mapping algorithms are similar to the one used for performance modeling, \S~\ref{sec:bm:setup}. In summary, Apache Storm $v1.0.1$ is deployed on Microsoft Azure D-series VMs in the Southeast Asia data center. The type and number of VMs depend on the experiment, but each slot of this VM series has one core of Intel Xeon E5-2673~v3 CPU $@2.4~GHz$, $3.5~GB$ RAM and a $50~GB$ SSD. We use three VM sizes in our experiments to keep the costs manageable -- \texttt{D3} having $2^{(\underline{3}-1)} = 4$~slots, \texttt{D2} with $2$~slots, and \texttt{D1} with $1$~slot. We perform two sets of experiments. In the first, we evaluate the resource benefits of our Model Based Allocation (MBA) combined with our Slot Aware Mapping (SAM), in comparison with the baseline Linear Storm Allocation (LSA) with the resource-aware R-Storm Mapping (RSM), for a given input rate (\S~\ref{sec:results:usage}). For the allocated number of resource slots, we acquire the largest size VMs (\texttt{D3}) first to meet the needs, and finally pick a \texttt{D2} or a \texttt{D1} VM for the remaining slot(s), as discussed in \S~\ref{sec:acquire}. In the second set of experiments (\S~\ref{sec:results:accy}), we verify the predictability of the performance of our MBA and SAM approaches, relative to the existing techniques. Here, we perform five experiments, using a combination of 2 allocation and 3 mapping algorithms. We measure the highest stable input rate supported by the DAGs using these algorithms on a fixed number of five \texttt{D3} VMs, and compare the observed rate and resource usage against the planned rate, and with the rate and usage estimated by our model. In all experiments, a separate \texttt{D3} VM is used as the master node on which the Nimbus scheduler and other shared services run. For the RSM implementation, we need to explicitly specify the memory available to a slot and to the VM, which we set to $3.5~GB$ and the number of slots times $3.5~GB$, respectively. For RSM and SAM, we set the available CPU\% per slot to 100\% and for the VM to be the number of VM slots times $100\%$. \subsection{Streaming Applications} \label{sec:impl:topo} \begin{figure*}[t] \includegraphics[width=0.99\textwidth]{figures/topology/micro.pdf} \caption{Micro DAG used in experiments. Tasks are referred to by their initials: B-Azure Blob Download, F-Batched File Write , P-Pi Computation, T-Azure Table Query, X-XMLparse. Since selectivity is 1:1, the input and output rates are the same, and indicated above the task} \label{fig:micro:dag} \end{figure*} \begin{figure*}[h] \centering \subfloat[Finance DAG]{ \includegraphics[width=0.50\textwidth]{figures/topology/app-trading.pdf} \label{fig:app:finance} } \subfloat[Traffic DAG]{ \includegraphics[width=0.50\textwidth]{figures/topology/app-gps.pdf} \label{fig:app:traffic} }\\ \subfloat[Smart grid DAG]{ \includegraphics[width=1.0\textwidth]{figures/topology/app-grid.pdf} \label{fig:app:smartgrid} } \caption{Application DAGs [Notation: B-Azure Blob Download, F-Batched FileWrite , P-Pi Computation, T-Azure Table Query, X-XMLparse] } \label{fig:app:dag} \end{figure*} In our experiments, we use two types of streaming dataflows -- \emph{micro-DAGs} and \emph{application DAGs}. The micro-DAGs capture common dataflow patterns that are found as sub-graphs in larger applications, and are commonly used in literature, including by R-Storm~\cite{peng:middleware:2015,millwheel:vldb:2013,xu:ic2e:2016}. 
These include \emph{Linear}, \emph{Diamond} and \emph{Star} micro-DAGs that respectively capture a sequential flow, a fan-out and fan-in, and a hub-and-spoke model (Fig.~\ref{fig:micro:dag}). While the linear DAG has a uniform input and output tuple rate for all tasks, the diamond exploits task parallelism, and the star doubles the input and output rates for the hub task. All three micro-DAGs have 5 tasks, in addition to the source and sink tasks, and we randomly assign the five different tasks that were modeled in \S~\ref{sec:bm} to these five DAG vertices, as labeled in Fig.~\ref{fig:micro:dag}. The figure also shows the input rates to each task based on a sample input rate of $100~tuples/sec$ to the DAG. All tasks have a selectivity of $\sigma = 1:1$. The application DAGs have a structure based on three real-world streaming applications that analyze traffic patterns from GPS sensor streams (\emph{Traffic})~\cite{biem:sigmod:2010}, compute the bargain index value from real-time stock trading prices (\emph{Finance})~\cite{gedik:sigmod:2008}, and perform data pre-processing and predictive analytics over electricity meter and weather data streams from Smart Power Grids (\emph{Grid})~\cite{simmhan:cise:2012}. In the absence of access to the actual application logic, we reuse and iteratively assign the five tasks we have modeled earlier to random vertices in these application DAGs and use a task selectivity of $\sigma = 1:1$. These three applications DAGs have between $7-15$ logic tasks, and exhibit different composition patterns. Their overall DAG selectivity ranges from $1:2$ to $1:4$. We also see that the five diverse tasks we have chosen as proxies for these domain tasks are representative of the native tasks, as described in literature. For e.g., a task of the Traffic application does parsing of input streams, similar to our XML Parse task, and another archives data for historical analysis, similar to the Batch File Write task. The moving average price and bargain index value tasks in the Finance DAG are floating-point intensive like the Pi task. The Grid DAG performs parsing and database operations, similar to XML Parse and Azure Table, as well as time-series analytics that tend to be floating-point oriented. As a result, these are reasonable equivalents of real-world applications for evaluating resource scheduling algorithms. Along with the application logic tasks, we have separate source and sink tasks for passing the input stream and logging the output stream for all DAGs. The source task generates synthetic tuples with a single opaque field of 10~bytes at a given constant rate. The sink task logs the output tuples and helps calculate the latency and other empirical statistics. Both these tasks are mapped by the scheduler just like the application tasks. Given the light-weight nature of these tasks, we empirically observe that a single thread for each of these tasks is adequate, with a static allocation of $10\%$ CPU and $15\%$ memory for the source and $10\%$ CPU and $20\%$ memory for the sink. Each experiment is run for $15$~minutes and over multiple runs, and we report the representative values seen for the entire experiment. \subsection{Resource Benefits of Allocation and Mapping} \label{sec:results:usage} We compare our combination of MBA allocation and SAM mapping, henceforth called \textbf{MBA+SAM}, against LSA allocation with RSM mapping, referred to as \textbf{LSA+RSM}. 
The metric of success here is the ability to \emph{minimize the overall resources allocated} for a stable execution of the DAG at a given fixed input rate. We consider both micro-DAGs and applications DAGs. First the allocation algorithm determines the minimum number of resource slots required and then the mapping algorithm is used to schedule the threads on slots for the DAG. There may be cases where the resource-aware mapping algorithm is unable to find a valid schedule for the resource allocation, in which case, we incrementally increase the number of slots by $1$ until the mapping is successful. We report and discuss this as well. We then execute the DAG and check if it is able to support the expected input rate or not. \subsubsection{Micro DAG} \label{exp:micro:slots} \begin{figure*}[t] \centering \subfloat[Linear DAG]{ \includegraphics[width=0.33\textwidth]{figures/lineartopo-slotsPlot.pdf} \label{fig:slots:linear} } \subfloat[Diamond DAG]{ \includegraphics[width=0.33\textwidth]{figures/diamondtopo-slotsPlot.pdf} \label{fig:slots:diamond} } \subfloat[Star DAG]{ \includegraphics[width=0.33\textwidth]{figures/startopo-slotsPlot.pdf} \label{fig:slots:star} } \caption{Micro DAGs: Required Slots on primary, Actual throughput on secondary Y-axis} \label{fig:micro:slots} \end{figure*} The experiments are run for the micro-DAGs with input rates of $50, 100$ and $200~tuples/sec$. This allows us to study the changes in resource allocation and usage as the input rate changes. These specific rates are chosen to offer diversity while keeping within reasonable VM costs. For e.g., each run costs $\approx US\$1.00$, and many such runs are required during the course of these experiments. Fig.~\ref{fig:micro:slots} shows the number of resource slots allocated by the LSA and MBA algorithms (yellow bars, left Y axis) for the three different input rates to the three micro-DAGs. Further, it shows the additional slots beyond this allocation (green bars, left Y axis) that is required by the resource-aware RSM and SAM mapping algorithms to ensure that threads in a slot are not under-provisioned. The DAGs are run with the input rate they the schedule was planned for (i.e., $50, 100$ or $200~tuples/sec$), and if they were not stable, we incrementally reduced the rate by $5~tuples/sec$ until the execution is stable. The stable input rate, less than or equal to the planned schedule, is shown on the right Y axis (blue circle). \emph{We can observe that LSA allocates more slots than MBA in all cases.} In fact, the resources allocated by LSA is nearly twice as that by MBA requiring, say, 7, 13 and 28 slots respectively for the Linear DAG for the rates of $50, 100$ and $200~tuples/sec$ compared to MBA that correspondingly allocates only 4, 7 and 15 slots. This is observed for the other two micro-DAGs as well. The primary reason for this resource over-allocation in LSA is due to a linear extrapolation of resources with the number of threads. In fact, while MBA allocates $\approx 3\times$ more threads than LSA for the DAGs, the resource allocation for these threads by LSA is much higher. For e.g., LSA allocates $337\%$ CPU and $1196\%$ memory for its $50$ threads of the Blob Download task for the Linear DAG at $100~tuples/sec$ while MBA allocates only $315\%$ of CPU and $326\%$ of memory for its corresponding $170$ Blob Download threads. This alone translates to a difference between $\approx12$ slots allocated by LSA (based on memory\%) and $\approx3$ slots by MBA. 
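To make the slot arithmetic concrete, the memory-driven share of slots for just this Blob Download task, using the numbers above, is approximately (illustrative; the DAG-level slot count also includes the other tasks):
\[
\underbrace{\Big\lceil \tfrac{1196\%}{100\%} \Big\rceil = 12~\text{slots}}_{\text{LSA, memory-bound}}
\qquad \text{vs.} \qquad
\underbrace{\tfrac{326\%}{100\%} \approx 3.3~\text{slots}}_{\text{MBA, memory-bound}}
\]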
\emph{Despite the higher allocation by LSA, we see that RSM is often unable to complete the mapping without requiring additional slots.} This happens for 6 of the 9 combination of DAG and rates, for e.g., requiring $1$ more slot for the Diamond DAG with $50~tuples/sec$ (Fig.~\ref{fig:slots:diamond}, green bar) and $3$ more for the Linear DAG at $200~tuples/sec$ (Fig.~\ref{fig:slots:linear}). In contrast, our SAM mapping uses 1 additional slot, only in the case of Linear DAG at $50~tuples/sec$ (Fig.~\ref{fig:slots:linear}) and none other, despite being allocated fewer resources by MBA compared to LSA. Both RSM and SAM are resource aware, which means they will fail if they are unable to pack threads on to allocated slots such that their expected resource usage by all threads on a slot is within the slot's capacity. RSM more often fails to find a valid bin-packing than SAM. This is because of its distribution of a task's threads across many slots, based on the distance function, which causes resource fragmentation. We see memory fragmentation to be more common, causing vacant holes in slots that are each inadequate to map a thread but are cumulatively are sufficient. For e.g., the Linear DAG at $200~tuples/sec$ is assigned $25$ slots by LSA. During mapping by RSM, the $100$ Blob Download threads dominate the memory and occupy $4 \times 23.9\%$ of memory in each of the $25$ slots, leaving only $8\%$ memory on each slot. This is inadequate to fit threads for other tasks like XML Parse which requires $22.98\%$ of memory for one of its thread, though over $25 \times 8\% = 200\%$ of fragmented memory is available across slots. This happens much less frequently in SAM due to its preference for packing a slot at a time with a full bundle of threads in a single slot without any fragmentation. Fragmenting can only happen for the last partial thread bundle for each task. A full bundle also takes less resources according to the performance models than linear extrapolation from a single thread. For e.g., MBA packs $50$ threads of the same Blob Download task from above in a single slot. \emph{We see that in several cases, the DAGs are unable to support the rate that the schedule was planned for.} This reduction in rate is up to $30\%$ for LSA+RSM and up to $10\%$ for MBA+SAM. For e.g., in the Diamond DAG in Fig.~\ref{fig:slots:diamond}, the observed/expected rates in $tuples/sec$ for LSA+RSM is $35/50$ and $90/100$ while it is $90/100$ and $190/200$ for MBA+SAM. The reasons vary between the two approaches. In LSA+RSM, LSA allocates threads assuming a linear scaling of the rate with the threads but this holds only if each thread is running on an exclusive slot. As RSM packs multiple threads to a slot, the rate supported by these threads is often lower than estimated. For e.g., for the Diamond DAG in Fig.~\ref{fig:slots:diamond}, LSA allocates 18~threads for the Azure Table task for an input rate of $50~tuples/sec$ based on a single thread supporting $3~tuples/sec$. However, RSM distributes 2~threads each on 4~slots and the remaining 9~threads on 1~slot. As per our performance model for the Azure Table task, 2~threads on a slot support $5~tuples/sec$ and 9~threads support $10~tuples/sec$, to give a total of $4 \times 5 + 1 \times 10 = 30~tuples/sec$. This is close to the observed $35~tuples/sec$ supported by this DAG for LSA+RSM. While SAM's model-based mapping of thread bundles mitigates this issue, it does suffer from an imbalance in message routing by Storm to slots. 
Storm's \emph{shuffle grouping} from an upstream task sends an equal number of tuples to each downstream thread. However, the individual threads may not have the same per-capita capacity to process that many tuples on its assigned slot, as seen from the performance models. This can cause a mismatch between tuples that arrive and those that can be processed on slots. For e.g., the Diamond DAG at $100~tuples/sec$ (Fig.~\ref{fig:slots:diamond}), MBA allocates 160~threads for the Azure Table task and SAM maps two full bundles of 60~threads each to 2 slots, and the remaining 40~threads on 1 partial slot. As per the model, SAM expects the threads in a full slot to support $40~tuples/sec$ and the partial slot to support $20~tuples/sec$. However, due to Storm's shuffle grouping, the full slots receive $37~tuples/sec$ while the partial slot receives $26~tuples/sec$. This problem does not affect RSM since it distributes threads across many slots achieving a degree of balance across slots. As future work, it is worth considering a slot-aware \emph{routing} in Storm as well~\footnote{\url{https://issues.apache.org/jira/browse/STORM-162}}. Also Figs.~\ref{fig:micro:slots} show that as expected, \emph{the resource requirements increase proportionally with the input rate for both LSA+RSM and MBA+SAM}. Some minor variations exist due to rounding up of partial slots to whole, or marginal differences in fragmentation. For e.g., LSA+RSM assigns the Star DAG $\lceil 6.23 \rceil=7~slots$ and $\lceil 12.47 \rceil=13~slots$ for $50$ and $100~tuples/sec$ rates, respectively. \emph{Lastly, we also observe that all three micro-DAGs acquire about the same number of slots, for a given rate, using LSA+RSM,} e.g., using about 7 slots for $50~tuples/sec$ for all three micro-DAGs. These three DAGs have the same 5 tasks though their composition pattern is different. However, for LSA, the memory\% of the Blob Download task threads dominates the resource requirements for each DAG, and the input rate to this task is the same as the DAG input rate in all cases. As a result, for a given rate, the threads and resource needs for this task is the same for all three DAGs at 25 threads taking $598\%$ memory for $50~tuples/sec$, while the memory\% for the entire Linear DAG is marginally higher at $623\%$ for this rate and its total CPU\% need is only $242\%$. Hence, the resource needs for all other tasks, which are more CPU heavy, falls within the available 7~slots that is dominated by this Blob task, i.e., $\sum_{\text{All task threads}} CPU\% < \sum_{\text{Blob task threads}} Memory\%$. In case of MBA+SAM, there is diversity both in CPU and memory utilization, and the number of threads for each task for the different DAGs. So the resource requirements are not overwhelmed by a single task. For e.g., the same Blob task at $50~tuples/sec$ requires only $128\%$ memory according to MBA while the CPU\% required for the entire Linear DAG is $323\%$, which becomes the deciding factor for slot allocation. 
\subsubsection{Application DAG}
\begin{figure*}[t] \centering \subfloat[Finance DAG]{ \includegraphics[width=0.33\textwidth]{figures/tradingtopo-slotsPlot.pdf} \label{fig:slots:finance} } \subfloat[Traffic DAG]{ \includegraphics[width=0.33\textwidth]{figures/gpstopo-slotsPlot.pdf} \label{fig:slots:traffic} } \subfloat[Smart Grid DAG]{ \includegraphics[width=0.33\textwidth]{figures/smartgrid-slotsPlot.pdf} \label{fig:slots:smartgrid} } \caption{Application DAGs: Required Slots on primary, Actual throughput on secondary Y-axis } \label{fig:app:slots} \end{figure*}

We perform similar experiments to analyze the resource benefits for the more complex application DAGs, limited to input rates of $50$ and $100~tuples/sec$ that require as many as 65~slots for LSA+RSM, costing $\approx US\$2$ per run. Fig.~\ref{fig:app:slots} plots the results. Several of our observations here mimic the behavior of the scheduling approaches for the micro-DAGs, but at a larger scale. We also see more diversity across the DAGs, with Finance taking $3-5\times$ fewer resources than Grid for the same data rates. As before for the micro-DAGs, we see that MBA+SAM consistently uses $33-50\%$ fewer slots than LSA+RSM for all the application DAGs and rates. This is seen both for the allocation and in the incremental slots acquired by the mapping during packing. In fact, RSM acquires additional slots for all application DAGs allocated by LSA, while SAM needs this only for the Grid DAG at $50~tuples/sec$. The resource benefit for MBA+SAM relative to LSA+RSM is the least for the Finance DAG in Fig.~\ref{fig:slots:finance}, using $3$ fewer slots for the $50~tuples/sec$ rate and $5$ fewer for the $100~tuples/sec$ rate. This is because total CPU\% tends to dominate for MBA+SAM, and this is higher for Finance compared to the other two due to a higher input rate that arrives at the compute-intensive Pi task. On the other hand, LSA+RSM consumes over twice the slots for the Traffic and Grid DAGs due to their having one less Pi task and one more Blob task, which is memory intensive. This is due to the random assignment of our candidate tasks to the application tasks. As we saw earlier for the micro-DAGs, this memory intensive task tends to be sub-optimal when bin-packed by RSM, and that causes the resource needs to grow for the Traffic and Grid DAGs. That said, while the DAGs are complex for these application workloads, the fraction of additional slots required by mapping relative to allocation does not grow much higher. In fact, the additional slots required by RSM are small in absolute numbers, at $1-3$ slots, as the bin-packing efficiency improves with more slots and threads. This shows that the punitive impact of RSM's additional slot requirements is mitigated for larger application DAGs. As for the micro-DAGs, several of the application DAGs are also unable to support the planned input rate. The impact worsens for LSA+RSM, with its stable observed rate up to $40\%$ below expected, while this impact is much smaller for MBA+SAM with only up to $5\%$ lower rate observed despite the complex DAG structure. The reasoning for SAM is the same as for the micro-DAGs, where the shuffle grouping uniformly distributes the output tuple rate across all threads. For RSM, an additional factor comes into play for these larger DAGs. In practice, this algorithm allows threads in a slot to access all cores in a VM while restricting their memory use to only that slot's limit.
This means threads in a single slot of a \texttt{D3} VM can consume up to $400\%$ CPU, as long as their memory\% is $\le 100\%$. This causes more CPU bound threads like Pi and XMLParse to be mapped to a single slot, consuming $\approx300\%$ of a VM's CPU in the Grid DAG for $50~tuples/sec$. However, each slot has just a single worker thread responsible for tuple buffering and routing between threads and across workers. Having many CPU intensive task threads on the same slot stresses this worker thread and cause a slowdown, as seen for the Grid DAG which has an observe/expected tuple rates of $30/50$~\footnote{\url{http://stackoverflow.com/questions/20371073/how-to-tune-the-parallelism-hint-in-storm}}~\footnote{\url{http://mail-archives.apache.org/mod_mbox/storm-user/201606.mbox/browser}}. This consistently happens across all VMs where threads with high CPU\% are over-allocated to a single slot. In MBA, the mapping of full bundles to a slot rather than over-allocating CPU\% means that we have a better estimate of the collective behaviour of threads on each worker slot and these side-effects are avoided. As before, we see that the resource requirements increase proportionally as the rate doubles from $50~tuples/sec$ to $100~tuples/sec$ in most cases. However, unlike the micro-DAGs where all the dataflows for a given input rate consumed about the same number of slots using LSA+RSM, this is not the case for the application DAGs. Here, the number of tasks of each type vary and their complex compositions cause much higher diversity in input rates to these tasks. For e.g., the same Table task in Traffic, Finance and Smart Grid DAGs in Figs~\ref{fig:app:dag} have input rates of $100, 200$ and $300~tuples/sec$. In fact, this complexity means that the resource usage does not just proportionally increase with the number of tasks either. This argues the need for non-trivial resource- and DAG-aware scheduling strategies for streaming applications, such as RSM, MBA and SAM. \subsection{Accuracy of Models} \label{sec:results:accy} In the previous experiments, we showed that our MBA+SAM scheduling approach offered \emph{lower resource costs} than the LSA+RSM scheduler while meeting the planned input rate more often. In these experiments, we show that our model based scheduling approach offers \emph{predictable resource utilization}, and consequently \emph{reliable performance} that can be generalized to other DAGs and data rates. Further, we also show that it is possible to independently use our performance-model technique to accurately predict the resource usage and supported rates for other scheduling algorithms as well. Rather than determine the allocation for a given application and rate, we instead design these experiments with a fixed number of VMs and slots -- five \texttt{D3} VMs with $20$ total slots, for the three micro-DAGs. We then examine the highest input rate that our performance model estimates will be supported by the given schedule, and what is actually supported on enacting the schedules. The \emph{planned input rate} is the peak rate for which the DAG's resource requirements can be fulfilled with the fixed number of five D3 VMs, according to the allocation+mapping algorithm pair that we consider. For this, we independently run the allocation \emph{and} mapping algorithm plans outside Storm, adding incremental input rates of $10~tuples/sec$ until the resources required is just within or equal to $20$ slots according to the respective algorithm pairs. 
Subsequently, for the threads and their slot mappings determined by the scheduling algorithm, we use our performance models to find the \emph{predicted rate} supported by that DAG. We also use our model to \emph{predict the CPU\% and memory\%} for the slots as well, and report the cumulative value for each of the $5$ VMs. The actual input rate for the DAG is obtained empirically by increasing the rate in steps of $10~tuples/sec$ as long as the execution remains stable. The actual CPU and memory utilization corresponding to the peak rate supported is reported for each VM as well. Besides comparing the predicted and actual input rates, we also compare the predicted and actual VM resource usage in the analysis since there is a causal effect of the latter on the former. We further show that our model based allocation algorithm can be used independently with other mapping algorithms, besides SAM. To this end, we evaluate and compare the baseline combination of LSA allocation with DSM and RSM mappings available in Storm (LSA+DSM and LSA+RSM), against our MBA allocation with DSM, RSM and SAM mapping algorithms (MBA+DSM, MBA+RSM and MBA+SAM). \subsubsection{Prediction and Comparison of Input Rates} \label{sec:an:rate} \begin{figure*}[t] \centering \subfloat[Linear DAG]{ \includegraphics[width=0.33\textwidth]{figures/thruput-scatter-plot-LinearTopo.pdf} } \subfloat[Diamond DAG]{ \includegraphics[width=0.33\textwidth]{figures/thruput-scatter-plot-DiamondTopo.pdf} } \subfloat[Star DAG]{ \includegraphics[width=0.33\textwidth]{figures/thruput-scatter-plot-StarTopo.pdf} } \caption{Scatter plot of \emph{Planned} and \emph{Actual} input rates supported for the Micro-DAGs on 5 VMs using the scheduling strategy pairs } \label{fig:plot:rate:planned} \end{figure*} \begin{figure*}[t] \centering \subfloat[Linear DAG]{ \includegraphics[width=0.33\textwidth]{figures/thruput-scatter-plot-planned-LinearTopo.pdf} \label{fig:rate:linear} } \subfloat[Diamond DAG]{ \includegraphics[width=0.33\textwidth]{figures/thruput-scatter-plot-planned-DiamondTopo.pdf} \label{fig:rate:diamond} } \subfloat[Star DAG]{ \includegraphics[width=0.33\textwidth]{figures/thruput-scatter-plot-planned-StarTopo.pdf} \label{fig:rate:star} } \caption{Scatter plot of \emph{Predicted} and \emph{Actual} input rates supported for the Micro-DAGs on 5 VMs using the scheduling strategy pairs \label{fig:plot:rate} \end{figure*} Fig.~\ref{fig:plot:rate:planned} shows a scatter plot comparing the \emph{Actual rate} (X axis) and the \emph{Planned rate} by the scheduling algorithm (Y axis) for the Linear, Diamond and Star micro-DAGs, while Fig.~\ref{fig:plot:rate} does the same for the \emph{Actual rate} (X axis) and our \emph{Model Predicted rate} (Y axis). We see that our performance model is able to \emph{accurately predict the input rate for these DAGs with a high correlation coefficient of $R^2$ ranging from $0.71 - 0.95$}. This is actually significantly better than the Planned rate by the schedulers for the three DAGs whose $R^2$ values fall between $0.55-0.69$. Thus, our performance model is able to accurately predict the input rates for the schedules from all 5 algorithm pairs, better than even the scheduling algorithms themselves. There are also algorithm-specific behavior of the prediction models, which we analyze. The input rate predictions are more accurate for MBA+SAM (Fig.~\ref{fig:plot:rate}, blue `+'), falling close to the 1:1 line in all cases, since it uses the model both for allocation and for mapping. 
However, it is not $100\%$ accurate due to Storm's shuffle grouping that routes a different rate to downstream threads than expected. We also see our model underestimate the supported rate for LSA in Fig.~\ref{fig:plot:rate} by a small fraction. This happens due to the granularity of the model. With LSA, several slots have 3 table threads mapped to them. As we do not have exact performance models with 3 threads, we interpolate between the available thread values which estimates the rate supported at $6~tuples/sec$ while the observed rate is closer to $9~tuples/sec$. In such cases, the predictions can be made more accurate if the performance modeling is done at a finer granularity of thread counts ($\Delta_\tau$) in Algo.~\ref{alg:bm}. The algorithms also show distinctions in the actual rates that they support the same quanta of resources for a DAG. When using MBA with DSM, the actual rate is often much smaller than the planned rate and, to a lesser extent, than the predicted rate as well. For e.g., the Linear DAG's planned rate is $280~tuples/sec$, predicted is $200~tuples/sec$ and actual is $180~tuple/sec$. Since DSM does a round-robin mapping of threads without regard to the model, it is unable to leverage the full potential of the allocation. In the case of the Linear DAG, the allocation estimates the planned performance for, say, the Blob Download task with $470~threads$ based on it being distributed in bundles of $50$ threads each on $9$ slots but DSM assigns them uniformly with $\approx{23~threads~per~slot}$. Hence, using MBA with DSM is not advisable, compared to RSM or SAM mapping approaches. However, compared to LSA, MBA offers a higher predicted and actual input rate irrespective of the mapping, offering improvements of $20-175\%$. We observe from the plots that the cluster of points for MBA (in blue) is consistently at a higher rate than the LSA cluster (in red) despite both being allocated the same fixed number of resources. As discussed before, LSA allocates fewer data-parallel threads than MBA due to its linear-scaling assumption, and they are unable to fully exploit the available resources. This is consistent with the lower CPU\% and memory\% for LSA observed in Figs.~\ref{fig:plot:cpu} and \ref{fig:plot:mem}. For e.g., the Azure Table task in the Linear DAG is assigned only $54$ threads by LSA, with a planned rate of $160~tuples/sec$ whereas MBA assigns it $420$ threads with a planned rate of $280~tuples/sec$ Interestingly, when using MBA, RSM is able to offer a higher actual input rate compared to SAM in two of the three DAGs, Linear and Star (Fig.~\ref{fig:plot:rate}), even as its planned rate is lower than SAM. For e.g., we see that RSM's distance measure is able find the sweet spot for distributing the $470~threads$ of Blob Download for the Linear DAG across 15 slots with $25-30$ threads each and 3 slots with under 10 threads each, to offer a predicted rate of $315~tuples/sec$ and actual rate of $330~tuples/sec$. SAM on the other hand favors predictable performance within exclusive slots and bundles $50~threads$ each on 9 slots and the rest in a partial slot to give a predicted and actual rate close to $280~tuples/sec$. This highlights the trade-off that SAM makes in favor of a predictable model-driven performance, while sacrificing some of performance benefits relative to RSM. 
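The rate prediction used in this analysis can be sketched as follows (Java). It interpolates the per-slot rate model between the profiled thread counts and sums the per-slot rates of a task over its mapped slots; the sample model points correspond to the Azure Table rates cited earlier (1 thread $\to 3$, 2 threads $\to 5$, 9 threads $\to 10~tuples/sec$), while the rest is illustrative.

\begin{verbatim}
import java.util.*;

/** Sketch: predict the rate a task supports from its per-slot thread counts, via an interpolated model. */
public class RatePrediction {
    // Profiled (threadCount -> peak rate) points for one task on a single slot; illustrative values.
    static final TreeMap<Integer, Double> MODEL = new TreeMap<>(Map.of(1, 3.0, 2, 5.0, 9, 10.0));

    /** Linearly interpolate the supported rate for a thread count between profiled points. */
    static double slotRate(int threads) {
        if (MODEL.containsKey(threads)) return MODEL.get(threads);
        Map.Entry<Integer, Double> lo = MODEL.floorEntry(threads), hi = MODEL.ceilingEntry(threads);
        if (lo == null) return hi.getValue();
        if (hi == null) return lo.getValue();
        double frac = (threads - lo.getKey()) / (double) (hi.getKey() - lo.getKey());
        return lo.getValue() + frac * (hi.getValue() - lo.getValue());
    }

    /** The task-level rate is the sum of the per-slot rates over its mapped slots. */
    static double taskRate(int[] threadsPerSlot) {
        double total = 0;
        for (int t : threadsPerSlot) total += slotRate(t);
        return total;
    }

    public static void main(String[] args) {
        // E.g., 2 threads on each of 4 slots and 9 threads on 1 slot: 4*5 + 10 = 30 tuples/sec
        System.out.println(taskRate(new int[]{2, 2, 2, 2, 9}));
    }
}
\end{verbatim}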
\subsubsection{Prediction and Comparison of CPU and Memory Utilization} \begin{figure*}[t] \centering \subfloat[Linear DAG]{ \includegraphics[width=0.33\textwidth]{figures/exp-act-scatter-linear-TEST-CPU.pdf} \label{fig:cpu:linear} } \centering \subfloat[Diamond DAG]{ \includegraphics[width=0.33\textwidth]{figures/exp-act-scatter-diamond-TEST-CPU.pdf} \label{fig:cpu:diamond} } \centering \subfloat[Star DAG]{ \includegraphics[width=0.33\textwidth]{figures/exp-act-scatter-star-TEST-CPU.pdf} \label{fig:cpu:star} } \caption{Scatter plot of \emph{Predicted} and \emph{Actual} CPU\% per VM for the Micro-DAGs on 5 VMs using the 5 scheduling strategy pairs} \label{fig:plot:cpu} \end{figure*} Figs.~\ref{fig:plot:cpu} and ~\ref{fig:plot:mem} show the Actual (X axis) and Predicted (Y axis) CPU\% and memory\% values for the three DAGs. Each scatter plot has a data point for each of the 5 VMs and for every scheduling algorithm pair. It is immediately clear from Figs.~\ref{fig:plot:cpu} that our performance model is able to \emph{accurately predict the CPU\% for each VM for these DAGs with a high correlation coefficient $R^2 \ge 0.81$.} This consistently holds for all three DAGs, scheduling algorithms, and for CPU utilization that ranges from $10-90\%$. While for the Linear DAG, the CPU utilization accuracy is high, there are a few cases in the Diamond and Star DAGs where our predictions deviate from the observed for higher values of CPU\%. The under-prediction of CPU for Diamond DAG with MBA+SAM is because the VMs with Pi thread bundles receive a slightly higher input rate than expected due to Storm's shuffle grouping that impacts $4$ of the $5$ slots, and Pi's CPU model is sensitive to even small changes in the rate. For e.g., in Fig.~\ref{fig:cpu:diamond}, a VM with a predicted CPU use of $80\%$ for a predicted input rate of $110~tuples/sec$ ends up having an actual CPU usage of $88\%$ as it actually gets $116~tuples/sec$. This happens for the Star DAG with Pi and Blob threads we well. As mentioned before, enhancing Storm's shuffle grouping to be sensitive to resource allocation for downstream slots will address this skew while improving the performance as well. At the same time, just from a modeling perspective, it is also possible to capture the round-robin routing of Storm's shuffle grouping in the model to improve the predictability. For Star DAG in Fig.~\ref{fig:cpu:star}, there is one VM whose predicted CPU\% is more than the actual for both MBA+RSM and MBA+SAM. We find that both these VMs have $2$ threads of the Parse task, each on a separate slot, that are meant to support a required input rate of $480~tuples/sec$. However, a single thread of Parse supports $310~tuples/sec$. Since these two threads receive less than the peak rate of input, our model proportionally scales down the expected resource usage and estimates it at $~47\%$ CPU usage. However, the actual turns out to be $~32\%$, causing an overestimate. As we mentioned, there is a balance between the costs for building fine-grained models and the accuracy of the models, and this prediction error is an outcome of this trade-off that causes us to interpolate. For the Diamond DAG in Fig.~\ref{fig:cpu:diamond}, we see two VMs with expected CPU\% of $\approx100\%$ for MBA+RSM but the observed values that are much lesser. 
These correspond to two Pi threads that the MBA algorithm expects the pair to be placed on the same slot with $95\%$ combined usage while RSM actually maps them onto different VMs with $10\%$ fewer usage by each for 1 thread \begin{figure*}[t] \centering \subfloat[Linear DAG]{ \includegraphics[width=0.33\textwidth]{figures/exp-act-scatter-linear-TEST-MEM.pdf} \label{fig:mem:linear} } \centering \subfloat[Diamond DAG]{ \includegraphics[width=0.33\textwidth]{figures/exp-act-scatter-diamond-TEST-MEM.pdf} \label{fig:mem:diamond} } \centering \subfloat[Star DAG]{ \includegraphics[width=0.33\textwidth]{figures/exp-act-scatter-star-TEST-MEM.pdf} \label{fig:mem:star} } \caption{Scatter plot of \emph{Predicted} and \emph{Actual} Memory\% per VM for the Micro-DAGs on 5 VMs using the 5 scheduling strategy pairs} \label{fig:plot:mem} \end{figure*} The prediction of memory utilization shown in Figs.~\ref{fig:plot:mem}, while not as good as the CPU\% is still valuable at $R^2 \ge 0.55$. Unlike the CPU usage that spans the entire spectrum from $0-400\%$ for each VM, the memory usage has a compact range with a median value of $\approx 60\%$. This indicates that the DAGs are more compute-bound than memory-bound. Due to this low memory\%, even small variations in predictions has a large penalty on the correlation coefficient. We do see a few outlying clusters in these memory scatter plots. In the Linear and Star DAGs, we see that MBA+DSM over-predicts the memory usage. This is because the round-robin mapping of DSM assigns single threads of XML Parse to different slots, each of which receive fewer than their peak supported input rate. As a result, our model proportionally scales down the resources but ends up with an over-estimate. On the other hand, we also see cases where we marginally under-predict the memory usage for these same DAGs for MBA+SAM. Here, the shuffle grouping that sends a higher rate than expected to some slots with full thread bundles, and consequently a lower to other downstream threads, causing the resource usage to be lower than expected. We also see broader resource usage trends for specific scheduling approaches that can impact their input rate performance. \emph{We see that plans that use LSA consistently under-utilize resources.} The CPU\% used is particularly bad for LSA, with the 5 VMs for LSA-based plans using an average of just $15-35\%$ CPU each while the MBA-based schedules use an average of $70-90\%$ per VM. This reaffirms our earlier insight that the allocation of the number of data-parallel threads by LSA is inadequate to utilize the resources available in the given VMs. Among DSM and RSM, we do see that RSM clearly has a better CPU\% when using LSA though the memory\% between DSM and RSM is relatively similar. The latter is because RSM ends up distributing memory intensive threads across multiple slots due to constraints on a slot, which has a pattern similar to DSM's round-robin approach. This shows the benefits of RSM over DSM, as is also seen in the input rates supported. However, \emph{RSM has a more variable CPU\% and memory\% utilization across VMs irrespective of the allocation.} For e.g. in Fig.~\ref{fig:cpu:linear}, the Linear DAG has CPU\% that ranges from $10-40$ for LSA+RSM and from $55-90$ for MBA+RSM. This is because RSM tries to pack all slots in a VM as long as the cumulative CPU\% for the \emph{VM} and the memory\% \emph{per slot} is not exhausted. 
This causes the CPU\% of the initially mapped VMs to grow quickly due to the best-fit distance measure, while the remaining VMs are packed with more memory-heavy tasks. This causes the skew. The DSM mapping uses a round-robin distribution of threads to slots and hence is more tightly grouped. While SAM uses a best-fit packing similar to RSM, this is limited to the partial thread bundles, and hence its resource skew across VMs is mitigated.
\subsection{Comparison of Latency}
\begin{figure*}[t]
\centering
\includegraphics[width=0.80\textwidth]{figures/BoxPlotLatency.pdf}
\caption{Violin Plot of observed \emph{latency per tuple} for the Micro-DAGs on 5 VMs using the 5 scheduling strategy pairs}
\label{fig:latency:dag}
\end{figure*}
Reducing the latency is not an explicit goal for our scheduling algorithms, though ensuring a stable latency is. However, some applications may require a low absolute latency value, which can be a factor in the schedule generator. So we also report the average latency distribution observed for the different scheduling algorithm pairs for the three micro-DAGs executed on a static set of 5 VMs. The \emph{average latency} of the DAG is the average of the time difference between each message consumed at the source tasks and all its causally dependent messages generated at the sink tasks. The latency of a message depends on the input rate and the resources allocated to the task. It includes the network, queuing and processing time of a tuple. The average latency is relevant only for a DAG with a stable latency and resource configuration. We have used separate spout and sink tasks for logging each input and output tuple with a timestamp, and use this to plot the distribution of the average latency for a DAG. Fig.~\ref{fig:latency:dag} shows a Violin Plot of the average latency for the three micro-DAGs executed on 5 VMs using both LSA and MBA based allocation with the DSM, RSM and SAM mappings, at a stable rate. These results are for the same experiment setup as \S~\ref{sec:results:accy}. We can make a few observations based on these plots. We see that the Diamond micro-DAG has a consistently lower latency, followed by the Star DAG and then the Linear DAG. As is evident from the dataflow structure, this is proportional to the number of tasks on the critical path of the DAG, from the source to the sink. This is $4$ for Diamond, $5$ for Star and $7$ for Linear. The median observed latency values typically increase in the order: MBA+DSM $\le$ \{LSA+DSM, MBA+RSM\} $\le$ LSA+RSM $\le$ MBA+SAM. However, this has to be tempered by the input rates that these schedules support for the same DAG and resource slots. While MBA+DSM has a low latency, it supports the lowest rate among the three scheduling pairs that use MBA, though all support a higher rate than the LSA-based algorithm pairs. So this is suitable for low latency and average throughput. MBA+RSM has the next best latency, given that RSM is network-aware and hence able to lower the network transfer latency. This is positive given that it is also able to support a high input rate. The LSA+RSM schedule has the second worst latency and also the worst input rate, as seen earlier. So this algorithm pair is not a good selection. Separately, we also report that the MBA schedules have a long-tailed distribution of latency values, indicating that the threads are running at a peak utilization that is sensitive to small changes.
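To make the average latency metric used above concrete, the following minimal Python sketch computes it from timestamped source and sink logs. This is an illustration only: the event format, the field names, and the pairing of sink tuples to their originating source tuple by an identifier are assumptions made here for clarity, and do not correspond to the actual spout and sink instrumentation used in our experiments.
\begin{verbatim}
from collections import defaultdict

def average_latency(source_events, sink_events):
    """Average DAG latency from timestamped logs (illustrative sketch).

    source_events: list of (tuple_id, timestamp) logged at the source task.
    sink_events:   list of (src_tuple_id, timestamp) logged at sink tasks,
                   tagged with the id of the causally preceding source tuple.
    Returns the mean, over source tuples, of the mean latency of all
    causally dependent sink tuples.
    """
    emitted = dict(source_events)              # tuple_id -> emit timestamp
    sink_times = defaultdict(list)             # tuple_id -> sink timestamps
    for src_id, ts in sink_events:
        if src_id in emitted:
            sink_times[src_id].append(ts)

    per_tuple = [sum(ts) / len(ts) - emitted[tid]
                 for tid, ts in sink_times.items()]
    return sum(per_tuple) / len(per_tuple) if per_tuple else float("nan")

# One source tuple emitted at t=0.00 s producing two sink tuples:
print(average_latency([("m1", 0.00)], [("m1", 0.12), ("m1", 0.18)]))  # 0.15
\end{verbatim}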
\section{Related Work}
\label{sec:related}
\subsection{Scheduling for Storm}
The popularity of Storm as a DSPS has led to several publications on streaming DAG scheduling customized for Storm. As discussed before, Storm natively supports the default round-robin scheduler and the R-Storm resource-aware scheduler. Both of these only participate in the mapping phase and not in thread or resource allocation. The round-robin scheduler~\cite{rychly:cisis:2014} does not take into account the actual VM resources required by a thread and instead distributes threads uniformly over all available VMs. In R-Storm~\cite{peng:middleware:2015}, the user is expected to provide the CPU\%, memory\% and network bandwidth for each task thread under a stable input message rate, along with the number of threads for each task. It uses its resource-aware distance function to pack threads to VMs with the goal of achieving a high resource utilization and minimum network latency costs. As we have shown earlier, this linear extrapolation is not effective in many cases. Further, R-Storm does not consider the input rates to the DAG in its model. This means the resource utilization provided by the user is not scaled based on the actual rate that is received at a task thread. Our techniques use both a performance model and interpolation over it to support non-linear behavior and diverse input rates, which makes them amenable to efficient scheduling even under dynamic conditions. However, R-Storm is well suited for collections of dataflows that execute on large, shared Storm clusters with hundreds of VMs that can be distributed across many racks in the data center. Here, the network latency between VMs varies greatly depending on their placement, and this can impact applications that have a high degree of inter-task communication. Our algorithms do not actively consider the network latency other than scheduling the threads in BFS order, like R-Storm, to increase the possibility of collocating neighboring task threads of the DAG on the same slot. Consequently, our latency values suffer even as we offer predictable performance. Our target is streaming applications launched on a PaaS Storm cluster with tens of VMs that have network proximity, and for this scenario, the absence of network consideration is acceptable. That said, including network distance is a relatively simple extension to our model. Others have considered \emph{dynamic scheduling} for Apache Storm as well, where the scheduler adapts to changes in input rates at runtime. Latency is often the objective function they try to minimize while also limiting the resource usage. The work in~\cite{aniello:debs:2013} proposes an offline and an online scheduler, which aim at minimizing the inter-node traffic by packing threads in decreasing order of communication cost into the same VM, while taking the CPU capacity as a constraint based on the resource usage at runtime. The focus here is on the mapping phase as well, with the number of threads and slots for the tasks assumed to be known \emph{a priori}. The offline algorithm only examines the topological structure and does not take the message input rate or resource usage into consideration for scheduling. It simply places the threads of adjacent tasks on the same slot, and the slots are then assigned to worker nodes in a round-robin fashion. The online algorithm monitors the communication patterns and resource usage at runtime to decide the mapping. It tries to reduce the inter-node and inter-slot traffic among the threads.
The online algorithm has two phases. In the first phase, threads are partitioned among the workers assigned to the DAG, minimizing the traffic among threads of distinct workers and balancing the CPU on each worker. In the second phase, these workers are assigned to the available slots in the cluster, minimizing the inter-node traffic. Both these algorithms use tuning parameters that control the balancing of threads assigned per worker. These algorithms also have the effect of reducing intra-VM communication traffic besides inter-VM messaging. \emph{T-Storm}~\cite{xu:icdcs:2014} also takes a similar mapping approach, but uses the summation of incoming and outgoing traffic rates for the tasks, in descending order, to decide the packing of threads to a VM. It further seeks to limit the messaging within a VM by running just one worker on each VM that is responsible for all threads on the VM. The algorithm monitors the load at runtime and assigns each thread to the available slot with the minimum incremental traffic load. The number of threads for each task is user-defined and their distribution among worker nodes is controlled by a parameter (the consolidation factor), which is obtained empirically. Also, the algorithm does not guarantee that communicating threads will be mapped to the same node since the ordering is based on the total traffic and not on the traffic between threads. Both these schedulers~\cite{xu:icdcs:2014,aniello:debs:2013} use only the CPU\% as the packing criterion at runtime, and this can cause memory overflow for memory-intensive tasks or when the message queue size grows. Their online algorithms also require active monitoring of the various tasks and slots at runtime, which can be invasive and entail management overheads. At the same time, despite their claim of adapting to runtime conditions, neither scheduler actually acquires new VM resources or relinquishes them, and instead just redistributes the existing threads on the captive set of VMs for load balancing and for reducing the active slots. Thus, the input-rate capacity of the dataflow is bounded or the acquired captive resources can be under-utilized. Further, the use of a single worker per VM in T-Storm can be a bottleneck for queuing and routing when the total input and output rates of the threads on that VM are high. While we do not directly address dynamic scheduling, our algorithms can potentially be rerun when the input rate changes to rebalance the schedule, without the need for fine-grained monitoring. This can include acquiring and releasing VMs as well since we offer both allocation and mapping algorithms. We consider both memory and CPU usage in our model-based approach. A predictable schedule behavior is a stated goal for us rather than reducing the absolute latency through reduced communication. The \emph{P-Scheduler}~\cite{eskandari:acsw:2016} uses the ratio of the total CPU usage of all threads to the VM's CPU threshold to find the number of VMs required for the DAG at runtime. The goal here is to dynamically determine the number of VMs required at runtime based on CPU usage and then map the threads to these VMs such that the traffic among VMs is minimized. The mapping hierarchically partitions the DAG, with edge-weights representing the tuple transfer rate. It first maps threads to VMs and then re-partitions the threads within a VM to slots. This reduces the communication costs, but the partitioning can cause unbalanced CPU usage, and the memory usage is not considered at all.
While the algorithm does VM allocation, it does not consider thread allocation, which can cause VMs to be under-utilized. It also requires centralized global monitoring of the data rates between threads and the CPU\% to perform the partitioning and allocation. As mentioned before, our scheduling offers both VM and thread allocation in addition to mapping, considers the input rate, CPU\% and memory\% in its decisions, and our model does not rely on active runtime monitoring. There have been a few works on resource and thread allocation for Storm. \emph{DRS}~\cite{fu:icdcs:2015} is one such scheduler that models the DAG as an open queuing network to find the expected latency of a task for a given number of threads. Its goal is to limit the expected task latency to within a user-defined threshold at runtime, while minimizing the total number of threads required and hence the resources. It monitors the tuple arrival and service rates at every task to find the expected latency using the \emph{Erlang formula}~\cite{tijms:stochastic:1986}. The approach first finds the expected number of threads required for each task so that the latency bound is met. This is done by incrementally adding a thread to the task that gives the maximum latency improvement according to the Erlang formula discussed in the paper, but it requires an upper bound on the total number of threads to be set by the user. The paper also assumes that only a fixed number of threads can run on a VM, independent of the thread type. The number of VMs is identified using the total number of threads and the number of threads that can run on a VM, which is fixed by the user. Mapping is left to the default scheduler. DRS uses an analytical approach like us, but one based on queuing theory rather than empirical models. They apply it at runtime but do not consider the mapping of the threads to VM slots. We consider both allocation and mapping, but do not apply them to a dynamic scenario yet. Neither approach requires intensive runtime monitoring of resources and rates. Like us, they consider CPU resources and threads for allocation and not the network, but unlike us, they do not consider the memory\%. Their experiments show a mismatch between the expected and observed performance caused by the omission of network delays from their model, while our experiments do not highlight network communication as a key contributor, partly because of the modest rates supported on the DAGs and resources we consider. They also bound the total number of CPU slots and the number of threads per VM, which we relax.
\subsection{Scheduling for DSPS}
While our scheduling algorithms were designed and evaluated in the context of Storm, they are generalizable to other DSPS as well. There has been a body of work on scheduling for DSPS, both static and adaptive scheduling on Cloud VMs, besides those related to Storm. Borealis~\cite{abadi:cidr:2005}, an extension of Aurora~\cite{carney:vldb:2003}, provides parallel processing of streams. It uses local and neighbor load information for balancing the load across the cluster by moving operators. It also differs from cloud-based DSPS in that it assumes that only a fixed amount of resources is available beforehand. Some extensions to Borealis~\cite{xing:icde:2005,pietzuch:icde:2006} do not use intra-operator parallelism and consider only dynamic mapping of tasks for load balancing. TelegraphCQ~\cite{chandrasekaran:sigmod:2003} uses adaptive routing using special operators like Eddy and Juggle to optimize query plans.
These special operators decide how to route data to different operators and reorder input tuples based on their content. They also dynamically decide the optimal stream partitioning for parallel processing. These systems allocate queries to separate nodes to scale with the number of queries and are not designed to run on the cloud. COLA~\cite{khandekar:middleware:2009}, designed for System S, a scalable distributed stream processing system, aims at finding the best operator fusion (multiple operators within the same process) possible for reducing inter-process stream traffic. The approach first uses list scheduling (longest processing time) to get an operator schedule and then checks it against the VM capacity (CPU only); if the schedule is infeasible, it uses graph partitioning to split processing elements across other VMs. Thus, COLA also does not take memory-intensive operators into account. InfoSphere Streams~\cite{biem:sigmod:2010} uses a component-based programming model. It helps in composing and reconfiguring individual components to create different applications. The scheduler component~\cite{wolf:middleware:2008} finds the best partitioning of the dataflow graph and distributes it across a set of physical nodes. It uses the computational profiles of the operators and the loads on the nodes for making its scheduling decisions. Apache S4~\cite{neumeyer:icdm:2010} follows the actor model, and allocation is decided manually based on the distinct keys in the input stream. The messages are distributed across the VMs based on a hash function over the keyed attributes in the input messages. S4 schedules parallel instances of operators but does not manage their parallelism and state. Since it does not support dynamic resource management, it is unable to handle varying loads. IBM System S~\cite{amini:dmssp:2006} runs jobs in the form of dataflow graphs. It supports intra-query parallelism, but its management is manual. It also supports dynamic application composition and stream discovery, where multiple applications can directly interact. This sharing of streams across applications is enabled by annotating the messages with types already declared in a global type system. This allows applications written by different developers to be shared through streams. Esc~\cite{satzger:cloudcomputing:2011} processes streaming data as key-value pairs. Hash functions are used to balance the load by dynamically mapping the keys to machines, and the function itself can be updated at runtime. The hash function can also use the CPU and memory load from the VM statistics for message distribution. Dynamic updates of the DAG based on custom rules from the user are also supported for load balancing. A processing element in a DAG can have multiple operators and can be created at runtime as needed. There can be many workers for a processing element. Since it dynamically adjusts the required computational resources based on the current load of the system, it is a good fit for use cases with varying loads, with deployment on the cloud. The work in~\cite{kumbhare:sc:2013} uses a variant called dynamic dataflows that adapts to the changing performance of cloud resources by using alternate processing elements. The logic uses variable-sized bin packing for allocating processing elements over the VMs on the cloud. Dynamic rates are managed by allocating resources for alternate processing elements, thus making a trade-off between cost and QoS on the cloud. Elastic auto-parallelization for balancing dynamic loads in SPL applications has been proposed in~\cite{gedik:tpds:2014}.
The scaling is based on a control algorithm that monitors the congestion and throughput at runtime to adjust the data parallelism. Each operator maintains a local state for every stream partition. An incremental migration protocol is proposed for maintaining states while scaling, minimizing the amount of state transfer between hosts. StreamCloud~\cite{gulisano:tpds:2012} modifies the parallelism level by splitting queries into sub-queries, minimizing the distribution overhead of parallel processing; each sub-query has at most one stateful operator that stores its state in a \emph{tuple-bucket}, where the key for a state is a hash on a tuple. At the boundary between sub-queries, tuples are hashed and routed to the specific stateful task instances that hold the tuple-bucket with their hash key. This ensures consistent stateful operations with data-parallelism. It uses special operators called Load Balancers, placed on the outgoing edge of each instance of a sub-cluster, which perform Bucket-Instance Mapping to map buckets to instances of the downstream clusters. ChronosStream~\cite{wu:icde:2015} hash-partitions computation states into a collection of fine-grained slices and then distributes them to multiple nodes for scaling. Each slice is a computationally independent unit associated with a subset of the input streams and can be transparently relocated without affecting the consistency. Elasticity is achieved by migrating the workload to new nodes using a lightweight transactional migration protocol based on these states. ElasticStream~\cite{ishii:cloud:2011} considers a hybrid model for processing streams, since it is impossible to process them all in a local stream computing environment due to finite resources. The goal is to minimize the charges for using the cloud while satisfying the SLA, as a trade-off between the application's latency and the charges, using linear programming. The approach dynamically adjusts the resources required for dynamic rates, in place of over-provisioning with a fixed amount of resources. The implementation, done on System S, is able to assign or remove computational resources dynamically. Twitter Heron~\cite{kulkarni:twitter:2015} uses user-defined thread allocation, with the mapping done by the Aurora scheduler. The paper~\cite{li:vldb:2015} proposes an analytical model for resource allocation and dynamic mapping to meet latency requirements while maximizing the throughput for processing real-time streams on Hadoop. Stela~\cite{xu:ic2e:2016} uses the effective throughput percentage (ETP) as the metric to decide the task to be scaled when the user requests scaling in/out with a given number of machines. The number of threads required for the tasks and their mapping to slots are not discussed in the paper.
\subsection{Parallel Scheduling}
Our model-based approach is similar to scheduling strategies employed in parallel job and thread scheduling for HPC applications. Performance modeling frameworks~\cite{snavely:sc:2002,bailey:ecpp:2005,snavely:apc:2004} for large HPC systems predict application performance as a function of system profiles (e.g., memory performance, communication performance). These profiles can be analyzed to improve the application performance by understanding the tuning parameters. Further, \cite{bailey:ecpp:2005} proposes methods to reduce the time required for performance modeling, such as combining the results of several compilation, execution and performance data analysis cycles into an application signature, so that these steps need not be repeated each time a new performance question is asked.
The Warwick Performance Prediction (WARPP) simulator~\cite{hammond:icst:2009} is used to construct application performance models for complex parallel scientific codes executing on thousands of processing cores. It utilizes coarse-grained compute and network models to enable the accurate assessment of parallel application behavior at large scale. The simulator exposes six types of discrete events, ranging from compute to I/O reads and writes, to generate events representing the behavior of a parallel application. The work in~\cite{gahvari:ics:2011} models the application performance for future architectures with several millions or billions of cores. It considers algebraic multigrid (AMG), a popular and highly efficient iterative solver, to discuss the model-based predictions. It uses local computation and communication models as a baseline for predicting the performance and its scalability on future machines. The paper~\cite{hong:sigarch:2009} proposes a simple analytical model to predict the execution time of massively parallel programs on GPU architectures. This is done by estimating the number of parallel memory requests, considering the number of running threads and the memory bandwidth. The application execution time on a GPU is dominated by the latency of memory instructions. Thus, by finding the number of memory requests that can be executed concurrently (memory warp parallelism), the effective cost of the memory requests is estimated. The work in~\cite{hovestadt:wjsp:2003} proposes planning systems and compares them to queuing systems for resource management in HPC. Features like advance resource reservation and request diffusing cannot be achieved using queuing because it considers only the present resource usage. Planning systems like CCS and the Maui Scheduler do resource planning for the present and the future by assigning a start time to all requests and using runtime estimates for each job. Recent works like~\cite{escobar:hipc:2016} use a statistical approach to predict the application execution time using an empirical analysis of the execution time for small input sizes. The paper uses a collection of well-known kernel benchmarks for modeling the execution time of each phase of an application. The approach collects profiles obtained from a few short application runs to match phases to kernels and uses them for predicting the execution times accurately. Our model-based mapping of a bundle of threads also has some similarities with the \emph{co-scheduling}~\cite{ousterhout:icdcs:1982} or \emph{gang scheduling}~\cite{feitelson:jpdcs:1992} of threads in concurrent and parallel systems. In the former, a set of co-dependent processes that are part of a working set are scheduled simultaneously by the Operating System (OS) on multi-processors to avoid process thrashing. In gang scheduling, a set of interacting threads is scheduled to execute concurrently on different processors and coordinate through busy-waits. The intuition is to assign the threads to dedicated processors so that all dependent threads progress together without blocking. Our allocation and mapping based on performance models try to identify and leverage the benefits of co-scheduling coarse-grained thread bundles from the same task onto one CPU, with deterministic performance given by the models, and of separating out thread bundles from different tasks onto different slots to execute concurrently without resource contention.
We also encounter issues seen in gang scheduling that may cause processor over- or under-allocation if the sizes of the gangs do not match the number of processors, which is similar to the partial thread bundles mapped to the same slot in our case, causing interference but reusing partial slots~\cite{feitelson:jc:1990}. At the same time, we perform the thread-to-slot mapping once in the streaming dataflow environment, and do not have to remap unless the input rate changes. Hence, we do not deal with recurrent or fine-grained scheduling issues such as constantly having to schedule threads because the number of threads is much larger than the number of CPU cores, paired gang scheduling for threads with interleaved blocking I/O and compute~\cite{wiseman:tpds:2003}, or admission control due to inadequate resources~\cite{batat:ipdps:2000}.
\section{Conclusion}
\label{sec:conclusions}
Based on these results, we see that LSA+RSM consistently allocates more resources than MBA+SAM, often twice as many slots, due to its linear extrapolation of rate and resources. However, it still falls short of its planned supported input rate by $30-40\%$ in several cases due to the unbalanced mapping by RSM, where the rate does not scale as it expects. We see a $5-10\%$ drop for MBA+SAM due to the shuffle grouping that uniformly routes tuples to threads. Also, RSM often requires more resources than the ones allocated by LSA due to fragmentation during bin-packing, though this tends to be marginal. SAM has less fragmentation due to packing full bundles to exclusive slots. These observations hold for all DAGs, both micro and application, small and large. The model-based prediction of input rates is much more accurate than the planned prediction, correlating with the actual rate with an $R^2 \ge 0.71$. The few outliers we see are due to the model expecting a different routing compared to Storm's shuffle grouping, and due to the interpolation of rates based on the granularity of the performance models. MBA is consistently better than LSA in the input rate supported for the same quanta of resources, though MBA+DSM shows the least improvement. MBA+RSM is often better than MBA+SAM in the actual rate, though MBA+SAM gives a predictable observed rate. Our performance model is able to predict the resource utilization for individual VMs with high accuracy, having an $R^2$ value $\geq0.81$ for CPU\% and $\geq0.55$ for memory\%, independent of the allocation and mapping technique used. The few prediction errors we see are due to threads receiving less than the peak rate for processing, where our model proportionally scales down the estimated resource usage relative to the single-thread usage at the peak rate. The low memory\% also causes the error to be sensitive to even small skews in the prediction, giving a lower correlation coefficient value. MBA consistently has a higher resource utilization than LSA, which is also reflected in its better input rate performance. While the resource usage across VMs for schedules based on MBA is tightly grouped, RSM shows a greater variation of its CPU\% and memory\% across VMs.
\section{Future Work}
The current slot-aware mapping does not consider load-aware shuffle grouping; leveraging it would improve the accuracy of predicting the supported input rate and the resource requirement. Dynamic resource allocation and mapping for a given distribution of input rates, or for the input rate monitored at runtime, is also part of our future work.
\bibliographystyle{plain}
\footnotesize{
\section*{Results}
\subsection*{Optimal projective measurements for a single two-level system}
\label{optimalconditions}
Let us begin by considering a single two-level system interacting with an arbitrary environment. The Hamiltonian of the system is $H_S$, the environment Hamiltonian is $H_{B}$, and there is some interaction between the system and the environment that is described by the Hamiltonian $V$. The total system-environment Hamiltonian is thus $H = H_S + H_B + V$. At time $t = 0$, we prepare the system state $\ket{\psi}$. In the usual treatment of the quantum Zeno and anti-Zeno effects, repeated projective measurements described by the projector $\ket{\psi}\bra{\psi}$ are then performed on the system with time interval $\tau$. The survival probability of the system state after one measurement is then $s(\tau) = \text{Tr}_{S,B}[(\ket{\psi}\bra{\psi} \otimes \mathds{1}) e^{iH_S\tau}e^{-iH\tau} \rho(0) e^{iH\tau}e^{-iH_S\tau}]$, where $\text{Tr}_{S,B}$ denotes taking the trace over the system and the environment, $\rho(0)$ is the initial combined state of the system and the environment, and the evolution of the system state due to the system Hamiltonian itself has been eliminated via a suitable unitary operation just before performing the measurement \cite{MatsuzakiPRB2010,ChaudhryPRA2014zeno,Chaudhryscirep2016}. Assuming that the system-environment correlations can be neglected, the survival probability after $N$ measurements can be written as $[s(\tau)]^N = e^{-\Gamma(\tau)N\tau}$, thereby defining the effective decay rate $\Gamma(\tau)$. It should be noted that the behaviour of the effective decay rate $\Gamma(\tau)$ as a function of the measurement interval allows us to identify the Zeno and anti-Zeno regimes. Namely, if $\Gamma(\tau)$ increases as $\tau$ increases, we are in the Zeno regime, while if $\Gamma(\tau)$ decreases as $\tau$ increases, we are in the anti-Zeno regime \cite{KurizkiNature2000, SegalPRA2007, ThilagamJMP2010, ChaudhryPRA2014zeno,Chaudhryscirep2016}. We now consider an alternative way of repeatedly preparing the initial state with time interval $\tau$. Once again, we start from the initial system state $\ket{\psi}$. After time $\tau$, we know that the state of the system is given by the density matrix $\rho_S(\tau) = \text{Tr}_B[e^{iH_S\tau}e^{-iH\tau} \rho(0) e^{iH\tau}e^{-iH_S\tau}]$, where once again the evolution due to the free system Hamiltonian has been removed. Now, instead of performing the projective measurement $\ket{\psi}\bra{\psi}$, we perform an arbitrary projective measurement given by the projector $\ket{\chi}\bra{\chi}$. The survival probability is then $s(\tau) = \text{Tr}_S[(\ket{\chi}\bra{\chi})\rho_S(\tau)]$, and the post-measurement state is $\ket{\chi}$. By performing a unitary operation $U_R$ on the system state on a short timescale, where $U_R\ket{\chi} = \ket{\psi}$, we can again end up with the initial state $\ket{\psi}$ after the measurement. This process can then, as before, be repeated again and again to repeatedly prepare the system state $\ket{\psi}$. Once again, if the correlations between the system and the environment can be neglected, we can write the effective decay rate as $\Gamma(\tau) = -\frac{1}{\tau}\ln s(\tau)$. But now we can, in principle, via a suitable choice of the projector $\ket{\chi}\bra{\chi}$, obtain a larger survival probability (and a correspondingly smaller decay rate) than what was obtained with repeatedly using projective measurements given by the projector $\ket{\psi}\bra{\psi}$.
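As a minimal numerical illustration of these definitions (and not of any particular system-environment model), the following Python sketch converts a single-measurement survival probability $s(\tau)$ into the effective decay rate $\Gamma(\tau) = -\ln s(\tau)/\tau$ and uses the sign of $d\Gamma/d\tau$ to label each measurement interval as belonging to the Zeno or the anti-Zeno regime; the functional form chosen for $s(\tau)$ below is an arbitrary toy example.
\begin{verbatim}
import numpy as np

def effective_decay_rate(s_tau, tau):
    """Gamma(tau) = -ln s(tau) / tau for a single-measurement survival
    probability, assuming system-environment correlations are negligible."""
    return -np.log(s_tau) / tau

def survival_after_N(s_tau, N):
    """Survival probability after N repeated preparations: [s(tau)]^N."""
    return s_tau ** N

# Arbitrary toy survival probability (not derived from any model):
tau = np.linspace(0.1, 5.0, 100)
s = 1.0 - 0.05 * tau**2 * np.exp(-tau)
gamma = effective_decay_rate(s, tau)

# Zeno regime: Gamma increases with tau; anti-Zeno: Gamma decreases.
regime = np.where(np.gradient(gamma, tau) > 0, "Zeno", "anti-Zeno")
print(regime[0], regime[-1], survival_after_N(s[-1], N=10))
\end{verbatim}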
The question, then, is what is this projector $\ket{\chi}\bra{\chi}$ that should be chosen to maximize the survival probability? For an arbitrary quantum system, it is difficult to give a general condition or formalism that will predict this optimal projective measurement. However, most studies of the effect of repeated quantum measurements on quantum systems have been performed by considering the quantum system to be a single two-level system \cite{KoshinoPhysRep2005}. Let us now show that if the quantum system is a two-level system, then it is straightforward to derive a general method for calculating the optimal projective measurements that need to be performed as well as an expression for the optimized decay rate. We start from the observation that the system density matrix at time $\tau$, just before the measurement, can be written as \begin{equation} \label{do} \rho_{S}(\tau) = \frac{1}{2} \Big (\mathds{1} + n_{x}(\tau)\sigma_{x} + n_{y}(\tau)\sigma_{y} + n_{z}(\tau)\sigma_{z} \Big ) = \frac{1}{2} \Big ( \mathds{1} + \mathbf{n}(\tau) \cdot \mathbf{\sigma} \Big ), \end{equation} where $\mathbf{n}(\tau)$ is the Bloch vector of the system state. We are interested in maximizing the survival probability $s(\tau) = \text{Tr}_S[(\ket{\chi}\bra{\chi})\rho_S(\tau)]$. It is clear that we can also write \begin{equation} \label{dop} \ket {\chi} \bra {\chi} = \frac{1}{2} \Big ( \mathds{1} + n'_{x}\sigma_{x} + n'_{y}\sigma_{y} + n'_{z}\sigma_{z} \Big ) = \frac{1}{2} \Big ( \mathds{1} + \mathbf{n'} \cdot \mathbf{\sigma} \Big ), \end{equation} where $\mathbf{n'}$ is a unit vector corresponding to the Bloch vector for the projector $\ket{\chi}\bra{\chi}$. Using Eqs.~\eqref{do} and \eqref{dop}, we find that the survival probability is \begin{equation} s(\tau) = \frac{1}{2}\left(1 + \mathbf{n}(\tau) \cdot \mathbf{n'} \right). \end{equation} It should then be obvious how to find the optimal projective measurement $\ket{\chi}\bra{\chi}$ that needs to be performed. The maximum survival probability is obtained if $\mathbf{n'}$ is parallel to $\mathbf{n}(\tau)$. If we know $\rho_S(\tau)$, we can find out $\mathbf{n}(\tau)$. Consequently, $\mathbf{n'}$ is simply the unit vector parallel to $\mathbf{n}(\tau)$. Once we know $\mathbf{n'}$, we know the projective measurement $\ket{\chi}\bra{\chi}$ that needs to be performed. The corresponding optimal survival probability is given by \begin{equation} \label{optimizedprobability} s^{*}(\tau) = \frac{1}{2}\left(1 + \norm{\mathbf{n}(\tau)}\right). \end{equation} Now, if we ignore the correlations between the system and environment, which is valid for weak system-environment coupling, we can again derive the effective decay rate of the quantum state to be $\Gamma(\tau) = -\frac{1}{\tau}\ln s^{*}(\tau)$. We now investigate the optimal effective decay rate for a variety of system-environment models. \begin{comment} after optimizing over \eqref{optimalprob1}, the expression for the optimal survival probability is given by: \begin{equation}\label{optimalprob} s^{*}(\tau) = \frac{1}{2} \Big (1 + \norm{n(\tau)} \Big ), \end{equation} where $\norm{n(\tau)}$ is the Bloch vector of the system's density matrix at time $\tau$. We assume that the system-environment interaction is weak enough such that we can safely ignore the build-up of the system-environment correlations as the system and the environment evolve together\cite{ChaudhryPRA2014zeno}. 
In this case if one makes $N$ optimal projective measurements, the survival probability, $S(\tau)$, is given by $S(\tau) = [s^{*}(\tau)]^N$, where $s(\tau)$ is the survival probability associated with one measurement. We define $S(\tau) = e^{- \Gamma(\tau) N \tau}$. From this definition, we note that the effective decay rate is given by $\Gamma(\tau) = - \frac{1}{\tau} \ln s^{*} (\tau)$. We now use these results for the case of various models for a single two level system coupled to an environment. \end{comment} \begin{comment} that it is very easy to derive the optimal measurement that needs to be performed in this case. We are interested in determining the optimal projective measurements one could make to maximize the probability, under a single measurement, for the state of the system to collapse to the state corresponding to the projective measurement that is made. Assume that Hamiltonian can be written written in the form $H = H_S + H_B + V$, where $H_S$ is the system Hamiltonian, $H_B$ is the Hamiltonian of the environment, and $V$ describes the system-environment interaction. Assume that the system is weakly coupled to an external environment (a bath) such that the density matrix of the composite (total) system is: \begin{equation} \rho_{T} (0) = \rho_{S}(0) \otimes \rho_{B}. \end{equation} We don't assume a specific functional form of the time-evolved density operator of the composite system at a later time $\tau, \rho_T(\tau)$, since both the system and the bath could represent an entangled composite state. However, since the system of interest is a two-level system, we can easily work in the Bloch picture by writing the time-evolved density matrix of the system as: \end{comment} \subsection*{The population decay model} \begin{figure} {\includegraphics[scale = 0.55]{PopulationDecayMerged2.pdf}}\caption{\textbf{Behaviour of both the survival probability and the effective decay rate for the population decay model.} \textbf{(a)} $s(\tau)$ versus $\tau$. The purple dashed curve shows the survival probability if the excited state is repeatedly measured; the black curve shows the survival probability if the optimal projective measurement is repeatedly made. \textbf{(b)} $\Gamma(\tau)$ versus $\tau$. The blue dashed curve shows the decay rate if the excited state is repeatedly measured; the solid red curve shows the decay rate if the optimal projective measurement is repeatedly made. We have used $G = 0.01, \omega_c = 50$ and $\varepsilon = 1$. In this case, $\tau^* \approx 10.6$.} \label{PopDecayMerged2} \end{figure} To begin, we consider the paradigmatic population decay model. The system-environment Hamiltonian is (we use $\hbar = 1$ throughout) \begin{equation} H = \frac{\varepsilon}{2}\sigma_{z} + \sum_{k} \omega_{k} b_{k}^{\dagger} b_{k} + \sum_{k} (g_{k}^{*}b_{k}\sigma^{+} + g_{k}b_{k}^{\dagger}\sigma^{-}), \end{equation} where $\varepsilon$ is the energy difference between the two levels, $\sigma_{z}$ is the standard Pauli matrix, $\sigma^{+}$ and $\sigma^{-}$ are the raising and lowering operators, and $b_k$ and $b_k^{\dagger}$ are the annihilation and creation operators for mode $k$ of the environment. It should be noted that here we have made the rotating-wave approximation. This system-environment Hamiltonian is widely used to study, for instance, spontaneous emission \cite{Scullybook}. We consider the very low temperature regime. 
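Before working out the dynamics of this model, it is useful to note how the optimal-measurement rule derived above is applied in practice once the system density matrix at the measurement time is known. The short Python sketch below extracts the Bloch vector from a qubit density matrix, forms the optimal projector parallel to it, and returns the optimized survival probability of Eq.~\eqref{optimizedprobability}; the density matrix used in the example is an arbitrary illustrative choice and is not obtained from the population decay model.
\begin{verbatim}
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def optimal_measurement(rho):
    """Return the Bloch vector n(tau) of rho_S(tau), the optimized survival
    probability (1 + |n|)/2, and the optimal projector (1 + n_hat.sigma)/2."""
    n = np.real(np.array([np.trace(rho @ P) for P in (SX, SY, SZ)]))
    norm = np.linalg.norm(n)
    n_hat = n / norm if norm > 0 else np.array([0.0, 0.0, 1.0])
    chi = 0.5 * (np.eye(2) + n_hat[0] * SX + n_hat[1] * SY + n_hat[2] * SZ)
    return n, 0.5 * (1.0 + norm), chi

# Arbitrary example density matrix (not the solution of the model above):
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
n, s_opt, chi = optimal_measurement(rho)
print(n, s_opt)   # n = [0.4, 0.2, 0.4], s_opt = 0.8
\end{verbatim}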
We initially prepare the system-environment state $\ket{\uparrow_z,0}$ that describes the two-level system to be in the excited state and the environment oscillators to be in their ground state. Ordinarily, in studies of the QZE and the QAZE, the system is repeatedly projected onto the excited state with time interval $\tau$. As discussed before, we, on the other hand, allow the system to be projected onto some other state such that the effective decay rate is minimized. To find this optimal projective measurement, we need to understand how the Bloch vector of the system evolves in time. Due to the structure of the system-environment Hamiltonian, the system-environment state at a later time $t$ can be written as $ | \psi(t) \rangle = f(t) \ket{\uparrow_z, 0} + \sum_{k} f_k(t) \ket{\downarrow_z, k}$, where $\ket{\downarrow_z, k}$ means that the two-level system is in the ground state and that mode $k$ of the environment has been excited. It then follows that the density matrix of the system at time $t$ is
\begin{align}\label{reduceddebsitymatrix} \rho_{S}(t)& = \begin{bmatrix} |f(t)|^2 & 0 \\ 0 & \displaystyle \sum_{k} |f_{k}(t) |^2 \end{bmatrix} \end{align}
\begin{comment} Using \eqref{pop.psti.t}, we calculate the time-evolved density matrix of the composite (system and environment) system, which is $\rho_{T}(\tau) = | \psi(\tau) \rangle \langle \psi(\tau) |$. This expression takes the form \begin{align} \label{pop.density} \rho_{T}(\tau) = |f(\tau)|^{2} \ket{\uparrow_z, 0} \bra{\uparrow_z, 0} + f(\tau) \sum_{k'} \ket {\uparrow_z, 0} f_{k'}^{*}(\tau) \bra{\downarrow_z, k'} + f^{*}(\tau) \sum_{k} f_k(\tau) \ket{\downarrow_z, k} \bra{\uparrow_z, 0} + \sum_{k, k'}f_k(\tau)f_{k'}^{*}(\tau) \ket{\downarrow_z, k} \bra{\downarrow_z, k'}. \end{align} The aforementioned expression expresses the density operator for the total (composite) system. To calculate the density operator for the two level system, we take the partial trace of \eqref{pop.density}. The result is: \begin{comment} \begin{align}\label{partialtrace} \rho_{S}(\tau) & = \Tr_{B} [ \rho_{T}(\tau) ] = \sum_{m} (\mathds{1}_{A} \otimes \bra {m}_{B}) \rho_{T}(\tau) (\mathds{1}_{A} \otimes \ket{m}_{B}). \end{align} To avoid making the equations cumbersome, we evaluate each of the four terms (labeled alphabetically) in \eqref{partialtrace} individually. \begin{align} \label{first} A & = \sum_{m} |f(\tau)|^2 \Big ( \mathds{1}_{A} \otimes \bra {m}_{B} \Big ) \ket{\uparrow_z, 0} \bra{\uparrow_z, 0} \Big ( \mathds{1}_{A} \otimes \ket{m}_{B} \Big ) \nonumber \\ & = |f(\tau)|^2 \sum_{m} \ket {\uparrow_z} \braket{m|0}_{B} \bra{\uparrow_z} \braket{0|m}_{B}, \nonumber \\ & = |f(\tau)|^2 \sum_{m} \delta_{m_{B}, 0_{B}} \ket {\uparrow_z} \bra{\uparrow_z}, \nonumber \\ & = |f(\tau)|^2 \ket {\uparrow_z} \bra{\uparrow_z}. \end{align} \begin{align} \label{second} B & = f(\tau) \sum_{k', m} f_{k'}^{*}(\tau) \Big ( \mathds{1}_{A} \otimes \bra {m}_{B} \Big ) \ket {\uparrow_z, 0} \bra{\downarrow_z, k'} \Big ( \mathds{1}_{A} \otimes \ket{m}_{B} \Big ) \nonumber, \\ & = f(\tau) \sum_{k', m} f_{k'}^{*}(\tau) \ket {\uparrow_z} \braket{m|0}_{B} \bra{\downarrow_z} \braket{k'|m}_{B}, \nonumber \\ & = f(\tau) \sum_{k', m} f_{k'}^{*}(\tau) \delta_{m_{B}, 0_{B}} \delta_{k'_{B}, m_{B}} \ket{\uparrow_z} \bra{\downarrow_z}, \nonumber \\ & = 0. \end{align} The last equality follows since the sequence of two Kronecker delta functions imply that we must have: \begin{equation} \label{contradiction} m_{B} = k'_{B} = 0_{B}.
\end{equation} Equation \eqref{contradiction} is not true since by assumption, \begin{equation} k_{B}' \geq 1, \end{equation} since we assume that excitations of the number state cause transitions from the excited state to the ground state in the two level system. If $k'_{B} = 0$, then by assumption the construction in \eqref{second} would not hold true. A similar line of reasoning allows use to conclude that the other off-diagonal term, labeled $C$, must also be zero. \begin{align} \label{fourth} D & = \sum_{m, k, k'} f_{k}(\tau) f_{k'}^{*}(\tau) \Big ( \mathds{1}_{A} \otimes \bra {m}_{B} \Big ) \ket {\downarrow_z, k} \bra{\downarrow_z, k'} \Big ( \mathds{1}_{A} \otimes \ket{m}_{B} \Big ) \nonumber \\ & = \sum_{m, k, k'} f_{k}(\tau) f_{k'}^{*}(\tau) \ket {\downarrow_z} \braket{m|k}_{B} \bra{\downarrow_z} \braket{k'|m}_{B} \nonumber \\ & = \sum_{m, k, k'} f_{k}(\tau) f_{k'}^{*}(\tau) \delta_{m_{B}, k_{B}} \delta_{k'_{B}, m_{B}} \ket{\downarrow_z} \bra{\downarrow_z}, \nonumber \\ & = \sum_{k} |f_{k}(\tau) |^2 \ket{\uparrow_z} \bra{\downarrow_z}. \end{align} Equations \eqref{first}, \eqref{second} and \eqref{fourth} imply that the matrix representation of reduced density matrix of the system, written in the basis of the $\sigma_{z}$ operator, is: \end{comment} We consequently find that the components of the Bloch vector of the system are $n_{x}(t) = n_{y} (t) = 0$, while $n_{z}(t) = 1 - 2 \displaystyle \sum_{k} |f_{k}(t) |^2$. \begin{comment} \begin{align}\label{nzpop} n_{z}(\tau) & = \Tr (\sigma_{z} \rho(\tau)), \nonumber \\ & = \Tr \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} |f(\tau)|^2 & 0 \\ 0 & \displaystyle \sum_{k} |f_{k}(\tau) |^2 \end{bmatrix} \nonumber, \\ & = \Tr \begin{bmatrix} |f(\tau)|^2 & 0 \\ 0 & - \displaystyle \sum_{k} |f_{k}(\tau) |^2 \end{bmatrix} \nonumber, \\ & = |f(\tau)|^2 - \displaystyle \sum_{k} |f_{k}(\tau) |^2, \nonumber \\ & = 1 - 2 \displaystyle \sum_{k} |f_{k}(\tau) |^2, \end{align} \end{comment} Thus, we have a nice interpretation for the dynamics of the system. Initially, the system is in the excited state. As time goes on, the coherences remain zero, and the probability that the system makes a transition to the ground state increases. In other words, initially the Bloch vector of the system is a unit vector with $n_z(0) = 1$. The Bloch vector then decreases in magnitude (while keeping the $x$ and $y$ components zero) until the size of the Bloch vector becomes zero. The Bloch vector thereafter flips direction and increases in length until effectively $n_z(t) = -1$. Since the direction of the Bloch vector corresponding to the optimal measurement is parallel to the Bloch vector of the system, we find that if the measurement interval is short enough such that $n_z(\tau) > 0$, then we should keep on applying the projector $\ket{\uparrow_z}\bra{\uparrow_z}$. On the other hand, if the measurement interval is large enough so that $n_z(\tau) < 0$, then we should rather apply the projector $\ket{\downarrow_z}\bra{\downarrow_z}$, and then, just after the measurement, apply a $\pi$ pulse so as to end up with the system state $\ket{\uparrow_z}$ again. In other words, the time $t = \tau^*$ at which the Bloch vector flips direction is of critical importance to us and needs to be found out in order to optimize the effective decay rate. To find this time, we assume that the system and the environment are weakly coupled and thus we can use a master equation to analyze the dynamics of the system. 
Since we numerically solve this master equation, we might as well put back in the non-rotating wave approximation terms so that the system-environment interaction that we consider in solving the master equation is $\sum_{k} \sigma_{x} (g_{k}^{*}b_{k} + g_{k}b_{k}^{\dagger})$. The master equation that we use can be written as \begin{equation} \label{masterequation} \frac{d \rho_S (t)}{dt} = i[\rho_S(t), H_{S}] + \int_{0}^{t} ds \Big \{ [\bar{F}(t,s)\rho_S(t), F ]C_{t s} + \; \text{h.c.} \Big \}, \end{equation} where the system Hamiltonian is $H_S = \frac{\varepsilon}{2}\sigma_z$, the system-environment interaction Hamiltonian has been written as $F \otimes B$ with $F = \sigma_x$ and $B = \sum_{k} (g_{k}^{*}b_{k} + g_{k}b_{k}^{\dagger})$, $\bar{F}(t,s) = e^{iH_S(t -s)}Fe^{-iH_S(t -s)}$, and h.c. denotes the hermitian conjugate. Here the environment correlation function $C_{ts}$ is defined as $C_{ts} = \text{Tr}_B [\widetilde{B}(t)\widetilde{B}(s) \rho_B]$ where $\widetilde{B}(t) = e^{iH_B t} B e^{-iH_B t}$ and $\rho_B = e^{-\beta H_B}/Z_B$ with $Z_B$ the partition function. To find the environment correlation function, we introduce the spectral density function $J(\omega) = G \omega^s \omega_{c}^{1-s} e^{- \omega/ \omega_c}$, where $G$ parametrizes the system-environment coupling strength, $s$ characterizes the Ohmicity of the environment, and $\omega_c$ is the cutoff frequency. We can then numerically solve this differential equation to find the system density matrix at any time $t$ and consequently the Bloch vector of the system. We consequently know the optimal projective measurement that needs to performed. \begin{comment} In the absence of the system-environment coupling, the state of the system will remain in the excited state. In the presence of the system-environment coupling, an excitation of one of the modes in the environment can cause transitions in the system: the quantum mechanical particle can make the transition from being in the excited state to the ground state, $\ket{\downarrow_z}$. The probability that the particle is in the ground state and the $k^{th}$ of the environment has been excited is given by $|f_{k}(\tau)|^2$. At $\tau = 0$, the $|f_{k}(\tau)|^2$ is zero for every $k$ since the system is in the excited state and the environment is in the ground state; hence, we have that $n_z(0)$ is 1. At a later time $\tau$, the system could have made a transition to the ground state with at least any one of the modes of the system being excited. Clearly, in this case $n_z(0)$ is less than $1$. The effect of such a time evolution would be that the magnitude of $n_z$ will continue to decrease until it is effectively at $-1$. We are interested in the time $\tau^*$ when the system's Bloch vector, specified by $n_z$, flips its sign. This is true because for $\tau < \tau^*$, the Bloch vector of the system would be parallel to the Bloch vector specifying the pure state, $\ket{\uparrow_z}$. Hence, we would continue to measure the projector $\ket{\uparrow_z} \bra {\uparrow_z}$ in so far as $\tau < \tau^*$. If $\tau > \tau^*$, the Bloch vector of the system would be parallel to the Bloch vector specifying the pure state, $\ket{\downarrow_z}$. Hence, it would now be optimal to measure the projector $\ket{\downarrow_z} \bra {\downarrow_z}$ in so far as $\tau > \tau^*$. 
\end{comment} \begin{comment} While first-order time-dependent perturbation theory can be used to calculate $f_k(\tau)$ -- and hence $f(\tau)$ -- which will yield an expression for the survival probability -- we choose not to use this approach since for very weak system-environment coupling, the time at which the Bloch vector flips, $\tau^*$, may be very large. Perturbation theory is almost certain to fail for large times. Instead, we numerically evaluate the dynamics of the system using the Redfield master equation\cite{ChaudhryThesis} : \begin{equation} \label{masterequation} \frac{d \rho (\tau)}{d \tau} = i[\rho(\tau),\mathcal{H}_{S}(\tau)] + \int_{t_0}^{\tau} ds \Big \{ [\bar{F}(\tau,s)\rho(\tau), F(\tau) ]C_{\tau s} + \; \text{h.c.} \Big \}, \end{equation} where the first term denotes coherent evolution of the system. $F(\tau)$ denotes the system's operator at time $\tau$ present in the system-environment coupling term, which is assumed to be of the form $F(\tau) \otimes B(\tau)$; here, $B(\tau)$ is the environment's operator in the coupling term. The effect of the environment's operator in the coupling term is captured in the bath correlation function, $C_{\tau s}$. In addition, $\bar{F}(\tau,s)$ is given by the equation $\bar{F}(\tau,s) = U_{S}(\tau, s) F(s) U_{S}^{\dagger}(\tau, s).$ \begin{figure} {\includegraphics[scale = 0.55]{PopulationDecayMerged1.pdf}} \caption{ \label{PopDecayMerged1} \textbf{Behaviour of both the survival probability and the effective decay rate for the population decay model.} \textbf{(a)} $s(\tau)$ versus $\tau$. The purple dashed curve shows the survival probability if the excited state is repeatedly measured; the black curve shows the survival probability if the optimal projective measurement is repeatedly made. \textbf{(b)} $\Gamma(\tau)$ versus $\tau$. The blue dashed curve shows the decay rate if the excited state is repeatedly measured; the red curve shows the decay rate if the optimal projective measurement is repeatedly made. We have used $G = 0.1, \omega_c = 50$ and $\varepsilon = 1$. In this case, $\tau^* \approx 0.6$. For $\tau = 1.2$ and $N = 2$, $\Delta s(\tau)= 0.3 !$} \end{figure} We first note the coherent evolution of the system is zero. This is clear because the density matrix of the system, \eqref{reduceddebsitymatrix}, is diagonal for all times $\tau$; hence, it commutes with the system's Hamiltonian, $\frac{\varepsilon}{2} \sigma_z$ at all times $\tau$. We also note that the coupling term in the Hamiltonian is given by $\sum_{k} (g_{k}^{*}b_{k}\sigma^{+} + g_{k}b_{k}^{\dagger}\sigma^{-})$, which is of the form $\sum_{\alpha = 1}^{2} F_{\alpha}(\tau) \otimes B_{\alpha}(\tau)$. In order to use \eqref{masterequation}, we must cast the coupling term to be a single tensor product of two operators -- as stated in the previous paragraph. We take the coupling term to be $\sum_{k} \sigma_{x} (g_{k}^{*}b_{k} + g_{k}b_{k}^{\dagger})$. The only difference between the new coupling term and the old coupling term is that the new coupling term contains the non-rotating wave approximation terms. However, we expect that these additional terms to not play a significant role in the weak system environment-coupling regime. 
Noting that all the operators in our case show no explicit time dependence, the master equation for our model assumes the form: \begin{equation} \label{masterequation1} \frac{d \rho (\tau)}{d \tau} = \int_{t_0}^{\tau} ds \Big \{ [\bar{F}(\tau,s)\rho(\tau), F]C(\tau) + \; \text{h.c.} \Big \}, \end{equation} where $\bar{F}(\tau,s) = U_{S}(\tau, s) F U_{S}^{\dagger}(\tau, s)$ and $C(\tau) \equiv C_{\tau s} = \langle \bar{B}(\tau) B \rangle$. Note that in the aforementioned operators, the time dependence arises since we are considering the Heisenberg picture operators. We also note that h.c. denotes the hermitian conjugate of the expression in the curly bracket. We also note that the Redfield master equation holds true when one assumes both that the system is weakly coupled to the environment and that the composite state of the system is given by a product state at time $\tau = 0$. Since we have consistently made these assumptions in our analysis thus far, we proceed to numerically evaluate the master equation, \eqref{masterequation1}. \end{comment}
\begin{figure}
{\includegraphics[scale = 0.5]{Dephasing1.pdf}}
\caption{\label{Dephasing1}\textbf{Graphs of the effective decay rate under making optimal projective measurements in the pure dephasing model.} \textbf{(a)} $\Gamma(\tau)$ versus $\tau$ for the initial state specified by the Bloch vector $(1, 0, 0)$. The thickened light gray curve shows the decay rate if the initial state is repeatedly measured; the green curve shows the decay rate if the optimal projective measurement is repeatedly made. It is clear from the figure that the two curves identically overlap. \textbf{(b)} $\Gamma(\tau)$ versus $\tau$ for the initial state specified by the Bloch vector $(1/\sqrt{3}, 1/\sqrt{3}, 1/\sqrt{3} )$. The blue dashed curve shows the decay rate if the initial state is repeatedly measured; the solid red curve shows the decay rate if the optimal projective measurement is repeatedly made. We have used $G = 0.1, \omega_c = 10, \beta = 0.5$. For $\tau = 1$ and $N = 3$, the difference in the survival probabilities is already $0.15$.}
\end{figure}
We now present a computational example (see the previous page). We plot the single-measurement survival probability [see Fig.~\ref{PopDecayMerged2}(a)] and the effective decay rate [see Fig.~\ref{PopDecayMerged2}(b)] as a function of the measurement interval $\tau$. The dashed curves illustrate what happens if we keep on projecting the system state onto $\ket{\uparrow_z}$. For a small measurement interval, the optimal measurement is $\ket{\uparrow_z}\bra{\uparrow_z}$ since the system Bloch vector has only a positive $z$-component. On the other hand, if the measurement interval is long enough that the Bloch vector flips direction between measurements, then to maximize the survival probability, we should instead project onto the state $\ket{\downarrow_z}$ and then apply a $\pi$ pulse. Doing so, we can obtain a higher survival probability or, equivalently, a lower effective decay rate. This is precisely what we observe from the figure. For this population decay case, we find that if the measurement interval is larger than $\tau^* \approx 10.6$, then we are better off performing the measurement $\ket{\downarrow_z}\bra{\downarrow_z}$. There is also a small change in the anti-Zeno behaviour.
For our modified strategy of repeatedly preparing the quantum state, we find that beyond measurement interval $\tau = \tau^*$, there is a sharper signature of anti-Zeno behaviour as compared to the usual strategy of repeatedly measuring the excited state of the system. \begin{comment} From the figure, it is clear that $\tau^* \approx 0.6$. For $\tau < \tau^*$, one is already make the optimal projective measurement by repeatedly measuring the excited state. For $\tau > \tau^*$, it is now optimal to measure the ground state of the system. In so far as the condition $\tau \ggg \tau^*$ is not met, one can repeatedly measure the ground state of the system at equal time intervals $\tau$, such that $\tau > \tau^*$. If the the measurement leads to the quantum state of the system collapsing to the ground state, one can implement a unitary time evolution operation such that, $ U(\tau', \tau) \ket{\downarrow_z} = \ket{\uparrow_z} \quad \text{such that} \quad \tau' \approx \tau > \tau^*$. Using this method, one can `forcefully' collapse the quantum state of the system to the ground state and recover the excited state. If one waits for long times $\tau$ such that the condition $\tau \ggg \tau^*$ is met, the quantum state of the system is already likely to be in the ground state of the system due to spontaneous emission caused by the system-environment coupling. In that case, implementing the unitary time evolution is trivial since one is choosing to implement it after spontaneous emission has occurred. On the other hand as argued above, if $\tau \approx \tau^*$ and $\tau > \tau^*$, one can effectively freeze the quantum state of the system to be in the excited state by repeatedly measuring the ground state of the system followed by implementing the unitary time evolution. In this case, the relevant time interval in which one can make the relevant measurement and effectively freeze the quantum state of the system is approximately $[0.6, 1.0]$. In FIG. \ref{PopDecayMerged2}, we plot another computational example for the case where the system-environment coupling is weaker compared to FIG. \ref{PopDecayMerged1}. The same interpretation as in the previous paragraphs applies. In this case, $\tau^{*} \approx 10.6$. \end{comment} \subsection*{Pure dephasing model} \label{dephasing} \begin{figure} {\includegraphics[scale = 0.6]{Dephasing2.pdf}} \caption{\label{Dephasing2}\textbf{Graphs of the effective decay rate and the optimal spherical angles under making optimal projective measurements in the pure dephasing model.} \textbf{(a)} $\Gamma(\tau)$ versus $\tau$ for the initial state specified by the Bloch vector $(1/\sqrt{10}, 0, \sqrt{9/10} )$. The same parameters as used in Fig.~\ref{Dephasing1} have been used for this case as well. \textbf{(b)} Graphs of the optimal spherical angles that maximize the survival probability. $\theta$ is the polar angle and $\alpha$ is the azimuthal angle, which remains $0$ at all times.} \end{figure} We now analyze our strategy for the case of the pure dephasing model \cite{BPbook}. The system-environment Hamiltonian is now given by \begin{equation} H = \frac{\varepsilon}{2}\sigma_{z} + \sum_{k} \omega_{k} b_{k}^{\dagger} b_{k} + \sigma_{z} \sum_{k} (g_{k}^{*}b_{k} + g_{k}b_{k}^{\dagger}). \end{equation} The difference now compared to the previous population decay model is that the system-environment coupling term now contains $\sigma_z$ instead of $\sigma_x$. 
This difference implies that the diagonal entries of the system density matrix $\rho_S(t)$ (in the $\ket{\uparrow_z}$, $\ket{\downarrow_z}$ basis) cannot change; only dephasing can take place, which is why this model is known as the pure dephasing model \cite{ChaudhryPRA2014zeno}. Furthermore, this pure dephasing model is exactly solvable. The off-diagonal elements of the density matrix undergo both unitary time evolution due to the system Hamiltonian and non-unitary time evolution due to the coupling with the environment. Assuming the initial state of the total system is the standard product state $\rho_S(0) \otimes \rho_B/Z_B$, the off-diagonal elements of the density matrix $[\rho_S(t)]_{mn}$ (with $m \neq n$), once the evolution due to the system Hamiltonian itself is removed, are given by $[\rho_S(t)]_{mn} = [\rho_{S}(0)]_{mn} e^{- \gamma(t)}$ where $\gamma(t) = \sum_k 4|g_k|^2 \frac{(1 - \cos (\omega_k t))}{\omega_k^2} \coth \left( \frac{\beta \omega_k}{2} \right)$ \cite{ChaudhryPRA2014zeno}. Writing an arbitrary initial state of the system as $\ket \psi = \cos \Big (\frac{\theta}{2} \Big ) \ket{\uparrow_z} + \sin \Big (\frac{\theta}{2} \Big ) e^{i \phi} \ket{\downarrow_z}$, it is straightforward to find that \begin{align} \label{DephasingBlochVector} n_x(t) & = e^{- \gamma(t)} n_{x}(0), \; n_y(t) = e^{- \gamma(t)} n_{y}(0), \; n_z(t) = n_{z}(0). \end{align} The optimal survival probability obtained using optimized measurements is then \begin{equation}\label{probnew} s^{*}(\tau) = \frac{1}{2} \Big (1 + \sqrt{n_z(0)^2 + e^{- 2\gamma(\tau)}\big(n_x(0)^2 + n_y(0)^2\big)} \; \Big ), \end{equation} where Eq.~\eqref{optimizedprobability} has been used. On the other hand, if we keep on preparing the initial state $\ket{\psi}$ by using the projective measurements $\ket{\psi}\bra{\psi}$, we find that \begin{equation} \label{probold} s(\tau) = \frac{1}{2} \Big (1 + n_z(0)^2 + e^{- \gamma(\tau)} \big ( n_x(0)^2 + n_y(0)^2 \big ) \; \Big ). \end{equation} We now analyze Eqs.~\eqref{probnew} and \eqref{probold} to find conditions under which we can lower the effective decay rate by using optimized projective measurements. It is clear that if the initial state, in the Bloch sphere picture, lies in the equatorial plane, then $n_z(0) = 0$ while $n_x(0)^2 + n_y(0)^2 = 1$. In this case, Eqs.~\eqref{probnew} and \eqref{probold} give the same survival probability. Thus, in this case, there is no advantage in using our strategy of optimized measurements as compared with the usual strategy. The reasoning is clear: in the Bloch sphere picture, the magnitude of the time-evolved Bloch vector of the density matrix shrinks while the vector remains parallel to the Bloch vector of the initial pure system state. As argued before, the optimal projector to measure at time $\tau$, $\ket \chi \bra \chi$, must be parallel to the Bloch vector of the density matrix at time $\tau$. Hence in this case, the optimal projector to measure is $\ket \psi \bra \psi$, corresponding to the initial state. The computational example shown in Fig.~\ref{Dephasing1}(a) illustrates our predictions. On the other hand, if we make some other choice of the initial state that we repeatedly prepare, we can easily see that our optimized strategy can give us an advantage. We simply look for the case where the evolved Bloch vector (after removal of the evolution due to the system Hamiltonian) no longer remains parallel to the initial Bloch vector.
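Both expressions are straightforward to evaluate numerically for a given environment. The sketch below does so assuming the commonly used Ohmic-family spectral density $J(\omega)=G\,\omega^{s}\omega_{c}^{1-s}e^{-\omega/\omega_{c}}$ as the continuum limit of the mode sum in $\gamma(t)$; since the precise spectral density is not restated here, this form, like the parameter values, should be read as an illustrative assumption.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def gamma_t(t, G=0.1, wc=10.0, beta=0.5, s=1.0):
    """Dephasing function gamma(t), continuum limit of the mode sum, for an
    assumed spectral density J(w) = G w^s wc^(1-s) exp(-w/wc)."""
    integrand = lambda w: (4.0 * G * w**s * wc**(1.0 - s) * np.exp(-w / wc)
                           * (1.0 - np.cos(w * t)) / w**2
                           / np.tanh(beta * w / 2.0))
    return quad(integrand, 0.0, np.inf, limit=200)[0]

def s_optimal(n0, tau):
    nx, ny, nz = n0
    g = np.exp(-gamma_t(tau))
    return 0.5 * (1.0 + np.sqrt(nz**2 + g**2 * (nx**2 + ny**2)))

def s_unoptimized(n0, tau):
    nx, ny, nz = n0
    g = np.exp(-gamma_t(tau))
    return 0.5 * (1.0 + nz**2 + g * (nx**2 + ny**2))

n0 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # off-equatorial initial state
for tau in (0.5, 1.0, 2.0):
    print(tau, s_unoptimized(n0, tau), s_optimal(n0, tau))
\end{verbatim}
For an equatorial-plane state such as $(1,0,0)$ the two functions return identical values, in line with the argument above; the difference opens up only when $n_z(0)\neq 0$.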
Upon inspecting Eq.~\eqref{DephasingBlochVector}, we find that our optimized strategy can be advantageous if $n_z(0) \neq 0$ (excluding, of course, the cases $n_z(0) = \pm 1$). In other words, if the Bloch vector of the initial state does not lie in the equatorial plane, then the Bloch vector of this state at some later time will not remain parallel to the initial Bloch vector. In this case, our optimal measurement scheme gives a higher survival probability than repeatedly measuring the same initial state. This is illustrated in Fig.~\ref{Dephasing1}(b) where we show the effective decay rate and the survival probability after a single measurement for the initial state specified by the Bloch vector $(1/\sqrt{3}, 1/\sqrt{3}, 1/\sqrt{3} )$. After the time at which the transition between the Zeno and the anti-Zeno regimes occurs, we clearly observe that the decay rate is lower when one makes the optimal projective measurements. Although this difference may not appear significant at first sight, it becomes considerable once a relatively large number of repeated measurements is performed. For example, even for three measurements with measurement interval $\tau = 1$, we find that the survival probability of the quantum state is greater by $0.15$ with the optimized measurements as compared with the usual unoptimized strategy of repeatedly preparing the quantum state. Another computational example is provided in Fig.~\ref{Dephasing2}, where the initial state is now given by the Bloch vector $(1/\sqrt{10}, 0, \sqrt{9/10} )$. In Fig.~\ref{Dephasing2}(a) we have again illustrated that our optimized strategy of repeatedly preparing the quantum state is better at protecting the quantum state than the usual strategy. In Fig.~\ref{Dephasing2}(b) we have shown how the optimal projective measurement that needs to be performed changes with the measurement interval $\tau$. In order to do so, we have parametrized the Bloch vector corresponding to $\ket{\chi}\bra{\chi}$ using the usual spherical polar angles $\theta$ and $\alpha$. Note that the value of the azimuthal angle $\alpha$ is expected to remain constant since we have $\alpha(\tau) = \arctan ( n_{y}(\tau)/n_{x}(\tau) ) = \alpha(0)$. On the other hand, the optimal value of the polar angle $\theta$ changes with the measurement interval. This is also expected since, as the system dephases, $e^{- \gamma(\tau)} \rightarrow 0$, ensuring that $n_{x}(\tau), n_{y}(\tau) \rightarrow 0$. Thus, for long measurement intervals, the system Bloch vector becomes effectively parallel to the $z$-axis. It follows that $\theta \rightarrow 0$ for long measurement intervals. These predictions are borne out by the behaviour of $\theta$ and $\alpha$ in Fig.~\ref{Dephasing2}(b). \subsection*{The Spin-Boson Model} We now consider the more general system-environment model given by the Hamiltonian \begin{figure} {\includegraphics[scale = 0.55]{SB1.pdf}}\caption{\textbf{Effective decay rate with optimal projective measurements in the spin-boson model.} \textbf{(a)} $\Gamma(\tau)$ versus $\tau$ (low temperature) for the state specified by $\theta = \pi/2$ and $\alpha = 0$. The blue dashed curve shows the decay rate in the spin-boson model ($\Delta = 2, \varepsilon = 2$) if the initial state is repeatedly measured, and the solid red curve shows the effective decay rate with the optimal measurements. We have used $G = 0.01, \omega_c = 10$ and $s = 1$.
\textbf{(b)} Same as \textbf{(a)}, except for the domain of the graph.} \label{SB1} \end{figure} \begin{equation}\label{spinboson} H = \frac{\varepsilon}{2} \sigma_z + \frac{\Delta}{2} \sigma_x + \sum_{k} \omega_{k} b_{k}^{\dagger} b_{k} + \sigma_{z} \sum_{k} (g_{k}^{*}b_{k} + g_{k}b_{k}^{\dagger}), \end{equation} where $\Delta$ can be understood as the tunneling amplitude for the system, and the rest of the parameters are defined as before. This is the well-known spin-boson model \cite{LeggettRMP1987,Weissbook,BPbook}, which can be considered an extension of the previous two cases in that we can now generally have both population decay and dephasing taking place. Experimentally, such a model can be realized, for instance, using superconducting qubits \cite{ClarkeNature2008, YouNature2011,SlichterNJP2016}, and the properties of the environment can be appropriately tuned as well \cite{HurPRB2012}. Once again, assuming that the system and the environment are interacting weakly with each other, we can use the master equation that we have used before [see Eq.~\eqref{masterequation}] to find the system density matrix as a function of time. We now have $H_S = \frac{\varepsilon}{2}\sigma_z + \frac{\Delta}{2}\sigma_x$ and $F = \sigma_z$. It should be remembered that once we find the density matrix just before the measurement, $\rho_S(\tau)$, we remove the evolution due to the system Hamiltonian via $\rho_S(\tau) \rightarrow e^{iH_S \tau} \rho_S(\tau)e^{-iH_S\tau}$. Let us first choose as the initial state $n_x(0) = 1$ (or, in other words, the state parametrized by $\theta = \pi/2$ and $\alpha = 0$ on the Bloch sphere). In Fig.~\ref{SB1}, we plot the behaviour of the effective decay rate as a function of the measurement interval using both our optimized strategy (the solid red curves) and the usual unoptimized strategy (the dashed blue curves). It is clear from Fig.~\ref{SB1}(a) that for relatively short measurement intervals, there is little to be gained by using the optimal strategy. As we have seen in the pure dephasing case, for a state in the equatorial plane of the Bloch sphere, there is no advantage to be gained by following the optimized strategy. On the other hand, for longer time intervals $\tau$, population decay becomes more significant. This means that we see a significant difference at long measurement intervals if we use the optimized strategy. This is precisely what we observe in Fig.~\ref{SB1}(b). \begin{figure} {\includegraphics[scale = 0.4]{SB2andSB4.pdf}}\caption{\label{SB2} \textbf{Effective decay rate with optimal projective measurements in the spin-boson model.} \textbf{(a)} $\Gamma(\tau)$ versus $\tau$ (low temperature) for the state specified by $\theta = \pi/2$ and $\phi = 0$. We have used the same parameters as in Fig.~\ref{SB1}, except that we have now modeled a sub-Ohmic environment with $s = 0.8$. \textbf{(b)} Same as \textbf{(a)}, except that we have now modeled a super-Ohmic environment with $s = 2.0$. \textbf{(c)} We have used $G = 0.025, \omega_c = 10$ and $s = 1$. For this case ($\varepsilon \gg \Delta$), we have used $\varepsilon = 6, \Delta = 2$. \textbf{(d)} Same as \textbf{(c)}, except that for this case ($\Delta \gg \varepsilon $), we have used $\varepsilon = 2, \Delta = 6$.} \end{figure} For completeness, let us also investigate how the effective decay rate depends on the functional form of the spectral density.
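Here the exponent $s$ labels the Ohmicity of the environment: $s<1$ corresponds to a sub-Ohmic, $s=1$ to an Ohmic, and $s>1$ to a super-Ohmic bath. The brief sketch below tabulates the assumed Ohmic-family form $J(\omega)=G\,\omega^{s}\omega_{c}^{1-s}e^{-\omega/\omega_{c}}$ (again an assumption, since the exact spectral density is not restated here) at a low frequency, to show how the Ohmicity changes the low-frequency weight of the bath.
\begin{verbatim}
import numpy as np

def spectral_density(w, G=0.01, wc=10.0, s=1.0):
    """Assumed Ohmic-family spectral density J(w) = G w^s wc^(1-s) e^(-w/wc)."""
    return G * w**s * wc**(1.0 - s) * np.exp(-w / wc)

# Relative low-frequency weight for the three Ohmicities considered below.
w_low = 0.5
for s in (0.8, 1.0, 2.0):
    print(s, spectral_density(w_low, s=s))
\end{verbatim}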
In Fig.~\ref{SB2}(a) and Fig.~\ref{SB2}(b), we investigate what happens in the cases of a sub-Ohmic and a super-Ohmic environment, respectively. The case of a sub-Ohmic environment with $s = 0.8$ is similar to the Ohmic case with $s = 1$: once again, the optimal projective measurements decrease the decay rate substantially only at long times. For the case of a super-Ohmic environment with $s=2$ [see Fig.~\ref{SB2}(b)], we find that the optimal projective measurements do not substantially lower the decay rate, even for long times. Thus, it is clear that the Ohmicity of the environment plays an important role in determining the usefulness of the optimal projective measurements. Let us now revert to the Ohmic environment case to present more computational examples. First, if $\varepsilon \gg \Delta$, then the effect of dephasing dominates over the effect of population decay. Results for this case are illustrated in Fig.~\ref{SB2}(c). We see that there is negligible difference upon using the optimal measurements. This agrees with what we found when we analyzed the pure dephasing model. We also analyze the opposite case, where the effect of population decay dominates over the effect of dephasing. This is done by setting $\Delta \gg \varepsilon$. We also consider a higher temperature in this case. We now observe differences between the unoptimized and optimized decay rates for relatively short times [see Fig.~\ref{SB2}(d)], and the difference becomes even larger at longer times. In fact, while we observe predominantly only the Zeno effect with the unoptimized measurements, we observe very distinctly both the Zeno and the anti-Zeno regimes with the optimized measurements. \subsection*{Large Spin System} \begin{figure} {\includegraphics[scale = 0.55]{LSJ1.pdf}} \caption{\label{LSJ1}\textbf{Effective decay rate and optimal spherical angles with optimal projective measurements in the large spin model.} \textbf{(a)} $\Gamma(\tau)$ versus $\tau$ for $J=1$. Here we have $\Delta = 0$. The blue dashed curve shows the decay rate if the initial state is repeatedly measured; the red curve shows the decay rate if the optimal projective measurement is repeatedly made. We have used $G = 0.01, \omega_c = 50, \beta = 1$ and we take $\theta = \pi/2$ and $\phi = 0$ as parameters for the initial state. \textbf{(b)} Same as (a), except that now $\Delta \neq 0$. The insets show how the optimal measurements change with the measurement interval $\tau$.} \end{figure} We extend our study to a collection of two-level systems interacting with a common environment. The corresponding Hamiltonian can be considered a generalization of the usual spin-boson model to a large spin $J = N_s/2$ \cite{ChaudhryPRA2014zeno,VorrathPRL2005,KurizkiPRL2011}, where $N_s$ is the number of two-level systems coupled to the environment. Physical realizations include a two-component Bose-Einstein condensate \cite{GrossNature2010,RiedelNature2010} that interacts with a thermal reservoir via collisions \cite{KurizkiPRL2011}. In this case, the system-environment Hamiltonian is given by \begin{equation} H = \varepsilon J_{z} + \Delta J_x + \sum_{k} \omega_{k} b_{k}^{\dagger} b_{k} + 2 J_{z} \sum_{k} (g_{k}^{*}b_{k} + g_{k}b_{k}^{\dagger}), \end{equation} where $J_x$ and $J_z$ are the usual angular momentum operators and the environment is again modeled as a collection of harmonic oscillators. We first look at the pure dephasing case by setting $\Delta = 0$.
In this case, the system dynamics can be found exactly. The system density matrix, in the eigenbasis of $J_z$, after removal of the evolution due to the system Hamiltonian, can be written as $[\rho(t)]_{mn} = [\rho(0)]_{mn} e^{- i\triangle(t) (m^2 - n^2)} e^{- \gamma(t) (m - n)^2}$. Here $\gamma(t)$ has been defined before, and $\triangle(t) = \sum_k 4|g_k|^2 \frac{[\sin(\omega_k t) - \omega_k t]}{\omega_k^2}$ \cite{ChaudhryPRA2014zeno} describes the indirect interaction between the two-level systems due to their interaction with a common environment. For vanishingly small time $t$, $\triangle(t) \approx 0$. On the other hand, as $t$ increases, the effect of $\triangle(t)$ becomes more pronounced. Thus, we expect significant differences as compared to the single two-level system case for long measurement intervals. However, it is important to note that we can no longer find the optimal measurements using the formalism presented before, since our system is no longer a single two-level system. In principle, we then need to carry out a numerical optimization procedure in order to find the projector $\ket{\chi}\bra{\chi}$ such that the survival probability is maximized. Rather than looking at all possible states $\ket{\chi}$, we instead restrict ourselves to the SU(2) coherent states since these projective measurements are more readily accessible experimentally. In other words, we look at $\ket{\chi}\bra{\chi}$ where \begin{equation} \ket{\chi} = \ket{\zeta, J} = (1 + |\zeta|^2)^{- J} \sum_{m=-J}^{m = J} \sqrt{\binom{2J}{J + m}} \zeta^{J + m} \ket{J, m}, \end{equation} and $\zeta = e^{i \phi'} \tan(\theta'/2)$ with the states $\ket{J, m}$ being the angular momentum eigenstates of $J_z$. Suppose that we prepare the coherent state $\ket{\eta, J}$ with a fixed, pre-determined value of $ \eta = e^{i \phi} \tan(\theta/2)$ repeatedly. In order to do so, we project, with time interval $\tau$, the system state onto the coherent state $\ket{\zeta, J}$. After each measurement, we apply a suitable unitary operator to arrive back at the state $\ket{\eta,J}$. Again assuming the system-environment correlations are negligible, we find that \begin{align}\label{largedephasingdecay} \Gamma(\tau) & = - \frac{1}{\tau} \ln \Bigg \{ \Bigg [ \frac{|\zeta|}{1 + |\zeta|^2} \Bigg ]^{2J} \Bigg [ \frac{|\eta|}{1 + |\eta|^2} \Bigg ]^{2J} \sum_{m, n = -J}^{J} (\zeta^{*} \eta)^m (\eta^{*} \zeta)^n \; \binom{2J}{J + m}\binom{2J}{J + n} e^{- i\triangle(\tau) (m^2 - n^2)} e^{- \gamma(\tau) (m - n)^2} \Bigg \}. \end{align} For equally spaced measurement time intervals, we numerically optimize Eq.~\eqref{largedephasingdecay} over the variables $\phi'$ and $\theta'$. We present a computational example in Fig.~\ref{LSJ1}(a). We take as the initial state the SU(2) coherent state with $\theta = \pi/2$ and $\phi = 0$ and we let $J = 1$. This is simply the generalization of the pure dephasing model that we have looked at before to $J = 1$. Previously, there was no difference between the optimized and unoptimized probabilities. Now, we see that because of the indirect interaction, there is a very noticeable difference. Where we observe the Zeno regime with the unoptimized measurements, we instead see the anti-Zeno regime with the optimized measurements. Furthermore, the survival probability can be significantly enhanced using the optimized measurements. For completeness, we have also considered the more general case with $\Delta \neq 0$.
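For the $\Delta = 0$ optimization just described, the survival probability entering Eq.~\eqref{largedephasingdecay} can be assembled directly from the SU(2) coherent-state amplitudes and then maximized over the measurement angles $(\theta', \phi')$. A minimal sketch is given below; the Ohmic-family spectral density used to generate $\gamma(\tau)$ and $\triangle(\tau)$, the Nelder-Mead search, and all parameter values are illustrative choices rather than the exact ingredients behind Fig.~\ref{LSJ1}(a).
\begin{verbatim}
import numpy as np
from math import comb
from scipy.integrate import quad
from scipy.optimize import minimize

def J_spec(w, G=0.01, wc=50.0, s=1.0):
    # Assumed Ohmic-family spectral density (continuum limit of the mode sum).
    return G * w**s * wc**(1.0 - s) * np.exp(-w / wc)

def gamma_t(t, beta=1.0):
    f = lambda w: 4 * J_spec(w) * (1 - np.cos(w * t)) / w**2 / np.tanh(beta * w / 2)
    return quad(f, 0, np.inf, limit=200)[0]

def phase_t(t):   # the induced-interaction phase, written triangle(t) in the text
    f = lambda w: 4 * J_spec(w) * (np.sin(w * t) - w * t) / w**2
    return quad(f, 0, np.inf, limit=200)[0]

def coherent_amps(theta, phi, J):
    """Amplitudes <J,m|zeta,J> of the SU(2) coherent state, m = -J..J."""
    zeta = np.exp(1j * phi) * np.tan(theta / 2.0)
    m = np.arange(-J, J + 1)
    c = np.array([np.sqrt(comb(2 * J, int(J + mm))) * zeta**int(J + mm) for mm in m])
    return c / (1.0 + abs(zeta)**2)**J

def survival(theta_p, phi_p, theta0, phi0, g, ph, J=1):
    c0 = coherent_amps(theta0, phi0, J)     # repeatedly prepared |eta, J>
    cp = coherent_amps(theta_p, phi_p, J)   # measured |zeta, J>
    m = np.arange(-J, J + 1)
    rho0 = np.outer(c0, c0.conj())
    decay = np.exp(-1j * ph * (m[:, None]**2 - m[None, :]**2)
                   - g * (m[:, None] - m[None, :])**2)
    return np.real(cp.conj() @ (rho0 * decay) @ cp)

tau, th0, ph0 = 0.5, np.pi / 2, 0.0
g, ph = gamma_t(tau), phase_t(tau)
res = minimize(lambda x: -survival(x[0], x[1], th0, ph0, g, ph),
               x0=[th0, ph0], method="Nelder-Mead")
print("unoptimized:", survival(th0, ph0, th0, ph0, g, ph))
print("optimized:  ", -res.fun, "at (theta', phi') =", res.x)
\end{verbatim}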
For $\Delta \neq 0$, the system dynamics cannot be found exactly, so we again resort to the master equation. With the system dynamics known, we again find the projector, parametrized by $\theta'$ and $\phi'$, such that the decay rate is minimized. Results are illustrated in Fig.~\ref{LSJ1}(b). Once again, using optimized projective measurements changes the Zeno and anti-Zeno behaviour quantitatively as well as qualitatively. \begin{comment} \begin{figure}[t] {\includegraphics[scale = 0.6]{LargeSB.eps}}\caption{\label{LargeSpinBoson1} \textbf{Effective decay rate and optimal spherical angles with optimal projective measurements in the large spin-boson model.} \textbf{(a)} $\Gamma(\tau)$ versus $\tau$ (low temperature) for the state specified by $\theta = \pi/2$ and $\phi = 0$. The blue dashed curve shows the decay rate if the initial state is repeatedly measured; the red curve shows the decay rate if the optimal projective measurement is repeatedly made. We have used $\varepsilon = 2, \Delta = 2, J = 1, G = 0.01, \omega_c = 50$ and $s = 1$. In the inset, we show the 2D plots that trace the change in the values of the optimal spherical angles. \textbf{(b)} We show the 3D plot that traces the change in the values of the optimal spherical angles. $\chi$ is the polar angle and $\alpha$ is the azimuthal angle. For $\tau = 1$ and $N = 3, \Delta s(\tau) \approx 0.25$. \textbf{(c)} same as \textbf{(a)} except that we have used $\varepsilon = 8, \Delta = 2$. \textbf{(d)} same as \textbf{(a)}. For $\tau = 2.1$ and $N = 1, \Delta s(\tau) \approx 0.3$.} \end{figure} Let us now generalize to the case $\Delta \neq 0$. We now consider $N$ two-level systems interacting with a common environment. The Hamiltonian for the system is given by: \begin{equation} \mathcal{H} = \varepsilon J_z + \Delta J_x + \sum_k \omega_k b_k^\dagger b_k + 2J_z \sum_k (g_k^* b_k + g_k b_k^\dagger), \end{equation} where $\varepsilon$ and $\Delta$ are the same as in the case of the spin-boson model (for a single two-level system) and $J_x$ and $J_z$ are the angular momentum operators. We analyze the model in the same way we analyzed the spin-boson model for a single two-level system. In addition, we analyze the model for the case of SU(2) coherent states, as in section \ref{LargeDephasing}. We choose the initial state such that it is parameterized by the spherical angles $\theta = \pi/2$ and $\phi = 0$. We first analyze the case where both dephasing and population decay are given the same relative weight, with the parameters set to $\varepsilon = \Delta = 2$. The results are plotted in FIG. \ref{LargeSpinBoson1}. The interesting aspect of this result is that it is in stark contrast to the analogous case plotted in FIG. \ref{SB1}. In that figure, we plotted the case in which both population decay and dephasing were given the same relative weight in the spin-boson model for a single two-level system. The results suggested that the effect of making optimal projective measurements on the decay rate is almost negligible at short times. In the large spin-boson model, however, we observe the opposite effect: within the first 1 second, we observe considerable differences in the decay rate if one makes the optimal projective measurements. This result, in analogy with the results derived in section \ref{LargeDephasing}, arises due to the indirect interaction between the two two-level systems.
This interaction accounts for considerable changes in the decay rates at short times, an effect that was not observed before. While we do not have a functional form of the indirect coupling between the two-level systems, we expect it to show a similar behavior as in the case of the large spin model with pure dephasing, where the functional form of the relevant expression is given by equations \ref{interaction1} and \ref{interaction2}. However, the effect of population decay is expected to dominate the effect of dephasing at intermediate to long time ranges, making the observation of the aforementioned effect difficult, especially in the case where both population decay and dephasing have been given the same relative weight. In FIG. \ref{LargeSpinBoson1}(b), we show a 3D plot that traces the change in the optimal spherical angles with time. We clearly observe that in this case, the optimal angles change both erratically and cyclically. This observation is in accordance with the aforementioned comment: since the interaction term is expected to show some sinusoidal pattern, the optimal spherical angles also reflect the same changes, at least at short time scales where pure dephasing has the more dominant contribution. We show another computational example in FIG. \ref{LargeSpinBoson1}. In this example, the value of $\varepsilon$ is four times that of $\Delta$, indicating that dephasing has been given more relative weight as compared to population decay. We in part recover the results already observed in section \ref{LargeDephasing}: the optimal projective measurements exhibit a significantly lower decay rate as compared to the case where one does not make the optimal measurement. With dephasing given more weight compared to population decay, we indeed expect this to be the case. The more interesting point to note is that compared to the case where pure dephasing is given a larger weight compared to population decay in the spin-boson model as shown in FIG. \ref{SB4}, we now observe larger differences in the decay rates when one makes optimal measurements. This effect, once again, can be attributed to the indirect interaction between the two two-level systems. Hence, we once again observe that the effect of making optimal measurements is much more pronounced for the case of interacting two-level systems. \end{comment} \section*{Discussion} Our central idea is that instead of repeatedly preparing the quantum state of a system using only projective measurements, we can repeatedly prepare the quantum state by using a combination of both projective measurements and unitary operations. This then allows us to consider the projective measurements that yield the largest survival probability. If the central quantum system is a simple two-level system, we have derived an expression that optimizes the survival probability or, equivalently, the effective decay rate. This expression implies that the optimal projective measurement at time $\tau$ corresponds to the projector that is parallel to the Bloch vector of the system's density matrix at that time. We consequently applied our expression for the optimized survival probability to various models. For the population decay model, we found that beyond a critical time $\tau^{*}$, we should flip the measurement and start measuring the ground state rather than the excited state.
For the pure dephasing model, we found that for states prepared in the equatorial plane of the Bloch sphere, it is optimal to measure the initial state; determining and making the optimal projective measurement has no effect on the effective decay rate. In contrast, for states prepared outside of the equatorial plane, making the optimal projective measurement substantially lowers the effective decay rate in the anti-Zeno regime. In the general spin-boson model, we have found that there can be a considerable difference in the effective decay rate if we use the optimal measurements. We then extended our analysis to the case of large spin systems. In this case, we find that the indirect interaction between the two-level systems causes the optimal measurements to be even more advantageous. The results of this paper show that by exploiting the choice of the measurement performed, we can substantially decrease the effective decay rate in a variety of cases. This allows us to `effectively freeze' the state of the quantum system with a higher probability. Experimental implementations of the ideas presented in this paper are expected to be important for measurement-based quantum control. \begin{comment} We ignore the first exponential factor since it represents a unitary transformation (rotation) on the Bloch sphere due to the system's time evolution as in section \ref{optimal condition}. Including the effect of the system's unitary time evolution potentially has the effect of producing sinusoidal changes in the survival probability, producing multiple peaks in the plot of the survival probability (and the effective decay), which allows one to over-identify the number of Zeno and anti-Zeno regimes (and the transitions between the Zeno and anti-Zeno regimes) and the numerical values of the decay rate(s). The details of this result are substantiated below by means of a computational example. Ignoring the unitary time evolution of the system, the system's density matrix is $[\rho_{S}(\tau)]_{mn} = [\rho_{S}(0)]_{mn} e^{-i \triangle(\tau)(m^2 - n^2) - \gamma(\tau)(m - n)^2}$. For the case of a two-level system, the first exponential is one for all the four entries; hence, we can ignore it. The second exponential changes only the off-diagonal terms. \end{comment} \begin{comment} We next turn to finding the time-evolved Bloch vector for the case of an arbitrary initial state of the system, $\ket \psi = \cos \Big (\frac{\theta}{2} \Big ) \ket{\uparrow_z} + \sin \Big (\frac{\theta}{2} \Big ) e^{i \phi} \ket{\downarrow_z}$. Using the expression (omitted) for the density matrix of the system at time $\tau$, \end{comment} \begin{comment} The density operator of the system at time $\tau$ is: \begin{align} \rho_{S}(0) & = \frac{1}{2}\begin{bmatrix} 1 + \cos\theta & \sin \theta e^{-i \phi} \\[6pt] \sin \theta e^{i \phi} & 1 - \cos \theta \end{bmatrix}. \end{align} Using the explicit expression of $\rho_{S}$, we have: \begin{equation} \rho_{S}(\tau) = \frac{1}{2}\begin{bmatrix} 1 + \cos \theta & e^{- \gamma(\tau)} \sin \theta e^{-i \phi} \\ e^{- \gamma(\tau)} \sin \theta e^{i \phi} & 1 - \cos \theta \end{bmatrix}.
\end{equation} \end{comment} \begin{comment} we find that the time-evolved Bloch vector for the time-evolved density matrix is given by: \begin{align} n_{x}(\tau) & = \Tr (\sigma_{x} \rho_{S}(\tau)), \nonumber \\ & = \frac{1}{2} \Tr \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 + \cos\theta & e^{- \gamma(\tau)} \sin \theta e^{-i \phi} \\ e^{- \gamma(\tau)} \sin \theta e^{i \phi} & 1 - \cos\theta \end{bmatrix}. \end{align} Keeping track of only the diagonal terms, we have: \begin{align} n_x(\tau) & = \frac{1}{2} \Tr \begin{bmatrix} e^{- \gamma(\tau)} \sin \theta e^{i \phi} & ... \\ ... & e^{- \gamma(\tau)} \sin \theta e^{-i \phi} \end{bmatrix}, \end{align} which implies that \begin{align}\label{dephasing.n_x} n_x(\tau) & = \frac{1}{2} \Big ( e^{- \gamma(\tau)} \sin \theta ( e^{i \phi} + e^{-i \phi} ) \Big ), \nonumber \\ & = \frac{1}{2} \Big (2 e^{- \gamma(\tau)} \sin \theta \cos \phi \Big ), \nonumber \\ & = e^{- \gamma(\tau)} \sin \theta \cos \phi. \end{align} \begin{align} n_{y}(\tau) & = \Tr (\sigma_{y} \rho_{S}(\tau)), \nonumber \\ & = \frac{1}{2} \Tr \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \begin{bmatrix} 1 + \cos \theta & e^{- \gamma(\tau)} \sin \theta e^{-i \phi} \\ e^{- \gamma(\tau)} \sin \theta e^{i \phi} & 1 - \cos \theta \end{bmatrix}. \end{align} Keeping track of only the diagonal terms, we have: \begin{align} n_y(\tau) & = \frac{1}{2} \Tr \begin{bmatrix} -i e^{- \gamma(\tau)} \sin \theta e^{i \phi} & ... \\ ... & i e^{- \gamma(\tau)} \sin \theta e^{-i \phi} \end{bmatrix}, \end{align} which implies that \begin{align}\label{dephasing.n_y} n_y(\tau) & = \frac{1}{2} \Big ( -i e^{- \gamma(\tau)} \sin \theta (e^{i \phi} - e^{-i \phi}) \Big ), \nonumber \\ & = \frac{1}{2} \Big (2 e^{- \gamma(\tau)} \sin \theta \sin \phi \Big ), \nonumber \\ & = e^{- \gamma(\tau)} \sin \theta \sin \phi. \end{align} \begin{align} n_{z}(\tau) & = \Tr (\sigma_{z} \rho_{S}(\tau)), \nonumber \\ & = \frac{1}{2} \Tr \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} 1 + \cos\theta & e^{- \gamma(\tau)} \sin \theta e^{-i \phi} \\ e^{- \gamma(\tau)} \sin \theta e^{i \phi} & 1 - \cos\theta \end{bmatrix}. \end{align} Keeping track of only the diagonal terms, we have: \begin{align} n_z(\tau) & = \frac{1}{2} \Tr \begin{bmatrix} 1 + \cos \theta & ... \\ ... & - 1 + \cos \theta \end{bmatrix}, \end{align} which implies that \begin{align}\label{dephasing.n_z} n_z(\tau) & = \frac{1}{2} \Big (1 + \cos \theta - 1 + \cos \theta), \nonumber \\ & = \frac{1}{2} (2 \cos \theta), \nonumber \\ & = \cos \theta. \end{align} Noting that \begin{align} \label{dephsing.bloch} n_x(0) & = \sin \theta \cos \phi, \nonumber \\ n_y(0) & = \sin \theta \sin \phi; \; \text{and} \nonumber \\ n_z(0) & = \cos \theta, \end{align} equations \eqref{dephasing.n_x}, \eqref{dephasing.n_y}, \eqref{dephasing.n_z}, \eqref{dephsing.bloch} imply that we have: \end{comment} \begin{comment} We first analyze the claim made previously in the section: including the effect of the system's unitary time evolution allows one to over-identify the number of Zeno and anti-Zeno regimes (and the transitions between the Zeno and anti-Zeno regimes). We analyze this claim in the context of an example. Consider the quantum state $\ket \psi = \frac{1}{\sqrt{2}} \ket {\uparrow_z} + \frac{1}{\sqrt{2}} \ket {\downarrow_z}$. 
The time-evolved density matrix for the system is given by: \begin{equation} \rho_{S}(\tau) = \frac{1}{2}\begin{bmatrix} 1 & e^{- \gamma(\tau)} e^{- i \varepsilon \tau} \\ e^{- \gamma(\tau)} e^{i \varepsilon \tau} & 1 \end{bmatrix}. \end{equation} \begin{figure} {\includegraphics[scale = 0.58]{Dephasing0.eps}} \caption{\label{Dephasing0} \textbf{Comparing the survival probability and the effective decay if the system's unitary evolution is removed or not removed.} \textbf{(a)} $s(\tau)$ versus $\tau$ for the initial state specified by the Bloch vector $(1, 0, 0 )$. The purple dashed curve shows the survival probability if system's unitary time evolution is ignored; the black curve shows the survival probability system's unitary time evolution is not ignored. \textbf{(b)}$\Gamma(\tau)$ versus $\tau$ for the initial state specified by the Bloch vector $(1, 0, 0 )$. The blue dashed curve shows the survival probability if system's unitary time evolution is ignored; the red curve shows the survival probability system's unitary time evolution is not ignored. We have used $G = 0.01, \omega_c = 15, \beta = 1$ and $\varepsilon = 2$.} \end{figure} We assume that we choose to measure the initial state of the system at time $\tau$. It can easily be shown that survival probability in this case is given by $s'(\tau) = \frac{1}{2} \Big (1 + e^{- \gamma(\tau)} \cos (\varepsilon \tau) \Big )$. Assuming one ignores the unitary time evolution due to the system, \eqref{probold} implies that the survival probability is given by $s(\tau) = \frac{1}{2} \Big (1 + e^{- \gamma(\tau)} \Big )$. The graphs for these expressions are shown in FIG. \ref{Dephasing0}. The effect of including the system's unitary time evolution is surprisingly large. The periodic change in the survival probability (red curve) reflects the unitary time evolution of $\ket \psi$ in the equatorial plane on the Bloch sphere. The exponentially decaying change in the survival probability (blue curve) reflects the non unitary evolution of the quantum state in the equatorial plane due to the system-environment coupling. Both of these effects are reflected in the red curve -- which oscillates sinuously with an exponentially decaying amplitude. The unintended consequence of including both these effects is as stated before: one over-identifies the number of Zeno and anti-Zeno regimes (and the transitions between the Zeno and anti-Zeno regimes). This is clearly evident when one plots the decay rates, which we also plot in FIG \ref{Dephasing0}. The graph of the decay rate with the system's unitary time evolution included (red curve) shows both i) an unusually large decay rate for a single measurement and ii) multiple Zeno and anti-Zeno regimes. \end{comment}
\section{Results and Discussion} Let us first formulate our results. We show that in an epitaxially connected array of NCs, the ratio between the tandem tunneling rate $\tau_{T}^{-1}$ and the F{\"o}rster rate $\tau_{F}^{-1}$ is \begin{equation} \label{eq:the_answer_foster} \frac{\tau_{T}^{-1}}{\tau_{F}^{-1}} = \left(8.7 \frac{a_{B}}{a_{0}}\right)^{4} \left(\frac{\kappa_{NC}+2\kappa}{\kappa_{NC}} \right)^{4} \left(\frac{\rho}{d}\right)^{12}. \end{equation} Here $a_{B} = 4\pi \varepsilon_{0}\hbar^{2} \kappa_{NC}/(\sqrt{m_{e}m_{h}}\, e^{2})$ is the unconventional effective exciton Bohr radius, $e a_{0}$ is the dipole moment matrix element taken between the valence- and conduction-band states, and $\kappa_{NC}$ is the high-frequency dielectric constant of the material. In Table 1 we summarize our estimates of the ratio (\ref{eq:the_answer_foster}) for different NCs. We used $d = 6~\mathrm{nm}$ and $\rho= 0.2 d \simeq 1 ~\mathrm{nm}$. Values of $\kappa_{NC}$ are taken from Ref. \cite{978-3-642-62332-5}. For epitaxially connected NCs we use $\kappa=2\kappa_{NC}\rho/d$ (see Ref. \cite{dielectric_constant_touching_NC}). The ratio (\ref{eq:the_answer_foster}) is derived for materials with isotropic single-band hole ($m_{h}$) and electron ($m_{e}$) masses. For most materials the spectra are more complex. Below we explain how we average the masses for these materials and also how we calculate $a_{0}$. We see that the tandem tunneling can be comparable with the F{\"o}rster mechanism in semiconductors like InP and CdSe, where the effective mass is small. The tandem tunneling can be more efficient in cases where the F{\"o}rster mechanism is forbidden. For example, in indirect band gap semiconductors like Si, where $a_{0}$ is small and the F{\"o}rster mechanism is not effective, the tandem tunneling mechanism dominates. The tandem tunneling also dominates at low temperatures. Excitons can be in bright or dark spin states \cite{Efros_dark_exciton}. Only the bright exciton can hop due to the F{\"o}rster mechanism. The dark exciton has smaller energy, and the dark-bright exciton splitting is of the order of a few meV. So at low temperatures an exciton is in the dark state and cannot hop by the F{\"o}rster mechanism. At the same time, the tandem tunneling is not affected by the spin state of the exciton. Dexter \cite{Dexter_transfer} suggested another exciton transfer mechanism which is also not affected by the spin state of the exciton: two electrons of two NCs exchange with each other (see Fig. \ref{fig:Scheme}c). We show below that for an array of NCs the ratio between the rates of tandem tunneling and of the Dexter mechanism is: \begin{equation} \label{eq:the_answer} \frac{\tau_{T}^{-1}}{\tau_{D}^{-1}} = \left(\frac{\Delta_{e}\Delta_{h}}{4E_{c}^{2}}\right)^{2}. \end{equation} In most cases $\Delta_{e,h} \gg E_{c}$, and, as one can see from Table 1, the tandem tunneling rate is much larger than the Dexter rate, with the exception of ZnO. It is worth noting that the same ratio holds not only for epitaxially connected NCs but also for NCs separated by ligands. Of course, if NCs are separated by ligands, say by a distance $s$, and the wave functions decay in the ligands as $\exp(-s/b)$, where $b$ is the decay length of an electron outside of a NC, both rates acquire an additional factor $\exp(-4s/b)$. Also, the difference between the tandem mechanism and Dexter transfer emerges only in NCs, where $\Delta_{e,h} \gg E_{c}$.
In atoms and molecules, where essentially $E_{c} \simeq \Delta$, there is no such difference between the two mechanisms. For epitaxially connected Si and InP NCs, where the tandem tunneling is substantial, these predictions can be verified in the following way. One can transform the bright exciton to the dark one by varying the magnetic field or temperature. The exciton in the dark state cannot hop by the F{\"o}rster mechanism, and usually hops much more slowly \cite{energy_transport_NC_Rodina,Exciton_CdSe_H_T}. For epitaxially connected NCs, where the tandem rate is larger than the F{\"o}rster one, the exciton transfer should not be affected by the magnetic field or temperature. Let us switch to the derivation of the main result. For that we should first discuss the electron wave functions in epitaxially connected NCs. \emph{Wave functions of two epitaxially connected NCs.} Below we describe the envelope wave functions in two epitaxially connected NCs. Here we present only scaling estimates; the numerical coefficients are calculated in the methods section. The wave functions for electrons and holes are the same, so we concentrate only on the electron. In an isolated NC the electron wave function is: \begin{equation} \label{eq:psi_0} \psi_{0}(r) = \frac{1}{\sqrt{\pi d} r} \sin \left(2\pi \frac{r}{d}\right), \end{equation} where $r$ is the distance from the center of the NC. We focus on the two NCs shown in Fig.~\ref{fig:Scheme}, which touch each other by a small facet in the plane $z=0$. In this situation the wave function of an electron in the left NC, $\Psi^{L}$, leaks through this small facet, so that it is finite in the plane of the facet $z=0$ and in the right NC. The derivative $\partial \Psi^{L}/\partial r$ is hardly changed by this small perturbation, so that the wave function in the plane $z=0$ acquires a finite value: \begin{equation} \label{eq:4} \Psi^{L}(z=0) \simeq \rho \frac{\partial \psi_{0}}{\partial z} \simeq \frac{\rho}{d^{5/2}}. \end{equation} The same happens with the wave function of an electron in the right NC, $\Psi^{R}$. $\Psi^{L}$ and $\Psi^{R}$ are symmetric with respect to the plane $z=0$. \emph{Tunneling matrix element.} We calculate the matrix element (\ref{eq:M_T}) of an electron and hole tunneling through the contact facet in second order perturbation theory. $E_c$ is the energy of the intermediate state, in which the electron moves to the right NC while the hole is still in the left NC. In other words, the left NC plays the role of donor (D) and the right one the role of acceptor (A), so that the intermediate state is the $D^{+}A^{-}$ state. For touching NCs the energy of the $D^{+}A^{-}$ state is evaluated in the methods section and is shown to be $\xi E_{c}$, where $|\xi-1| < 0.1$. Therefore in Eq. (\ref{eq:M_T}) and throughout the paper we use $\xi=1$. In Eq. (\ref{eq:M_T}) the factor $2$ accounts for the two possible orders of the electron and hole hops. The matrix elements $t_e, t_h$ for the electron and hole single-particle tunneling from one NC to another can be written as \cite{landau_quantum} (see the methods section) \begin{equation} \label{eq:t} t_{e,h}=\frac{\hbar^2}{m_{e,h}} \int \Psi^{L*} (r_1) \frac{\partial}{\partial z} \Psi^{L} (r_1) dS, \end{equation} where the integration is over the plane $z=0$. Using Eqs. (\ref{eq:psi_0}) and (\ref{eq:4}) we arrive at (\ref{eq:t_2}). Substituting (\ref{eq:t_2}) into Eq.
(\ref{eq:M_T}) we get \begin{equation} \label{eq:our_Dexter} M_{T}= C_{T} \frac{\Delta_{e}\Delta_{h}}{E_{c}}\left(\frac{\rho}{d}\right)^{6}, \end{equation} where the numerical coefficient $C_{T}=2^{7}/9\pi^{2} \simeq 1.4$ is calculated in the methods section. Above we assumed that the energy spectra of electrons and holes are isotropic and have one band. In fact, in most cases the hole energy spectrum has heavy and light branches with masses $m_{hh}$ and $m_{hl}$, respectively. The energy of the lower state $\Delta_{h}$ can be determined with adequate accuracy if, instead of a complicated valence band structure, we consider a simple band~\cite{Yassievich_excitons_Si,Holes_electrons_NCs} in which the holes have an average mass $m_{h}=3m_{hl}m_{hh}/(m_{hl}+2 m_{hh})$. For indirect band gap materials like Si, an electron in the conduction band has an anisotropic mass in the transverse ($m_{et}$) and parallel ($m_{ep}$) directions. The effective mass $m_{e}$, which determines the energy of the lower state $\Delta_{e}$, has a similar form $m_{e}=3m_{et}m_{ep}/(m_{et}+2 m_{ep})$. Using data for the electron and hole masses from Ref. \cite{978-3-642-62332-5} we get the values of $\sqrt{m_{e}m_{h}}$ shown in Table 1. \emph{F{\"o}rster matrix element.} Now we dwell on the F{\"o}rster matrix element. It is known \cite{Delerue_Foster_NC} that the matrix element for the F{\"o}rster transfer between two touching NCs is \begin{align} \label{eq:M_F} M_F & = \sqrt{\frac{2}{3}} \frac{e^{2}}{4\pi \varepsilon_{0} d^{3}} \eta a_{0}^{2}. \end{align} Here we assume that the interacting dipoles are concentrated at the centers of the NCs. The factor $\eta=9\kappa/(\kappa_{NC}+2\kappa)^{2}$ takes into account that the dipole-dipole interaction is screened \cite{Forster_Rodina}. The product $e a_{0}$ is the matrix element of the dipole moment between the conduction and valence bands. Eqs. (\ref{eq:our_Dexter}) and (\ref{eq:M_F}) bring us to the ratio (\ref{eq:the_answer_foster}). In order to find $a_{0}$ we note that the matrix element of the dipole moment is related to the band gap $E_{g}$ of a material and the momentum matrix element $p$ as~\cite{laser_devices_Blood} $$a_{0}^{2} = \frac{\hbar^{2}p^{2}}{m^{2}E_{g}^{2}}.$$ According to the Kane model, $p$ determines the effective electron mass \cite{Efros_review_NCs}, so we can say that \begin{equation} \label{eq:a0} a_{0}^{2}=\frac{3}{4} \frac{\hbar^{2}}{E_{g}m_{e}}. \end{equation} The estimate of $a_{0}$ for direct gap materials is given in Table 1. For an indirect band gap semiconductor such as Si the dipole-dipole transition is forbidden. However, in small NCs this transition is possible due to confinement or phonon assistance. One can estimate the effective $a_{0}$ in the following way. The transfer rate for InAs is $10^{7}$ times larger than for Si \cite{Delerue_Foster_NC}; because their dielectric constants are close, we assume that the difference in rates is due to $a_{0}$. Thus for Si the effective $a_{0}$ is $55$ times smaller than for InAs, which we obtain with the help of Eq. (\ref{eq:a0}). \emph{Dexter matrix element.} The physics of the Dexter transfer mechanism \cite{Dexter_transfer} involves electron tunneling, but differs from that of the tandem tunneling mechanism in the following sense. The Dexter matrix element $M_{D}$ is calculated below in first order perturbation theory in the electron-electron interaction between two-electron wave functions. The tandem tunneling matrix element was calculated in Eq.
(\ref{eq:M_T}) in second order perturbation theory, where $t_e$ and $t_h$ are single-particle transfer integrals calculated between one-electron wave functions. Here we calculate the Dexter matrix element and show that at $\Delta \gg E_{c}$ it is much smaller than the tandem one. It is easier to consider this mechanism in the electron representation. The Dexter exciton transfer happens due to the potential exchange interaction between two electrons in the NCs. The initial state is $\Psi^{L} (r_1) \Psi^{R}(r_{2})$, \textit{i.e.}, the first electron is in the conduction band of the left NC and the second electron is in the valence band of the right NC. The final state is $\Psi^{R} (r_1) \Psi^{L} (r_2)$, \textit{i.e.}, the first electron is in the conduction band of the right NC and the second electron is in the valence band of the left NC (see Fig. \ref{fig:Scheme}a). The matrix element has the following form: \begin{equation} \label{eq:M_D} M_D = \int \Psi^{L*} (r_1) \Psi^{R*} (r_2) V(r_{1},r_{2}) \Psi^{R} (r_1) \Psi^{L} (r_2) d^3r_1 d^3 r_2. \end{equation} Here $V(r_{1},r_{2})$ is the interaction energy between electrons at points $r_{1}$ and $r_{2}$, which is of the order of $E_{c}$. In general, calculating the matrix element is a difficult problem. For our case, however, a significant simplification is available because the internal dielectric constant $\kappa_{NC}$ is typically much larger than the external dielectric constant $\kappa$ of the insulator in which the NCs are embedded. The large internal dielectric constant $\kappa_{NC}$ implies that the NC charge is homogeneously redistributed over the NC surface. As a result, a semiconductor NC can be approximately considered as a metallic one in terms of its Coulomb interactions; namely, when the electrons are in two different NCs, the NCs are neutral, there is no interaction between them, and $V=0$. When the two electrons are in the same NC, both NCs are charged and $V=E_{c}$. Thus, we can approximate Eq. (\ref{eq:M_D}) as: \begin{equation} \label{eq:M_D_2} M_D = 2 E_{c} \left(\int \Psi^{L}(r) \Psi^{R}(r) d^{3}r\right)^{2}. \end{equation} The integral above is equal to $2 t_{e}/\Delta_{e}$ (see the methods section) and we get: \begin{equation} \label{eq:7} M_D = C_{D} E_{c}\left(\frac{\rho}{d}\right)^{6}, \end{equation} where $C_{D} = 2^{9}/9\pi^{2} \simeq 5.7$ is the numerical coefficient. Let us compare Eqs. (\ref{eq:7}) and (\ref{eq:our_Dexter}) for the matrix elements $M_D$ and $M_T$ of the Dexter and tandem processes. We see that $M_D$ is proportional to $E_c$, while $M_T$ is inversely proportional to $E_c$. (The origin of this difference is that, in Anderson's terminology \cite{Anderson_potential_kinetic_exchange}, the former describes ``potential exchange'', while the latter describes ``kinetic exchange''. In the theory of magnetism \cite{Anderson_potential_kinetic_exchange} the former leads to ferromagnetism and the latter to antiferromagnetism.) Note that the ratio (\ref{eq:the_answer}) is inversely proportional to the fourth power of the effective mass. As a result, in semiconductors with a small effective mass, such as InP and CdSe, the ratio of the tandem and Dexter rates is very large (up to $100$). Using $\kappa=2\kappa_{NC}\rho/d$ from Ref. \cite{dielectric_constant_touching_NC}, in Table 1 we calculate the ratio for different NCs. We see that typically the tandem tunneling rate is larger than or comparable with the Dexter one.
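To see how these estimates scale with material parameters, the sketch below implements the rate ratios (\ref{eq:the_answer_foster}) and (\ref{eq:the_answer}) directly, together with the definition of $a_{B}$ and Eq.~(\ref{eq:a0}) for $a_{0}$; the quantization energies are taken as $\Delta_{e,h}=\hbar^{2}(2\pi/d)^{2}/2m_{e,h}$, which follows from the ground-state wave function (\ref{eq:psi_0}). All numerical inputs below are illustrative placeholders rather than the Table 1 parameters, and $E_{c}$ is supplied by hand since its definition is not repeated here.
\begin{verbatim}
import numpy as np

HBAR = 1.054571817e-34   # J s
EPS0 = 8.8541878128e-12  # F/m
QE   = 1.602176634e-19   # C
ME   = 9.1093837015e-31  # kg

def exciton_bohr_radius(kappa_nc, me, mh):
    """a_B = 4 pi eps0 hbar^2 kappa_NC / (sqrt(me mh) e^2)."""
    return 4 * np.pi * EPS0 * HBAR**2 * kappa_nc / (np.sqrt(me * mh) * QE**2)

def dipole_a0(Eg, me):
    """a_0 from a_0^2 = (3/4) hbar^2 / (Eg me)."""
    return np.sqrt(0.75 * HBAR**2 / (Eg * me))

def quantization_energy(d, m):
    """Ground-state kinetic energy following from the wave function psi_0."""
    return HBAR**2 * (2 * np.pi / d)**2 / (2 * m)

def ratio_tandem_forster(kappa_nc, kappa, me, mh, Eg, rho_over_d):
    aB = exciton_bohr_radius(kappa_nc, me, mh)
    a0 = dipole_a0(Eg, me)
    return (8.7 * aB / a0)**4 * ((kappa_nc + 2 * kappa) / kappa_nc)**4 \
        * rho_over_d**12

def ratio_tandem_dexter(delta_e, delta_h, Ec):
    return (delta_e * delta_h / (4.0 * Ec**2))**2

# Illustrative placeholder inputs (not the Table 1 values):
d = 6e-9
rho = 0.2 * d
me, mh = 0.1 * ME, 0.3 * ME
kappa_nc = 12.0
kappa = 2 * kappa_nc * rho / d     # effective medium value quoted above
Eg = 1.4 * QE                      # band gap, in joules
Ec = 0.05 * QE                     # charging-energy scale, supplied by hand
print(ratio_tandem_forster(kappa_nc, kappa, me, mh, Eg, rho / d))
print(ratio_tandem_dexter(quantization_energy(d, me),
                          quantization_energy(d, mh), Ec))
\end{verbatim}
The steep $(\rho/d)^{12}$ dependence of the first ratio makes it extremely sensitive to the contact facet radius, which is why the quoted geometry ($\rho = 0.2d$) matters so much for the comparison with the F{\"o}rster rate.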
So far we have dealt only with NCs in which the quantization energy $\Delta$ is smaller than half of the semiconductor energy gap, so that one can use parabolic electron and hole spectra. This condition is violated in semiconductor NCs with very small effective masses $\sim 0.1\,m$ and small energy gaps $\sim 0.2$--$0.3$ eV, such as InAs and PbSe. In these cases, the quantization energy $\Delta$ should be calculated using the non-parabolic (``relativistic'') linear part of the electron and hole spectra, $|\epsilon| = \hbar v k$, where $v \simeq 10^{8}~\mathrm{cm/s}$~\cite{Wise_PbSe,PbS_spectrum,PbSe_NC_spectrum_Delerue}. This gives $\Delta = 2 \pi \hbar v/d$. We show in the methods section that substitution of $\Delta_{e,h}$ in Eq. (\ref{eq:t_2}) by $\Delta/2$ leads to the correct ``relativistic'' modification of the single-particle tunneling matrix element $t$ between two such NCs. Then for InAs and PbSe NCs with the same geometrical parameters as in Table 1 we arrive at ratios $\tau^{-1}_{T}/\tau^{-1}_{D}$ as large as $1000$ (see Table 2). One can see, however, that the inequalities (\ref{eq:condition}) are only marginally satisfied, so that this case deserves further attention. \begin{table}[h] \label{tab:parameters} \begin{tabular}{ c | c| c| c| c| c |c| c |c } \hline NC & $\kappa_{NC}$ & $a_{0}$ (\AA) & $\Delta$ (meV) & $\alpha \Delta$ (meV) & $E_{c}$ (meV) & $t_{e}$ (meV) & $\tau_{T}^{-1}/\tau_{F}^{-1}$ & $\tau_{T}^{-1}/\tau_{D}^{-1}$ \\ \hline PbSe & 23 &25.8 &660 &33&25&2&0.1& $10^{3}$ \\ InAs & 12.3 &19.7 &660 &33&46&2&$10^{-2}$& $10^{2}$ \\ \hline \end{tabular} \caption{Parameters and results for ``relativistic'' NCs PbSe and InAs. As in Table 1, we use $d=6~\mathrm{nm}$, $\rho=0.2 d \simeq 1~\mathrm{nm}$, and $\alpha=0.05$.} \end{table} \section{Conclusion} In this paper we considered exciton transfer in arrays of NCs epitaxially connected through facets with a small radius $\rho$. After evaluating the matrix elements for the F{\"o}rster and Dexter rates in such arrays, we proposed an alternative exciton transfer mechanism in which the electron and the hole tunnel in tandem through the contact facet. The tandem tunneling occurs in second order perturbation theory through the intermediate state in which the electron and the hole are in different NCs. For all the semiconductor NCs we studied, except ZnO, the tandem tunneling rate is much larger than the Dexter one. The tandem tunneling rate is comparable with the F{\"o}rster one for bright excitons and dominates for dark excitons. Therefore it determines exciton transfer at low temperatures. For silicon NCs the tandem tunneling rate substantially exceeds the F{\"o}rster rate. \section{Methods} \subsection{Calculation of $M_{T}$} \label{sec:Appendix_wave_function_t} If two NCs are separated, their 1S ground state is degenerate. When they touch each other by a small facet with radius $\rho \ll d$, the degeneracy is lifted and the 1S state is split into two levels $U_{s}$ and $U_{a}$ corresponding to the electron wave functions: \begin{equation} \label{eq:psi_g_u} \psi_{s,a}=\frac{1}{\sqrt{2}} [ \Psi^{L}(-z) \pm \Psi^{L}(z)], \end{equation} which are symmetric and antisymmetric about the plane $z=0$. The difference between the two energies is $U_{a}-U_{s}=2 t$, where $t$ is the overlap integral between the NCs. Similarly to problem 3 in \S 50 of Ref. \cite{landau_quantum} we get Eq. (\ref{eq:t}). Below we find $\Psi^{L}$ in the way outlined in Refs. \cite{Rayleigh_diffraction_aperure,localization_length_NC}.
We look for a solution of the form \begin{equation} \label{eq:psi_definition} \Psi^{L}=\psi_{0}+\psi, \end{equation} where $\psi_{0}$ is non-zero only inside the NC. $\psi$ is the correction, which is substantial only near the contact facet with radius $\rho \ll d$, so that $\nabla^{2} \psi \gg \psi \Delta_{e,h}$ and we can omit the energy term in the Schr\"{o}dinger equation: \begin{equation} \label{eq:Laplacian} \nabla^{2} \psi=0. \end{equation} Near the contact facet with $\rho \ll d$, two touching spheres can be seen as an impenetrable plane screen, and the contact facet as an aperture in the screen. The boundary conditions for $\psi$ are the following: $\psi=0$ on the screen, while in the aperture the derivative $\partial\Psi^{L}/\partial z$ is continuous: \begin{equation} \label{eq:psi_derivative} \left . \frac{\partial \psi}{\partial z} \right|_{z=+0} = \left . \frac{\partial \psi_{0}}{\partial z} + \frac{\partial \psi}{\partial z} \right|_{z=-0}. \end{equation} As shown in Refs. \cite{Rayleigh_diffraction_aperure,localization_length_NC}, $\psi$ is symmetric with respect to the plane $z=0$. As a result, \begin{equation} \label{eq:psi_z} \left. \frac{\partial \psi}{\partial z} \right|_{z=+0} = -2\frac{\sqrt{\pi}}{d^{5/2}} =A. \end{equation} It is easy to solve the Laplace equation with such boundary conditions in the oblate spheroidal coordinates $\xi$, $\mu$, $\varphi$, which are related to the cylindrical coordinates $z$, $\rho'$, $\varphi$ (see Fig.~\ref{fig:oblate}) as \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{figure2.pdf} \caption{\label{fig:oblate} The contact with radius $\rho$ between two spheres with diameter $d \gg \rho$ can be represented by a screen with an aperture. In oblate spheroidal coordinates the aperture corresponds to the plane $\xi=0$ and the screen corresponds to the plane $\mu=0$.} \end{figure} \begin{eqnarray} \rho' & = &\rho \sqrt{(1+\xi^{2})(1-\mu^{2})}, \nonumber \\ z & = &\rho \xi \mu, \\ \varphi & =& \varphi. \nonumber \label{eq:relation} \end{eqnarray} The Laplace equation can then be rewritten \cite{9780387184302}: \begin{equation} \label{eq:Laplace} \frac{\partial}{\partial \xi} (1+\xi^{2}) \frac{\partial \psi}{\partial \xi} + \frac{\partial}{\partial \mu} (1-\mu^{2}) \frac{\partial \psi}{\partial \mu}=0. \end{equation} The boundary conditions in these coordinates are $\psi=0$ for $\mu=0$ ($z=0$ and $\rho'>\rho$), and for the region $\xi=0$ ($z=0$, $\rho'<\rho$) $$ \left. \frac{\partial \psi}{\partial \xi}\right|_{\xi=0} \frac{1}{\rho \mu} = A. $$ One can check by direct substitution that the solution of the equation at $z>0$ with these boundary conditions is: \begin{equation} \label{eq:solution} \psi= \frac{2\rho A}{\pi} \mu (1-\xi \arccot \xi). \end{equation} Thus at the contact between the two spheres, $\xi=0$ ($z=0$, $\rho'<\rho$): \begin{equation} \label{eq:surface_psi} \psi= \frac{4}{\sqrt{\pi}} \frac{1}{d^{3/2}} \frac{\rho}{d} \sqrt{1-\frac{\rho'^{2}}{\rho^{2}}}. \end{equation} Now we can calculate the integral (\ref{eq:t}) using the expression (\ref{eq:surface_psi}) for $\psi$ at the contact between the two NCs and arrive at Eq. (\ref{eq:t_2}). \subsection{The energy of the intermediate state} Here we study a cubic lattice of touching NCs with period $d$. For large $\kappa_{NC}$ it can be considered as a lattice of identical capacitors with capacitance $C_{0}$ connecting nearest-neighbor sites. One immediately gets that the macroscopic dielectric constant of the NC array is $\kappa=4\pi C_{0}/d$.
We calculate the energy of the intermediate state, in which an electron and a hole occupy nearest-neighbor NCs, the reference point of energy being the energy of all neutral NCs. The Coulomb energy necessary to add one electron (or hole) to a neutral NC is called the charging energy $E_{e}$. It was shown \cite{dielectric_constant_touching_NC} that for touching NCs arranged in a cubic lattice this energy is: \begin{equation} \label{eq:chargin_energy} E_{e}=1.59 E_{c}. \end{equation} We show here that the interaction energy between two nearest-neighbor NCs is $ E_{I}=-(2\pi/3) E_{c}$, so that the energy of the intermediate state is $2E_{e}+E_{I} =\xi E_{c}$, where $\xi \simeq 1.08$. Let us first recall the derivation of the result (\ref{eq:chargin_energy}). By definition, the charging energy is \begin{equation} \label{eq:charging_energy_capacitance} E_{e} = \frac{e^{2}}{2C}, \end{equation} where $C$ is the capacitance of a NC immersed in the array. It is known that the capacitance between a site of the cubic lattice made of identical capacitors $C_{0}$ and infinity is $C=C_{0}/\beta$, $\beta \simeq 0.253$~\cite{PhysRevB.70.115317,Lattice_green_function}. We see that $1/\beta$ plays the role of the effective number of parallel capacitors connecting this site to infinity. Thus we arrive at \begin{equation} \label{eq:charging_energy_capacitance_2} E_{e} = \frac{e^{2}}{2C} =2\pi \beta E_{c} \simeq 1.59 E_{c}. \end{equation} Here we also need the interaction energy between two oppositely charged nearest sites of the cubic lattice, \begin{equation} \label{eq:ineraction_energy} E_{I} = -\frac{e^{2}}{2 C_{12}}, \end{equation} where $C_{12}$ is the total capacitance between the two nearest-neighbor NCs. It is easy to get that $C_{12}=3C_{0}$, so that \begin{equation} E_{I} = -\frac{e^{2}}{2 C_{12}} = -\frac{2\pi}{3}E_{c}. \end{equation} Thus we arrive at the energy of the intermediate state for the cubic lattice: $2E_{e}+E_{I} \simeq 1.08 E_{c}$, \textit{i.e.}, for this case we get $\xi=1.08$. We repeated this derivation for other lattices. We arrived at $\xi=0.96$ and $\xi=0.94$ for bcc and fcc lattices of capacitors, respectively. \subsection{Calculation of $M_{D}$} \label{sec:Dexter} One can calculate the integral (\ref{eq:M_D_2}) in the following way. Inside the left NC, $\Psi^{R}$ can be written as $\psi$. We start from the second Green identity for the functions $\Psi^{L}$ and $\psi$: \begin{equation} \label{eq:Green_identity} \int d^{3}r (\psi \nabla^{2} \Psi^{L} - \Psi^{L} \nabla^{2} \psi ) = \int dS (\psi \nabla \Psi^{L} - \Psi^{L} \nabla \psi). \end{equation} Because $\psi$ satisfies Eq. (\ref{eq:Laplacian}) and $\Psi^{L}$ is zero on the surface of the NC except at the contact facet, where it is equal to $\psi$, we get: $$ \int \psi(r) \Psi^{L}(r) d^{3}r = \frac{2t}{\Delta}. $$ \subsection{Non-parabolic band approximation} Below we use the non-parabolic ``relativistic'' Kane approach~\cite{PbS_spectrum}. Namely, we assume that the wave function $\psi_{0}$ of an electron or hole in the ground state of an isolated spherical NC satisfies the Klein-Gordon equation: \begin{equation} \label{eq:KG} -\hbar^{2}v^{2} \nabla^{2} \psi_{0}+m^{*2}v^{4}\psi_{0}=E^{2}\psi_{0}. \end{equation} This approximation works well for the ground state of an electron and hole~\cite{PbS_spectrum}. The energy spectrum is: \begin{equation} \label{eq:spectrum} E(k)=\pm\sqrt{m^{*2}v^{4}+\hbar^{2}v^{2}k^{2}}. \end{equation} One can immediately see that the bulk band gap is $E_{g}=2m^{*}v^{2}$.
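As a quick numerical illustration of why the linear part of the spectrum matters here, the sketch below compares the dispersion (\ref{eq:spectrum}) with its parabolic expansion $E \approx m^{*}v^{2}+\hbar^{2}k^{2}/2m^{*}$ at $k = 2\pi/d$, using $v = 10^{8}~\mathrm{cm/s}$ and an illustrative effective mass chosen so that $E_{g}=2m^{*}v^{2}\simeq 0.3$ eV; the numbers are meant only to show the size of the non-parabolic correction.
\begin{verbatim}
import numpy as np

HBAR = 1.054571817e-34   # J s
QE   = 1.602176634e-19   # C, used here to convert joules to eV

def E_relativistic(k, mstar, v):
    """Dispersion sqrt(m*^2 v^4 + hbar^2 v^2 k^2)."""
    return np.sqrt(mstar**2 * v**4 + HBAR**2 * v**2 * k**2)

def E_parabolic(k, mstar, v):
    """Parabolic expansion m* v^2 + hbar^2 k^2/(2 m*), valid for hbar v k << m* v^2."""
    return mstar * v**2 + HBAR**2 * k**2 / (2 * mstar)

v = 1.0e6                        # 10^8 cm/s expressed in m/s
mstar = 0.3 * QE / (2 * v**2)    # illustrative mass giving Eg = 2 m* v^2 = 0.3 eV
k = 2 * np.pi / 6e-9             # ground-state wave vector for d = 6 nm
print(E_relativistic(k, mstar, v) / QE, E_parabolic(k, mstar, v) / QE)  # in eV
\end{verbatim}
For these numbers $\hbar v k$ exceeds $m^{*}v^{2}$ several times over, so the parabolic estimate of the quantization energy is far too large; this is why Eq. (\ref{eq:Delta_relativistic}) below is used instead.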
The solution of Eq. (\ref{eq:KG}) for a spherical isolated NC is the same as in the parabolic band approximation (see Eq. (\ref{eq:psi_0})). The kinetic energy $\Delta$ becomes: \begin{equation} \label{eq:Delta_relativistic} \Delta=\sqrt{m^{*2}v^{4}+\hbar^{2}v^{2}\left(\frac{2\pi}{d}\right)^{2}}-m^{*}v^{2}. \end{equation} Let us now concentrate on the expression for $t$. If two NCs are separated, their 1S ground states are degenerate. When they touch each other through a small facet with radius $\rho \ll d$, the degeneracy is lifted and the 1S state splits into two levels $U_{s}$ and $U_{a}$ corresponding to the electron wave functions: \begin{equation} \label{eq:psi_g_u} \psi_{s,a}=\frac{1}{\sqrt{2}} [ \Psi^{L}(-z) \pm \Psi^{L}(z)], \end{equation} which are symmetric and antisymmetric about the plane $z=0$. The difference between the two energies is $U_{a}-U_{s}=2 t$, where $t$ is the overlap integral between the NCs. Similarly to problem 3 in $\S$50 of Ref. \cite{landau_quantum}, we use the fact that $\psi_{s}$ satisfies Eq. (\ref{eq:KG}) with the energy $U_{s}$ and $\Psi^{L}$ satisfies the same equation with the energy $E_{L}$. As a result we get the difference: \begin{equation} \label{eq:difference} E^{2}_{L}-U^{2}_{s} = \hbar^{2} v^{2} \int \left(\Psi_{L}\Delta \psi_{s} - \psi_{s} \Delta \Psi_{L} \right) dV \left( \int \psi_{s} \Psi_{L} dV \right)^{-1} \end{equation} Repeating the same step for $\psi_{a}$ we arrive at: \begin{equation} \label{eq:t_resistivity} t=(U_{a}^{2}-U_{s}^{2}) /(4U_{s}) =\frac{\hbar^{2}v^{2}}{U_{s}} \int \Psi^{L} \frac{\partial \Psi^{L}}{\partial z} dS. \end{equation} One can check that for $m^{*}v^{2} \gg \hbar v k$ this expression leads to (\ref{eq:t}). For $m^{*}=0$ we get: \begin{equation} \label{eq:t_r} t = \frac{\hbar v d}{2\pi} \int \Psi^{L} \frac{\partial \Psi^{L}}{\partial z} dS. \end{equation} Using the same approach to the calculation of the integral as in Section S1, we get: \begin{equation} \label{eq:t_R} t=\frac{4}{3\pi} \Delta \left(\frac{\rho}{d}\right)^{3}. \end{equation} In that case Eq. (\ref{eq:the_answer_foster}) for the ratio of the tandem and F{\"o}rster rates can be written as: \begin{equation} \label{eq:Tandem_forster} \frac{\tau_{T}^{-1}}{\tau_{F}^{-1}}=3.7 \left(\frac{\hbar v}{e^{2}}\right)^{4} \left(\frac{d}{a_{0}}\right)^{4} (\kappa+2\kappa_{NC})^{4} \left(\frac{\rho}{d}\right)^{12}, \end{equation} and the ratio of the tandem and Dexter rates is: \begin{equation} \label{eq:Tandem_dexter} \frac{\tau_{T}^{-1}}{\tau_{D}^{-1}}=\frac{\pi^{4}}{2^{4}} \left(\frac{\hbar v}{e^{2}}\right)^{4} \kappa_{NC}^{4}. \end{equation} Eqs. (\ref{eq:Tandem_forster}) and (\ref{eq:Tandem_dexter}) are used to calculate the ratios in Table 2. \section{Acknowledgement} We are grateful to A. V. Chubukov, P. Crowell, Al. L. Efros, H. Fu, R. Holmes, A. Kamenev, U. R. Kortshagen, A. V. Rodina, I. Rousochatzakis, M. Sammon, B. Skinner, M.V. Voloshin, D. R. Yakovlev and I. N. Yassievich for helpful discussions. This work was supported primarily by the National Science Foundation through the University of Minnesota MRSEC under Award No. DMR-1420013. 
\providecommand{\latin}[1]{#1} \providecommand*\mcitethebibliography{\thebibliography} \csname @ifundefined\endcsname{endmcitethebibliography} {\let\endmcitethebibliography\endthebibliography}{} \begin{mcitethebibliography}{64} \providecommand*\natexlab[1]{#1} \providecommand*\mciteSetBstSublistMode[1]{} \providecommand*\mciteSetBstMaxWidthForm[2]{} \providecommand*\mciteBstWouldAddEndPuncttrue {\def\unskip.}{\unskip.}} \providecommand*\mciteBstWouldAddEndPunctfalse {\let\unskip.}\relax} \providecommand*\mciteSetBstMidEndSepPunct[3]{} \providecommand*\mciteSetBstSublistLabelBeginEnd[3]{} \providecommand*\unskip.}{} \mciteSetBstSublistMode{f} \mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})} \mciteSetBstSublistLabelBeginEnd {\mcitemaxwidthsubitemform\space} {\relax} {\relax} \bibitem[Shirasaki \latin{et~al.}(2012)Shirasaki, Supran, Bawendi, and Bulovic]{ki_Supran_Bawendi_Bulovic_2012} Shirasaki,~Y.; Supran,~G.~J.; Bawendi,~M.~G.; Bulovic,~V. Emergence of Colloidal Quantum-Dot Light-Emitting Technologies. \emph{Nat. Photonics} \textbf{2012}, \emph{7}, 13--23\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Liu \latin{et~al.}(2009)Liu, Holman, and Kortshagen]{Kortshagen_Si_solar_cell} Liu,~C.-Y.; Holman,~Z.~C.; Kortshagen,~U.~R. Hybrid Solar Cells from {P3HT} and Silicon Nanocrystals. \emph{Nano Lett.} \textbf{2009}, \emph{9}, 449--452\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Gur \latin{et~al.}(2005)Gur, Fromer, Geier, and Alivisatos]{gur_air-stable_2005} Gur,~I.; Fromer,~N.~A.; Geier,~M.~L.; Alivisatos,~A.~P. Air-Stable All-Inorganic Nanocrystal Solar Cells Processed from Solution. \emph{Science} \textbf{2005}, \emph{310}, 462--465\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Yang \latin{et~al.}(2015)Yang, Zheng, Cao, Titov, Hyvonen, Manders, Xue, Holloway, and Qian]{LED_Yang_2015} Yang,~Y.; Zheng,~Y.; Cao,~W.; Titov,~A.; Hyvonen,~J.; Manders,~J.~R.; Xue,~J.; Holloway,~P.~H.; Qian,~L. High-Efficiency Light-Emitting Devices Based on Quantum Dots with Tailored Nanostructures. \emph{Nat. Photonics} \textbf{2015}, \emph{9}, 259-266\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Gong \latin{et~al.}(2016)Gong, Yang, Walters, Comin, Ning, Beauregard, Adinolfi, Voznyy, and Sargent]{Near_infrared_LED_2016} Gong,~X.; Yang,~Z.; Walters,~G.; Comin,~R.; Ning,~Z.; Beauregard,~E.; Adinolfi,~V.; Voznyy,~O.; Sargent,~E.~H. Highly Efficient Quantum Dot Near-Infrared Light-Emitting Diodes. \emph{Nat. Photonics} \textbf{2016}, \emph{10}, 253--257\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bae \latin{et~al.}(2013)Bae, Park, Lim, Lee, Padilha, McDaniel, Robel, Lee, Pietryga, and Klimov]{Robel_Lee_Pietryga_Klimov_2013} Bae,~W.~K.; Park,~Y.-S.; Lim,~J.; Lee,~D.; Padilha,~L.~A.; McDaniel,~H.; Robel,~I.; Lee,~C.; Pietryga,~J.~M.; Klimov,~V.~I. Controlling the Influence of Auger Recombination on the Performance of Quantum-Dot Light-Emitting Diodes. \emph{Nat. 
Commun.} \textbf{2013}, \emph{4}, 2661\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Straus \latin{et~al.}(2015)Straus, Goodwin, Gaulding, Muramoto, Murray, and Kagan]{Kagan_surface_trap_passivation} Straus,~D.~B.; Goodwin,~E.~D.; Gaulding,~E.~A.; Muramoto,~S.; Murray,~C.~B.; Kagan,~C.~R. Increased Carrier Mobility and Lifetime in {CdSe} Quantum Dot Thin Films through Surface Trap Passivation and Doping. \emph{J. Phys. Chem. Lett.} \textbf{2015}, \emph{6}, 4605--4609\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Choi \latin{et~al.}(2012)Choi, Fafarman, Oh, Ko, Kim, Diroll, Muramoto, Gillen, Murray, and Kagan]{Murray_Bandlike} Choi,~J.-H.; Fafarman,~A.~T.; Oh,~S.~J.; Ko,~D.-K.; Kim,~D.~K.; Diroll,~B.~T.; Muramoto,~S.; Gillen,~J.~G.; Murray,~C.~B.; Kagan,~C.~R. Bandlike Transport in Strongly Coupled and Doped Quantum Dot Solids: A Route to High-Performance Thin-Film Electronics. \emph{Nano Lett.} \textbf{2012}, \emph{12}, 2631--2638\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Reich \latin{et~al.}(2014)Reich, Chen, and Shklovskii]{Reich_Shklovskii} Reich,~K.~V.; Chen,~T.; Shklovskii,~B.~I. Theory of a Field-Effect Transistor Based on a Semiconductor Nanocrystal Array. \emph{Phys. Rev. B} \textbf{2014}, \emph{89}, 235303\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Liu \latin{et~al.}(2013)Liu, Tolentino, Gibbs, Ihly, Perkins, Liu, Crawford, Hemminger, and Law]{iu_Crawford_Hemminger_Law_2013} Liu,~Y.; Tolentino,~J.; Gibbs,~M.; Ihly,~R.; Perkins,~C.~L.; Liu,~Y.; Crawford,~N.; Hemminger,~J.~C.; Law,~M. {PbSe} Quantum Dot Field-Effect Transistors with Air-Stable Electron Mobilities above $7~\mathrm{cm^2V^{-1}s^{-1}}$. \emph{Nano Lett.} \textbf{2013}, \emph{13}, 1578--1567\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Choi \latin{et~al.}(2016)Choi, Wang, Oh, Paik, Sung, Sung, Ye, Zhao, Diroll, Murray, and Kagan]{Kagan_FET_2016} Choi,~J.-H.; Wang,~H.; Oh,~S.~J.; Paik,~T.; Sung,~P.; Sung,~J.; Ye,~X.; Zhao,~T.; Diroll,~B.~T.; Murray,~C.~B.; Kagan,~C.~R. Exploiting the Colloidal Nanocrystal Library to Construct Electronic Devices. \emph{Science} \textbf{2016}, \emph{352}, 205--208\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Keuleyan \latin{et~al.}(2011)Keuleyan, Lhuillier, Brajuskovic, and Guyot-Sionnest]{Philippe_HgTe} Keuleyan,~S.; Lhuillier,~E.; Brajuskovic,~V.; Guyot-Sionnest,~P. Mid-Infrared {HgTe} Colloidal Quantum Dot Photodetectors. \emph{Nat. Photonics} \textbf{2011}, \emph{5}, 489--493\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Jeong and Guyot-Sionnest(2016)Jeong, and Guyot-Sionnest]{Philippe_mid_infrared_2016} Jeong,~K.~S.; Guyot-Sionnest,~P. Mid-Infrared Photoluminescence of CdS and {CdSe} Colloidal Quantum Dots. 
\emph{ACS Nano} \textbf{2016}, \emph{10}, 2225--2231\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Talapin \latin{et~al.}(2010)Talapin, Lee, Kovalenko, and Shevchenko]{Lee_Kovalenko_Shevchenko_2010} Talapin,~D.~V.; Lee,~J.-S.; Kovalenko,~M.~V.; Shevchenko,~E.~V. Prospects of Colloidal Nanocrystals for Electronic and Optoelectronic Applications. \emph{Chem. Rev. (Washington, DC, U. S.)} \textbf{2010}, \emph{110}, 389--458\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kagan and Murray(2015)Kagan, and Murray]{Kagan_QD_review} Kagan,~C.~R.; Murray,~C.~B. Charge Transport in Strongly Coupled Quantum Dot Solids. \emph{Nat. Nanotechnol.} \textbf{2015}, \emph{10}, 1013–1026\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Liu \latin{et~al.}(2013)Liu, Lee, and Talapin]{Talapin_2013_high_mobility} Liu,~W.; Lee,~J.-S.; Talapin,~D.~V. {III-V} Nanocrystals Capped with Molecular Metal Chalcogenide Ligands: High Electron Mobility and Ambipolar Photoresponse. \emph{J. Am. Chem. Soc.} \textbf{2013}, \emph{135}, 1349--1357\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Lanigan and Thimsen(2016)Lanigan, and Thimsen]{Thimsen_ZnO_MIT} Lanigan,~D.; Thimsen,~E. Contact Radius and the Insulator–Metal Transition in Films Comprised of Touching Semiconductor Nanocrystals. \emph{ACS Nano} \textbf{2016}, \emph{10}, 6744–6752\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Oh \latin{et~al.}(2014)Oh, Berry, Choi, Gaulding, Lin, Paik, Diroll, Muramoto, Murray, and Kagan]{Kagan_ALD_NC_2014} Oh,~S.~J.; Berry,~N.~E.; Choi,~J.-H.; Gaulding,~E.~A.; Lin,~H.; Paik,~T.; Diroll,~B.~T.; Muramoto,~S.; Murray,~C.~B.; Kagan,~C.~R. Designing High-Performance {PbS} and {PbSe} Nanocrystal Electronic Devices through Stepwise, Post-Synthesis, Colloidal Atomic Layer Deposition. \emph{Nano Letters} \textbf{2014}, \emph{14}, 1559--1566\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Delerue(2016)]{Review_NC__touching_array} Delerue,~C. Nanocrystal Solids: Order and Progress. \emph{Nat. Mater.} \textbf{2016}, \emph{15}, 498--499\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Williams \latin{et~al.}(2009)Williams, Tisdale, Leschkies, Haugstad, Norris, Aydil, and Zhu]{strong_coupling_superlattice} Williams,~K.~J.; Tisdale,~W.~A.; Leschkies,~K.~S.; Haugstad,~G.; Norris,~D.~J.; Aydil,~E.~S.; Zhu,~X.-Y. Strong Electronic Coupling in Two-Dimensional Assemblies of Colloidal {PbSe} Quantum Dots. 
\emph{ACS Nano} \textbf{2009}, \emph{3}, 1532--1538\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Walravens \latin{et~al.}(2016)Walravens, Roo, Drijvers, ten Brinck, Solano, Dendooven, Detavernier, Infante, and Hens]{QD_epitaxial_conneccted} Walravens,~W.; Roo,~J.~D.; Drijvers,~E.; ten Brinck,~S.; Solano,~E.; Dendooven,~J.; Detavernier,~C.; Infante,~I.; Hens,~Z. Chemically Triggered Formation of Two-Dimensional Epitaxial Quantum Dot Superlattices. \emph{ACS Nano} \textbf{2016}, \emph{10}, 6861–6870\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Baumgardner \latin{et~al.}(2013)Baumgardner, Whitham, and Hanrath]{lattice_NC_Tobias} Baumgardner,~W.~J.; Whitham,~K.; Hanrath,~T. Confined-but-Connected Quantum Solids \emph{via} Controlled Ligand Displacement. \emph{Nano Lett.} \textbf{2013}, \emph{13}, 3225--3231\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Sandeep \latin{et~al.}(2014)Sandeep, Azpiroz, Evers, Boehme, Moreels, Kinge, Siebbeles, Infante, and Houtepen]{facet_PbSE_mobility_2} Sandeep,~C. S.~S.; Azpiroz,~J.~M.; Evers,~W.~H.; Boehme,~S.~C.; Moreels,~I.; Kinge,~S.; Siebbeles,~L. D.~A.; Infante,~I.; Houtepen,~A.~J. Epitaxially Connected {PbSe} Quantum-Dot Films: Controlled Neck Formation and Optoelectronic Properties. \emph{ACS Nano} \textbf{2014}, \emph{8}, 11499--11511\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Li \latin{et~al.}(2016)Li, Zhitomirsky, Dave, and Grossman]{facet_PbSE_mobility_1} Li,~H.; Zhitomirsky,~D.; Dave,~S.; Grossman,~J.~C. Toward the Ultimate Limit of Connectivity in Quantum Dots with High Mobility and Clean Gaps. \emph{ACS Nano} \textbf{2016}, \emph{10}, 606--614\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Whitham \latin{et~al.}(2016)Whitham, Yang, Savitzky, Kourkoutis, Wise, and Hanrath]{transport_lattice_QD} Whitham,~K.; Yang,~J.; Savitzky,~B.~H.; Kourkoutis,~L.~F.; Wise,~F.; Hanrath,~T. Charge Transport And Localization In Atomically Coherent Quantum Dot Solids. \emph{Nat. Mater.} \textbf{2016}, \emph{15}, 557--563\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Evers \latin{et~al.}(2015)Evers, Schins, Aerts, Kulkarni, Capiod, Berthe, Grandidier, Delerue, van~der Zant, van Overbeek, Peters, Vanmaekelbergh, and Siebbeles]{lattice_NC_Vanmaekelbergh_THzmobility} Evers,~W.~H.; Schins,~J.~M.; Aerts,~M.; Kulkarni,~A.; Capiod,~P.; Berthe,~M.; Grandidier,~B.; Delerue,~C.; van~der Zant,~H. S.~J.; van Overbeek,~C.; Peters,~J.~L. ;Vanmaekelbergh,~D.; Siebbeles~L.~D.~A. High Charge Mobility in Two-Dimensional Percolative Networks of {PbSe} Quantum Dots Connected by Atomic Bonds. \emph{Nat. 
Commun.} \textbf{2015}, \emph{6}, 8195\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Jang \latin{et~al.}(2015)Jang, Dolzhnikov, Liu, Nam, Shim, and Talapin]{Talapin_NCs_bridges} Jang,~J.; Dolzhnikov,~D.~S.; Liu,~W.; Nam,~S.; Shim,~M.; Talapin,~D.~V. Solution-Processed Transistors Using Colloidal Nanocrystals with Composition-Matched Molecular {\textquotedblleft}Solders{\textquotedblright}: Approaching Single Crystal Mobility. \emph{Nano Lett.} \textbf{2015}, \emph{15}, 6309--6317\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Fu \latin{et~al.}(2016)Fu, Reich, and Shklovskii]{localization_length_NC} Fu,~H.; Reich,~K.~V.; Shklovskii,~B.~I. Hopping Conductivity and Insulator-Metal Transition in Films of Touching Semiconductor Nanocrystals. \emph{Phys. Rev. B} \textbf{2016}, \emph{93}, 125430\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Chen \latin{et~al.}(2016)Chen, Reich, Kramer, Fu, Kortshagen, and Shklovskii]{Ting_MIT} Chen,~T.; Reich,~K.~V.; Kramer,~N.~J.; Fu,~H.; Kortshagen,~U.~R.; Shklovskii,~B.~I. Metal-Insulator Transition in Films of Doped Semiconductor Nanocrystals. \emph{Nat. Mater.} \textbf{2016}, \relax \mciteBstWouldAddEndPunctfalse \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kagan \latin{et~al.}(1996)Kagan, Murray, and Bawendi]{Kagan_Foster_CdSe} Kagan,~C.~R.; Murray,~C.~B.; Bawendi,~M.~G. Long-Range Resonance Transfer of Electronic Excitations in Close-Packed {CdSe} Quantum-Dot Solids. \emph{Phys. Rev. B} \textbf{1996}, \emph{54}, 8633--8643\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Crooker \latin{et~al.}(2002)Crooker, Hollingsworth, Tretiak, and Klimov]{Klimov_exciton_transfer} Crooker,~S.~A.; Hollingsworth,~J.~A.; Tretiak,~S.; Klimov,~V.~I. Spectrally Resolved Dynamics of Energy Transfer in Quantum-Dot Assemblies: Towards Engineered Energy Flows in Artificial Materials. \emph{Phys. Rev. Lett.} \textbf{2002}, \emph{89}, 186802\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Miyazaki and Kinoshita(2012)Miyazaki, and Kinoshita]{Kinoshita_exciton_hopping} Miyazaki,~J.; Kinoshita,~S. Site-Selective Spectroscopic Study on the Dynamics of Exciton Hopping in an Array of Inhomogeneously Broadened Quantum Dots. \emph{Phys. Rev. B} \textbf{2012}, \emph{86}, 035303\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Achermann \latin{et~al.}(2003)Achermann, Petruska, Crooker, and Klimov]{Klimov_Foster} Achermann,~M.; Petruska,~M.~A.; Crooker,~S.~A.; Klimov,~V.~I. Picosecond Energy Transfer in Quantum Dot Langmuir−Blodgett Nanoassemblies. \emph{J. Phys. Chem. 
B} \textbf{2003}, \emph{107}, 13782--13787\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kholmicheva \latin{et~al.}(2015)Kholmicheva, Moroz, Bastola, Razgoniaeva, Bocanegra, Shaughnessy, Porach, Khon, and Zamkov]{Zamkov_exciton_diffusion} Kholmicheva,~N.; Moroz,~P.; Bastola,~E.; Razgoniaeva,~N.; Bocanegra,~J.; Shaughnessy,~M.; Porach,~Z.; Khon,~D.; Zamkov,~M. Mapping the Exciton Diffusion in Semiconductor Nanocrystal Solids. \emph{ACS Nano} \textbf{2015}, \emph{9}, 2926--2937\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Poulikakos \latin{et~al.}(2014)Poulikakos, Prins, and Tisdale]{Tisdale_exciton_migration} Poulikakos,~L.~V.; Prins,~F.; Tisdale,~W.~A. Transition from Thermodynamic to Kinetic-Limited Excitonic Energy Migration in Colloidal Quantum Dot Solids. \emph{J. Phys. Chem. C} \textbf{2014}, \emph{118}, 7894--7900\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Crisp \latin{et~al.}(2013)Crisp, Schrauben, Beard, Luther, and Johnson]{uben_Beard_Luther_Johnson_2013} Crisp,~R.~W.; Schrauben,~J.~N.; Beard,~M.~C.; Luther,~J.~M.; Johnson,~J.~C. Coherent Exciton Delocalization in Strongly Coupled Quantum Dot Arrays. \emph{Nano Lett.} \textbf{2013}, \emph{13}, 4862--4869\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Mork \latin{et~al.}(2014)Mork, Weidman, Prins, and Tisdale]{Tisdale_Foster_radius} Mork,~A.~J.; Weidman,~M.~C.; Prins,~F.; Tisdale,~W.~A. Magnitude of the {F}{\"o}rster Radius in Colloidal Quantum Dot Solids. \emph{J. Phys. Chem. C} \textbf{2014}, \emph{118}, 13920--13928\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Akselrod \latin{et~al.}(2014)Akselrod, Prins, Poulikakos, Lee, Weidman, Mork, Willard, Bulovic, and Tisdale]{Tisdale_subdiffusive_transport} Akselrod,~G.~M.; Prins,~F.; Poulikakos,~L.~V.; Lee,~E. M.~Y.; Weidman,~M.~C.; Mork,~A.~J.; Willard,~A.~P.; Bulovic,~V.; Tisdale,~W.~A. Subdiffusive Exciton Transport in Quantum Dot Solids. \emph{Nano Lett.} \textbf{2014}, \emph{14}, 3556--3562\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Murray \latin{et~al.}(2000)Murray, Kagan, and Bawendi]{Murray-AnnuRev30-2000} Murray,~C.~B.; Kagan,~C.~R.; Bawendi,~M.~G. Synthesis And Characterization Of Monodisperse Nanocrystals And Close-Packed Nanocrystal Assemblies. \emph{Annu. Rev. Mater. Sci.} \textbf{2000}, \emph{30}, 545--610\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kalesaki \latin{et~al.}(2014)Kalesaki, Delerue, Morais~Smith, Beugeling, Allan, and Vanmaekelbergh]{Delerue_band} Kalesaki,~E.; Delerue,~C.; Morais~Smith,~C.; Beugeling,~W.; Allan,~G.; Vanmaekelbergh,~D. Dirac Cones, Topological Edge States, and Nontrivial Flat Bands in Two-Dimensional Semiconductors with a Honeycomb Nanogeometry. \emph{Phys. Rev. 
X} \textbf{2014}, \emph{4}, 011010\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kalesaki \latin{et~al.}(2013)Kalesaki, Evers, Allan, Vanmaekelbergh, and Delerue]{Delerue_2D_band} Kalesaki,~E.; Evers,~W.~H.; Allan,~G.; Vanmaekelbergh,~D.; Delerue,~C. Electronic Structure of Atomically Coherent Square Semiconductor Superlattices with Dimensionality Below Two. \emph{Phys. Rev. B} \textbf{2013}, \emph{88}, 115431\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Beugeling \latin{et~al.}(2015)Beugeling, Kalesaki, Delerue, Niquet, Vanmaekelbergh, and Smith]{Delerue_topology_2D} Beugeling,~W.; Kalesaki,~E.; Delerue,~C.; Niquet,~Y.-M.; Vanmaekelbergh,~D.; Smith,~C.~M. Topological States in Multi-Orbital {HgTe} Honeycomb Lattices. \emph{Nat. Commun.} \textbf{2015}, \emph{6}, 6316\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Madelung(2013)]{978-3-642-62332-5} Madelung,~O. \emph{Semiconductors: Data Handbook}; Springer:New-York, 2004 \relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Reich and Shklovskii(2016)Reich, and Shklovskii]{dielectric_constant_touching_NC} Reich,~K.~V.; Shklovskii,~B.~I. Dielectric Constant and Charging Energy in Array of Touching Nanocrystals. \emph{Appl. Phys. Lett.} \textbf{2016}, \emph{108}, 113104\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Nirmal \latin{et~al.}(1995)Nirmal, Norris, Kuno, Bawendi, Efros, and Rosen]{Efros_dark_exciton} Nirmal,~M.; Norris,~D.~J.; Kuno,~M.; Bawendi,~M.~G.; Efros,~A.~L.; Rosen,~M. Observation of the "Dark Exciton" in {CdSe} Quantum Dots. \emph{Phys. Rev. Lett.} \textbf{1995}, \emph{75}, 3728--3731\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Dexter(1953)]{Dexter_transfer} Dexter,~D.~L. A Theory of Sensitized Luminescence in Solids. \emph{J. Chem. Phys.} \textbf{1953}, \emph{21}, 836--850\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Liu \latin{et~al.}(2015)Liu, Rodina, Yakovlev, Golovatenko, Greilich, Vakhtin, Susha, Rogach, Kusrayev, and Bayer]{energy_transport_NC_Rodina} Liu,~F.; Rodina,~A.~V.; Yakovlev,~D.~R.; Golovatenko,~A.~A.; Greilich,~A.; Vakhtin,~E.~D.; Susha,~A.; Rogach,~A.~L.; Kusrayev,~Y.~G.; Bayer,~M. F\"orster Energy Transfer of Dark Excitons Enhanced by a Magnetic Field in an Ensemble of {CdTe} Colloidal Nanocrystals. \emph{Phys. Rev. B} \textbf{2015}, \emph{92}, 125403\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Blumling \latin{et~al.}(2012)Blumling, Tokumoto, McGill, and Knappenberger]{Exciton_CdSe_H_T} Blumling,~D.~E.; Tokumoto,~T.; McGill,~S.; Knappenberger,~K.~L. Temperature- and Field-Dependent Energy Transfer in {CdSe} Nanocrystal Aggregates Studied by Magneto-Photoluminescence Spectroscopy. \emph{Phys. Chem. Chem. 
Phys.} \textbf{2012}, \emph{14}, 11053\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Landau and Lifshits(1977)Landau, and Lifshits]{landau_quantum} Landau,~L.; Lifshits,~E. \emph{Quantum Mechanics: Non-relativistic Theory}; Butterworth Heinemann: New-York, 1977\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Moskalenko and Yassievich(2004)Moskalenko, and Yassievich]{Yassievich_excitons_Si} Moskalenko,~A.~S.; Yassievich,~I.~N. Excitons in {Si} Nanocrystals. \emph{Phys. Solid State} \textbf{2004}, \emph{46}, 1508--1519\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Burdov(2002)]{Holes_electrons_NCs} Burdov,~V.~A. Electron and Hole Spectra of Silicon Quantum Dots. \emph{J. Exp. Theor. Phys.} \textbf{2002}, \emph{94}, 411--418\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Allan and Delerue(2007)Allan, and Delerue]{Delerue_Foster_NC} Allan,~G.; Delerue,~C. Energy Transfer Between Semiconductor Nanocrystals: Validity of {F}{\"o}rster's Theory. \emph{Phys. Rev. B} \textbf{2007}, \emph{75}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Poddubny and Rodina(2016)Poddubny, and Rodina]{Forster_Rodina} Poddubny,~A.~N.; Rodina,~A.~V. Nonradiative and Radiative {F}{\"o}rster Energy Transfer Between Quantum Dots. \emph{J. Exp. Theor. Phys.} \textbf{2016}, \emph{122}, 531--538\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Blood(2015)]{laser_devices_Blood} Blood,~P. \emph{Quantum Confined Laser Devices: Optical gain and recombination in semiconductors}; Oxford University Press: Oxford, 2015\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Efros and Rosen(2000)Efros, and Rosen]{Efros_review_NCs} Efros,~A.~L.; Rosen,~M. The Electronic Structure of Semiconductor Nanocrystals. \emph{Annu. Rev. Mater. Sci.} \textbf{2000}, \emph{30}, 475--521\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Anderson(1963)]{Anderson_potential_kinetic_exchange} Anderson,~P.~W. In \emph{Solid State Physics}; Seitz,~F., Turnbull,~D., Eds.; Academic Press: New-York, 1963; Vol.~14; pp 99 -- 214\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kang and Wise(1997)Kang, and Wise]{Wise_PbSe} Kang,~I.; Wise,~F.~W. Electronic Structure and Optical Properties of {PbS} and {PbSe} Quantum Dots. \emph{J. Opt. Soc. Am. B} \textbf{1997}, \emph{14}, 1632--1646\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Wang \latin{et~al.}(1987)Wang, Suna, Mahler, and Kasowski]{PbS_spectrum} Wang,~Y.; Suna,~A.; Mahler,~W.; Kasowski,~R. {PbS} in Polymers. 
From Molecules to Bulk Solids. \emph{J. Chem. Phys.} \textbf{1987}, \emph{87}, 7315--7322\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Allan and Delerue(2004)Allan, and Delerue]{PbSe_NC_spectrum_Delerue} Allan,~G.; Delerue,~C. Confinement Effects in {PbSe} Quantum Wells and Nanocrystals. \emph{Phys. Rev. B} \textbf{2004}, \emph{70}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Rayleigh(1897)]{Rayleigh_diffraction_aperure} Rayleigh,~F. R.~S. On the Passage of Waves Through Apertures in Plane Screens, and Allied Problems. \emph{Philos. Mag. (1798-1977)} \textbf{1897}, \emph{43}, 259--272\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Spencer Domina~Eberle(1988)]{9780387184302} Moon, P.; Spencer, D.E. \emph{Field Theory Handbook: Including Coordinate Systems- Differential Equations- and Their Solutions}; A Springer-Verlag Telos: New-York, 1988\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Zhang and Shklovskii(2004)Zhang, and Shklovskii]{PhysRevB.70.115317} Zhang,~J.; Shklovskii,~B.~I. Density of States and Conductivity of a Granular Metal or an Array of Quantum Dots. \emph{Phys. Rev. B} \textbf{2004}, \emph{70}, 115317\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Guttmann(2010)]{Lattice_green_function} Guttmann,~A.~J. Lattice Green's Functions in All Dimensions. \emph{J. Phys. A: Math. Theor.} \textbf{2010}, \emph{43}, 305205\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \end{mcitethebibliography} \end{document}
\section{Introduction} While randomized clinical trials (RCTs) remain the gold standard, large-scale observational data such as electronic medical record (EMR), mobile health, and insurance claims data are playing an increasing role in evaluating treatments in biomedical studies. Such observational data are valuable because they can be collected in contexts where RCTs cannot be run due to cost, ethical, or other feasibility issues \citep{rosenbaum2002observational}. In the absence of randomization, adjustment for covariates $\mathbf{X}$ that satisfy ``no unmeasured confounding'' (or ``ignorability'' or ``exchangeability'') assumptions is needed to avoid bias from confounding. Many methods based on the propensity score (PS), outcome regression, or a combination have been developed to estimate treatment effects under these assumptions (for a review of the basic approaches see \cite{lunceford2004stratification} and \cite{kang2007demystifying}). These methods were initially developed in settings where $p$, the dimension of $\mathbf{X}$, was small relative to the sample size $n$. Modern observational data, however, tend to include a large number of variables with little prior knowledge about them. Regardless of the size of $p$, adjustment for different covariates among the observed $\mathbf{X}$ yields different effects. Let $\mathcal{A}_{\pi}$ index the subset of $\mathbf{X}$ that contributes to the PS $\pi_1(\mathbf{x})=\mathbb P(T=1\mid\mathbf{X}=\mathbf{x})$, where $T\in\{0,1\}$ is the treatment. Let $\mathcal{A}_{\mu}$ index the subset of $\mathbf{X}$ contributing to either $\mu_1(\mathbf{x})$ or $\mu_0(\mathbf{x})$, where $\mu_k(\mathbf{x}) = \mathbb E(Y\mid \mathbf{X}=\mathbf{x},T=k)$ and $Y$ is an outcome. Beyond adjusting for covariates indexed in $\mathcal{A}_{\pi} \cap \mathcal{A}_{\mu}$ to mitigate confounding, it is well-known that adjusting for covariates in $\mathcal{A}_{\pi}^c\cap\mathcal{A}_{\mu}$ improves the efficiency of PS-based treatment effect estimators, whereas adjusting for covariates in $\mathcal{A}_{\pi}\cap\mathcal{A}_{\mu}^c$ decreases efficiency \citep{lunceford2004stratification,hahn2004functional,brookhart2006variable,rotnitzky2010note,de2011covariate}. This parallels similar phenomena that occur when adjusting for covariates in regression \citep{tsiatis2008covariate,de2008consequences}. Early studies using PS-based approaches cautioned against excluding variables among $\mathbf{X}$, to avoid incurring bias from excluding confounders despite the potential efficiency loss \citep{perkins2000use,rubin1996matching}. \cite{vanderweele2011new} proposed a simple criterion of adjusting for covariates that are known to be a cause of either $T$ or $Y$. These strategies are not feasible when $p$ is large. Initial data-driven approaches based on screening marginal associations of covariates in $\mathbf{X}$ with $T$ and $Y$ were considered in \cite{schneeweiss2009high} and \cite{hirano2001estimation}. These approaches, however, can be misleading because marginal associations need not agree with conditional associations. \cite{vansteelandt2012model} and \cite{gruber2015consistent} distinguish the problem of variable selection for estimating causal effects from variable selection for building predictive models and propose stepwise procedures focusing on optimizing treatment effect estimation. More recently, a number of authors have considered regularization methods for variable selection in treatment effect estimation. 
For example, \cite{belloni2013inference} estimated the joint support $\mathcal{A}_{\pi}\cup\mathcal{A}_{\mu}$ through regularization and obtained treatment effects through a partially linear regression model. \cite{belloni2017program} then considered estimating treatment effects using orthogonal estimating equations (i.e. those of the doubly-robust (DR) estimator \citep{robins1994estimation}), with regularization used to estimate models for $\pi_1(\mathbf{x})$ and $\mu_k(\mathbf{x})$. \cite{farrell2015robust} similarly considered estimating treatment effects with the DR estimator, using the group LASSO to group main and interaction effects in the outcome model. These papers focus on developing theory for existing treatment effect estimators to allow for valid inferences following variable selection. \cite{wilson2014confounder} considered estimating $\mathcal{A}_{\pi} \cup \mathcal{A}_{\mu}$ through a regularized loss function, which simplifies to an adaptive LASSO \citep{zou2006adaptive} problem with a modified penalty that selects out covariates not conditionally associated with either $T$ or $Y$. Treatment effects are estimated through an outcome model after selection. \cite{shortreed2017outcome} proposed an approach that also modifies the weights in an adaptive LASSO for a PS model to select out covariates in $\mathcal{A}_{\pi}\cap\mathcal{A}_{\mu}^c$, estimating the final treatment effect through an inverse probability weighting (IPW) estimator. These approaches modify existing regularization techniques to identify relevant covariates, but there is limited theory to support their performance compared to existing causal inference estimators used with regularization. Furthermore, because the final treatment effect is estimated through a purely PS-based or purely outcome regression-based approach, the double-robustness property is often forfeited. Alternatively, \cite{koch2017covariate} proposed an adaptive group LASSO to estimate models for $\pi_k(\mathbf{x})$ and $\mu_k(\mathbf{x})$ through simultaneously minimizing the sum of their loss functions with a group LASSO penalty grouping together coefficients between the models for the same covariate. The penalty is also weighted by the inverse of the association between each covariate and $Y$ from an initial outcome model to select out covariates belonging to $\mathcal{A}_{\pi}\cap\mathcal{A}_{\mu}^c$. However, selecting out covariates in $\mathcal{A}_{\pi}\cap\mathcal{A}_{\mu}^c$ may inadvertently induce a misspecified model for $\pi_1(\mathbf{x})$ when it is in fact correctly specified given covariates indexed in $\mathcal{A}_{\pi}$. Moreover, the asymptotic distribution is not identified when the nuisance functions are estimated, making it difficult to compare its efficiency with other methods. Bayesian model averaging provides an alternative to regularization methods for variable selection in causal inference problems \citep{wang2012bayesian,zigler2014uncertainty,talbot2015bayesian}. These methods, however, rely on strong parametric assumptions and encounter burdensome computations when $p$ is not small. \cite{cefalu2017model} applied Bayesian model averaging to doubly-robust estimators, averaging doubly-robust estimates over posterior model probabilities of a large collection of combinations of parametric models for the nuisance functions. 
Priors that prefer models not including covariates belonging to $\mathcal{A}_{\pi}\cap\mathcal{A}_{\mu}^c$ ease the computations. Despite this innovation, the computations could still be burdensome and possibly infeasible for large $p$. Most of the aforementioned methods did not consider asymptotic properties allowing $p$ to diverge with $n$. We consider an IPW-based approach to estimate treatment effects with possibly high-dimensional $\mathbf{X}$. We first use regularized regression to estimate a parametric model for $\pi_1(\mathbf{x})$. Since this neglects associations between $\mathbf{X}$ and $Y$, we also use regularized regression to estimate a model for $\mu_k(\mathbf{x})$, for $k=0,1$. We then calibrate the initial PS estimates by performing nonparametric smoothing of $T$ over the linear predictors for $\mathbf{X}$ from both the initial PS and outcome models. Smoothing over the linear predictors from the outcome model, which can be viewed as a working prognostic score \citep{hansen2008prognostic}, uses the variation in $\mathbf{X}$ predictive of $Y$ to inform estimation of the calibrated PS. We show that our proposed estimator is doubly-robust and locally semiparametric efficient for the ATE under a nonparametric model. Moreover, we show that it achieves potentially substantial gains in robustness and efficiency under misspecification of working models for $\pi_1(\mathbf{x})$ and $\mu_k(\mathbf{x})$. The results are shown to hold allowing $p$ to diverge with $n$ under sparsity assumptions with suitable regularization. The broad approach is similar to a method proposed for estimating mean outcomes in the presence of data missing at random \citep{hu2012semiparametric}, except that we use the double score to estimate a PS instead of an outcome model. In contrast to their results, we show that a higher-order kernel is required due to the two-dimensional smoothing, find explicit efficiency gains under misspecification of the outcome model, and consider asymptotics with diverging $p$. The combined use of PS and prognostic scores has also recently been suggested for matching and subclassification \citep{leacy2014joint}, but the theoretical properties have not been established. The rest of this paper is organized as follows. The method is introduced in Section \ref{s:method} and its asymptotic properties are discussed in Section \ref{s:asymptotics}. A perturbation-resampling method is proposed for inference in Section \ref{s:perturbation}. Numerical studies, including simulations and applications to estimating treatment effects in an EMR study and a cohort study with a large number of covariates, are presented in Section \ref{s:numerical}. We conclude with some additional discussion in Section \ref{s:discussion}. Regularity conditions and proofs are relegated to the Web Appendices. \section{Method} \label{s:method} \subsection{Notations and Problem Setup} Let $\mathbf{Z}_i = (Y_i,T_i,\mathbf{X}_i^{\sf \tiny T})^{\sf \tiny T}$ be the observed data for the $i$th subject, where $Y_i$ is an outcome that could be modeled by a generalized linear model (GLM), $T_i\in\{ 0,1\}$ a binary treatment, and $\mathbf{X}_i$ is a $p$-dimensional vector of covariates with compact support $\mathcal{X}\subseteq\mathbb{R}^{p}$. Here we allow $p$ to diverge with $n$ such that $\log(p)/\log(n) \to \nu$, for $\nu \in [0,1)$. The observed data consist of $n$ independent and identically distributed (iid) observations $\mathscr{D}=\{ \mathbf{Z}_i : i=1,\ldots,n\}$ drawn from the distribution $\mathbb P$. 
Let $Y_i^{(1)}$ and $Y_i^{(0)}$ denote the counterfactual outcomes \citep{rubin1974estimating} had an individual been treated with treatment $1$ or $0$. Based on $\mathscr{D}$, we want to make inferences about the average treatment effect (ATE): \begin{align} \Delta = \mathbb E\{ Y^{(1)}\} - \mathbb E\{ Y^{(0)}\} = \mu_1 - \mu_0. \end{align} For identifiability, we require the following standard causal inference assumptions: \begin{align} &Y = T Y^{(1)} + (1-T) Y^{(0)} \text{ with probability } 1\label{a:consi}\\ &\pi_1(\mathbf{x}) \in [\epsilon_{\pi},1-\epsilon_{\pi}] \text{ for some } \epsilon_{\pi} >0, \text{ when } \mathbf{x} \in \mathcal{X} \label{a:posi}\\ &Y^{(1)} \perp\!\!\!\perp T \mid \mathbf{X} \text{ and } Y^{(0)}\perp\!\!\!\perp T \mid \mathbf{X}, \label{a:nuca} \end{align} where $\pi_k(\mathbf{x}) = \mathbb P(T=k\mid \mathbf{X}=\mathbf{x})$, for $k=0,1$. Through the third condition we assume from the outset that no unmeasured confounding holds given the entire $\mathbf{X}$, which could be plausible when a large set of baseline covariates is included in $\mathbf{X}$. Under these assumptions, it is well-known that $\Delta$ can be identified from the observed data distribution $\mathbb P$ through the g-formula \citep{robins1986new}: \begin{align} \Delta^* = \mathbb E\{ \mu_1(\mathbf{X}) - \mu_0(\mathbf{X})\} = \mathbb E\left\{ \frac{I(T=1)Y}{\pi_1(\mathbf{X})}-\frac{I(T=0)Y}{\pi_0(\mathbf{X})}\right\}, \end{align} where $\mu_k(\mathbf{x}) = \mathbb E(Y\mid \mathbf{X}=\mathbf{x},T=k)$, for $k=0,1$. We will consider an estimator based on the IPW form that will nevertheless be doubly-robust, so that it is consistent under parametric models where either $\pi_k(\mathbf{x})$ or $\mu_k(\mathbf{x})$ is correctly specified. \subsection{Parametric Models for Nuisance Functions} \label{s:models} As $\mathbf{X}$ is high-dimensional, we consider parametric modeling as a means to reduce the dimensions of $\mathbf{X}$ when estimating the nuisance functions $\pi_k(\mathbf{x})$ and $\mu_k(\mathbf{x})$. For reference, let $\mathcal{M}_{np}$ be the nonparametric model for the distribution of $\mathbf{Z}$, $\mathbb P$, that has no restrictions on $\mathbb P$ except requiring the second moment of $\mathbf{Z}$ to be finite. Let $\mathcal{M}_{\pi} \subseteq \mathcal{M}_{np}$ and $\mathcal{M}_{\mu} \subseteq \mathcal{M}_{np}$ respectively denote parametric working models under which: \begin{align} &\pi_1(\mathbf{x}) = g_{\pi}(\alpha_0+\boldsymbol\alpha^{\sf \tiny T}\mathbf{x}), \label{e:psmod} \\ \text{and } &\mu_k(\mathbf{x}) = g_{\mu}(\beta_0 + \beta_1 k + \boldsymbol\beta_k^{\sf \tiny T}\mathbf{x}), \text{ for } k=0,1,\label{e:ormod} \end{align} where $g_{\pi}(\cdot)$ and $g_{\mu}(\cdot)$ are known link functions, and $\vec{\balph}=(\alpha_0,\boldsymbol\alpha^{\sf \tiny T})^{\sf \tiny T}\in\Theta_{\balph} \subseteq \mathbb{R}^{p+1}$ and $\vec{\boldsymbol\beta} = (\beta_0,\beta_1,\boldsymbol\beta_0^{\sf \tiny T},\boldsymbol\beta_1^{\sf \tiny T})^{\sf \tiny T} \in \Theta_{\bbeta} \subseteq \mathbb{R}^{2p+2}$ are unknown parameters. The specifications in \eqref{e:psmod} or \eqref{e:ormod} could be made more flexible by applying basis expansion functions, such as splines, to $\mathbf{X}$. In \eqref{e:ormod} slopes are allowed to differ by treatment arm to allow for heterogeneous effects of $\mathbf{X}$ between treatments. 
In data where it is reasonable to assume heterogeneity is weak or nonexistent, it may be beneficial for efficiency to restrict $\boldsymbol\beta_0 = \boldsymbol\beta_1$, in which case \eqref{e:ormod} is simply a main effects model. We discuss concerns about ancillarity with the main effects model in the \hyperref[s:discussion]{Discussion}. Regardless of the validity of either working model (i.e. whether $\mathbb P$ belongs in $\mathcal{M}_{\pi} \cup \mathcal{M}_{\mu}$), we first obtain estimates of $\boldsymbol\alpha$ and the $\boldsymbol\beta_k$'s through regularized estimation: \begin{align} &(\widehat{\alpha}_0,\widehat{\balph}^{\sf \tiny T})^{\sf \tiny T} = \argmin_{\vec{\balph}}\left\{ - n^{-1}\sum_{i=1}^n \ell_{\pi}(\vec{\balph};T_i,\mathbf{X}_i) + p_{\pi}(\vec{\balph}_{\{-1\}}; \lambda_{n})\right\} \label{e:psest}\\ &(\widehat{\beta}_0,\widehat{\beta}_1,\widehat{\boldsymbol\beta}_0^{\sf \tiny T},\widehat{\boldsymbol\beta}_1^{\sf \tiny T})^{\sf \tiny T} = \argmin_{\vec{\boldsymbol\beta}}\left\{ - n^{-1}\sum_{i=1}^n \ell_{\mu}(\vec{\boldsymbol\beta};\mathbf{Z}_i) + p_{\mu}(\vec{\boldsymbol\beta}_{\{-1\}};\lambda_{n})\right\}, \label{e:orest} \end{align} where $\ell_{\pi}(\vec{\balph};T_i,\mathbf{X}_i)$ denotes the log-likelihood for $\vec{\balph}$ under $\mathcal{M}_{\pi}$ given $T_i$ and $\mathbf{X}_i$, $\ell_{\mu}(\vec{\boldsymbol\beta};\mathbf{Z}_i)$ is a log-likelihood for $\vec{\boldsymbol\beta}$ from a GLM suitable for the outcome type of $Y$ under $\mathcal{M}_{\mu}$ given $\mathbf{Z}_i$, and, for any vector $\mathbf{v}$, $\mathbf{v}_{\{-1\}}$ denotes the subvector of $\mathbf{v}$ excluding the first element. We require the penalty functions $p_{\pi}(\mathbf{u};\lambda)$ and $p_{\mu}(\mathbf{u};\lambda)$ to be chosen such that the oracle properties \citep{fan2001variable} hold. An example is the adaptive LASSO \citep{zou2006adaptive}, where $p_{\pi}(\vec{\balph}_{\{-1\}};\lambda_n) =\lambda_n \sum_{j=1}^p \abs{\alpha_j}/\abs{\widetilde{w}_{\pi,j}}^{\gamma} $ with initial weights $\widetilde{w}_{\pi,j}$ estimated from ridge regression, tuning parameter $\lambda_n$ such that $\lambda_n\sqrt{n} \to 0$ and $\lambda_n n^{(1-\nu)(1+\gamma)/2}\to\infty$, and $\gamma > 2\nu/(1-\nu)$ \citep{zou2009adaptive}. When additional structure is known, other penalties yielding the oracle properties, such as the adaptive elastic net \citep{zou2009adaptive} or the group LASSO \citep{wang2008note}, can also be used. In principle other variable selection and estimation procedures \citep{wilson2014confounder,koch2017covariate} can also be used, but their theoretical properties may be difficult to verify without the oracle properties. We assume the true $\boldsymbol\alpha$ and $\boldsymbol\beta_k$'s to be sparse when the working models are correctly specified (i.e. when $\mathbb P$ belongs to $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}$). For example, in EMRs, a large number of the covariates extracted from codified or narrative data can be expected to be irrelevant to either the outcome or the treatment assignment processes. Even when $\mathcal{M}_{\pi}$ or $\mathcal{M}_{\mu}$ is not exactly correctly specified, we assume that the correlations between covariates are not so strong that the limiting values of $\widehat{\balph}$ and $\widehat{\boldsymbol\beta}_k$, $\bar{\boldsymbol\alpha}$ and $\bar{\boldsymbol\beta}_k$, fail to be sparse. The oracle properties then ensure that the correct support can be selected in large samples. 
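To fix ideas, a minimal sketch of \eqref{e:psest} and \eqref{e:orest} in Python (using scikit-learn) is given below. The adaptive-LASSO penalty is approximated by rescaling the columns of $\mathbf{X}$ with initial ridge-based weights; the simulated data, function names, and tuning constants are purely illustrative and are not those used in our numerical studies.

\begin{verbatim}
# Sketch of adaptive-LASSO-type fits for the PS and outcome working models,
# with the adaptive weights absorbed into a rescaling of the columns of X.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge, Lasso

rng = np.random.default_rng(0)
n, p = 1000, 50
X = rng.normal(size=(n, p))
alpha_true = np.r_[0.8, -0.8, 0.5, np.zeros(p - 3)]   # sparse PS model
beta_true = np.r_[1.0, 1.0, -0.5, np.zeros(p - 3)]    # sparse outcome model
T = rng.binomial(1, 1/(1 + np.exp(-(X @ alpha_true))))
Y = 0.5 + 1.0*T + X @ beta_true + rng.normal(size=n)

def adaptive_lasso_logistic(X, y, lam=0.01, gamma=1.0):
    """Adaptive LASSO for a logistic model via column rescaling."""
    w = np.abs(LogisticRegression(penalty='l2', C=10.0, max_iter=2000)
               .fit(X, y).coef_.ravel())**gamma        # initial ridge-type weights
    Xw = X * w                                          # rescale columns by weights
    fit = LogisticRegression(penalty='l1', C=1.0/(lam*len(y)),
                             solver='liblinear', max_iter=2000).fit(Xw, y)
    return fit.intercept_[0], fit.coef_.ravel() * w     # back-transform coefficients

def adaptive_lasso_linear(X, y, lam=0.01, gamma=1.0):
    """Adaptive LASSO for a linear outcome model via column rescaling."""
    w = np.abs(Ridge(alpha=1.0).fit(X, y).coef_)**gamma
    fit = Lasso(alpha=lam).fit(X * w, y)
    return fit.intercept_, fit.coef_ * w

a0, a_hat = adaptive_lasso_logistic(X, T)               # PS model, cf. (e:psest)
# outcome model with treatment-specific slopes, fit within each arm
b_hat = {k: adaptive_lasso_linear(X[T == k], Y[T == k]) for k in (0, 1)}
print("selected PS covariates:", np.flatnonzero(a_hat != 0))
\end{verbatim}

In practice the tuning parameter would be chosen, for example, by cross-validation subject to the rate conditions stated above.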
Let $\mathcal{A}_{\boldsymbol\alpha}$ and $\mathcal{A}_{\boldsymbol\beta_k}$ denote the respective supports of $\bar{\boldsymbol\alpha}$ and $\bar{\boldsymbol\beta}_k$, regardless of the validity of either working model. By selecting out irrelevant covariates belonging to $\mathcal{A}_{\boldsymbol\alpha}^c \cap \mathcal{A}_{\boldsymbol\beta_k}^c$ when estimating the PS $\pi_k(\mathbf{x})$, the efficiency of subsequent IPW estimators for $\mu_k$ can be expected to improve substantially when $p$ is not small. The regularization in \eqref{e:psest} selects all variables belonging to $\mathcal{A}_{\boldsymbol\alpha}$, which guards against the misspecification of the model in \eqref{e:psmod} that would result if variables in $\mathcal{A}_{\boldsymbol\alpha}$ were selected out. But applying regularization to estimate the PS directly through \eqref{e:psest} only may be inefficient because covariates belonging to $\mathcal{A}_{\boldsymbol\alpha}^c \cap \mathcal{A}_{\boldsymbol\beta_k}$ would be selected out \citep{lunceford2004stratification,brookhart2006variable}. In the following we consider a calibrated PS based on both $\widehat{\balph}^{\sf \tiny T}\mathbf{X}$ and $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$ that addresses this shortcoming. \subsection{Double-Index Propensity Score and IPW Estimator} \label{s:estimator} To mitigate the effects of misspecification of \eqref{e:psmod}, one could \emph{calibrate} an initial PS estimate $g_{\pi}(\widehat{\alpha}_0+\widehat{\balph}^{\sf \tiny T}\mathbf{X})$ by performing nonparametric smoothing of $T$ over $\widehat{\balph}^{\sf \tiny T}\mathbf{X}$. This adjusts the initial estimates $g_{\pi}(\widehat{\alpha}_0+\mathbf{X}^{\sf \tiny T}\widehat{\balph})$ closer to the true probability of receiving treatment $1$ given $\widehat{\balph}^{\sf \tiny T}\mathbf{X}$. However, we consider smoothing over not only $\widehat{\balph}^{\sf \tiny T}\mathbf{X}$ but also $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$, to allow variation in covariates predictive of the outcome, i.e. covariates indexed in $\mathcal{A}_{\boldsymbol\beta_k}$, to inform this calibration. In other words, this serves as a means to include covariates indexed in $\mathcal{A}_{\boldsymbol\beta_k}$ in the smoothing, except that such covariates are first reduced to the single index $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$. The double-index PS (DiPS) estimator for each treatment is thus given by: \begin{align} \widehat{\pi}_k(\mathbf{x};\widehat{\boldsymbol\theta}_k)= \frac{n^{-1}\sum_{j=1}^nK_h\{(\widehat{\balph},\widehat{\boldsymbol\beta}_k)^{\sf \tiny T}(\mathbf{X}_j-\mathbf{x})\}I(T_j=k)}{n^{-1}\sum_{j=1}^n K_h\{(\widehat{\balph},\widehat{\boldsymbol\beta}_k)^{\sf \tiny T}(\mathbf{X}_j-\mathbf{x})\}}, \text{ for } k=0,1, \end{align} where $\widehat{\boldsymbol\theta}_k = (\widehat{\balph}^{\sf \tiny T},\widehat{\boldsymbol\beta}_k^{\sf \tiny T})^{\sf \tiny T}$, $K_h(\mathbf{u})= h^{-2}K(\mathbf{u}/h)$, and $K(\mathbf{u})$ is a bivariate $q$-th order kernel function with $q>2$. A higher-order kernel is required here for the asymptotics to be well-behaved, which is the price for estimating the nuisance functions $\pi_k(\mathbf{x})$ using two-dimensional smoothing. This allows for the possibility of negative values for $\widehat{\pi}_k(\mathbf{x};\widehat{\boldsymbol\theta}_k)$. 
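A minimal numerical sketch of this smoothing step is given below (Python). The product fourth-order Gaussian kernel, the toy scores standing in for the regularized estimates $\widehat{\balph}$ and $\widehat{\boldsymbol\beta}_k$, and the bandwidth are illustrative choices only.

\begin{verbatim}
# Sketch of the double-index PS: smooth I(T = k) over the two estimated
# scores (alpha_hat'X, beta_hat_k'X) with a product 4th-order Gaussian kernel.
import numpy as np
from scipy.stats import norm

def K4(u):
    """Univariate fourth-order Gaussian kernel: (3 - u^2) phi(u) / 2."""
    return 0.5*(3.0 - u**2)*norm.pdf(u)

def dips(X, T, a_hat, b_hat, k, h):
    """Kernel-smoothed P(T = k | alpha'X, beta_k'X) evaluated at each X_i."""
    S = np.column_stack([X @ a_hat, X @ b_hat])     # the two scores
    S = (S - S.mean(0)) / S.std(0)                  # put them on a common scale
    pi_hat = np.empty(len(T))
    for i in range(len(T)):
        w = K4((S[:, 0] - S[i, 0])/h) * K4((S[:, 1] - S[i, 1])/h)
        pi_hat[i] = np.sum(w*(T == k)) / np.sum(w)
    return pi_hat            # may occasionally be negative (higher-order kernel)

# Toy example; a_hat and b_hat would come from the regularized fits above.
rng = np.random.default_rng(1)
n, p = 500, 10
X = rng.normal(size=(n, p))
a_hat = np.r_[1.0, -1.0, np.zeros(p - 2)]
b_hat = np.r_[0.5, 0.0, 1.0, np.zeros(p - 3)]
T = rng.binomial(1, 1/(1 + np.exp(-(X @ a_hat))))
pi1 = dips(X, T, a_hat, b_hat, k=1, h=n**(-0.15))   # h ~ n^{-a}, a in (1/(2q), 1/4)
\end{verbatim}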
Such negative values are not a practical concern: the $\widehat{\pi}_k(\mathbf{x};\widehat{\boldsymbol\theta}_k)$ are nuisance estimates not of direct interest, and we find in numerical studies that negative values occur infrequently and do not compromise the performance of the final estimator. A monotone transformation of the input scores for each treatment $\widehat{\mathbf{S}}_k = (\widehat{\balph},\widehat{\boldsymbol\beta}_k)^{\sf \tiny T}\mathbf{X}$ can be applied prior to smoothing to improve finite sample performance \citep{wand1991transformations}. In numerical studies, for instance, we applied a probability integral transform based on the normal cumulative distribution function to the standardized scores to obtain approximately uniformly distributed inputs. We also scaled the components of $\widehat{\mathbf{S}}_k$ such that a common bandwidth $h$ can be used for both components of the score. With $\pi_k(\mathbf{x})$ estimated by $\widehat{\pi}_k(\mathbf{x};\widehat{\boldsymbol\theta}_k)$, the estimator for $\Delta$ is given by $\widehat{\Delta} = \widehat{\mu}_1 - \widehat{\mu}_0$, where: \begin{align} \widehat{\mu}_k = \left\{ \sum_{i=1}^n \frac{I(T_i = k)}{\widehat{\pi}_k(\mathbf{X}_i;\widehat{\boldsymbol\theta}_k)}\right\}^{-1}\left\{ \sum_{i=1}^n \frac{I(T_i = k)Y_i}{\widehat{\pi}_k(\mathbf{X}_i;\widehat{\boldsymbol\theta}_k)}\right\}, \text{ for } k=0,1. \end{align} This is the usual normalized IPW estimator, with the PS given by the double-index PS estimates. In the following, we show that this simple construction leads to an estimator that also possesses the robustness and efficiency properties of the doubly-robust estimator derived from semiparametric efficiency theory \citep{robins1994estimation}, and, in certain scenarios, achieves additional gains in robustness and efficiency. \section{Asymptotic Robustness and Efficiency Properties} \label{s:asymptotics} We first directly present the influence function expansion of $\widehat{\Delta}$ in general. Robustness and efficiency results are subsequently derived based on the expansion. To present the influence function expansion, let $\bar{\Delta} = \bar{\mu}_1-\bar{\mu}_0$ be the limiting estimand, with: \begin{align*} \bar{\mu}_k = \mathbb E\left\{ \frac{I(T_i =k)Y_i}{\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)}\right\}, \text{ for } k=0,1, \end{align*} $\bar{\boldsymbol\theta}_k = (\bar{\boldsymbol\alpha}^{\sf \tiny T},\bar{\boldsymbol\beta}_k^{\sf \tiny T})^{\sf \tiny T}$, and $\pi_k(\mathbf{x};\bar{\boldsymbol\theta}_k) = \mathbb P(T_i = k \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x},\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x})$. Moreover, for any vector $\mathbf{v}$ of length $p$ and any index set $\mathcal{A} \subseteq \{ 1,2,\ldots,p\}$, let $\mathbf{v}_{\mathcal{A}}$ denote the subvector with elements indexed in $\mathcal{A}$. Let $\widehat{\mathcal{W}}_k = n^{1/2}(\widehat{\mu}_k - \bar{\mu}_k)$ for $k=0,1$ so that $n^{1/2}(\widehat{\Delta} - \bar{\Delta}) = \widehat{\mathcal{W}}_1 - \widehat{\mathcal{W}}_0$. We show in Web Appendix D the following result. \begin{theorem} \label{t:IFexp} Suppose that the causal assumptions \eqref{a:consi}, \eqref{a:posi}, \eqref{a:nuca} and the regularity conditions in Web Appendix A hold. 
Let the sparsity of $\bar{\boldsymbol\alpha}$ and $\bar{\boldsymbol\beta}_k$, $\abs{\mathcal{A}_{\boldsymbol\alpha}} = s_{\boldsymbol\alpha}$ and $\abs{\mathcal{A}_{\boldsymbol\beta_k}} = s_{\boldsymbol\beta_k}$, be fixed. If $\log(p)/\log(n) \to \nu$ for $\nu \in [0,1)$, then, with probability tending to $1$, $\widehat{\mathcal{W}}_k$ has the expansion: \begin{align} \widehat{\mathcal{W}}_k &= n^{-1/2}\sum_{i=1}^n \frac{I(T_i =k)Y_i}{\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)} - \left\{ \frac{I(T_i = k)}{\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)}-1\right\}\mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i,\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i,T_i =k ) -\bar{\mu}_k \label{e:IFeff}\\ &\qquad + \mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}}^{\sf \tiny T} n^{1/2}(\widehat{\balph}-\bar{\boldsymbol\alpha})_{\mathcal{A}_{\boldsymbol\alpha}} + \mathbf{v}_{k,\mathcal{A}_{\boldsymbol\beta_k}}^{\sf \tiny T} n^{1/2}(\widehat{\boldsymbol\beta}_k - \bar{\boldsymbol\beta}_k)_{\mathcal{A}_{\boldsymbol\beta_k}} + O_p(n^{1/2}h^q + n^{-1/2}h^{-2}), \label{e:IFnui} \end{align} for $k=0,1$, where $\mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}}$ and $\mathbf{v}_{k,\mathcal{A}_{\boldsymbol\beta_k}}$ are deterministic vectors, $n^{1/2}(\widehat{\balph}-\bar{\boldsymbol\alpha})_{\mathcal{A}_{\boldsymbol\alpha}} = O_p(1)$, and $n^{1/2}(\widehat{\boldsymbol\beta}_k-\bar{\boldsymbol\beta}_k)_{\mathcal{A}_{\boldsymbol\beta_k}} = O_p(1)$. Under model $\mathcal{M}_{\pi}$, $\mathbf{v}_{k,\mathcal{A}_{\boldsymbol\beta_k}} = \boldsymbol 0$ for $k=0,1$. Under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}$, we additionally have that $\mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}} = \boldsymbol 0$ for $k=0,1$. \end{theorem} The challenge in showing Theorem \ref{t:IFexp} is to obtain an influence function expansion when the nuisance functions $\pi_k(\mathbf{x})$ are estimated with two-dimensional kernel-smoothing rather than finite-dimensional models. We show in Web Appendix D that a V-statistic projection lemma \citep{newey1994large} can be applied to obtain the expansion in this situation. Let $\widehat{\Delta}_{dr}$ denote the usual doubly-robust estimator with $\pi_k(\mathbf{x})$ and $\mu_k(\mathbf{x})$ estimated in the same way through \eqref{e:psest} and \eqref{e:orest}. The influence function expansion for $\widehat{\Delta}$ in Theorem \ref{t:IFexp} is nearly identical to that of $\widehat{\Delta}_{dr}$. The terms in \eqref{e:IFeff} are the same except that $\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)$ and $\mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i,\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i,T_i=k)$ replace the parametric models evaluated at the limiting estimates. The terms in \eqref{e:IFnui} analogously represent the additional contributions from estimating the nuisance parameters. The expansion also shows that asymptotically no contribution from the smoothing is incurred. This similarity in the influence functions yields similar desirable robustness and efficiency properties, which are improved upon in some cases due to the calibration through smoothing. \subsection{Robustness} In terms of robustness, under $\mathcal{M}_{\pi}$, $\pi_k(\mathbf{x};\bar{\boldsymbol\theta}_k) = \pi_k(\mathbf{x})$ so the limiting estimands are: \begin{align*} \bar{\mu}_k = \mathbb E\left\{ \frac{I(T_i = k)Y_i}{\pi_k(\mathbf{X}_i)}\right\} = \mathbb E\{ Y_i^{(k)}\} \text{, for } k =0,1. 
\end{align*} On the other hand, under $\mathcal{M}_{\mu}$, $\mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x},\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x},T_i =k) = \mu_k(\mathbf{x})$ so that: \begin{align*} \bar{\mu}_k &= \mathbb E\left\{ \mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i,\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i,T_i =k)\right\} =\mathbb E\left\{ \mu_k(\mathbf{X}_i)\right\} = \mathbb E\{ Y_i^{(k)}\}, \text{ for } k=0,1. \end{align*} Thus by Theorem \ref{t:IFexp}, under $\mathcal{M}_{\pi} \cup \mathcal{M}_{\mu}$, $\widehat{\Delta} - \Delta = O_p(n^{-1/2})$ provided that $h=O(n^{-\alpha})$ for $\alpha \in (\frac{1}{2q},\frac{1}{4})$. That is, $\widehat{\Delta}$ is \emph{doubly-robust} for $\Delta$. Beyond this usual form of double-robustness, if the PS model specification \eqref{e:psmod} is incorrect, we expect the calibration step to at least partially correct for the misspecification since, in large samples, given $\mathbf{x}$, $\pi_k(\mathbf{x};\bar{\boldsymbol\theta}_k)$ is closer to the true $\pi_k(\mathbf{x})$ than the misspecified parametric model $g_{\pi}(\bar{\alpha}_0+\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x})$. In some specific scenarios, the calibration can completely overcome the misspecification of the PS model. For example, let $\widetilde{\mathcal{M}}_{\pi}$ denote a model for $\mathbb P$ under which: \begin{align*} \pi_1(\mathbf{x}) = \widetilde{g}_{\pi}(\boldsymbol\alpha^{\sf \tiny T}\mathbf{x}) \end{align*} for some \emph{unknown} link function $\widetilde{g}_{\pi}(\cdot)$ and unknown $\boldsymbol\alpha \in\mathbb{R}^{p}$, \emph{and} $\mathbf{X}$ are known to be elliptically distributed such that $\mathbb E(\boldsymbol a^{\sf \tiny T}\mathbf{X}\mid \boldsymbol\alpha_*^{\sf \tiny T}\mathbf{X})$ exists and is linear in $\boldsymbol\alpha_*^{\sf \tiny T}\mathbf{X}$, where $\boldsymbol\alpha_*$ denotes the true $\boldsymbol\alpha$ (e.g. if $\mathbf{X}$ is multivariate normal). Then, using the results of \cite{li1989regression}, it can be shown that $\bar{\boldsymbol\alpha}=c\boldsymbol\alpha_*$ for some scalar $c$. But since $\widehat{\pi}_k(\mathbf{x};\widehat{\boldsymbol\theta}_k)$ recovers $\pi_k(\mathbf{x};\bar{\boldsymbol\theta}_k)=\mathbb P(T=k\mid\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}=\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x},\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}=\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x})$ asymptotically, it also recovers $\pi_k(\mathbf{x})$ under $\widetilde{\Mscr}_{\pi}$. Consequently, $\widehat{\Delta}$ is more than doubly-robust in that $\widehat{\Delta}-\Delta = O_p(n^{-1/2})$ under the larger model $\mathcal{M}_{\pi}\cup \widetilde{\Mscr}_{\pi}\cup\mathcal{M}_{\mu}$. The same phenomenon also occurs when estimating $\boldsymbol\beta_k$ under misspecification of the link in \eqref{e:ormod}, provided we do not assume $\boldsymbol\beta_0=\boldsymbol\beta_1$ and do not use a common model to estimate the $\boldsymbol\beta_k$'s.
In this case, if $\widetilde{\Mscr}_{\mu}$ is an analogous model under which: \begin{align*} \mu_1(\mathbf{x}) = \widetilde{g}_{\mu,1}(\boldsymbol\beta_1^{\sf \tiny T}\mathbf{x}) \quad \text{ and } \quad \mu_0(\mathbf{x})=\widetilde{g}_{\mu,0}(\boldsymbol\beta_0^{\sf \tiny T}\mathbf{x}) \end{align*} for some unknown link functions $\widetilde{g}_{\mu,0}(\cdot)$ and $\widetilde{g}_{\mu,1}(\cdot)$ and $\mathbf{X}$ are elliptically distributed, then $\widehat{\Delta}-\Delta = O_p(n^{-1/2})$ under the even larger model $\mathcal{M}_{\pi}\cup \widetilde{\Mscr}_{\pi} \cup \mathcal{M}_{\mu}\cup \widetilde{\Mscr}_{\mu}$. This does not hold when $\boldsymbol\beta_0=\boldsymbol\beta_1$ is assumed, as $T$ is binary so $(T,\mathbf{X}^{\sf \tiny T})^{\sf \tiny T}$ is not exactly elliptically distributed. But the result may still be expected to hold approximately when $\mathbf{X}$ is elliptically distributed. \subsection{Efficiency} In terms of efficiency, let the terms contributed to the influence function for $\widehat{\Delta}$ when $\boldsymbol\alpha$ and $\boldsymbol\beta_k$ are known be: \begin{align} \varphi_{i,k} = \frac{I(T_i =k)Y_i}{\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)} - \left\{ \frac{I(T_i = k)}{\pi_k(\mathbf{X}_i;\bar{\boldsymbol\theta}_k)}-1\right\}\mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i,\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i,T_i =k ) -\bar{\mu}_k. \end{align} Under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}$, $\varphi_{i,1}-\varphi_{i,0}$ is the full influence function for $\widehat{\Delta}$. This influence function is the efficient influence function for $\Delta^*$ under $\mathcal{M}_{np}$ since $\mathbb E(Y_i \mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x},\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x},T_i=k)=\mu_k(\mathbf{x})$ and $\pi_k(\mathbf{x};\bar{\boldsymbol\theta}_k)=\pi_k(\mathbf{x})$ for all $\mathbf{x}\in\mathcal{X}$ at distributions for $\mathbb P$ belonging to $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}$. An important consequence of this is that $\widehat{\Delta}$ reaches the semiparametric efficiency bound under $\mathcal{M}_{np}$, at distributions belonging to $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}$. That is, $\widehat{\Delta}$ is also a \emph{locally semiparametric efficient} estimator under $\mathcal{M}_{np}$. In addition, based on the same arguments as in the preceding subsection, the local efficiency property could potentially be broadened so that $\widehat{\Delta}$ is locally efficient at distributions for $\mathbb P$ belonging to $(\mathcal{M}_{\pi} \cup \widetilde{\Mscr}_{\pi})\cap (\mathcal{M}_{\mu} \cup \widetilde{\Mscr}_{\mu})$. Beyond this characterization of efficiency similar to that of $\widehat{\Delta}_{dr}$, there are additional benefits of $\widehat{\Delta}$ under model $\mathcal{M}_{\pi}\cap \mathcal{M}_{\mu}^c$. In this case, akin to $\widehat{\Delta}_{dr}$, estimating $\boldsymbol\beta_k$ does not contribute to the asymptotic variance since $\mathbf{v}_{k,\mathcal{A}_{\boldsymbol\beta_k}}=\boldsymbol 0$, and a similar $n^{1/2}\mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}}^{\sf \tiny T}(\widehat{\balph}-\bar{\boldsymbol\alpha})_{\mathcal{A}_{\boldsymbol\alpha}}$ term is contributed from estimating $\boldsymbol\alpha$.
The analogous term in the expansion for $\widehat{\Delta}_{dr}$ contributes to its influence function the negative of the projection of the preceding terms onto the linear span of the scores for $\boldsymbol\alpha$ (restricted to $\mathcal{A}_{\boldsymbol\alpha}$). We show in Web Appendix D that the same interpretation can be adopted for $\widehat{\Delta}$. \begin{theorem} \label{t:effgain} Let $\mathbf{U}_{\boldsymbol\alpha}$ be the score for $\boldsymbol\alpha$ under $\mathcal{M}_{\pi}$ and let $[\mathbf{U}_{\boldsymbol\alpha,\mathcal{A}_{\boldsymbol\alpha}}]$ denote the linear span of its components indexed in $\mathcal{A}_{\boldsymbol\alpha}$. In the Hilbert space $\mathcal{L}_2^0$ of random variables with mean $0$ and finite variance, with inner product given by the covariance, let $\Pi\{ V \mid \mathcal{S}\}$ denote the projection of some $V\in \mathcal{L}_2^0$ onto a subspace $\mathcal{S} \subseteq \mathcal{L}_2^0$. If the assumptions required for Theorem \ref{t:IFexp} hold, then under $\mathcal{M}_{\pi}$, $\mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}}^{\sf \tiny T} n^{1/2}(\widehat{\balph}-\bar{\boldsymbol\alpha})_{\mathcal{A}_{\boldsymbol\alpha}} = -n^{-1/2}\sum_{i=1}^n\Pi\{\varphi_{i,k}\mid[\mathbf{U}_{\boldsymbol\alpha,\mathcal{A}_{\boldsymbol\alpha}}]\} + o_p(1)$. \end{theorem} Through the Pythagorean theorem, this result implies the familiar efficiency paradox that, under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}^c$, $\widehat{\Delta}$ would be more efficient with $\boldsymbol\alpha$ estimated than with the true $\boldsymbol\alpha$ plugged in, even if the true $\boldsymbol\alpha$ were known \citep{lunceford2004stratification}. Moreover, under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}^c$, the influence function of $\widehat{\Delta}$ involves projecting $\varphi_{i,k}$ rather than: \begin{align*} \phi_{i,k} = \frac{I(T_i =k)Y_i}{\pi_k(\mathbf{X}_i)} - \left\{ \frac{I(T_i = k)}{\pi_k(\mathbf{X}_i)}-1\right\}g_{\mu}(\bar{\beta}_{0} + \bar{\beta}_1 k+\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i) -\bar{\mu}_{k}, \end{align*} which are the preceding influence function terms for $\widehat{\Delta}_{dr}$. But since $\mathbb E(Y_i\mid \bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\alpha}^{\sf \tiny T}\mathbf{x},\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i=\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x},T_i =k)$ better approximates $\mu_k(\mathbf{x})$ than the limiting parametric model $g_{\mu}(\bar{\beta}_0 + \bar{\beta}_1 k +\bar{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{x})$, it can be shown that $\mathbb E(\phi_{i,k}^2)>\mathbb E(\varphi_{i,k}^2)$ for $k=0,1$. Given that under $\mathcal{M}_{\pi}\cap \mathcal{M}_{\mu}^c$ the influence functions of both $\widehat{\Delta}$ and $\widehat{\Delta}_{dr}$ are then in the form of a residual after projecting $\varphi_{i,k}$ and $\phi_{i,k}$, respectively, onto the same space, the asymptotic variance of $\widehat{\Delta}$ can be seen to be less than that of $\widehat{\Delta}_{dr}$. That is, $\widehat{\Delta}$ is more efficient than $\widehat{\Delta}_{dr}$ under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}^c$. We show in the simulation studies that this improvement can lead to substantial efficiency gains under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}^c$ in finite samples. These robustness and efficiency properties distinguish $\widehat{\Delta}$ from the usual doubly-robust estimators and their variants.
Moreover, despite being motivated by data with high-dimensional $p$, these properties still hold if $p$ is small relative to $n$, which makes $\widehat{\Delta}$ effective in low-dimensional settings as well. We next consider a perturbation scheme to estimate standard errors (SE) and confidence intervals (CI) for $\widehat{\Delta}$. \section{Perturbation Resampling} \label{s:perturbation} Although the asymptotic variance of $\widehat{\Delta}$ can be determined through its influence function specified in Theorem \ref{t:IFexp}, a direct empirical estimate based on the influence function is difficult because $\mathbf{u}_{k,\mathcal{A}_{\boldsymbol\alpha}}$ and $\mathbf{v}_{k,\mathcal{A}_{\boldsymbol\beta_k}}$ involve complicated functionals of $\mathbb P$ that are difficult to estimate. Instead, we propose a simple perturbation-resampling procedure. Let $\mathcal{G} = \{ G_i : i=1,\ldots,n\}$ be a set of non-negative iid random variables with unit mean and variance that are independent of $\mathscr{D}$. The procedure then perturbs each ``layer'' of the estimation of $\widehat{\Delta}$. Let the perturbed estimates of $\vec{\balph}$ and $\vec{\boldsymbol\beta}$ be: \begin{align*} &(\widehat{\alpha}^*_0,\widehat{\balph}^{*\sf \tiny T})^{\sf \tiny T} = \argmin_{\vec{\balph}}\left\{ - n^{-1}\sum_{i=1}^n \ell_{\pi}(\vec{\balph};T_i,\mathbf{X}_i)G_i + p_{\pi}(\vec{\balph}_{\{-1\}}; \lambda_{n})\right\} \\ &(\widehat{\beta}^*_0,\widehat{\beta}^*_1,\widehat{\boldsymbol\beta}^{*\sf \tiny T}_0,\widehat{\boldsymbol\beta}^{*\sf \tiny T}_1)^{\sf \tiny T} = \argmin_{\vec{\boldsymbol\beta}}\left\{ - n^{-1}\sum_{i=1}^n \ell_{\mu}(\vec{\boldsymbol\beta};\mathbf{Z}_i)G_i + p_{\mu}(\vec{\boldsymbol\beta}_{\{-1\}};\lambda_{n})\right\}. \end{align*} The perturbed DiPS estimates are calculated by: \begin{align} \widehat{\pi}_k^*(\mathbf{x};\widehat{\boldsymbol\theta}_k^*)= \frac{\sum_{j=1}^nK_h\{(\widehat{\balph}^*,\widehat{\boldsymbol\beta}_k^*)^{\sf \tiny T}(\mathbf{X}_j-\mathbf{x})\}I(T_j=k)G_j}{\sum_{j=1}^n K_h\{(\widehat{\balph}^*,\widehat{\boldsymbol\beta}_k^*)^{\sf \tiny T}(\mathbf{X}_j-\mathbf{x})\}G_j}, \text{ for } k=0,1. \end{align} Lastly, the perturbed estimator is given by $\widehat{\Delta}^* = \widehat{\mu}_1^* - \widehat{\mu}_0^*$ where: \begin{align*} \widehat{\mu}_k^* = \left\{ \sum_{i=1}^n \frac{I(T_i = k)}{\widehat{\pi}_k^*(\mathbf{X}_i;\widehat{\boldsymbol\theta}_k^*)}G_i\right\}^{-1}\left\{ \sum_{i=1}^n \frac{I(T_i = k)Y_i}{\widehat{\pi}_k^*(\mathbf{X}_i;\widehat{\boldsymbol\theta}_k^*)} G_i\right\}, \text{ for } k=0,1. \end{align*} It can be shown based on arguments similar to those in \cite{tian2007model} that the asymptotic distribution of $n^{1/2}(\widehat{\Delta} - \bar{\Delta})$ coincides with that of $n^{1/2}(\widehat{\Delta}^* - \widehat{\Delta}) \mid \mathscr{D}$. We can thus approximate the SE of $\widehat{\Delta}$ based on the empirical standard deviation or, as a robust alternative, the empirical mean absolute deviation (MAD) of a large number of resamples $\widehat{\Delta}^*$ and construct CIs using empirical percentiles of resamples of $\widehat{\Delta}^*$. \section{Numerical Studies} \label{s:numerical} \subsection{Simulation Study} We performed extensive simulations to assess the finite sample bias and relative efficiency (RE) of $\widehat{\Delta}$ (DiPS) compared to alternative estimators. We also assessed in a separate set of simulations the performance of the perturbation procedure.
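To make the estimator and the resampling scheme used in these assessments concrete, the following is a minimal Python sketch of the kernel-calibrated DiPS point estimate and of one perturbation draw. It is illustrative only: a second-order Gaussian product kernel is used in place of the order-$q=4$ kernel, the weights $G_i$ are drawn from a standard exponential distribution (one valid choice with unit mean and variance), and \texttt{fit\_alpha}/\texttt{fit\_beta} are hypothetical stand-ins for user-supplied weighted adaptive-LASSO fitters.
\begin{verbatim}
import numpy as np

def gauss(u):
    # second-order Gaussian kernel; the order-4 kernel of the paper is
    # simplified here for illustration
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def dips_mu(X, T, Y, alpha, beta_k, k, h, G=None):
    """Normalized IPW estimate of E{Y^(k)} with the double-index,
    kernel-smoothed PS; G are optional perturbation weights."""
    n = len(Y)
    G = np.ones(n) if G is None else G
    S = np.column_stack([X @ alpha, X @ beta_k])   # the two index scores
    pi_k = np.empty(n)
    for i in range(n):                             # 2-d product-kernel smoother
        K = gauss((S[:, 0] - S[i, 0]) / h) * gauss((S[:, 1] - S[i, 1]) / h)
        pi_k[i] = np.sum(K * (T == k) * G) / np.sum(K * G)
    w = G[T == k] / pi_k[T == k]
    return np.sum(w * Y[T == k]) / np.sum(w)

def perturbed_delta(X, T, Y, fit_alpha, fit_beta, h, rng):
    """One draw of the perturbed estimator; fit_alpha / fit_beta are
    hypothetical weighted penalized-regression fitters."""
    G = rng.exponential(1.0, size=len(Y))          # unit mean and unit variance
    a = fit_alpha(X, T, G)
    b1, b0 = fit_beta(X, T, Y, G)
    return dips_mu(X, T, Y, a, b1, 1, h, G) - dips_mu(X, T, Y, a, b0, 0, h, G)
\end{verbatim}
Calling \texttt{perturbed\_delta} repeatedly yields the resamples of $\widehat{\Delta}^*$ from which the MAD-based SE and percentile CIs described above can be computed.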
Adaptive LASSO was used to estimate $\boldsymbol\alpha$ and $\boldsymbol\beta_k$, and a Gaussian product kernel of order $q=4$ with a plug-in bandwidth at the optimal order (see \hyperref[s:discussion]{Discussion}) was used for smoothing. Alternative estimators include an IPW estimator with $\pi_k(\mathbf{x})$ estimated by adaptive LASSO (IPW-ALAS), $\widehat{\Delta}_{dr}$ (DR-ALAS) with nuisances estimated by adaptive LASSO, the DR estimator with nuisance functions estimated by ``rigorous'' LASSO (DR-rLAS) of \cite{belloni2017program}, the outcome-adaptive LASSO (OAL) of \cite{shortreed2017outcome}, Group Lasso and Doubly Robust Estimation (GLiDeR) of \cite{koch2017covariate}, and the model average double robust (MADR) estimator of \cite{cefalu2017model}. OAL and GLiDeR were implemented with default settings from code provided in the Supplementary Materials of the respective papers. DR-rLAS was implemented using the \texttt{hdm} R package \citep{chernozhukov2016hdm} with default settings, and MADR was implemented using the \texttt{madr} package with $M=500$ MCMC iterations to reduce computation time. Throughout the numerical studies, unless noted otherwise, we postulated $g_{\pi}(u)=1/(1+e^{-u})$ for $\mathcal{M}_{\pi}$ and $g_{\mu}(u)=u$ with $\boldsymbol\beta_0=\boldsymbol\beta_1$ for $\mathcal{M}_{\mu}$ as the working models. For adaptive LASSO, the tuning parameter was chosen by an extended regularized information criterion \citep{hui2015tuning}, which showed good performance for variable selection. We re-fitted models with selected covariates as suggested in \cite{hui2015tuning} to reduce bias. We focused on a continuous outcome in the simulations, generating the data according to: \begin{align*} \mathbf{X} \sim N(\boldsymbol 0,\Sigma), \quad T\mid \mathbf{X} \sim Ber\{ \pi_1(\mathbf{X})\}, \quad \text{ and } Y\mid \mathbf{X},T \sim N\{ \mu_T(\mathbf{X}), 10^2\}, \end{align*} where $\Sigma = (\sigma^2_{ij})$ with $\sigma^2_{ij} = 1$ if $i=j$ and $\sigma^2_{ij} = .4(.5)^{\abs{i-j}/3} I(\abs{i-j} \leq 15)$ if $i\neq j$.
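For concreteness, the following sketch generates one such data set under the ``both correct'' specification given in the display that follows, using the coefficient vectors listed there; the sample size, dimension, and seed are illustrative.
\begin{verbatim}
import numpy as np

def simulate_both_correct(n=1000, p=15, seed=0):
    rng = np.random.default_rng(seed)
    d = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    Sigma = 0.4 * 0.5 ** (d / 3) * (d <= 15)   # off-diagonal covariances
    np.fill_diagonal(Sigma, 1.0)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    alpha = np.r_[0.4, -0.3, 0.4, 0.3, 0.3, 0.4, 0.3, -0.3, -0.3, -0.3,
                  np.zeros(p - 10)]
    beta = np.r_[-1, 1, -1, 1, 1, -1, 1, 1, -1, -1, np.zeros(p - 10)]
    pi1 = 1.0 / (1.0 + np.exp(-X @ alpha))     # correctly specified logistic PS
    T = rng.binomial(1, pi1)
    Y = rng.normal(T + X @ beta, 10.0)         # mu_k(x) = k + beta'x, sd = 10
    return X, T, Y
\end{verbatim}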
The simulations were varied over scenarios where working models were correct or misspecified: \begin{align*} &\text{Both correct: } \pi_1(\mathbf{x}) = g_{\pi}(\boldsymbol\alpha^{\sf \tiny T}\mathbf{x}), \quad \mu_k(\mathbf{x}) = k + \boldsymbol\beta_k^{\sf \tiny T}\mathbf{x} \\ &\text{Misspecified $\mu_k(\mathbf{x})$: } \pi_1(\mathbf{x}) = g_{\pi}(\boldsymbol\alpha^{\sf \tiny T}\mathbf{x}), \quad \mu_k(\mathbf{x}) = k + 3\left\{ \boldsymbol\beta_{k}^{\sf \tiny T}\mathbf{x} (\boldsymbol\beta_{k}^{\sf \tiny T}\mathbf{x}+3)\right\}^{1/3} \\ &\text{Misspecified $\pi_1(\mathbf{x})$: } \pi_1(\mathbf{x}) = g_{\pi}\left\{-1+\boldsymbol\alpha_1^{\sf \tiny T}\mathbf{x}(.5\boldsymbol\alpha_2^{\sf \tiny T}\mathbf{x} + .5)\right\}, \quad \mu_k(\mathbf{x}) = k + \boldsymbol\beta_k^{\sf \tiny T}\mathbf{x} \\ &\text{Both misspecified: } \pi_1(\mathbf{x}) = g_{\pi}\left\{-1+\boldsymbol\alpha_1^{\sf \tiny T}\mathbf{x}(.5\boldsymbol\alpha_2^{\sf \tiny T}\mathbf{x} + .5)\right\}, \quad \mu_k(\mathbf{x}) = k + 3\left\{ \boldsymbol\beta_{k}^{\sf \tiny T}\mathbf{x} (\boldsymbol\beta_{k}^{\sf \tiny T}\mathbf{x}+3)\right\}^{1/3}, \end{align*} where $\boldsymbol\alpha = (.4,-.3,.4,\mathbf{.3}_{2},.4,.3,-\mathbf{.3}_{3},\boldsymbol 0_{p-10})^{\sf \tiny T}$, $\boldsymbol\beta_0=\boldsymbol\beta_1=(-1,1,-1,\mathbf{1}_2,-1,\mathbf{1}_2,-\mathbf{1}_2,\boldsymbol 0_{p-10})^{\sf \tiny T}$, $\boldsymbol\alpha_{1} = (.9,0,-.9,0,.9,0,.9,0,-.9,0,\boldsymbol 0_{p-10})^{\sf \tiny T}$, and $\boldsymbol\alpha_{2} = (0,-.6,0,.6,0,.6,0,-.6,0,-.6,\boldsymbol 0_{p-10})^{\sf \tiny T}$, with $\mathbf{a}_{m}$ denoting a $1\times m$ vector with all elements equal to $a$. In the misspecified $\mu_k(\mathbf{x})$ scenario, $\mu_k(\mathbf{x})$ is actually a single-index model among subjects with either $T=1$ or $T=0$, which allows for complex nonlinearities and interactions. Nevertheless, this is a genuine misspecification of $\mu_k(\mathbf{x})$ (i.e. $\mathbb P$ would \emph{not} belong to $\widetilde{\Mscr}_{\mu}\cup \mathcal{M}_{\mu}$) because we assumed $\boldsymbol\beta_0=\boldsymbol\beta_1$ for $\mathcal{M}_{\mu}$. A double-index model is used to misspecify $\pi_1(\mathbf{x})$ so that we consider the case where $\mathbb P$ does not belong to $\widetilde{\Mscr}_{\pi}\cup\mathcal{M}_{\pi}$. The simulations were run over $R=2,000$ repetitions. Table \ref{tab:bias} presents the bias and root mean square error (RMSE) for $n=1,000, 5,000$ under the different specifications when $p=15$. Across the four scenarios considered, the bias for DiPS is small relative to the RMSE and generally diminishes to zero as $n$ increases, verifying its double-robustness. In contrast, IPW-ALAS and OAL rely on consistent estimates of the PS and show non-negligible bias under the ``misspecified $\pi_1(\mathbf{x})$'' scenario. The extra robustness of DiPS is evident under the ``both misspecified'' scenario, where most other estimators incur non-negligible bias. As the true outcome model is close to being a single-index model with elliptical covariates, DiPS still estimates the average treatment effect $\Delta$ with little bias. In results not shown, we also checked the bias for $p$ up to $p=75$, where we generally found similar patterns. DiPS incurred around 1-2\% more bias than other estimators when $n$ was small relative to $p$, which may be a consequence of smoothing in finite samples.
\begin{table}[htbp] \scalebox{0.86}{ \begin{tabular}{rrcccccccc} \toprule & & \multicolumn{2}{c}{Both Correct} & \multicolumn{2}{c}{Misspecified $\mu_k(\mathbf{x})$} & \multicolumn{2}{c}{Misspecified $\pi_1(\mathbf{x})$} & \multicolumn{2}{c}{Both misspecified} \\ \multicolumn{1}{c}{\textbf{Size}} & \multicolumn{1}{c}{\textbf{Estimator}} & \textbf{Bias} & \textbf{RMSE} & \textbf{Bias} & \textbf{RMSE} & \textbf{Bias} & \textbf{RMSE} & \textbf{Bias} & \textbf{RMSE} \\ \hline \multicolumn{1}{c}{\multirow{7}[2]{*}{n=1,000}} & \multicolumn{1}{c}{IPW-ALAS} & 0.009 & 0.279 & 0.001 & 0.414 & -0.117 & 0.312 & -0.019 & 0.434 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{DR-ALAS} & -0.002 & 0.252 & -0.014 & 0.401 & 0.003 & 0.244 & 0.102 & 0.462 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{DR-rLAS} & -0.057 & 0.275 & -0.092 & 0.426 & -0.022 & 0.284 & 0.021 & 0.438 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{OAL} & 0.002 & 0.263 & -0.013 & 0.403 & 0.013 & 0.283 & 0.109 & 0.459 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{GLiDeR} & -0.002 & 0.244 & -0.016 & 0.379 & 0.004 & 0.239 & 0.092 & 0.426 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{MADR} & -0.002 & 0.249 & -0.015 & 0.402 & 0.002 & 0.239 & 0.101 & 0.451 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{DiPS} & 0.015 & 0.252 & 0.003 & 0.293 & 0.017 & 0.243 & -0.003 & 0.293 \\ \hline \multicolumn{1}{c}{\multirow{7}[2]{*}{n=5,000}} & \multicolumn{1}{c}{IPW-ALAS} & -0.003 & 0.116 & 0.001 & 0.186 & -0.127 & 0.181 & -0.033 & 0.192 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{DR-ALAS} & -0.003 & 0.110 & 0.002 & 0.184 & 0.000 & 0.107 & 0.093 & 0.216 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{DR-rLAS} & -0.003 & 0.110 & 0.002 & 0.184 & 0.037 & 0.122 & 0.252 & 0.320 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{OAL} & -0.002 & 0.115 & 0.002 & 0.186 & -0.098 & 0.155 & 0.012 & 0.193 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{GLiDeR} & -0.003 & 0.108 & 0.001 & 0.180 & 0.000 & 0.108 & 0.098 & 0.217 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{MADR} & -0.003 & 0.110 & 0.002 & 0.184 & 0.000 & 0.107 & 0.094 & 0.217 \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{DiPS} & 0.005 & 0.109 & 0.008 & 0.117 & 0.005 & 0.106 & -0.002 & 0.118 \\ \bottomrule \end{tabular} } \vspace{1.5em} \caption{Bias and RMSE of estimators by $n$ and model specification scenario. } \label{tab:bias} \end{table} Figure \ref{fig:RE} presents the RE under the different scenarios for $n=1,000, 5,000$ and $p=15,30,75$. RE was defined as the mean square error (MSE) of DR-ALAS relative to that of each estimator, with RE $>1$ indicating greater efficiency compared to DR-ALAS. Under the ``both correct'' scenario, many of the estimators have similar efficiency since they are variants of the doubly-robust estimator and are locally semiparametric efficient. DiPS is also no less efficient than other estimators under the ``misspecified $\pi_1(\mathbf{x})$'' scenario. On the other hand, efficiency gains of more than 100\% are achieved relative to other estimators when $n$ is large relative to $p$, under the ``misspecified $\mu_k(\mathbf{x})$'' scenario. This demonstrates that the efficiency gain resulting from Theorem \ref{t:effgain} can be substantial. The gains diminish with larger $p$ for a fixed $n$ as expected but remain substantial even when $p=75$. Similar results occur under the ``both misspecified'' scenario, except that the efficiency gains for DiPS are even larger due to its robustness to misspecification in terms of the bias. \begin{figure}[h!]
\centering \centerline{(a) Both correct} \includegraphics[scale=.32]{fig1_a.png} \includegraphics[scale=.32]{fig1_b.png} \centerline{(b) Misspecified $\mu_k(\mathbf{x})$} \includegraphics[scale=.32]{fig1_c.png} \includegraphics[scale=.32]{fig1_d.png} \centerline{(c) Misspecified $\pi_1(\mathbf{x})$} \includegraphics[scale=.32]{fig1_e.png} \includegraphics[scale=.32]{fig1_f.png} \centerline{(d) Misspecified both} \includegraphics[scale=.32]{fig1_g.png} \includegraphics[scale=.32]{fig1_h.png} \caption{\baselineskip=1pt RE relative to DR-ALAS by $n$, $p$, and specification scenario. } \label{fig:RE}% \end{figure} Table \ref{tab:cov} presents the performance of the perturbation procedure for DiPS for $p=15$ and $p=30$. SEs for DiPS were estimated using the MAD. The empirical SEs (Emp SE), calculated from the sample standard deviations of $\widehat{\Delta}$ over the simulation repetitions, were generally similar to the average of the SE estimates over the repetitions (ASE), despite slight underestimation. The coverage of the percentile CIs (Cover) was generally close to the nominal 95\%, with slight over-coverage in small samples that diminished with larger $n$. \begin{table}[htbp] \begin{tabular}{ccccc} \toprule \textbf{p} & \textbf{n} & \textbf{Emp SE} & \textbf{ASE} & \textbf{Cover} \\ \midrule 15 & 500 & 0.350 & 0.326 & 0.971 \\ 15 & 1000 & 0.248 & 0.228 & 0.956 \\ 15 & 2000 & 0.170 & 0.161 & 0.954 \\ \hline 30 & 500 & 0.351 & 0.330 & 0.973 \\ 30 & 1000 & 0.248 & 0.230 & 0.963 \\ 30 & 2000 & 0.174 & 0.161 & 0.950 \\ \bottomrule \end{tabular} \vspace{1.5em} \caption{Perturbation performance under correctly specified models. Emp SE: empirical standard error over simulations, ASE: average of standard error estimates based on MAD over perturbations, Cover: Coverage of 95\% percentile intervals.} \label{tab:cov} \end{table} \subsection{Data Example: Effect of Statins on Colorectal Cancer Risk in EMRs} We applied DiPS to assess the effect of statins, a medication for lowering cholesterol levels, on the risk of colorectal cancer (CRC) among patients with inflammatory bowel disease (IBD) identified from EMRs of a large metropolitan healthcare provider. Previous studies have suggested that the use of statins has a protective effect on CRC \citep{liu2014association}, but few studies have considered the effect among IBD patients. The EMR cohort consisted of $n=10,817$ IBD patients, including 1,375 statin users. CRC status and statin use were ascertained by the presence of ICD9 diagnosis and electronic prescription codes. We adjusted for $p=15$ covariates as potential confounders, including age, gender, race, smoking status, indication of elevated inflammatory markers, examination with colonoscopy, use of biologics and immunomodulators, subtypes of IBD, disease duration, and presence of primary sclerosing cholangitis (PSC). For the working model $\mathcal{M}_{\mu}$, we specified $g_{\mu}(u)=1/(1+e^{-u})$ to accommodate the binary outcome. SEs for other estimators were obtained from the MAD over bootstrap resamples, except for DR-rLAS, which were obtained directly from the bootstrap procedure implemented in \texttt{hdm}. CIs were calculated from percentile intervals, except for DR-rLAS, which were based on a normal approximation. We also calculated a two-sided p-value from a Wald test for the null that statins have no effect, using the point and SE estimates for each estimator. The unadjusted estimate (None) based on the difference in means between groups was also calculated as a reference.
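For reference, the summary statistics reported for each estimator can be computed from its resamples as in the following sketch; the MAD scaling factor and the use of \texttt{scipy} are implementation choices rather than part of the procedures described above.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def summarize(est, draws):
    """SE from the MAD of bootstrap/perturbation draws, 95% percentile CI,
    and a two-sided Wald p-value for the null of no treatment effect."""
    mad = np.median(np.abs(draws - np.median(draws)))
    se = 1.4826 * mad                 # rescaled to match the SD under normality
    ci = np.percentile(draws, [2.5, 97.5])
    pval = 2.0 * norm.sf(abs(est / se))
    return se, ci, pval
\end{verbatim}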
The left side of Table \ref{tab:data} shows that, without adjustment, the naive risk difference is estimated to be -0.8\% with a SE of 0.4\%. The other methods estimated that statins had a protective effect ranging from around -1\% to -3\% after adjustment for covariates. DiPS and DR-rLAS were the most efficient estimators, with the estimated variance of the other estimators ranging from 13\% to more than 100\% larger than that of DiPS. \begin{table} \centering \scalebox{0.85}{ \begin{tabular}{ccccccccc} \toprule \multicolumn{5}{c}{IBD EMR Study} & \multicolumn{4}{c}{FOS} \\ & \textbf{Est} & \textbf{SE} & \textbf{95\% CI} & \textbf{p-val} & \textbf{Est} & \textbf{SE} & \textbf{95\% CI} & \textbf{p-val} \\ \hline None & -0.008 & 0.004 & (-0.017, 0) & 0.054 & 0.180 & 0.061 & (0.07, 0.291) & 0.003 \\ IPW-ALAS & -0.022 & 0.004 & (-0.03, -0.015) & $<$0.001 & 0.182 & 0.061 & (0.057, 0.299) & 0.003 \\ DR-ALAS & -0.020 & 0.005 & (-0.028, -0.012) & $<$0.001 & 0.140 & 0.063 & (0.038, 0.276) & 0.025 \\ DR-rLAS & -0.008 & 0.003 & (-0.015, -0.002) & 0.011 & 0.175 & 0.057 & (0.063, 0.288) & 0.002 \\ OAL & -0.031 & 0.004 & (-0.017, 0) & $<$0.001 & 0.147 & 0.062 & (0.062, 0.289) & 0.018 \\ GLiDeR & -0.018 & 0.005 & (-0.04, -0.022) & $<$0.001 & 0.128 & 0.053 & (0.035, 0.254) & 0.015 \\ MADR & -0.030 & 0.005 & (-0.041, -0.022) & $<$0.001 & 0.142 & 0.058 & (0.036, 0.253) & 0.015 \\ DiPS & -0.024 & 0.003 & (-0.03, -0.015) & $<$0.001 & 0.141 & 0.053 & (0.046, 0.272) & 0.008 \\ \bottomrule \end{tabular} } \vspace{1.5em} \caption{Data example on the effect of statins on CRC risk in EMR data and the effect of smoking on logCRP in FOS data. Est: Point estimate, SE: estimated SE, 95\% CI: confidence interval, p-val: p-value from Wald test of no effect. } \label{tab:data} \end{table} \subsection{Data Example: Framingham Offspring Study} The Framingham Offspring Study (FOS) is a cohort study initiated in 1971 that enrolled 5,124 adult children and spouses of the original Framingham Heart Study participants. The study collected data over time on participants' medical history, physician examination, and laboratory tests to examine epidemiological and genetic risk factors of cardiovascular disease (CVD). A subset of the FOS participants also has genotype data from the Affymetrix 500K SNP array available through the Framingham SNP Health Association Resource (SHARe) on dbGaP. We were interested in assessing the effect of smoking on C-reactive protein (CRP), an inflammation marker highly predictive of CVD risk, while adjusting for potential confounders including gender, age, diabetes status, use of hypertensive medication, systolic and diastolic blood pressure measurements, and HDL and total cholesterol measurements, as well as a large number of SNPs in gene regions previously reported to be associated with inflammation or obesity. While the inflammation-related SNPs are not likely to be associated with smoking, we include them as efficiency covariates since they are likely to be related to CRP. SNPs that had missing values in $>1\%$ of the sample as well as SNPs that had a correlation $>.99$ with other SNPs in the data were removed from the covariates. A small proportion of individuals who still had missing values in SNPs had these values imputed by the mean. The analysis includes $n=1,892$ individuals with available information on CRP and the $p=121$ covariates, of which 113 were SNPs.
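A minimal sketch of this covariate screening is given below; it is illustrative only, and the exact order of operations as well as the use of absolute correlations are assumptions not specified above.
\begin{verbatim}
import pandas as pd

def screen_snps(snps: pd.DataFrame) -> pd.DataFrame:
    # drop SNPs with missing values in more than 1% of the sample
    snps = snps.loc[:, snps.isna().mean() <= 0.01]
    # drop SNPs whose (absolute) correlation with an already retained SNP
    # exceeds .99
    corr = snps.corr().abs()
    keep = []
    for col in corr.columns:
        if all(corr.loc[col, k] <= 0.99 for k in keep):
            keep.append(col)
    snps = snps[keep]
    # mean-impute the small number of remaining missing genotype values
    return snps.fillna(snps.mean())
\end{verbatim}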
Since CRP is heavily skewed, we applied a log transformation so that the linear regression model in $\mathcal{M}_{\mu}$ better fits the data. SEs, CIs, and p-values were calculated in the same way as in the above example. The right side of Table \ref{tab:data} shows that different methods agree that smoking significantly increases logCRP. In general, point estimates tended to attenuate after adjusting for covariates since smokers are likely to have other characteristics that increase inflammation. DiPS and GLiDeR were among the most efficient, with DiPS achieving up to 39\% lower estimated variance than other estimators. \section{Discussion} \label{s:discussion} In this paper we developed a novel IPW-based approach to estimate the ATE that accommodates settings with high-dimensional covariates. Under sparsity assumptions and using appropriate regularization, the estimator achieves double-robustness and local semiparametric efficiency when adjusting for many covariates. By calibrating the initial PS through smoothing, we showed that additional gains in robustness and efficiency are guaranteed in large samples under misspecification of working models. Simulation results demonstrate that DiPS exhibits comparable performance to existing approaches under correctly specified models but achieves potentially substantial gains in efficiency under model misspecification. In numerical studies, we used the extended regularized information criterion \citep{hui2015tuning} to tune adaptive LASSO, which is shown to maintain selection consistency for the diverging $p$ case when $\log(p)/\log(n) \to \nu$, for $\nu\in [0,1)$. Other criteria such as cross-validation can also be used and may exhibit better performance in some cases. A suitable bandwidth $h$ must be selected such that the dominating errors in the influence function, which are of order $O_p(n^{1/2}h^q + n^{-1/2}h^{-2})$, converge to $0$. This is satisfied for $h= O(n^{-\alpha})$ for $\alpha \in (\frac{1}{2q},\frac{1}{4})$. The optimal bandwidth $h^*$ is one that balances these bias and variance terms and is of order $h^*=O(n^{-1/(q+2)})$. In practice we use a plug-in estimator $\widehat{h}^* = \widehat{\sigma} n^{-1/(q+2)}$, where $\widehat{\sigma}$ is the sample standard deviation of either $\widehat{\balph}^{\sf \tiny T}\mathbf{X}_i$ or $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}_i$, possibly after applying a monotonic transformation. Alternatively, cross-validation can be used to obtain an optimal bandwidth for the smoothing itself, which can then be re-scaled to be of the optimal order. The second estimated direction $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$ can be considered a working prognostic score \citep{hansen2008prognostic}. In the case where $\boldsymbol\beta_0=\boldsymbol\beta_1$ is postulated in the working outcome model \eqref{e:ormod} but, in actuality, $\boldsymbol\beta_0\neq\boldsymbol\beta_1$, $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$ could be a mixture of the true prognostic and propensity scores and may not be ancillary for the ATE. This could bias estimates of the ATE when adjusting only for $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$. DiPS avoids this source of bias by adjusting for both the working PS $\widehat{\balph}^{\sf \tiny T}\mathbf{X}$ and $\widehat{\boldsymbol\beta}_k^{\sf \tiny T}\mathbf{X}$ so that consistency is still maintained in this case, provided the working PS model \eqref{e:psmod} is correct (i.e. under $\mathcal{M}_{\pi} \cap \mathcal{M}_{\mu}^c$).
Otherwise, if the PS model is also incorrect, no method is guaranteed to be consistent. If adaptive LASSO is used to estimate $\boldsymbol\alpha$ and $\boldsymbol\beta_k$ and the true $\boldsymbol\alpha$ and $\boldsymbol\beta_k$ are of order $O(n^{-1/2})$, $\widehat{\balph}$ and $\widehat{\boldsymbol\beta}_k$ are not $n^{1/2}$-consistent when the penalty is tuned to achieve consistent model selection \citep{potscher2009distribution}. This is a limitation of relying on procedures that satisfy oracle properties only under fixed-parameter asymptotics. DiPS is thus preferred in situations where $n$ is large and signals are not extremely weak. Moreover, if adaptive LASSO is used, when $\nu$ is large, a large power parameter $\gamma$ would be required to maintain the oracle properties, leading to an unstable penalty and potentially poor performance in finite samples. It would be of interest to consider other approaches to estimate $\boldsymbol\alpha$ and $\boldsymbol\beta_k$ that have good performance in broader settings, such as settings allowing for larger $p$ and more general sparsity assumptions. It would also be of interest to extend the approach to accommodate survival data. \backmatter \bibliographystyle{biom}
\section{Approach} \label{sec:Approach} \label{sec:Approach:Overview} In this section, we present the different steps of our framework: (1)~topic clustering, (2)~argument identification, and (3)~argument clustering according to topical aspects (see Figure~\ref{fig:Approach:Pipeline}). \subsection{Topic Clustering} \label{sec:Approach:DS} First, documents are grouped into topics. Such documents can be individual texts or collections of texts under a common title, such as posts on Wikipedia or debate platforms. We compute the topic clusters using unsupervised clustering algorithms and study the results of \textit{k-means} and \textit{HDBSCAN}~\cite{Campello2013} in detail. We also take the $argmax$ of the tf-idf vectors and LSA~\cite{Deerwester1990} vectors directly into consideration to evaluate how well topics are represented by single terms. Overall, we consider the following models: \begin{itemize} \item $ARGMAX_{none}^{tfidf}$: We restrict the vocabulary size and the maximal document frequency to obtain a vocabulary representing topics with single terms. Thus, clusters are labeled with exactly one term by choosing the $argmax$ of these tf-idf document vectors. \item $ARGMAX_{lsa}^{tfidf}$: We perform a dimensionality reduction with LSA on the tf-idf vectors. Therefore, each cluster is represented by a collection of terms. \item $KMEANS_{none}^{tfidf}$: We apply the k-means clustering algorithm directly to tf-idf vectors and compare the results obtained by varying the parameter $k$. \item $HDBSCAN_{umap}^{tfidf}$: We apply UMAP \cite{McInnes2018} dimensionality reduction on the tf-idf vectors. We then compute clusters using the HDBSCAN algorithm based on the resulting vectors. \item $HDBSCAN_{lsa+umap}^{tfidf}$: Using the best parameter setting from the previous model, we apply UMAP dimensionality reduction on LSA vectors. Then, we evaluate the clustering results obtained with HDBSCAN while the number of dimensions of the LSA vectors is varied. \end{itemize} \subsection{Argument Identification} \label{sec:Approach:SEG} For the second step of our argument search framework, we propose segmenting documents into argumentative units at the sentence level. Related works define arguments either on document-level~\cite{Wachsmuth2017} or sentence-level~\cite{Levy2018,Stab2018}, while, in this paper, we define an argumentative unit as a sequence of one or more sentences. This yields two advantages: (1)~We can capture the context of arguments over multiple sentences (e.g., a claim and its premises); (2)~Argument identification becomes applicable to a wide range of texts (e.g., user-generated texts). Thus, we train a sequence-labeling model to predict for each sentence whether it starts an argument, continues an argument, or is outside of an argument (i.e., \textit{BIO-tagging}). Based on the findings of \citet{Ajjour2017}, \citet{Eger2017}, and \citet{Petasis2019}, we use a BiLSTM over more complex architectures like BiLSTM-CRFs. The BiLSTM is better suited than an LSTM as the bi-directionality takes both preceding and succeeding sentences into account for predicting the label of the current sentence. We evaluate the sequence labeling results against a feedforward neural network as a baseline classification model that predicts the label of each sentence independently of the context.
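To make the sequence-labeling setup concrete, a minimal sketch of such a BiLSTM tagger over precomputed sentence embeddings is shown below. The hidden size, loss, and optimizer follow the settings reported later in the evaluation; the input preparation (sentence embeddings, padding, one-hot BIO labels) and all remaining architectural details are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
import tensorflow as tf
import tensorflow_addons as tfa

def build_bio_tagger(max_sentences, embedding_dim, n_labels=3):
    """BiLSTM that assigns a B/I/O label to every sentence of a document."""
    inputs = tf.keras.Input(shape=(max_sentences, embedding_dim))
    x = tf.keras.layers.Masking()(inputs)            # ignore padded sentences
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(200, return_sequences=True))(x)
    outputs = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(n_labels, activation="sigmoid"))(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adagrad(),
                  loss=tfa.losses.SigmoidFocalCrossEntropy())
    return model
\end{verbatim}
The baseline feedforward classifier can be obtained from the same sketch by replacing the bidirectional recurrent layer with a dense layer, so that each sentence is labeled independently of its neighbors.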
We consider two ways to compute embeddings over sentences with BERT~\cite{Devlin2018}: \begin{itemize} \item \emph{bert-cls}, denoted as $MODEL_{cls}^{bert}$, uses the output of BERT corresponding to the $[CLS]$ token after processing a sentence. \item \emph{bert-avg}, denoted as $MODEL_{avg}^{bert}$, uses the average of the word embeddings calculated with BERT as a sentence embedding. \end{itemize} \subsection{Argument Clustering} \label{sec:Approach:AS} In the argument clustering task, we apply the same methods (k-means, HDBSCAN) as in the topic clustering step to group the arguments within a specific topic by topical aspects. Specifically, we compute clusters of arguments for each topic and compare the performance of k-means and HDBSCAN with tf-idf as well as \emph{bert-avg} and \emph{bert-cls} embeddings. Furthermore, we investigate whether calculating tf-idf within each topic separately is superior to computing tf-idf over all arguments in the document corpus (i.e., across topics). \section{Conclusion} % \label{sec:Conclusion} In this paper, we proposed an argument search framework that combines keyword search with precomputed topic clusters for argument-query matching, applies a novel approach to argument identification based on sentence-level sequence labeling, and aggregates arguments via argument clustering. Our evaluation with real-world data showed that our framework can be used to mine and search for arguments from unstructured text on any given topic. It became clear that a full-fledged argument search requires a deep understanding of text and that the individual steps can still be improved. We suggest future research on developing argument search approaches that are sensitive to different aspects of argument similarity and argument quality. % \section{Evaluation} \label{sec:Evaluation} \subsection{Evaluation Data Sets} \label{sec:Evaluation:Datasets} In total, we use four data sets for evaluating the different steps of our argument retrieval framework. \begin{enumerate} \item \textbf{Debatepedia} is a debate platform that lists arguments on a topic on one page, including subtitles, structuring the arguments into different aspects.\footnote{We use the data available at \url{https://webis.de/}.\label{dataset:webis}} \item \textbf{Debate.org} is a debate platform that is organized in rounds where each of two opponents submits posts arguing for their side. Accordingly, the posts might also include non-argumentative parts used for answering the important points of the opponent before introducing new arguments.\footref{dataset:webis} \item \textbf{Student Essay} \cite{Stab2017} is widely used in research on argument segmentation \cite{Eger2017, Ajjour2017, Petasis2019}. Labeled on the token level, each document contains one major claim and several claims and premises. We can use this data set for evaluating argument identification. \item \textbf{Our Dataset} is based on a debate.org crawl.\footref{dataset:webis} It is restricted to a subset of four out of the total 23 categories -- \textit{politics}, \textit{society}, \textit{economics} and \textit{science} -- and contains additional annotations. Three human annotators familiar with linguistics segmented these documents and labeled them as being of \emph{medium} or \emph{low} quality so that low-quality documents could be excluded. The annotators were then asked to indicate the beginning of each new argument and to label sentences summarizing the aspects of the post as \emph{conclusion} and non-argumentative sentences as \emph{outside of argumentation}.
In this way, we obtained a ground truth of labeled arguments on a sentence level (Krippendorff's $\alpha=0.24$ based on 20 documents and three annotators). A description of the data set is provided online.\footnote{See \url{https://github.com/michaelfaerber/arg-search-framework}.} \end{enumerate} \textbf{Train-Validation-Test Split.} We use the splits provided by \citet{Stab2017} for the student essay data set. The other data sets are divided into train, validation and test splits based on topics (15\% of the topics were used for testing). \begin{table*}[tb] \centering \caption{Results of unsupervised topic clustering on the \emph{debatepedia} data set $-$ \%$n$:~noise examples (HDBSCAN) } \label{tab:Results:ClusteringUnsupervised} \begin{small} \resizebox{0.99\textwidth}{!}{ \begin{tabular}{@{}l rrrrr r rrrrr@{}} \toprule & \multicolumn{5}{c}{With noise}& &\multicolumn{5}{c}{\emph{Without noise}}\\ \cline{2-6} \cline{8-12} & \#$Clust.$ & $ARI$ &$Ho$ & $Co$ & $BCubed~F_1$ & \%$n$ & \#$Clust.$ & $ARI$ &$Ho$ & $Co$ & $BCubed~F_1$ \\ \midrule $ARGMAX_{none}^{tfidf}$ &253 & 0.470 &0.849 & 0.829 & 0.591 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$\\ $ARGMAX_{lsa}^{tfidf}$ &157 & 0.368 & 0.776& 0.866 & 0.561& $-$ & $-$ & $-$ & $-$ & $-$ & $-$\\ $KMEANS_{none}^{tfidf}$ &170 & \textbf{0.703} & 0.916 & 0.922 & 0.774 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ $HDBSCAN_{none}^{tfidf}$ & 206 & 0.141 &0.790 & 0.870 & 0.677 & 21.1 &205 & 0.815 &0.955& 0.937 & 0.839\\ $HDBSCAN_{umap}^{tfidf}$ & 155 & 0.673 &0.900 & 0.931 & 0.786 & 4.3 & 154 & 0.779 &0.927& 0.952& 0.827\\ $HDBSCAN_{lsa + umap}^{tfidf}$ &162 & 0.694 & 0.912& 0.935& \textbf{0.799} & 3.6 &161 & 0.775 &0.932 &0.950 & 0.831\\ \bottomrule \end{tabular} } \end{small} \end{table*} \begin{table*}[tb] \centering \caption{Results of unsupervised topic clustering on the \emph{debate.org} data set $-$ \%$n$:~noise examples (HDBSCAN)} \label{tab:Results:ClusteringUnsupervisedORG} \begin{small} \resizebox{0.99\textwidth}{!}{ \begin{tabular}{@{}l rrrrr r rrrrr@{}} \toprule & \multicolumn{5}{c}{With noise}& &\multicolumn{5}{c}{\emph{Without noise}}\\ \cline{2-6} \cline{8-12} & \#$Clust.$ & $ARI$ & $Ho$ &$Co$& $BCubed~F_1$ & \%$n$ &\#$Clust.$ & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ \\ \midrule $KMEANS_{none}^{tfidf}$ & 50 & \textbf{0.436} & 0.822 & 0.796 & \textbf{0.644} &$-$&$-$&$-$&$-$&$-$&$-$\\ $HDBSCAN_{umap}^{tfidf}$ & 20 & 0.354 &0.633& 0.791 & 0.479 & 7.1 & 19 & 0.401 & 0.648 & 0.831 & 0.502\\ $HDBSCAN_{lsa + umap}^{tfidf}$ &26 & 0.330 & 0.689 &0.777 & 0.520 & 5.8 & 25 & 0.355 & 0.701 & 0.790 &0.542\\ \bottomrule \end{tabular} } \end{small} \end{table*} \subsection{Evaluation Settings} \label{sec:Evaluation:Methods} We report results using the evaluation metrics precision, recall, and $F_1$-measure for the classification tasks, and adjusted Rand index (ARI), homogeneity (Ho), completeness (Co), and $BCubed~F_1$ score for the clustering tasks. \textbf{Topic Clustering.} We use HDBSCAN with a minimum cluster size of 2. Regarding k-means, we vary the parameter $k$ that determines the number of clusters. We only report the results of the best setting. For the model $ARGMAX_{none}^{tfidf}$ we restrict the vocabulary size and the maximal document frequency of the tf-idf to obtain a vocabulary that best represents the topics by single terms. % \textbf{Argument Identification.} We use a BiLSTM implementation with 200 hidden units and apply \textit{SigmoidFocalCrossEntropy} as the loss function.
Furthermore, we use the \emph{Adagrad} optimizer~\cite{Duchi2011} and train the model for 600 epochs, shuffling the data in each epoch, and keeping only the best-performing model as assessed by the validation loss. The baseline feedforward neural network contains a single hidden dense layer of size 200 and is compiled with the same hyperparameters. As the BERT implementation, we use DistilBERT~\cite{Sanh2019}. % \begin{figure}[tb] \centering \includegraphics[width=0.78\linewidth]{images/subtopic_argument_correlation2} \caption{Linear regression of arguments and their topical aspects for the \emph{debatepedia} data set.}% \label{fig:Results:Correlation} \end{figure} \textbf{Argument Clustering.} We estimate the parameter $k$ for the k-means algorithm for each topic using a linear regression based on the number of clusters relative to the number of arguments in this topic. As shown in Figure \ref{fig:Results:Correlation}, we observe a linear relationship between the number of topical aspects (i.e., subtopics) and the argument count per topic in the \emph{debatepedia} data set. We apply HDBSCAN with the same parameters as in the topic clustering task (\emph{min\_cluster\_size}~=~2). \section{Introduction} \label{sec:Introduction} Arguments are an integral part of debates and discourse between people. For instance, journalists, scientists, lawyers, and managers often need to pool arguments and contrast pros and cons \cite{DBLP:conf/icail/PalauM09a}. In light of this, argument search has been proposed to retrieve relevant arguments for a given query (e.g., \emph{gay marriage}). Several argument search approaches have been proposed in the literature~\cite{Habernal2015,Peldszus2013}. However, major challenges of argument retrieval still exist, such as identifying and clustering arguments concerning controversial topics % and extracting arguments from a wide range of texts on a fine-grained level. For instance, the argument search system args.me \cite{Wachsmuth2017} uses a keyword search to match relevant documents and lacks the extraction of individual arguments. In contrast, ArgumenText \cite{Stab2018} applies keyword matching on single sentences to identify relevant arguments and neglects the context, yielding rather shallow arguments. Furthermore, IBM Debater \cite{Levy2018} proposes a rule-based extraction of arguments from Wikipedia articles utilizing prevalent structures to identify sentence-level arguments. Overall, for a fully equipped framework, the existing approaches lack (a)~a semantic argument-query matching, (b)~a segmentation of documents into argumentative units of arbitrary size, and (c)~a clustering of arguments w.r.t. subtopics. In this paper, we propose a novel argument search framework that addresses these aspects (see Figure~\ref{fig:Approach:Pipeline}): During the \textit{topic clustering} step, we group argumentative documents by topics and, thus, identify the set of % relevant documents for a given search query (e.g., \textit{gay marriage}). To overcome the limitations of keyword search approaches, we rely on semantic representations via embeddings in combination with established clustering algorithms. Based on the relevant documents, the \textit{argument segmentation} step aims at identifying and separating arguments. We hereby understand arguments to consist of one or multiple sentences and propose a BiLSTM-based sequence labeling method.
% In the final \textit{argument clustering} step, we identify the different aspects of the topic at hand that are covered by the arguments identified in the previous step. We evaluate all three steps of our framework based on real-world data sets from several domains. By using the output of one framework's component as input for the subsequent component, our evaluation is particularly challenging but realistic. In the evaluation, we show that by using embeddings, text labeling, and clustering, we can extract and aggregate arguments from unstructured text to a considerable extent. Overall, our framework provides the basis for advanced argument search in real-world scenarios with little training data. \begin{figure*}[tb] \centering \includegraphics[width=0.99\textwidth]{images/new_approach.png} \caption{Overview of our framework for argument search. % } \label{fig:Approach:Pipeline} \end{figure*} In total, we make the following contributions: \begin{itemize} \setlength\itemsep{0em} \item We propose a novel argument search framework for fine-grained argument retrieval based on topic clustering, argument identification, and argument clustering.\footnote{The source is available online at \url{https://github.com/michaelfaerber/arg-search-framework}.} \item We provide a new evaluation data set for sequence labeling of arguments. \item We evaluate all steps of our framework extensively based on four data sets. \end{itemize} In the following, after discussing related work in Section~\ref{sec:RelatedWork}, we propose our argument mining framework in Section~\ref{sec:Approach}. Our evaluation is presented in Section~\ref{sec:Evaluation}. Section~\ref{sec:Conclusion} concludes the paper. \section{Related Work} \label{sec:RelatedWork} \textbf{Topic Clustering. } Various approaches for modeling the topics of documents have been proposed, such as Latent Dirichlet Allocation (LDA) and Latent Semantic Indexing (LSI). Topic detection and tracking~\cite{Wayne00} and topic segmentation~\cite{Ji2003} have been pursued in detail in the IR community. \citet{Sun2007} introduce an unsupervised method for topic detection and topic segmentation of multiple similar documents. Among others, \citet{Barrow2020}, \citet{Arnold2019}, and \citet{Mota2019} propose models for segmenting documents and assigning topic labels to these segments, but ignore arguments. \textbf{Argument Identification. } \label{sec:RelatedWork:ArgumentRecognition} Argument identification can be approached on the \textit{sentence level} by deciding for each sentence whether it constitutes an argument. For instance, IBMDebater \cite{Levy2018} relies on a combination of rules and weak supervision for classifying sentences as arguments. In contrast, ArgumenText \cite{Stab2018} does not limit its argument identification to sentences. \citet{Reimers2019} show that contextualized word embeddings can improve the identification of sentence-level arguments. Argument identification has been approached on the level of \textit{argument units}, too. Argument units are defined as different parts of an argument. \citet{Ajjour2017} compare machine learning techniques for argument segmentation on several corpora. % The authors observe that BiLSTMs mostly achieve the best results. % Moreover, \citet{Eger2017} and \citet{Petasis2019} show that using more advanced models, such as combining a BiLSTM with CRFs and CNNs, hardly improves the BIO tagging results. Hence, we also create a BiLSTM model for argument identification. 
\textbf{Argument Clustering.} \label{sec:RelatedWork:ArgumentClustering} \citet{Ajjour2019a} approach argument aggregation by identifying non-overlapping \emph{frames}, defined as a set of arguments from one or multiple topics that focus on the same aspect. \citet{Bar-Haim2020} propose an argument aggregation approach by mapping similar arguments to common \emph{key points}, i.e., high-level arguments. They observe that models with BERT embeddings perform the best for this task. \citet{Reimers2019} propose the clustering of arguments based on the similarity of two sentential arguments with respect to their topics. Also here, a fine-tuned BERT model is most successful for assessing the argument similarity automatically. % In our framework, which also relies on BERT for argument clustering, an argument can consist of several sentences, which makes the framework more flexible on the one hand but argument clustering more challenging on the other. \textbf{Argument Search Demonstration Systems.} % \label{sec:RelatedWork:Frameworks} \citet{Wachsmuth2017} propose the argument search framework \textit{args.me} using online debate platforms. % The arguments are considered on document level. \citet{Stab2018} propose the framework \textit{ArgumenText} for argument search. % The retrieval of topic-related web documents is based on keyword matching, while the argument identification is based on a binary sentence classification. \citet{Levy2018} propose \textit{IBM Debater} based on Wikipedia articles. Arguments are defined as single claim sentences that explicitly discuss a \emph{main concept} in Wikipedia and that are identified via rules. \textbf{Argument Quality Determination.} Several approaches and data sets have been published on determining the quality of arguments \cite{DBLP:conf/acl/GienappSHP20,DBLP:conf/cikm/DumaniS20,DBLP:conf/aaai/GretzFCTLAS20}, which is beyond the scope of this paper. \subsection{Evaluation Results} \label{sec:Results} In the following, we present the evaluation results for the tasks of topic clustering, argument identification, and argument clustering. \begin{table*}[h] \centering \caption{Results of the clustering of arguments of the \emph{debatepedia} data set by topical aspects. \emph{across topics}:~tf-idf scores are computed across topics, \emph{without noise}:~{HDBSCAN} is only evaluated on those examples which are not classified as noise.} \label{tab:Results:ASUnsupervised} \begin{small} \resizebox{0.85\textwidth}{!}{ \begin{tabular}{@{}l l l cccc l@{}} \toprule Embedding & Algorithm & Dim.
Reduction & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ & Remark\\ \midrule tf-idf & HDBSCAN & UMAP & 0.076 & 0.343 &0.366& 0.390 & \\ tf-idf & HDBSCAN & UMAP & 0.015 & 0.285 &0.300& 0.341 &\emph{across topics} \\ tf-idf & HDBSCAN & $-$ & \textbf{0.085} & \textbf{0.371} &\textbf{0.409}& \textbf{0.407} & \\ tf-idf & k-means & $-$ & 0.058 & 0.335 &0.362& 0.397 & \\ tf-idf & k-means & $-$ & 0.049 & 0.314 &0.352& 0.402 &\emph{across topics} \\ \midrule bert-cls & HDBSCAN & UMAP & 0.030 & 0.280 &0.298& 0.357 & \\ bert-cls & HDBSCAN & $-$ & 0.016 & 0.201 &0.324& 0.378 & \\ bert-cls & k-means & $-$ & 0.044 & 0.332 &0.326& 0.369 & \\ \midrule bert-avg & HDBSCAN & UMAP & 0.069 & 0.321 &0.352& 0.389 & \\ bert-avg & HDBSCAN & $-$ & 0.018 & 0.170 &0.325& 0.381 & \\ bert-avg & k-means & $-$ & 0.065 & 0.337 &0.349& 0.399 & \\ \midrule tf-idf & HDBSCAN & $-$ & 0.140 & 0.429 &0.451& 0.439 &\emph{without noise}\\ \bottomrule \end{tabular} } \end{small} \end{table*} \begin{figure*}[h] \centering \includegraphics[width = 0.42\textwidth]{key_results/AS/hdbscan_scatterplot2} \includegraphics[width = 0.42\textwidth]{key_results/AS/kmeans-bert_scatterplot2} \caption{Argument clustering results (measured by $ARI$, $BCubed F_1$, and $homogeneity$) for HDBSCAN on tf-idf embeddings and k-means on \emph{bert-avg} embeddings.} \label{fig:Results:Scatterplot} \end{figure*} \textbf{Topic Clustering.} We evaluate unsupervised topic clustering based on the 170 topics from the \textit{debatepedia} data set. Given Table~\ref{tab:Results:ClusteringUnsupervised}, we see that density-based clustering algorithms, such as {HDBSCAN}, applied to tf-idf document embeddings are particularly suitable for this task and clearly outperform alternative clustering approaches. We find that their ability to handle unclear examples as well as clusters of varying shapes, sizes, and densities is crucial to their performance. {HDBSCAN} in combination with a preceding dimensionality reduction step achieves an $ARI$ score of 0.779. However, these quantitative results must be considered from the standpoint that topics in the \emph{debatepedia} data set are overlapping and, thus, the reported scores are lower-bound estimates. When evaluating the clustering results for the \emph{debatepedia} data set qualitatively, we find that many predicted clustering decisions are reasonable but evaluated as erroneous in the quantitative assessment. For instance, we see that documents on \textit{gay parenting} appearing in the debate about \textit{gay marriage} can be assigned to a cluster with documents on \textit{gay adoption}. Furthermore, we investigate the impact on the recall of relevant documents and observe a clear improvement of topic clusters over keyword matching for argument-query matching. For instance, given the topic \emph{gay adoption}, many documents from debatepedia.org use related terms like \emph{homosexual} and \emph{parenting} instead of explicitly mentioning \emph{`gay'} and \emph{`adoption'} and, thus, cannot be found by a keyword search. We additionally evaluate the inductive capability of our topic clustering approach by applying it to debates from debate.org (see Table~\ref{tab:Results:ClusteringUnsupervisedORG}). We observe that the application of the unsupervised clustering on tf-idf embeddings for debates from debate.org achieves a moderate $ARI$ score of 0.436.
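For reference, the best-performing configuration, $HDBSCAN_{lsa+umap}^{tfidf}$, corresponds to a short pipeline of standard components. The sketch below is illustrative only: \emph{min\_cluster\_size}~=~2 is taken from our settings, while the LSA and UMAP dimensionalities and the UMAP metric are assumptions chosen for illustration.
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
import umap
import hdbscan

def cluster_topics(documents, lsa_dim=100, umap_dim=5):
    """tf-idf -> LSA -> UMAP -> HDBSCAN; the label -1 marks noise documents."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
    vectors = TruncatedSVD(n_components=lsa_dim).fit_transform(vectors)  # LSA
    embedded = umap.UMAP(n_components=umap_dim,
                         metric="cosine").fit_transform(vectors)
    return hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(embedded)
\end{verbatim}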
\textbf{Argument Identification.} To evaluate the identification of argumentative units based on sentence-level sequence-labeling, we apply our approach to the \emph{Student Essay} data set (see Table~\ref{tab:Results:Segmentation}) and achieve a macro $F_1$ score of 0.705 with a BiLSTM-based sequence-learning model on sentence embeddings computed with BERT. Furthermore, we observe a strong influence of the data sources on the results for argument identification. For instance, in the case of the \textit{Student Essay} data set, information about the current sentence as well as the surrounding sentences is available yielding accurate segmentation results ($F_{1,macro}=0.71$, $F_{1,macro}^{BI}=0.96$). \textbf{Argument Clustering.} Finally, we proposed an approach to cluster arguments consisting of one or several sentences by topical aspects. We evaluate the clustering based on tf-idf and BERT embeddings of the arguments (see Table~\ref{tab:Results:ASUnsupervised}). Overall, the performance of the investigated argument clustering methods is considerably low. We find that this is due to the fact that information on topical aspects is scarce and often underweighted in case of multiple sentences. Given Figure~\ref{fig:Results:Scatterplot}, we can observe that the performance of the clustering does not depend on the number of arguments. \if0 \begin{table} \centering \caption{Confusion matrices of $BILSTM_{cls}^{bert}$: Rows represent ground-truth labels and columns represent predictions.\\Left: \emph{Student Essay} data set, Right: \emph{debate.org} data set.} \label{tab:Results:CM} \begin{small} \begin{tabular}{c| ccc} & B & I & O\\ \midrule B&15 & 33 & 0\\ I&24 & 226 & 0 \\ O&39 & 94 & 2\\ \bottomrule \end{tabular} \hspace{1cm} \begin{tabular}{c| ccc} & B & I & O\\ \midrule B&1 & 24 & 23\\ I&0 & 135 & 115 \\ O&0 & 50 & 85\\ \bottomrule \end{tabular} \end{small} \end{table} \begin{table*}[tb] \centering \caption{Overview of the topics with the highest $ARI$ scores for HDBSCAN and k-means.} \label{tab:Results:BestTopics} \begin{small} \begin{tabular}{@{}l p{7cm} rr cccc @{}} \toprule &Topic & \#$Clust._{true}$ & \#$Clust._{pred}$ & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ \\ \toprule \multicolumn{8}{c}{\textbf{HDBSCAN based on tf-idf embeddings}}\\ \midrule 1 &Rehabilitation vs retribution &2 &3 & 0.442 &1.000 &0.544 &0.793 \\ 2 &Manned mission to Mars &5 &6 &0.330 & 0.461&0.444 & 0.515\\ 3 & New START Treaty &5 &4 &0.265 &0.380 &0.483 &0.518 \\ 4 &Obama executive order to raise the debt ceiling &3 &6 & 0.247 &0.799 &0.443 &0.568 \\ 5 &Republika Srpska secession from Bosnia and Herzegovina &4 &6 &0.231 &0.629 &0.472 &0.534 \\ 6& Ending US sanctions on Cuba &11 &9 & 0.230&0.458 &0.480 &0.450 \\ \toprule \multicolumn{8}{c}{\textbf{k-means based on \emph{bert-avg} embeddings}}\\ \midrule 1 &Bush economic stimulus plan &7 &5 & 0.454 &0.640 &0.694 &0.697 \\ 2 &Hydroelectric dams &11 &10 &0.386 & 0.570&0.618 & 0.537\\ 3 & Full-body scanners at airports &5 &6 &0.301 &0.570 &0.543 &0.584 \\ 4 &Gene patents &4 &7 & 0.277 &0.474 &0.366 &0.476 \\ 5 &Israeli settlements &3 &4 &0.274 &0.408 &0.401 &0.609 \\ 6& Keystone XL US-Canada oil pipeline &2 &3 & 0.269&0.667 &0.348 &0.693 \\ \bottomrule \end{tabular} \end{small} \end{table*} \begin{figure}[tb] \centering \includegraphics[width=0.4\textwidth]{key_results/AS/ARI_two_models3} \includegraphics[width=0.4\textwidth]{key_results/AS/ARI_hdbscan2} \caption{Argument clustering consensus based on the $ARI$ between (top) HDBSCAN and k-means score and (bottom) HDBSCAN 
\subsection{Evaluation Results}
\label{sec:Results}
In the following, we present the results for the three tasks: topic clustering, argument identification, and argument clustering.

\subsubsection{Topic Clustering}
\label{sec:Results:DS}
We start with the results of clustering the documents by topics based on the \emph{debatepedia} data set with 170 topics (see Table \ref{tab:Results:ClusteringUnsupervised}). The plain model $ARGMAX_{none}^{tfidf}$, which computes clusters based on words with the highest tf-idf score within a document, achieves an $ARI$ score of $0.470$. This indicates that many topics in the \emph{debatepedia} data set can already be represented by a single word. Considering the $ARGMAX_{lsa}^{tfidf}$ model, the $ARI$ score of $0.368$ shows that using the topics found by LSA does not add value compared to tf-idf. Furthermore, we find that the tf-idf-based $HDBSCAN_{lsa + umap}^{tfidf}$ and $KMEANS_{none}^{tfidf}$ achieve comparable performance, with $ARI$ scores of $0.694$ and $0.703$, respectively. However, since HDBSCAN accounts for noise in the data, which it pools into a single cluster, the five rightmost columns of Table \ref{tab:Results:ClusteringUnsupervised} need to be considered when deciding on a clustering method for further use. When excluding the noise cluster from the evaluation, the $ARI$ score increases considerably for all $HDBSCAN$ models ($HDBSCAN_{none}^{tfidf}$: $0.815$, $HDBSCAN_{umap}^{tfidf}$: $0.779$, $HDBSCAN_{lsa+umap}^{tfidf}$: $0.775$). Considering that $HDBSCAN_{none}^{tfidf}$ identifies 21.1\% of the instances as noise, compared to 4.3\% in the case of $HDBSCAN_{umap}^{tfidf}$, we conclude that applying a UMAP dimensionality reduction step before the HDBSCAN clustering clearly benefits performance (at least on the \emph{debatepedia} data set).
\begin{table*}[tb]
\centering
\caption{Results of argument identification on the \emph{Student Essay} data set.}
\label{tab:Results:Segmentation}
\begin{small}
\begin{tabular}{@{}l cc cc cc ccc@{}}
\toprule
& \multicolumn{2}{c}{\emph{B}}& \multicolumn{2}{c}{\emph{I}}& \multicolumn{2}{c}{\emph{O}}& \multicolumn{2}{r}{}\\
\cline{2-3} \cline{4-5} \cline{6-7}
& $Prec.$ & $Rec.$ & $Prec.$ & $Rec.$ & $Prec.$ & $Rec.$ &$F_{1, macro}$ & $F_{1, weighted}$ &$F_{1, macro}^{B, I}$ \\
\midrule
majority class & 0.000 & 0.000 & 0.719 & 1.000 & 0.000 & 0.000 & 0.279 & 0.602 & 0.419\\
$FNN_{avg}^{bert}$ & 0.535 & 0.513 & 0.820 & 0.836 & 0.200 & 0.160 & 0.510 & 0.736 & 0.675\\
$FNN_{cls}^{bert}$ & 0.705 & 0.593 & 0.849 & 0.916 & \textbf{0.400} & 0.080 & 0.553 & 0.805 & 0.763\\
$BILSTM_{avg}^{bert}$ & 0.766 & 0.713 & 0.885 & 0.930 & 0.000 & 0.000 & 0.549 & 0.846 & 0.823 \\
$BILSTM_{cls}^{bert}$ & \textbf{0.959} & \textbf{0.914} & \textbf{0.967} & \textbf{0.985} & 0.208 & \textbf{0.200} & \textbf{0.705} & \textbf{ 0.951} & \textbf{0.956} \\
\bottomrule
\end{tabular}
\end{small}
\end{table*}
\begin{table*}[tb]
\centering
\caption{Argument identification results on the \emph{debate.org} data set; model trained on \emph{Student Essay} and \emph{debate.org}.}
\label{tab:Results:SegORG}
\begin{small}
\begin{tabular}{@{}ll cc cc cc cc@{}}
\toprule
& & \multicolumn{2}{c}{\emph{B}}& \multicolumn{2}{c}{\emph{I}}& \multicolumn{2}{c}{\emph{O}}& \multicolumn{2}{r}{}\\
\cline{3-8}
&\emph{trained on} & $Prec.$ & $Rec.$ & $Prec.$ & $Rec.$ & $Prec.$ & $Rec.$ &$F_{1, macro}$ & $F_{1, weighted}$ \\
\midrule
$BILSTM_{cls}^{bert}$ & \emph{Student Essay} & 0.192 & 0.312 & 0.640 & 0.904 & 1.000 & 0.015 & 0.339 & 0.468\\
$BILSTM_{cls}^{bert}$ &\emph{debate.org} & 1.000 & 0.021 & 0.646 & 0.540 & 0.381 & 0.640 & 0.368 & 0.492\\
\bottomrule
\end{tabular}
\end{small}
\end{table*}

\textbf{Inductive Generalization.} We apply our unsupervised approaches to the \emph{debate.org} data set to evaluate whether they are able to generalize. The results, given in Table \ref{tab:Results:ClusteringUnsupervisedORG}, show that k-means performs distinctly better on the \emph{debate.org} data set than HDBSCAN (ARI: $0.436$ vs. $0.354$ and $0.330$; $F_1$: $0.644$ vs. $0.479$ and $0.520$). This is likely because the \emph{debate.org} data set is characterized by a high number of single-element topics. In contrast to k-means, HDBSCAN does not allow single-element clusters; such topics are instead pooled into a single noise cluster. This is also reflected by the different numbers of clusters as well as the lower homogeneity scores of HDBSCAN (0.633 and 0.689, compared to 0.822 for k-means), while the completeness is comparable at approx.\ 0.8.

\textbf{Qualitative Evaluation.} Applying a keyword search would retrieve only a subset of the relevant arguments, namely those that mention the search phrase explicitly. Combining a keyword search on documents with the computed clusters enables an argument search framework to retrieve arguments from a broader set of documents. For example, for \textit{debatepedia.org}, we observe that clusters of arguments related to \emph{gay adoption} include words like \emph{parents}, \emph{mother}, \emph{father}, \emph{sexuality}, \emph{heterosexual}, and \emph{homosexual}, while neither \emph{gay} nor \emph{adoption} is mentioned explicitly.

\subsubsection{Argument Identification}
\label{sec:Results:SEG}
The results of the argument identification step based on the \emph{Student Essay} data set are given in Table~\ref{tab:Results:Segmentation}.
We evaluate a BiLSTM model in a sequence-labeling setting compared to majority voting and a feedforward neural network (FNN) as baselines. Using the BiLSTM improves the macro $F_1$ score relative to the feedforward neural network on the \emph{bert-avg} embeddings by 3.9\% and on the \emph{bert-cls} embeddings by 15.2\%. Furthermore, using \emph{bert-cls} embeddings increases the macro $F_1$ score by 4.3\% in the classification setting and by 15.6\% in the sequence-learning setting compared to using \emph{bert-avg}. \textbf{BIO Tagging.} We observe a low precision and recall for the class `O'. This can be traced back to a peculiarity of the \emph{Student Essay} data set: Most sentences in the \emph{Student Essay}'s training data set are part of an argument and only 3\% of the sentences are labeled as non-argumentative (outside/`O'). Accordingly, the models observe hardly any examples for outside sentences during training and, thus, have difficulties in learning to distinguish them from other sentences. Considering the fact that the correct identification of a `B' sentence alone is already enough to separate two arguments from each other, the purpose of labeling a sentence as `O' is restricted to classifying the respective sentence as non-argumentative. Therefore, in case of the \emph{Student Essay} data set, the task of separating two arguments from each other becomes much more important than separating non-argumentative sentences from arguments. In the last column of Table \ref{tab:Results:Segmentation}, we also show the macro $F_1$ score for the `B' and `I' labels only. The high macro $F_1$ score of 0.956 for the best performing model reflects the model's high ability to separate arguments from each other. \begin{table} \centering \caption{Confusion matrices of $BILSTM_{cls}^{bert}$ (rows: ground-truth labels; columns: predictions).} \label{tab:Results:CM} \begin{footnotesize} \begin{tabular}{c| ccc} \multicolumn{4}{c}{Student Essay} \\ & B & I & O\\ \midrule B&15 & 33 & 0\\ I&24 & 226 & 0 \\ O&39 & 94 & 2\\ \bottomrule \end{tabular} \hspace{1cm} \begin{tabular}{c| ccc} \multicolumn{4}{c}{debate.org} \\ & B & I & O\\ \midrule B&1 & 24 & 23\\ I&0 & 135 & 115 \\ O&0 & 50 & 85\\ \bottomrule \end{tabular} \end{footnotesize} \end{table} \begin{table*}[tb] \centering \caption{Results of the clustering of arguments of the \emph{debatepedia} data set by topical aspects. \emph{across topics}:~tf-idf scores are computed across topics, \emph{without noise}:~{HDBSCAN} is only evaluated on instances not classified as noise.} \label{tab:Results:ASUnsupervised} \begin{small} \begin{tabular}{@{}l l l cccc l@{}} \toprule Embedding & Algorithm & Dim. 
Reduction & $ARI$ & $Ho$ &$Co$ & $BCubed~F_1$ & Remark\\
\midrule
tf-idf & HDBSCAN & UMAP & 0.076 & 0.343 &0.366& 0.390 & \\
tf-idf & HDBSCAN & UMAP & 0.015 & 0.285 &0.300& 0.341 &\emph{across topics} \\
tf-idf & HDBSCAN & $-$ & \textbf{0.085} & \textbf{0.371} &\textbf{0.409}& \textbf{0.407} & \\
tf-idf & k-means & $-$ & 0.058 & 0.335 &0.362& 0.397 & \\
tf-idf & k-means & $-$ & 0.049 & 0.314 &0.352& 0.402 &\emph{across topics} \\
\midrule
bert-cls & HDBSCAN & UMAP & 0.030 & 0.280 &0.298& 0.357 & \\
bert-cls & HDBSCAN & $-$ & 0.016 & 0.201 &0.324& 0.378 & \\
bert-cls & k-means & $-$ & 0.044 & 0.332 &0.326& 0.369 & \\
\midrule
bert-avg & HDBSCAN & UMAP & 0.069 & 0.321 &0.352& 0.389 & \\
bert-avg & HDBSCAN & $-$ & 0.018 & 0.170 &0.325& 0.381 & \\
bert-avg & k-means & $-$ & 0.065 & 0.337 &0.349& 0.399 & \\
\midrule
tf-idf & HDBSCAN & $-$ & 0.140 & 0.429 &0.451& 0.439 &\emph{without noise}\\
\bottomrule
\end{tabular}
\end{small}
\end{table*}

\textbf{Generalizability.} We evaluate whether the model $BILSTM_{cls}^{bert}$, which performed best on the \emph{Student Essay} data set, is able to identify arguments on the \emph{debate.org} data set if trained on the \emph{Student Essay} data set. The results are given in Table~\ref{tab:Results:SegORG}. Again, the pretrained model performs poorly on \emph{`O'} sentences since not many examples of \emph{`O'} sentences were observed during training. Moreover, applying the pretrained $BILSTM_{cls}^{bert}$ to the \emph{debate.org} data set yields low precision and recall on \emph{`B'} sentences. A likely reason is that, in contrast to the \emph{Student Essay} data set, where arguments often begin with cue words (e.g., \emph{first}, \emph{second}, \emph{however}), the documents in the \emph{debate.org} data set contain cue words less often.

The results from training the $BILSTM_{cls}^{bert}$ model from scratch on the \emph{debate.org} data set are considerably different from our results on the \emph{Student Essay} data set. The confusion matrix for the \emph{debate.org} data set in Table \ref{tab:Results:CM} shows that the BiLSTM model has difficulties learning which sentences start an argument in the \emph{debate.org} data set. In contrast to the \emph{Student Essay} data set, it cannot rely on the aforementioned peculiarities (e.g., cue words) and apparently fails to find other indications for \emph{`B'} sentences. In addition, the distinction between \emph{`I'} and \emph{`O'} sentences is also not clear. These results match our experiences with the annotation of documents in the \emph{debate.org} data set, where it was often difficult to decide whether a sentence forms an argument and to which argument it belongs. This is also reflected by the inter-annotator agreement of 0.24 based on Krippendorff's $\alpha$ on a subset of 20 documents with three annotators.

\textbf{Bottom Line.} Overall, we find that the performance of the argument identification strongly depends on the peculiarities and quality of the underlying data set. For well curated data sets such as the \emph{Student Essay} data set, the information contained in the current sentence as well as the surrounding sentences yields an accurate identification of arguments.
In contrast, data sets with poor structure or colloquial language, as given in the \emph{debate.org} data set, lead to less accurate results.

\subsubsection{Argument Clustering}
\label{sec:Results:AS}
We now evaluate the argument clustering according to topical aspects (i.e., subtopics) as the final step of the argument search framework, using the \emph{debatepedia} data set. We evaluate the performance of the clustering algorithms HDBSCAN and k-means for the different embeddings that yielded the best results in the topic clustering step of our framework. We perform the clustering of the arguments for each topic (e.g., \textit{gay marriage}) separately and average the results across the topics. As shown in Table \ref{tab:Results:ASUnsupervised}, we observe that HDBSCAN performs best on tf-idf embeddings with an averaged $ARI$ score of 0.085, while k-means achieves its best performance on \emph{bert-avg} embeddings with an averaged $ARI$ score of 0.065. Using HDBSCAN instead of k-means on tf-idf embeddings yields an improvement in the $ARI$ score of 2.7\%. Using k-means instead of HDBSCAN on \emph{bert-avg} and \emph{bert-cls} embeddings results in slight improvements.

\textbf{UMAP.} On \emph{bert-avg} embeddings, applying a UMAP dimensionality reduction step before HDBSCAN outperforms k-means, with an $ARI$ score of 0.069. However, using a UMAP dimensionality reduction in combination with tf-idf results in a slightly reduced performance. We find that \emph{bert-avg} embeddings result in slightly better scores than \emph{bert-cls} when using UMAP.

\textbf{TF-IDF across Topics.} We further evaluate whether computing tf-idf within each topic separately leads to better performance than computing tf-idf across all topics in the data set. The observed slight deviation of the $ARI$ score for k-means and HDBSCAN in combination with UMAP matches our expectation that clustering algorithms then focus more on terms which distinguish the arguments from each other within a topic.

\textbf{Excluding Noise.} When excluding the HDBSCAN noise clusters (\emph{without noise}), we obtain an $ARI$ score of 0.140 and a $BCubed~F_1$ score of 0.439.

\textbf{Number of Arguments.} Figure~\ref{fig:Results:Scatterplot} shows the performance of the models HDBSCAN with tf-idf and k-means with \textit{bert-avg} with respect to the number of arguments in each topic. Both $ARI$ and $BCubed~F_1$ scores show a similar distribution for topics with different numbers of arguments, while the distributions of the homogeneity score show a slight difference for the two models. This indicates that the performance of the clustering algorithms does not depend on the number of arguments.

\textbf{Examples.} In Table \ref{tab:Results:BestTopics}, we show the topics with the highest $ARI$ scores for the best-performing k-means and HDBSCAN models.
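To make the evaluation protocol concrete, the per-topic clustering and scoring described above can be sketched as follows. This is an illustration on our part only; it assumes the \texttt{scikit-learn} and \texttt{hdbscan} packages, and the parameter values shown are placeholders rather than the settings used in our experiments.
\begin{verbatim}
# Sketch of the per-topic argument clustering evaluation
# (illustrative; packages and parameter values are assumptions).
import numpy as np
import hdbscan
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import (adjusted_rand_score,
                             homogeneity_score, completeness_score)

def evaluate_topic(arguments, aspect_labels):
    # tf-idf computed within the topic (cf. the "across topics" variant)
    X = TfidfVectorizer().fit_transform(arguments).toarray()
    pred = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(X)  # -1 = noise
    return (adjusted_rand_score(aspect_labels, pred),
            homogeneity_score(aspect_labels, pred),
            completeness_score(aspect_labels, pred))

# Scores are then averaged over all topics, e.g.:
# results = [evaluate_topic(args, labels) for args, labels in topics]
# print(np.mean(results, axis=0))
\end{verbatim}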
\begin{figure}[tb]
\centering
\includegraphics[width = 0.83\linewidth]{key_results/AS/hdbscan_scatterplot2}
\includegraphics[width = 0.83\linewidth]{key_results/AS/kmeans-bert_scatterplot2}
\caption{Argument clustering performance with respect to the number of arguments per topic for HDBSCAN with tf-idf embeddings and k-means with \emph{bert-avg} embeddings.}
\label{fig:Results:Scatterplot}
\end{figure}
\begin{table}[tb]
\centering
\caption{Top 5 topics by $ARI$ score for HDBSCAN and k-means.}
\label{tab:Results:BestTopics}
\begin{small}
\begin{tabular}{p{3.9cm} r@{}r}
\toprule
Topic & \#$Clust._{true}$ & \#$Clust._{pred}$ \\
\toprule
\multicolumn{3}{c}{\textbf{HDBSCAN based on tf-idf embeddings}}\\
\midrule
Rehabilitation vs retribution &2 &3 \\
Manned mission to Mars &5 &6 \\
New START Treaty &5 &4 \\
Obama executive order to raise the debt ceiling &3 &6 \\
Republika Srpska secession from Bosnia and Herzegovina &4 &6 \\
\toprule
\multicolumn{3}{c}{\textbf{k-means based on \emph{bert-avg} embeddings}}\\
\midrule
Bush economic stimulus plan &7 &5 \\
Hydroelectric dams &11 &10 \\
Full-body scanners at airports &5 &6 \\
Gene patents &4 &7 \\
Israeli settlements &3 &4 \\
\bottomrule
\end{tabular}
\end{small}
\end{table}

\textbf{Bottom Line.} Overall, our results confirm that argument clustering based on topical aspects is nontrivial and that high scores are still hard to achieve in real-world settings. Given the \emph{debatepedia} data set, we show that our unsupervised clustering algorithms with the different embedding methods do not yet cluster arguments into topical aspects in a highly consistent and reasonable way. This result is in line with the results of \citet{Reimers2019}, stating that even experts have difficulties identifying argument similarity based on topical aspects (i.e., subtopics). Considering that their evaluation is based on sentence-level arguments, it seems likely that assessing argument similarity is even harder for arguments comprised of one or multiple sentences. Moreover, the authors report promising results for the pairwise assessment of argument similarity when using the output corresponding to the BERT $[CLS]$ token. However, our experiments show that their findings do not apply to the \emph{debatepedia} data set. We assume that this is due to differences in argument similarity introduced by using prevalent topics in the \emph{debatepedia} data set rather than explicitly annotated arguments.
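For reference, the two sentence representations compared throughout this section, \emph{bert-cls} and \emph{bert-avg}, can be derived roughly as follows. This is a simplified sketch assuming the HuggingFace \texttt{transformers} library and the \texttt{bert-base-uncased} model; the pooling details may differ from our actual feature extraction.
\begin{verbatim}
# Sketch: deriving bert-cls and bert-avg sentence embeddings
# (model choice and pooling details are assumptions).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence):
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, n_tokens, 768)
    bert_cls = hidden[0, 0]       # output at the [CLS] token
    bert_avg = hidden[0].mean(0)  # average over all token outputs
    return bert_cls, bert_avg
\end{verbatim}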
\section{Introduction} It is increasingly common to come across tagged objects in our daily life. The tags can be text, icons, barcodes or QR-codes printed or attached on an object's surface. Using a tag is an effective way to enable object identification and augmentation. It can also help to link a physical object to a database with further information. This kind of augmentation is particularly helpful in providing design information for digitally fabricated objects, since they are often customized or personalized ~\cite{maia2019layercode, ettehadi2021documented}. Recent works show several promising ways to embed tags within digitally fabricated objects. Existing methods, however, have several issues. A common approach is to design a tag on the surface. This method can easily compromise the aesthetics of the object~\cite{maia2019layercode}. In comparison, the object design can be modified by extrusion or engraving for tagging~\cite{ettehadi2021documented}; however, this can also be obtrusive, and can limit the object's functionality. Alternatively, recent work has shown promising results in embedding information under the surface, using high-end or professional 3D printers (\textit{e.g., } PolyJet)~\cite{willis2013infrastructs,li2017aircode}. To date, however, such techniques have been unachievable with consumer-level FDM (Fused Deposition Modeling) printers, which are affordable, accessible and more suitable for many everyday use cases~\cite{shewbridge2014everyday}. To this end, we still lack an effective FDM 3D printer based tagging method that is printable with the object, and does not compromise its design aesthetics. We present \textit{InfoPrint}, a technique for invisibly embedding information (\textit{e.g., } tags) into 3D printed objects using FDM 3D printers. The embedded information is unobtrusive or invisible to the eye. Our technique does not alter the surface design of the object, allowing for more creative and secure designs. With \textit{InfoPrint}, it is possible to simply design and print information under the surface, without human intervention during or after printing. To read the information, we leverage two methods for different use cases. One method is based on the different thermal conduction rate between the air and the 3D printed material. This is possible as most objects printed with an FDM 3D printer consist of an infill pattern with the majority of the space being filled with air. The information under the surface can be revealed using a thermal camera after applying heat to the surface (\textit{e.g., } by hand-warming, as shown in Figure~\ref{fig:teaser}a and b). The second method is based on the fact that many plastic materials can be penetrated by or reflect near-infrared light~\cite{reich2005near}. By printing the information and the object using different materials or colors, the information can be read using a low-cost near-infrared device (Figure~\ref{fig:teaser}c and d). Our contribution is three-fold. First, we present a technique to embed information into a 3D printed object using consumer-level FDM 3D printers. Our method expands the design space for tagging 3D printed objects without compromising their appearance or shape. Second, we provide comprehensive design guidelines for embedding information into 3D printed objects addressing multiple application scenarios and use cases. 
Third, we provide several example applications using an embedded information scheme, including a novel way to interact with digitally fabricated objects using thermal cameras, and a secure and non-fungible way to autograph a 3D printed object as a unique digital fabrication work. \section{Related work} \subsection{Tagging 3D printed objects} \subsubsection{Tagging on the surface} \hfill \\ An intuitive and convenient way to tag an object is to attach a label on the surface. For 3D printed objects, such a label can also be printed as a part of the design or fabrication process~\cite{ettehadi2021documented}. For example, Harrison~\textit{et al. } proposed a notch pattern to tag a 3D printed object's surface. The notches can be swiped to generate a complex sound, which is then recorded by a microphone and decoded to a binary ID~\cite{harrison2012acoustic}. Similar work using a comb-like structure was presented by Savage~\textit{et al. }~\cite{savage2015lamello}. Besides acoustics, Maia~\textit{et al. } presented LayerCode, a scheme to embed optical barcodes by varying the colors of different layers, using a dual-color 3D printer~\cite{maia2019layercode}. However, LayerCode alters the appearance of the whole object, which can be undesirable. Although the authors demonstrated a method to make the information invisible, utilizing a customized material mixed with near-infrared dye and a modified 3D printer made the approach unrealistic for daily use. To address this issue, Delmotte~\cite{delmotte2020blind} proposed a less restrictive method by varying the surface layer thickness. The authors developed a tool to manipulate the G-Code for tagging a 3D printed object on the surface. However, the area with the tag resembles printing flaws, and may be removed by daily use or during post-processing (\textit{e.g., } polishing). This limitation also applies to the tagless method that utilizes subtle print artifacts or patterns for 3D printed object recognition~\cite{dogan2020gid}. In addition to binary tags mentioned above, existing works also show different ways to tag 3D printed objects with other textures. In particular, recent studies focus on incorporating textiles or fabrics in 3D objects to further expand the design space using FDM 3D printers with not only more applications, but also tagging methods by embedding textures on the printed objects. For instance, Rivera~\textit{et al. } demonstrated a technique to embed textiles into 3D printed objects, which can leverage well-developed embroidery techniques as a potential way for tagging~\cite{rivera2017stretching}. Besides embedding, Takahashi~\textit{et al. } presented a method to print textiles using a 3D weaving technique~\cite{takahashi2019printed}. The authors demonstrated that the fabrics can be printed using an FDM 3D printer by controlling the printer's head movements~\cite{takahashi2019printed}. Alternatively, Forman~\textit{et al. } showed that such 3D printed quasi-fabrics could also be printed by under-extrusion~\cite{forman2020defexfiles}. Both works showed that 3D printed textiles with patterned textures can potentially be used as visual tags on the surface. It is worth noting that textile-like methods are mostly suitable for flexible objects, and may not be appropriate for long-term or heavy use. Beyond visual patterns, researchers in the HCI community also developed methods to tag conductive patterns on the surface that can be read by a touchscreen. For example, Marky~\textit{et al. 
} showed a design to embed a conductive pattern on the bottom surface of an object. The pattern can be re-configured by the user and, thus, can be used for two-factor authentication in a tangible way~\cite{marky2020auth}. Similarly, Schmitz~\textit{et al. } presented Itsy-Bits, a fabrication pipeline to design small footprints for detecting tangibles on a capacitive touchscreen~\cite{schmitz2021itsybits}. However, their methods are limited to specific patterns and require user contact while reading, which limits their use for general information embedding applications. Even though `on-the-surface' tagging methods are beneficial in several ways, such as convenient labeling and intuitive reading, they have several important drawbacks. In particular, such a method can alter the appearance, shape or functionality of the object, and is less robust for long-term or frequent use.

\subsubsection{Tagging under the surface}
\hfill \\
In contrast to on-the-surface tagging methods, tagging an object under the surface is less intuitive or convenient to read. However, this method does not compromise the external design of the object, while also being more durable. A common approach involves embedding an RFID tag inside a digitally fabricated object~\cite{spielberg2016rapid}. However, such a method requires additional materials and procedures to perform embedding, as a fully printable technique is not currently possible~\cite{espalin20143d}. Our work addresses this issue and enables embedding information (\textit{i.e., } tagging) within a 3D printed object that is printable using an off-the-shelf FDM 3D printer.

A study by Li~\textit{et al. } is most closely related to our work~\cite{li2017aircode}. The authors presented AirCode, a technique to embed a QR-code-like pattern under the surface of a 3D printed object. The code can be printed using a PolyJet 3D printer with well-designed cavities inside the 3D model. The code can then be read by a monochrome camera and a projector, and decoded as the tag of the printed object (\textit{e.g., } metadata or ID). However, AirCode cannot be used with common consumer FDM 3D printers. This is because FDM 3D printers yield non-homogeneous printouts, and use relatively thick materials compared to expensive PolyJet 3D printers. Also, AirCode only embeds binary data, and requires assembly after printing, since a PolyJet 3D printer cannot print hollows (cavities) inside an object. Another similar work is InfraStructs~\cite{willis2013infrastructs}, an information embedding scheme for layered structures, allowing the embedding of not only binary data, but also icons and text. However, their method is also limited to PolyJet 3D printers and is sensitive to layer variations (i.e., non-uniform layer thickness), as the authors use a THz-TDS (terahertz time-domain spectroscopy) device for imaging. While there are prior examples of using FDM 3D printers to embed information, their design severely limits functionality in application and durability~\cite{silapasuphakornwong2015nondestructive,suzuki2017information}. Their methods are incapable of producing embedded information beyond binary data due to their restrictive design constraints between adjacent bits. Our approach does not have this limitation, and can produce binary data, text and icons using a more accessible setting. Other works using highly customized materials make the settings unrealistic for daily use~\cite{silapasuphakornwong2018information,silapasuphakornwong20193d,silapasuphakornwong2019technique}.
Beyond embedding information for imaging, recent works also include fabricating information as signals inside an object. For instance, Iyer~\textit{et al. } demonstrated a backscattering system that enables wireless analytics for 3D printed objects~\cite{iyer2018wireless}. The authors embedded antennas that can be switched on or off using a sophisticated mechanical design, triggered by user interactions. The information can then be captured by reading the signals backscattered by the antennas. In another example, Chadalavada~\textit{et al. } presented ID'em~\cite{chadalavada2018id}, which allows users to embed a conductive dot matrix under the surface that can be read by an array of inductive sensors. The conductive matrix works like an antenna array that manipulates the magnetic fields as signals, which are then decoded as the information. However, both of the aforementioned methods are limited to specific data types (categorical or binary), and may not be easily printed using common off-the-shelf 3D printers.

\subsection{Embedding objects with 3D prints}
In a broad sense of information embedding, significant effort has been devoted to embedding physical objects within 3D prints. In particular, researchers have focused on incorporating electronics with 3D printed objects to enable functionality or interactivity, \textit{e.g., } a camera~\cite{savage2013sauron}, a wireless accelerometer~\cite{hook2014making}, or a speaker~\cite{ishiguro2014printed}. A recent study also demonstrated novel use cases enabling 3D printing on cloth with SMA (shape-memory alloy) actuators~\cite{muthukumarana2021clothtiles}. Furthermore, researchers endeavor to provide tools and guidelines for designing inner structures to embed electronics, \textit{e.g., } for designing internal pipes~\cite{willis2012printed, savage2014series} or hollow tubes~\cite{tejada2020airtouch}, optimizing sensor placement~\cite{bacher2016defsense}, component placement for assembly~\cite{desai2018assembly}, or even for printed objects that are deformable~\cite{wang2020morphingcircuit, hong2021thermoformed, ko2021designing} or re-configurable for fast prototyping~\cite{schmitz2021ohsnap}. However, such methods are not fully printable yet and cannot be automated. Therefore, they still require relatively complex fabrication techniques with extra material costs, which is extravagant for simple tagging or information embedding tasks.

One way to overcome this non-printable disadvantage is to use the emerging conductive 3D printing materials. Recent works show potential to print conductive traces, circuits or surfaces within objects. For example, Schmitz~\textit{et al. } presented a design pipeline to print selected surfaces using conductive ABS (Acrylonitrile Butadiene Styrene) for sensing touched areas on a 3D printed object~\cite{schmitz2015capricate}. As an extension, the authors further showed a pipeline for embedding electrodes to fabricate a hover-, touch-, and force-sensitive object~\cite{schmitz2019trilaterate}. Besides embedding electronics with the traces, such a method can also be useful for various interactive scenarios in VR and AR~\cite{narazani2019extending}. Moreover, such a method can be adopted for heavily used functional objects utilizing emerging carbon-fiber-strengthened 3D printing materials. For example, Swaminathan~\textit{et al. } demonstrated a technique to fabricate conductive traces while keeping the object mechanically strong for heavy use~\cite{swaminathan2019fiberwire}.
In addition to solid conductive traces, recent work has adopted conductive liquids for fabricating circuits using a 3D printed stamp as the circuit mold~\cite{tokuda2021flowcuits}. Beyond rigid objects, existing works also show the feasibility of embedding conductive traces or electronics in 3D printed flexible objects, such as conductive fabrics~\cite{peng2015layered}, electrospun textiles~\cite{rivera2019desktop}, foldable structures~\cite{olberding2015foldio, yamaoka2019foldtronics}, or the flexible TPU (Thermoplastic Polyurethane) material with silver inks (\textit{i.e., } AgTPU)~\cite{valentine2017hybrid}. In particular, for flexible objects, researchers have strived to design computational structures that enable functionality or information embedding. For example, Schmitz~\textit{et al. } designed a flexible structure fabricated using TPU that can sense deformation as input information~\cite{schmitz2017flexibles}. Deformation structures can also be used with pneumatic devices for sensing user inputs~\cite{vazquez2015printing,he2017squeezapulse}. Nevertheless, existing methods are not fully printable and still require electronics to be attached~\cite{macdonald2016multiprocess}.

Beyond electronics, previous works also show other promising methods to embed everyday objects in 3D prints. In particular, Schmitz~\textit{et al. } demonstrated techniques to embed liquids inside a 3D printed object as changeable information, for sensing tasks such as tilting and motion~\cite{schmitz2016liquido, schmitz2018offline}. Such a method can also reduce fabrication time or material cost, and has great potential for information embedding~\cite{mueller2014fabrickation, chen2018medley, wall2021scrappy, sun2021shrincage}. Other than 3D printing, researchers have also proposed different digital fabrication based methods for information or object embedding, such as 3D sculpting~\cite{oh2018pep}, laser-cutting~\cite{takahashi2019printed, valkeneers2019stackmold}, paper printing~\cite{li2016paperid}, or spraying~\cite{hanton2020protospray}. In summary, existing works for embedding information inside a 3D printed object are not practical for general information embedding; they either require non-printable materials or cannot be applied to off-the-shelf consumer FDM 3D printers.

\section{Method overview}
\label{sec:method}
We present a technique to embed information under the surface of 3D printed objects. The embedded information itself is printable as part of the printed object and is not directly visible to the eye. In this section, we briefly describe how to design, read, and fabricate such objects.

\subsection{Information embedding}
Our design pipeline is shown in Figure~\ref{fig:design_overview}. In this paper, we use Autodesk Fusion 360~\cite{fusion360} as the 3D modeling tool, without requiring any modification or plugin. Thus, the pipeline can also be applied to other 3D modeling software. The whole design pipeline includes three steps:
\begin{enumerate}
\item \textit{Object modeling}: Model the desired object. For a mesh file (\textit{e.g., } an stl file), Fusion 360 provides a ``Mesh to BRep'' function that allows readily editing the mesh file.
\item \textit{Information modeling}: Model the necessary information (\textit{e.g., } the letter `A'). The information should be sketched and extruded as a 3D object.
\item \textit{Information embedding}: Embed the information model inside the object model. Here, we embed the model for ``A'' into the object.
The ``A'' model is then subtracted from the object model, while the ``A'' model itself is kept in place. In Fusion 360, this can be achieved by invoking the ``combine'' command. In mesh editing software, this can be achieved with a subtractive boolean operation (\textit{e.g., } in Blender~\cite{blender}).
\end{enumerate}
The designed bodies must be exported individually. In Fusion 360, the export option is ``one file per body'' for fabrication. This allows the slicing software to have different settings (such as materials) for the object model and the information model. For example, we demonstrate the fabricated object in Figure~\ref{fig:design_overview} (d)--(f), where the object is printed using PLA-Blue, while the information model is printed using PLA-White.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/design_overview.pdf}
\caption{Illustration of the design overview. \textbf{(a)} An object model to embed information inside. \textbf{(b)} The information model to be embedded. \textbf{(c)} The object model with information embedded. \textbf{(d)} The 3D printed object with information embedded. \textbf{(e)} The cross-section with information embedded. \textbf{(f)} The surface under which the information is embedded.}
\label{fig:design_overview}
\end{figure}

\subsection{Reading principles}
\label{subsec:principles}
Since we aim to embed information that cannot be directly seen on the surface of the object, the information needs to be imaged (visualized) for reading. In this paper, we leverage the following two approaches for imaging:
\begin{enumerate}
\item \textit{Thermal conduction}: As illustrated in Figure~\ref{fig:imaging_principles}~(a), after heat is transferred to the surface of an object, the heat flows from that surface down into the object. Because the object's inside is composed of two different materials, the heat permeates at different speeds, depending on the thermal conduction properties of the materials. As a result, the temperature of the surface changes at different speeds, resulting in a thermal pattern as a projection of the information model on the surface. Here, we consider the filament materials (\textit{e.g., } PLA or ABS) as one such material, while air is the other material (the spaces without infill).
\item \textit{Light penetration and reflection}: As illustrated in Figure~\ref{fig:imaging_principles}~(b), a 3D printed object can also be fabricated from two materials with different optical characteristics. In particular, we are interested in the case where the material on the surface layers can be penetrated while the other material inside the object is reflective. By transmitting light onto the surface of the object and measuring the reflected light, we can create an image of the reflection pattern. Further, as we aim to embed information unobtrusively, we utilize near-infrared light, which is more capable of penetrating and distinguishing different materials than visible light~\cite{reich2005near}. For 3D printing, we can use filaments of two different colors (such as PLA-blue and PLA-white).
\end{enumerate}
In this paper, we demonstrate how to read the embedded information using a thermal camera and a near-infrared scanner, respectively. We use the Optris Xi 400\footnote{\url{https://www.optris.com/optris-xi-400}} thermal camera, with wavelengths $8~\mu m$ -- $14~\mu m$ (mid-far infrared). We also use the DLP NIRscan Nano~\cite{dlpnirscanonline} near-infrared scanner. For raster scanning, we mount the near-infrared scanner on an xy-plotter, controlled by custom-built software.
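A minimal sketch of such a raster scan loop is shown below. This is an illustration on our part rather than our actual control software; \texttt{move\_to()} and \texttt{read\_reflectance()} are hypothetical stand-ins for the plotter and scanner drivers, and the step size and scan area are example values.
\begin{verbatim}
# Sketch of the raster scan assembling a reflectance image
# (driver calls are hypothetical stubs; values are examples).
import numpy as np

def move_to(x_mm, y_mm):      # stand-in for the xy-plotter driver
    pass

def read_reflectance():       # stand-in for the NIR scanner driver
    return 0.0

STEP_MM = 1.0
WIDTH_MM, HEIGHT_MM = 40.0, 40.0
nx, ny = int(WIDTH_MM / STEP_MM), int(HEIGHT_MM / STEP_MM)
image = np.zeros((ny, nx))

for iy in range(ny):
    for ix in range(nx):
        move_to(ix * STEP_MM, iy * STEP_MM)   # position the scanner
        image[iy, ix] = read_reflectance()    # one point measurement
\end{verbatim}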
We also provide a supplementary video demonstrating our system with both reading devices.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/imaging_principles.pdf}
\caption{Illustrations of the two reading approaches. The bottom part demonstrates the imaging patterns.}
\label{fig:imaging_principles}
\end{figure}

\subsection{Fabrication methods}
We propose one fabrication technique for thermal imaging, and another for near-infrared imaging (Figure~\ref{fig:imaging_principles}). Both fabrication methods use the same design models (\textit{i.e., } the same stl files for slicing).
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/fabrication_methods.pdf}
\caption{Fabrication methods. \textbf{(a)} Illustration of the vertical cross-section. \textbf{(b)} Normal fabrication method. \textbf{(c)} The ``surface-join'' fabrication method. \textbf{(d)} The ``surface-fill'' fabrication method.}
\label{fig:fabrication_overview}
\end{figure}
\begin{enumerate}
\item \textit{Surface-join, for thermal imaging}: As illustrated in Figure~\ref{fig:fabrication_overview}~(c), ``surface-join'' denotes that the surface layers of the object and the embedded information model are joined together. In slicing software, this is done by thickening the top or bottom layer of both the object model and the information model. This enables thermal conduction between the object's surface layers and the information model.
\item \textit{Surface-fill, for near-infrared imaging}: As illustrated in Figure~\ref{fig:fabrication_overview}~(d), ``surface-fill'' denotes that the surface layers of the object are filled down to the top of the 3D model representing the information. In slicing software, this is done by thickening the surface layer of the object model only. This targets near-infrared imaging and prevents light scattering due to the non-uniform infill material (filament and air) between the surface layers and the top of the information model used for reflection.
\end{enumerate}
We note that our two methods are mutually exclusive. Depending on the specific application and use case, either method should be chosen for fabrication. We show various examples in the following section and provide comprehensive design guidelines in Table~\ref{tab:design_space}.

\section{Example Applications}
\label{sec:examples}
\textit{InfoPrint} enables various applications by embedding information inside 3D printed objects using off-the-shelf FDM 3D printers and filaments, without requiring extra procedures (such as assembly) or software modifications. We exemplify applications including tagging 3D models, interactive thermal displays, and hand-signing (autographing) 3D prints. We present the design details in Section~\ref{subsec:design_guideline} below. We also demonstrate our applications in the supplementary video.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/demo_gears.pdf}
\caption{Demonstration of different tagging methods by printing. The example is to 3D print a tag for a gear with the part number (01X). \textbf{(a)} The tag is printed on the surface using another color. \textbf{(b)} The tag is printed above the surface by extruding upwards. \textbf{(c)} The tag is printed by extruding the tag inwards (like engraving). \textbf{(d)} The tag is printed under the surface.
\textbf{(e)} The tag is read under the thermal camera for the object in (d), after the surface is warmed by hand. \textbf{(f)} Illustration of multiple gears with labels on the surface. \textbf{(g)} Illustration of malfunctioning gears tagged by extrusion in (b). \textbf{(h)} Illustration of visual contaminated tagged in (c).} \label{fig:demo_gears} \end{figure} \subsection{Tagging 3D models} Conventional tagging methods for 3D printed objects are usually performed on the surface of the object. Such a method can alter not only the appearance of the object, but also constrains its shape or even functionality (Figure~\ref{fig:demo_gears}). In particular, for comparison, we include examples using the most common conventional tagging methods: (a) Tagging on the surface using another color. (b) Tagging by extruding outwards. (c) Tagging by extruding inwards (like engraving). Furthermore, we consider two types of tags as seen below. \subsubsection*{Serial number: } Serial numbers can be used to tag different objects with similar designs. For the examples shown in Figure~\ref{fig:demo_gears}f and g, we include three gears composed in a gearbox, tagged as ``01X'', ``02X'' and ``01Y''. The gearbox itself can be used as part of mechanical design, educational activity or exhibition. For example, in an education setting, \textit{InfoPrint} could let students try and figure out the correct placement of the gears, whilst also embedding hidden information as hints for the students to read in case if they need help. Compared to other methods, our method is unobtrusive and does not affect the design functionality. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/demo_tools.pdf} \caption{Examples of tagging 3D printed tools with binary data. Embedded tags are robust for frequent and heavy use, compared to tags on the surface. } \label{fig:demo_tools} \end{figure} \subsubsection*{Binary data: } Binary data can be used to store metadata for the object, \textit{e.g., } the key or ID for a database entry that stores detailed information of a print or the design. For example, as shown in Figure~\ref{fig:demo_tools}, a tool can be printed using different materials (\textit{e.g., } ABS or PLA) or different colors. The objects may not be easily distinguished after being printed. Since a tool can be frequently and heavily used, tags on the surface can be easily damaged (\textit{e.g., } due to scratching or abrasion). In contrast, \textit{InfoPrint} embeds information under the surface and reduces the likelihood of damage to the tags. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/demo_thermal_display.pdf} \caption{Example of a 3D printed interactive thermal display. \textbf{(a)} The model design with different information embedded on the left part and the right part respectively. \textbf{(b)} The printed object with information embedded. \textbf{(c)} Selectively hand-warm the left part of the object. \textbf{(d)} The information on the left side is imaged under the thermal camera, while the information on the right side remains hidden.} \label{fig:demo_display} \end{figure} \subsection{Interactive thermal displays} To read the embedded information, one of the approaches we leverage is thermal conduction (Section~\ref{subsec:principles}). The information can be imaged after applying heat on the surface, \textit{e.g., } through warm hands. This enables a new way of interacting with a 3D printed object. 
As demonstrated in Figure~\ref{fig:demo_display}, we embed two pieces of information into an object. The information can be selectively revealed by interacting with a specific part of the object (left or right), while the other part remains hidden. \subsubsection*{Security code. } In practice, such an interaction can be used for security purposes. For example, most credit cards have a CVV2 (Card Verification Value 2) code on the back side. Such a number is used for reducing credit card fraud when the card is not present for the transaction~\cite{gupta2017credit}. However, the CVV2 code is usually printed on the card, affording easy malicious access and creating potential for data breaches. Similarly, grid authentication, where the user holds a grid card containing a matrix of codes for two-factor authentication, is also subject to data breach risks, since the grid card codes are visible and can be photographed~\cite{ali2019consumer}. \textit{InfoPrint} could address the above-mentioned issues as it hides the secret information (\textit{i.e., } codes) under the surface. The codes can be revealed for a short period of time (detailed in Section~\ref{subsec:eval_thermal}) by interacting with the object whenever required. In addition, the location of the information can be known only to the user, further improving the security of the data. \subsubsection*{Hidden tokens for social activities. } \textit{InfoPrint} can also be used in social settings such as escape rooms and board games, with information that is only temporarily uncovered. For example, objects with hidden tokens can be placed or installed in an escape room as concealed clues for puzzles. The token can only be read after interacting with the object while holding a thermal camera (\textit{e.g., } a recent smartphone equipped with a thermal camera such as~\cite{catphones}, or a mobile thermal camera such as~\cite{flir}). Such a design can also be used for board games, where players can be assigned different roles using tokens or cards. With \textit{InfoPrint}, we can create hidden tokens that can only be seen under a thermal camera after interacting with the token. For instance, a player acting as the ``oracle'' can hold a thermal camera to identify other players' roles. Potentially, this may provide more possibilities for designing such social activities, compared to using visible or covered tokens. \subsection{Extension: autographing unique or non-fungible 3D prints} Finally, as an extension to information embedding, \textit{InfoPrint} enables autographing inside the 3D prints. The 3D prints can then be treated as a unique object (such as an artwork), or a non-fungible object, as the autograph cannot be erased or modified (unlike an autograph on the surface). As a demonstration, we show a robot model with an autograph under its back surface in Figure~\ref{fig:demo_sign}. The autograph is then imaged using the near-infrared scanner. Not only autographs can be encoded using \textit{InfoPrint}, but also other hand-written information, \textit{e.g., } text, icons, or binary data drawn precisely by hand. In addition, the handwritten information (\textit{e.g., } the autograph) can be modeled and shared with the object design as a unique digital signature of the design. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/demo_sign.pdf} \caption{Demonstration of autographing inside a 3D printed object. \textbf{Layer 10}: The autograph is signed by pausing the print job. The job is resumed after autographing.
\textbf{Layer 16}: A reflective plate is printed using PLA-White. \textbf{Layer 166}: The job is finished. After printing, the autograph can be imaged by the near-infrared scanner.} \label{fig:demo_sign} \end{figure} \section{Evaluation and Validation} \label{sec:eval} Finally, we evaluate our information embedding method to better understand its limitations, including using two different imaging approaches: thermal imaging and near-infrared imaging. A comparison between the two approaches is shown in Table~\ref{tab:thermal_vs_nirs}. Furthermore, based on our results, we derive design guidelines for information embedding and reading using an FDM 3D printer for different use cases. The guidelines are included at the end of this section (Table~\ref{tab:design_space}). \subsection{Samples} For systematic evaluation we adopt information encoding conventions previously used in literature~\cite{willis2013infrastructs, li2017aircode}, and consider a $4\times4$ binary matrix as the embedded information. Such a matrix can also be considered as a bitmap for non-binary data. The binary matrix is randomly generated, with 8 bits being $1$s and the other 8 being $0$s, with the top-left bit fixed as $1$ as the anchor bit (Figure~\ref{fig:sample_groundtruth}). The matrix is embedded into a cube with dimensions $W \times D \times H = 30 \times 30 \times15~mm$. We then fabricate the cube with different print settings, including: \begin{enumerate} \item Information depth ($d$): The minimal distance between the surface and the information surface. As illustrated in Figure~\ref{fig:fabrication_overview}. \item Information density ($X$): The block size (\textit{i.e., } the size of each bit or pixel) in the matrix, measured by $mm~per~pixel$. \item Infill percentage: The infill percentage of the model. Both printer's heads use the same value. We vary the infill percentage as $10\%$, $20\%$, $40\%$ and $80\%$, considering the use case of modeling, standard printing tools, functional tools and heavily used tools, respectively. \end{enumerate} \input{tables/thermal_vs_nirs} For fabrication, all samples are printed using a low-cost off-the-shelf dual-head FDM 3D printer (FlashForge Creator Pro 2\footnote{\url{https://www.flashforge.com/product-detail/51}}). The information models (\textit{i.e., } the matrix) are printed using PLA-white filaments. The cube models are printed using PLA-black filaments and PLA-blue filaments, for thermal imaging and near-infrared imaging, respectively. The colors are commonly available and chosen based on our preliminary tests of performance. We also validate our results using other colors. We refer to the example applications demonstrated in Section~\ref{sec:examples} above. Other parameters are fixed for feasibility and consistency (for example, we chose honeycomb for an infill pattern, and set the information height to $1~mm$. Nevertheless, we consider these parameters as negligible). \subsection{Thermal imaging} \label{subsec:eval_thermal} \subsubsection{Reading process} \hfill \\ We first perform a heat transfer onto the surface of an object embedded with information. The information is revealed when the heat dissipates into the under-surface layers of the object. We record the whole process using the thermal camera and save the data as csv files, at the fastest rate specified by the thermal camera software (5 - 7 frames per second). 
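Each recorded frame is then passed through the two-stage decoding pipeline detailed step by step below. For reference, the following is a minimal sketch of that pipeline assuming OpenCV (4.x) and NumPy; the function and variable names are illustrative only, and the bit polarity may need to be inverted depending on the filament materials and the direction of heat flow.
\begin{verbatim}
import numpy as np
import cv2

def decode_thermal_frame(frame, spacing):
    """frame: 2D array of temperatures from one csv export;
    spacing: predefined distance between sample points (pixels)."""
    # 1. Normalisation: denoise with a 5x5 Gaussian blur, rescale to 0-255.
    img = cv2.GaussianBlur(frame.astype(np.float32), (5, 5), 0)
    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # 2./3. First-stage Otsu binarisation and contour detection,
    #       used only to locate the outline of the object (the cube).
    _, b1 = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    outer, _ = cv2.findContours(b1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(outer, key=cv2.contourArea))
    # 4. Crop the normalised frame to the largest contour (the object).
    crop = img[y:y + h, x:x + w]
    # 5./6. Second-stage Otsu binarisation and contour detection on the crop.
    _, b2 = cv2.threshold(crop, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    inner, _ = cv2.findContours(b2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # 7. Centre of the top-left contour is the anchor; sample a 4x4 grid.
    centres = [(bx + bw / 2, by + bh / 2)
               for bx, by, bw, bh in (cv2.boundingRect(c) for c in inner)]
    ax, ay = min(centres, key=lambda p: p[0] + p[1])
    bits = np.zeros((4, 4), dtype=int)
    for i in range(4):
        for j in range(4):
            bits[i, j] = int(b2[int(ay + i * spacing),
                                int(ax + j * spacing)] > 0)
    return bits

# Example (assuming one csv per frame, comma-separated temperature values):
# bits = decode_thermal_frame(np.loadtxt("frame_042.csv", delimiter=","), 20)
\end{verbatim}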
For each frame, the following two-stage decoding pipeline is performed to decode the matrix into binary data: \begin{enumerate} \item \textit{Normalization}: After reading the frame data, we first denoise the frame using a Gaussian blur filter (window size = $5\times5$), and normalize the values to the range between 0 and 255. \item \textit{Binarization 1}: The frame is binarized using Otsu's method. Since the thermal imaging varies in time, we use Otsu's method instead of a selected threshold for better robustness. \item \textit{Contour detection 1}: The contour detection algorithm is applied to the binary frame. This step is to detect the contour of the object (\textit{i.e., } cube). \item \textit{Cropping}: The normalized frame is cropped using the detected contour with the largest area. \item \textit{Binarization 2}: The cropped frame is binarized using Otsu's method. Since Otsu's method is based on the histogram of the image, this second-stage binarization can help the algorithm to better find the threshold and further increase the robustness of the decoding process. \item \textit{Contour detection 2}: The contour detection algorithm is applied to the cropped binary frame. \item \textit{Sampling and decoding}: The center of the top-left contour is used as the anchor point for sampling. A $4 \times 4$ sampling matrix is then derived and decoded to binary data. For evaluation, the spacing between sample points is predefined. \end{enumerate} A demonstration of thermal imaging and decoding results are shown in Figure~\ref{fig:eval_thermal_imaging}~(top). We note that the steps above are only for evaluation purposes. In practice, this may not be required for human-readable non-binary information as it does not require decoding (\textit{e.g., } board game token and interactive display). \subsubsection{Data collection} \hfill \\ We first test the imaging results at different temperature conditions. To avoid interference from other parameters, we maintain constant information depth, information density (size) and infill percentage as $1~mm$, $5~mm$ and $10\%$ respectively. Furthermore, considering a practical scenario, we keep the 3D printed object at room temperature ($27\pm2\degree$C). We then prepare four palm-sized bags of sand (made of 100\% cotton calico) at four temperature conditions: ``cold'' ($10\pm2\degree$C), ``cool'' ($20\pm2\degree$C), ``warm'' ($40\pm2\degree$C) and ``hot'' ($50\pm2\degree$C). The sand bags were heated overnight for 12 hours in a temperature-controlled warm-cool dual-mode car fridge to ensure the desired temperature was reached. The temperature errors are caused by the highly dynamic nature of heat dissipation and measurement errors of the thermal camera. Each sand bag is then placed and pressed on top of the cube for three seconds to ensure even and sufficient contact (the surface under which the matrix is embedded). The whole process is recorded by the thermal camera for $60$ seconds, resulting in around $360$ frames collected per condition (varies due to the thermal imaging software). In addition to the four above-mentioned controlled conditions, we include an example for practical use -- using a hand to warm-up the cube to reveal the information. Because hand temperature varies for different people at different times of the day~\cite{zontak1998dynamic}, we first regulate the hand temperature using an air-activated hand-warmer. 
The regulated hand temperature is $35\pm2\degree$C, which is within the normal human temperature range~\cite{hutchison2008hypothermia}. In practice, instead of using an air-activated hand-warmer, this step can be done in other ways, \textit{e.g., } through hand rubbing. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/eval_thermal.pdf} \caption{Illustration thermal imaging results. The example demonstrates the hand-warming condition for thermal transfer. The colors in the histograms of different bins represent the colormap for the thermal images. } \label{fig:eval_thermal_imaging} \end{figure} \subsubsection{Reading window} \hfill \\ For each frame, the decoding algorithm is executed to detect the binary matrix and calculate the accuracy as the number of bits correctly retrieved divided by 16. Since the thermal camera is very sensitive and requires continuous calibration while recording (performed automatically), a few frames are corrupted before or during calibration~\cite{prakash2006robust}. We exclude those corrupted frames by removing the outliers with performance below the $0.2$ quantile. We note this phenomenon does not impact practical use of the camera as it only affects the frames imminent to the calibration. Finally, we calculate the reading window lengths as the duration of error-free decoding (when accuracy = 1.0). The results are shown in Figure~\ref{fig:eval_thermal_imaging}. As an example, we demonstrate the reading process using the hand-warming condition that is more practical than using a sandbag. Evaluation results for all conditions are included in Appendix Figure~\ref{fig:eval_thermal_acc}. We observe that under all conditions, the embedded information can be successfully read immediately after heat transfer. The maximal error-free reading window length is 12 seconds after the sandbag leaves the surface of the cube, achieved by ``cold'', ``cool'' and ``warm'' conditions. The reading window lengths for ``hot'' and ``hand'' conditions are 9 seconds and 8 seconds respectively. This may be caused by the variation of heat dissipation under different conditions~\cite{bergman2011fundamentals}. In particular, the heat dissipates faster with larger temperature differences \footnote{Rate of heat flow = $-kA \cdot \Delta T / \Delta x$, where $k$ is the thermal conductivity, $A$ is the heat emitting area, $\Delta T$ is the temperature difference and $\Delta x$ is the material thickness. In our experiments, $\Delta T$ varies.}. Also, for the ``hot'' condition, the heat dissipates to both the ambient air and into the 3D object. \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{figures/thermal_infill.pdf} \caption{Illustration of thermal imaging with different infill percentages. The figure on the right shows the accuracy in time for $20\%$ infill percentage.} \label{fig:eval_thermal_infill} \end{figure} \subsubsection{Other parameters} \hfill \\ \textit{Infill percentage.} We further test the thermal imaging at different infill percentages. The initial thermal transfer is applied by hand-warming. Our reading succeeds at infill percentage of $20\%$ and fails at $40\%$. The result is expected because a higher infill percentage acquires more heat. The more heat dissipates to the infill prints, the smaller the temperature difference is between the areas embedded with information model and those without, yielding more blurred thermal images. The results are shown in Figure~\ref{fig:eval_thermal_infill}. 
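For completeness, the bookkeeping that turns the per-frame accuracies into a reading-window length can be sketched as follows (assuming NumPy; here the window is taken as the longest contiguous error-free run, and the exact outlier handling in our scripts may differ slightly):
\begin{verbatim}
import numpy as np

def reading_window(accuracies, timestamps):
    """accuracies: per-frame fraction of the 16 bits decoded correctly;
    timestamps: frame times in seconds. Returns the window length."""
    acc = np.asarray(accuracies, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    # Drop frames corrupted around the camera's self-calibration:
    # remove outliers below the 0.2 quantile of the accuracy values.
    keep = acc >= np.quantile(acc, 0.2)
    acc, t = acc[keep], t[keep]
    # Longest contiguous run of error-free (accuracy == 1.0) decoding.
    best, start = 0.0, None
    for a, ti in zip(acc, t):
        if a == 1.0:
            start = ti if start is None else start
            best = max(best, ti - start)
        else:
            start = None
    return best
\end{verbatim}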
\vspace{1em} \noindent\textit{Information density and depth.} Our reading fails at higher information density or greater depth (information density $X = 4~mm~per~pixel$ and depth $d = 2~mm$). Similarly, this is limited by the principle of heat dissipation. For a higher information density, the infill prints take relatively more heat. While for the information in a greater depth, the heat dissipates through a longer path to the information model, resulting in more heat loss and less temperature differences between areas embedded with information and those without. The results are illustrated in Appendix Figure~\ref{fig:eval_thermal_w4} and \ref{fig:eval_thermal_d2}. \subsection{Near-infrared imaging} \label{subsec:eval_nirs} \subsubsection{Reading process} \hfill \\ As briefed in Section~\ref{sec:method}, the samples are raster scanned by a near-infrared scanner mounted on an xy-plotter. For scanning, we set the raster step size as $1~mm$, and home the scanner to the pre-defined area. The scanning resolution is $24\times24$, resulting $576$ near-infrared spectra for each image. The scanner's settings are identical to literature~\cite{klakegg2018assisted,jiang2019probing,jiang2021user} ($900~nm$ - $1700~nm$ wavelength range, $228$ wavelength resolution, $7.03~nm$ light pattern width, and $0.635~ms$ exposure time). As a result, the raw dimension for a near-infrared image is $24\times24\times228$. For each image, the following processing pipeline is performed: \begin{enumerate} \item \textit{Normalization}: The mean value of each spectrum is computed across the near-infrared wavelengths as the pixel value. Then the image is normalized to the range between 0 and 255. \item \textit{Super-resolution}: We adopt a pre-trained deep-learning based super-resolution model to upsample the image by four times~\cite{dong2016accelerating}. Since the raster-scanned resolution is low, this step can enhance the imaging quality, yielding a higher resolution of $96\times96$. \item \textit{Binarization}: The image is binarized using a simple thresholding method. We empirically select $0.4$ of the maximal value as the threshold for all images. \item \textit{Contour detection and decoding}: The contour detection and decoding algorithm is applied to the binary image. This step is identical to Step (6) and (7) for thermal imaging. \end{enumerate} Similar to the thermal imaging evaluations, we calculate the decoding accuracy as the performance for each condition. \subsubsection{Information depth} \hfill \\ We first evaluate the reading accuracy at different information depths. The information density (block size) and infill percentage are fixed to $5~mm$ and $10\%$ respectively. As clarified in Section~\ref{sec:method}, we use the surface-fill technique to fill the layers between the information model and the surface. The results are shown in Figure~\ref{fig:eval_nirs_depth}. It can be observed that the image quality decreases as the depth increases. For decoding, we succeeded in reading the binary data until depth $d=3~mm$. As scarce near-infrared lights penetrate deeper and reflect back ($d>3~mm$), the images become too blurry to read. It is worth noting that the information is visible with depth $d=1~mm$. Therefore, in practice, a depth $d\ge2~mm$ is suggested. We show more visibility tests in Section~\ref{subsec:visibility} and provide guidelines in Table~\ref{tab:design_space}. 
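For reference, the near-infrared processing pipeline described above can also be condensed into a short script; the sketch below assumes OpenCV (4.x) and NumPy, bicubic upsampling stands in for the pre-trained super-resolution network, and the names are illustrative.
\begin{verbatim}
import numpy as np
import cv2

def decode_nir(cube, spacing):
    """cube: raster scan of shape (24, 24, 228); spacing: predefined
    distance between sample points in the upsampled image (pixels)."""
    # 1. Collapse the spectral axis and normalise to an 8-bit image.
    img = cube.mean(axis=2)
    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # 2. Upsample by 4x (96 x 96); stand-in for the super-resolution model.
    img = cv2.resize(img, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)
    # 3. Binarise with a fixed threshold of 0.4 of the maximum value.
    _, binary = cv2.threshold(img, 0.4 * float(img.max()), 255,
                              cv2.THRESH_BINARY)
    # 4. Centre of the top-left contour is the anchor; sample a 4x4 grid.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centres = [(x + w / 2, y + h / 2)
               for x, y, w, h in (cv2.boundingRect(c) for c in contours)]
    ax, ay = min(centres, key=lambda p: p[0] + p[1])
    bits = np.zeros((4, 4), dtype=int)
    for i in range(4):
        for j in range(4):
            bits[i, j] = int(binary[int(ay + i * spacing),
                                    int(ax + j * spacing)] > 0)
    return bits
\end{verbatim}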
\begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{figures/eval_nirs_depth.pdf} \caption{Near-infrared imaging and decoding results with different information depth.} \label{fig:eval_nirs_depth} \end{figure} \subsubsection{Information density} \hfill \\ Next, we test the information density by varying the block size of the information matrix. The information depth and infill percentage are fixed to $d=2~mm$ and $10\%$ respectively. The results are shown in Figure~\ref{fig:eval_nirs_size}. Our reading method can decode the data from block sizes as small as $X=3~mm$. Although the matrix itself can be successfully detected until $X=1~mm$, the decoding results are not perfect and the retrieved images are blurry. \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{figures/eval_nirs_size.pdf} \caption{Near-infrared imaging and decoding results with different block sizes (information density).} \label{fig:eval_nirs_size} \end{figure} \subsubsection{Infill percentage} \hfill \\ Further, we evaluate the near-infrared imaging with different infill percentages. Aligned with the thermal imaging evaluation, we vary the infill percentages: $10\%$, $20\%$, $40\%$ and $80\%$. The information depth and block size are fixed at $d=2~mm$ and $X=5~mm$ respectively. The results are shown in Figure~\ref{fig:eval_nirs_infill}. The imaging results are similar, with perfect reading results for all conditions. This result confirms that the effect of the infill parameter is negligible for near-infrared imaging, as we use the aforementioned ``surface-fill'' technique for fabrications. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/eval_nirs_infill.pdf} \caption{Near-infrared imaging with different infill percentages. \textbf{(a)} Cross-section for different infill percentages. Photos are transposed for illustrations. \textbf{(b)} Imaging (top row) and reading (bottom row) results. \textbf{(c)} Reading accuracy for the samples.} \label{fig:eval_nirs_infill} \end{figure} \subsubsection{Color} \hfill \\ In addition, we test the near-infrared imaging method using different colors. In particular, we vary the material color for the object (\textit{i.e., } the cube) while keeping the color for the information body as white. Five colors are chosen for testing: blue, gray, orange, red and black. We demonstrate the results in Figure~\ref{fig:eval_nirs_color}. For the non-black colors, the readings are all accurate. While for the black color, we cannot extract any information inside using near-infrared light. In principle, the black material absorbs the majority of light (both visible and infrared), and reflects very little. This characteristic makes the black material ideal for information embedding using thermal imaging, as we tested above. In contrast, the non-black materials are in fact translucent. Furthermore, near-infrared light can penetrate deeper than visible light, as we clarified in Section~\ref{sec:method}. This characteristic makes non-black colors more suitable for information embedding using the near-infrared scheme. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/eval_nirs_color.pdf} \caption{Near-infrared imaging and decoding results with different colors. \textbf{(a)} Samples printed using different colored materials. Photos are transposed for illustrations. \textbf{(b)} Imaging (top row) and reading (bottom row) results. 
\textbf{(c)} Reading accuracy for the samples.} \label{fig:eval_nirs_color} \end{figure} \subsection{Visibility to the human eye} \label{subsec:visibility} Finally, we test the visibility of information embedding using different colors, as we aim to embed information in an unobtrusive manner. For the visibility tests, the following samples are prepared, considering to be practical scenarios: \begin{enumerate} \item Thermal imaging samples: For thermal imaging, we fabricate the samples with information depth $d=1~mm$, with the ``surface-join'' fabrication technique. The same material is used for both the information model (\textit{i.e., } the matrix) and the object model (\textit{i.e., } the cube). \item Near-infrared imaging samples: For near-infrared imaging, we vary the information depth in $d=1~mm$, $2~mm$ and $3~mm$. The information model is printed using PLA-white, while the object model is printed using different colors. \end{enumerate} \input{tables/visibility_table} For both thermal imaging samples and near-infrared imaging samples, we use the same color set above for the visibility test (\textit{i.e., } blue, gray, orange, red and black). In total, ($1$ thermal sample $+$ $3$ near-infrared samples) $\times$ $5$ colors $=$ $20$ samples are fabricated for the visibility tests. We classify the visibility into ``visible'', ``unobtrusive'' and ``invisible'', aligned with the literature~\cite{li2017aircode,delmotte2020blind}. For classification, two researchers first label the samples independently. For the controversial samples, a third researcher is included to decide the final labels independently. The ground-truth information is provided for reference. All samples are inspected in a well-illuminated office (illuminance $>350~lx$). The results are shown in Table~\ref{tab:visibility_result}. We run a Cohen's kappa test to measure the inter-rater reliability. The Cohen's kappa coefficient $\kappa = 0.76$, showed substantial agreement for the initial rating (3 out of 20 labels are controversial). For the samples printed by the ``surface-fill'' technique, we observe that the embedded information can be directly seen for depth $d=1~mm$. Whereas the embedded information is invisible for depth $d\ge2~mm$, except for the orange color (unobtrusive). For the ``surface-join'' case, only the black samples are invisible, while the blue and red samples are unobtrusive. \input{tables/design_space} \section{Discussion} \subsection{Design guidelines} \label{subsec:design_guideline} We have systematically explored multiple ways of embedding information in 3D objects, and provide a thorough evaluation of multiple parameters for fabrication and reading. Given our evaluation results, we now provide design guidelines for embedding information into 3D printed objects using an FDM 3D printer, as summarized in Table~\ref{tab:design_space}. We cover four design aspects for embedding information into a 3D printed object as follows: \begin{enumerate} \setlength{\itemsep}{0.5em} \item \textit{Appearance: Whether the embedded information can be seen directly. } For designs with appearance-related constraints, we recommend to embed information at a depth of $d\ge2~mm$ with colored materials (refer to the example application shown in Figure~\ref{fig:demo_tools}). Alternatively, if the design can be fabricated using a black material, the information depth should be $d\le1~mm$ (refer to the example application in Figure~\ref{fig:demo_display}). \item \textit{Functionality: Whether the printed objects are functional. 
} A typical functional print requires a high infill density. Since the maximal infill percentage for thermal imaging is $20\%$, we recommend using colored material together with near-infrared imaging for embedding information into a functional printed object (refer to the example application shown in Figure~\ref{fig:demo_tools}). \item \textit{Information density: Whether the embedded information requires high density. } For information density higher than (or pixel size smaller than) $5~mm~per~pixel$, we recommend the embedding technique with near-infrared imaging, using the ``surface-fill'' fabrication technique. In addition, a deep-learning based super-resolution method can further increase the image quality for high-density information (refer to the example application shown in Figure~\ref{fig:demo_sign}). \item \textit{Reading speed: Whether an instant reading is required. } For embedded information that requires instant reading, we recommend embedding information to be read with thermal imaging. Also, a thermal camera is more ubiquitous than a near-infrared device (\textit{e.g., } a mobile thermal camera~\cite{flir} or a smartphone with a thermal camera~\cite{catphones}). \end{enumerate} \subsection{Limitations and future work} \label{subsec:limitation} We note several limitations of \textit{InfoPrint}, which need to be addressed in our future work and might further expand the design space and use cases. \begin{itemize}[leftmargin=*] \setlength\itemsep{0.5em} \item We only tested \textit{InfoPrint} on an FDM printer. In principle, our method can also be applied to other 3D printing and digital fabrication techniques, such as SLA (stereolithography) 3D printers and laser cutters, which are also popular and feasible for practical use. In particular, incorporating multiple digital fabrication tools is considered a promising way to further expand the design space and accelerate the design pipeline~\cite{beyer2015platener}. \item The information density of \textit{InfoPrint} is not very high (up to $3~mm~per~pixel$). This limitation is mainly caused by the resolution of our imaging devices. As a complement, we utilized several computer vision algorithms to enhance the imaging quality. In the short term, the issue can be alleviated by tuning and adopting new computer vision methods, which have evolved rapidly in recent years~\cite{voulodimos2018deep}. In the long term, as the hardware improves, so will the imaging quality, allowing higher information density in the future. \item Also, the evaluation only covered machine-readable binary data. For texts and icons, readability may vary among users. As we focus on the embedding technique in this work, it would be beneficial to run a user study to test the readability of different information types. \item Our reading devices are currently not mobile. For thermal imaging, there are several mobile thermal cameras that are commercially available~\cite{flir}. We also observe the emergence of smartphones integrated with a thermal camera~\cite{catphones}. Mobile thermal cameras and smartphones with built-in thermal cameras should be tested in future work to determine the feasibility of \textit{InfoPrint} in everyday scenarios. Furthermore, for near-infrared imaging, we note that the already-popular depth cameras in many recent smartphones, which use near-infrared wavelengths~\cite{grunnet2019intel}, can potentially be used as reading devices for \textit{InfoPrint}.
Future studies can focus on developing mobile apps utilizing those smartphone components. \item For the information embedding scheme, our design is limited to a flat surface. In practice, many designs may not include a flat surface. Therefore, future work could explore a technique for embedding information under a curved surface. \end{itemize} \section{Conclusion} In this paper, we presented a technique to embed information invisible to the eye inside 3D printed objects. The information can be integrated and printed with the object directly, using off-the-shelf dual-head FDM 3D printers. For retrieving the information, we adopted and evaluated two methods, thermal imaging and near-infrared imaging, respectively. Based on the evaluation results, we proposed design guidelines of embedding information for different scenarios. Applications include interactive thermal displays, hidden board game tokens, tagging functional printed objects, and autographing non-fungible fabrication work, with more use cases enabled by expanding the design space of digital fabrications with our method and future work. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} The Atiyah-Singer (AS) index theorem \cite{Atiyah:1963zz} on a closed manifold without boundary $X$, \begin{align} {\rm Ind} (D)= \frac{1}{32\pi^2}\int_{X} d^4x \epsilon_{\mu\nu\rho\sigma}{\rm tr}[F^{\mu\nu}F^{\rho\sigma}], \end{align} is well-known and appreciated in physics. The number of chiral zero modes $n_\pm$ with the $\pm$ chirality of the Dirac operator $D$ defines the index ${\rm Ind}(D) = n_+-n_-$, which is related to the winding number or topological charge of the gauge fields. The index is essential in understanding the tunneling effect among different vacua in quantum chromodynamics (QCD). This textbook-level formula can be confirmed by one-loop computations alone, and is therefore physicist-friendly. However, the Atiyah-Patodi-Singer (APS) index theorem \cite{Atiyah:1975jf}, which is an extension of the AS theorem to a manifold $X$ with boundary $Y$, \begin{align} \label{eq:APS} {\rm Ind}(D_{\rm APS}) = \frac{1}{32\pi^2}\int_{X} d^4x \epsilon_{\mu\nu\rho\sigma}{\rm tr}[F^{\mu\nu}F^{\rho\sigma}] \textcolor{black}{-\frac{\eta(iD_Y)}{2}}, \end{align} is less known. Here the second term is called the eta invariant, which is given by a regularized difference between the number of positive and negative eigenvalues of the boundary Dirac operator $iD_Y$. This is not surprising since, until very recently, we have not been very interested in manifolds with boundary in physics. Recently the theorem became important in condensed matter physics, as it was pointed out in \cite{Witten:2015aba} that the APS index is a key to understanding the bulk-edge correspondence \cite{Hatsugai} of symmetry-protected topological insulators, which are gapped in the bulk but show good conductivity on the edge. The carrier of the charge, the so-called edge mode, is described as a massless $2+1$-dimensional Dirac fermion, whose partition function suffers from an anomaly of the time-reversal symmetry. This anomaly is not a problem in the total system since it is precisely canceled by the bulk fermion determinant. Each piece of the APS index in Eq.~(\ref{eq:APS}) corresponds to the phase of one of the fermion determinants. Namely, the APS theorem is a mathematical guarantee that the time-reversal symmetry is protected in the total system. However, the left-hand side of the theorem, defined by a massless Dirac operator with a nonlocal boundary condition, is physicist-unfriendly. The difficulty lies in the fact that if we impose a local and Lorentz-symmetric boundary condition, the reflected particle at the boundary flips its momentum but keeps the angular momentum unchanged. This means that the chirality is not conserved and the $\pm$ chirality sectors do not decouple anymore: $n_\pm$ and the index do not make sense. The Atiyah-Patodi-Singer boundary condition \cite{Atiyah:1975jf} gives up the locality and rotational symmetry to respect the chirality. In the vicinity of the boundary, the Dirac operator has the form \begin{align} D = \gamma_4\left(\partial_4 + H \right), \end{align} where we take $x_4$ in the normal direction to the boundary, the $A_4=0$ gauge is chosen, and $H = \gamma_4\sum_i \gamma_i D^i $ is a Hermitian operator. The APS boundary condition is given by this $H$ operator, such that the components along its positive eigenmodes are forced to zero at the boundary: $(H+|H|)\psi=0$. This is a non-local boundary condition, requiring information about eigenfunctions extended over the whole boundary manifold. Note that $H$ commutes with $\gamma_5$ and chirality is conserved.
Then we can define the index by the chiral zero-modes. This outcome is mathematically beautiful but physicist-unfriendly, since locality (or causality) is much more important than chirality for physicists; otherwise, we may allow information to propagate faster than the speed of light. In physics, we should not accept the non-local APS boundary condition no matter how mathematically beautiful it is \footnote{ In Refs.\cite{Witten:2019bou, Kobayashi:2021jbn}, it was shown that the non-local feature of the APS boundary condition poses no problem when it is Wick-rotated to a ``state'' at a time-slice. }. Instead, we need to give up the chirality and consider the left-right mixing of a massive fermion. The question is whether we can still obtain a fermionic integer within massive systems. Our answer is ``Yes we can''. Here is the list of our relevant works. We proposed a new formulation of the APS index using the domain-wall fermion \cite{Jackiw:1975fn,Callan:1984sa,Kaplan:1992bt} in 2017 \cite{Fukaya:2017tsq} with Onogi and Yamaguchi. One year later, three mathematicians, Furuta, Matsuo and Yamashita, joined us, and we succeeded in giving a mathematical proof that our proposal is correct \cite{Fukaya:2019qlf}. The reformulation is so physicist-friendly that its application to lattice gauge theory is straightforward \cite{Fukaya:2019myi} (see \cite{Onogi:2021slv} for a relation to the Berry phase). At this conference, we had two parallel talks: one on an application to curved domain-wall fermions by Aoki \cite{AokiS}, and one on the mod-two APS index \cite{Fukaya:2020tjk, Matsuki} by Matsuki. We also refer the readers to our review paper \cite{Fukaya:2021sea} on the whole project. \section{Massive Dirac operator index without boundary} The key question of this work is whether we can reformulate the index of the Dirac operator in terms of massive fermions, where both the notion of chirality and that of zero modes are lost. Let us start with the easier case without boundary. We consider a Dirac fermion with a negative mass, compared to that of the regulator. Here we choose the Pauli-Villars regularization. For simplicity, we couple the $SU(N)$ gauge fields to the fermion on an even-dimensional flat Euclidean space. The fermion determinant is expressed as \begin{align} \frac{\det(D-m)}{\det(D+M)}. \end{align} In the large mass limit, or simply taking the physical mass $m$ and the Pauli-Villars mass $M$ to be the same, let us perform an axial $U(1)$ rotation with angle $\pi$ to flip the sign of the mass. The fermion then appears to decouple from the theory, but taking the anomaly \cite{Adler:1969gk,Bell:1969ts} into account, we have a shift of the $\theta$ term by $\pi$, and the sign is controlled by the Atiyah-Singer index $I_{\rm AS}(D)$: \begin{align} \frac{\det(D-M)}{\det(D+M)} = \frac{\det(D\textcolor{black}{+M})}{\det(D+M)} \times\exp\left(-i\textcolor{black}{\pi}\underbrace{\frac{1}{32\pi^2}\int d^4x FF}_{=I_{\rm AS}(D)}\right) = (-1)^{-I_{\rm AS}(D)}. \end{align} Even though we consider this massive fermion case, which has neither chiral symmetry nor zero modes, the index still appears. Our proposal is then to use the massive Dirac operator to ``define'' the index. We use the anomaly rather than the symmetry.
Specifically, multiplying $i\gamma_5$ to the numerator and denominator in the determinant to make the operator anti-Hermitian, the determinant can be expressed by the products of pure imaginary eigenvalues of massive operators: \begin{align} \frac{\det(D-M)}{\det(D+M)} &=\frac{\det i\gamma_5 (D-M)}{\det i\gamma_5 (D+M)} = \frac{\prod_{\lambda_{-M}}i\lambda_{-M}}{\prod_{\lambda_{+M}}i\lambda_{+M}} = \exp\left[\frac{i\pi}{2} \left(\sum_{\lambda_{-M}}{\rm sgn}\lambda_{-M}-\sum_{\lambda_{+M}}{\rm sgn}\lambda_{+M}\right)\right]. \end{align} Now we have shown the equality \begin{align} I_{\rm AS}(D) = \frac{1}{2}\eta(\gamma_5(D-M))^{reg.} = -\frac{1}{2}\left[\eta(\gamma_5(D-M))-\eta(\gamma_5(D+M))\right]. \end{align} On the right-hand side, chiral symmetry is no more essential but it is not written by the zero modes only but also by the non-zero modes. \section{New formulation of the index with boundary} Next we consider the nontrivial case with boundary. Before going into the details, let us discuss what a more physical set-up should be. In physics, any boundary has its ``outside''. Topological insulator is nontrivial because its outside is surrounded by normal insulators. It is more natural to regard the surface of material as a wall between domains of different physical profiles, rather than a boundary of a manifold. This domain-wall should keep the angular momentum in its normal direction, rather than helicity of the reflecting particles. Namely, we should consider a massive fermion system. The boundary condition should not be put by hand but should be automatically chosen by nature. The edge-localized modes should play a key role. Is there any candidate? Yes, we have the domain-wall fermion \cite{Jackiw:1975fn,Callan:1984sa,Kaplan:1992bt}. Let us consider a four-dimensional closed manifold, which is extended from the original boundary, and a massive Dirac fermion operator on it, \begin{align} D+\varepsilon M, \end{align} where the sign function takes $\varepsilon=-1$ in our target domain (say $x_4>0$), and $\varepsilon=+1$, otherwise\footnote{ In \cite{Kanno:2021bze}, more general position-dependent mass is discussed. }. Here we do not assume any boundary condition on the domain-wall, expecting it dynamically given. Unlike the standard domain-wall fermion employed in lattice QCD simulations, our domain-wall fermion lives in four dimensions and the surface modes are localized in the three dimensional wall. Our proposal for the new expression of the index is a natural extension of the Atiyah-Singer index in the previous section. We find that the $\eta$ invariant of the domain-wall Dirac operator with kink structure, coincides with the APS index, \begin{align} \label{eq:APSDW} I_{\rm APS}(D) = \frac{1}{2}\eta(\gamma_5(D+\varepsilon M))^{reg.} = \frac{1}{2}\left[\eta(\gamma_5(D+\varepsilon M))-\eta(\gamma_5(D+M))\right]. \end{align} This equality can be shown by Fujikawa method, which consists of three steps. 1) choosing regularization: we employ the Pauli-Villars subtraction, 2) choosing the complete set to evaluate the trace: we take the eigenmode set of free domain-wall Dirac operator squared, and 3) perturbative evaluation. See \cite{Fukaya:2017tsq} for the details of the computation. Here we give two remarks about the evaluation. 
First, the eigenmodes of the free domain-wall fermion, or the solutions to \begin{align} \{\gamma_5(D^{\rm free}+M\varepsilon(x_4))\}^2 \phi = \left[-\partial_\mu^2 + M^2 \textcolor{black}{-2M\gamma_4 \delta(x_4)}\right]\phi = \lambda^2 \phi \end{align} are given by a direct product $\phi =\varphi_{\pm,e/o}^{\omega/{\rm edge}}(x_4) \otimes e^{i\bm{p}\cdot \bm{x}}$, where $\bm{p}$ denotes the momentum vector in the three dimensions. The bulk wave functions in the $x_4$ direction \begin{align} \varphi^\omega_{\pm, o}(x_4)&=\frac{e^{i\omega x_4}-e^{-i\omega x_4}}{\sqrt{2\pi}},\;\;\; \varphi^{\omega}_{\pm,e}(x_4)= \frac{(i\omega\pm M)e^{i\omega |x_4|}+(i\omega\mp M)e^{-i\omega |x_4|}}{\sqrt{2\pi(\omega^2+M^2)}}, \end{align} have the eigenvalues $\lambda^2=\bm{p}^2+\omega^2+M^2$ and the $\pm$ eigenvalue of $\gamma_4$ indicated by the subscripts $\pm$. Another subscript $e/o$ denotes the even/odd parity under the reflection $x_4\to -x_4$. In the complete set, we have the edge localized solutions with \begin{align} \varphi^{\rm edge}_{-, e}(x_4)&=\sqrt{M}e^{-M|x_4|}, \end{align} with the eigenvalue $\lambda^2=\bm{p}^2$. These modes are chiral and massless. The second remark is that we did not give any boundary condition by hand but the following non-trivial boundary condition \begin{align} \left[\partial_4 \mp M\varepsilon\right]\varphi^{\omega/{\rm edge}}_{\pm,e}(x_4)|_{x_4=0} = 0,\;\;\; \varphi^{\omega}_{\pm,o}(x_4=0)=0, \end{align} is automatically satisfied due to the domain-wall. More importantly, this boundary condition preserves the angular momentum in the $x_4$ direction but does not keep helicity. Nature chooses the rotational symmetry, rather than chirality. Now we can compute the $\eta$ invariant in a simple perturbative expansion. From the bulk mode part, we obtain the curvature term but with the sign flipping: \begin{align} \label{eq:bulk} \frac{1}{2}\eta(H_{DW})^{\rm bulk}&= \frac{1}{2} \sum_{\rm bulk}(\phi^{\rm bulk})^\dagger {\rm sgn}(H_{DW})\phi^{\rm bulk} = \frac{1}{64\pi^2}\int d^4x \textcolor{black}{\epsilon(x_4)}\epsilon_{\mu\nu\rho\sigma}{\rm tr}_cF^{\mu\nu}F^{\rho\sigma}(x) + O(1/M), \end{align} where we have denoted $H_{DW}=\gamma_5(D+\varepsilon M)$. From the edge modes, we obtain the $\eta$ invariant of the three-dimensional massless Dirac operator on the domain-wall, \begin{align} \label{eq:edge} \frac{1}{2}\eta(H_{DW})^{\rm edge} &= \frac{1}{2}\sum_{\rm edge}\phi^{\rm edge}(x)^\dagger {\rm sgn}(H_{DW})\phi^{\rm edge}(x) = -\frac{1}{2}\eta(iD^{\rm 3D})|_{x_4=0}. \end{align} Together with the Pauli-Villars contribution with $H_{PV}=\gamma_5(D+M)$, \begin{align} -\frac{1}{2}\eta(H_{PV}) &=\frac{1}{64\pi^2}\int d^4x\; \epsilon_{\mu\nu\rho\sigma}{\rm tr}_cF^{\mu\nu}F^{\rho\sigma}(x) + O(1/M), \end{align} we finally obtain, \begin{align} \frac{1}{2}\eta(\gamma_5(D+\varepsilon M))^{reg.}= \frac{1}{32\pi^2}\int_{x_4>0} d^4x \epsilon_{\mu\nu\rho\sigma}{\rm tr}[F^{\mu\nu}F^{\rho\sigma}] \textcolor{black}{-\frac{\eta(iD^{\rm 3D})}{2}}, \end{align} which is equal to the APS index. The above evaluation of the bulk and edge modes separately makes the roles of them in the anomaly inflow manifest: the time-reversal ($T$) symmetry breaking of the edge (\ref{eq:edge}) (the $\eta$ invariant is odd in $T$) is precisely canceled by that of the bulk modes (\ref{eq:bulk}) when they are exponentiated in the fermion partition function: $Z_{\rm edge}Z_{\rm bulk}\propto (-1)^{-\frac{1}{2}\eta(\gamma_5(D+\varepsilon M))^{reg.}}$. 
\section{Mathematical justification} In the previous section, we have perturbatively shown on a flat Euclidean space that the $\eta$ invariant of the domain-wall Dirac operator equals to the APS index of a domain with negative mass, where the APS boundary condition is imposed on the location of the domain-wall, in spite of the fact that we did not impose any boundary condition in the former set up. As the two formulations are given on different manifolds, the reader may wonder if the equivalence is just a coincidence. Since the problem is nontrivial not only in physics but also in mathematics, our collaboration invited three mathematicians and the interdisciplinary collaboration went successful in giving a general proof for the equivalence. In \cite{Fukaya:2019qlf} we gave a proof that for any APS index of a Dirac operator on a manifold $X_+$ with boundary $Y$, there exists a massive domain-wall fermion Dirac operator on a closed manifold $X$ extended from $X_+$, and its $\eta$ invariant equals to the original index. Here we just give a sketch of the proof. We introduce an extra dimension to consider $X \times \mathbb{R}$, and the following Dirac operator on it, \begin{align} D^{\rm 5D} = \left(\begin{array}{cc} 0 & \partial_5 + \gamma_5 (D^{\rm 4D} + m(x_4,x_5))\\ -\partial_5 + \gamma_5 (D^{\rm 4D} + m(x_4,x_5)) & 0 \end{array}\right), \end{align} where the mass term is given negative in a positive region of $x_4$ and $x_5$ and positive otherwise: \begin{align} m(x_4,x_5) = \left\{ \begin{array}{cc} -M & \mbox{for}\; x_4>0\; \&\; x_5>0\\ 0 & \mbox{for}\; x_4=0\; \&\; x_5=0\\ M_2 & \mbox{otherwise}\\ \end{array}\right. \end{align} Here the gauge fields are copied in the $x_5$ direction and we set $A_5=0$. Then we evaluate the index of this Dirac operator in two different ways. With the so-called localization technique \cite{Witten:1982im,FurutaIndex}, which focuses on the edge-localized modes on the domain-wall, we can show that the index is equal to the original APS index, together with an equality between the index and that with an half-infinite cylinder. The index can be also computed by counting the zero-crossings of the eigenvalues of the Dirac operator in the 4 dimensions, which leads to our new proposal, the $\eta$ invariant of domain-wall Dirac operators. The equality (\ref{eq:APSDW}) always holds since the left and right-hand sides are just two different evaluations of the same quantity. \section{APS index on a lattice} Finally let us discuss the lattice version of the APS index. In the standard lattice gauge theory on a square hyper-cubic and periodic lattices, it is well-known that the overlap fermion action \cite{Neuberger:1997fp} $S = \sum_x \bar{q}(x) D_{ov}q(x)$ is invariant under a modified chiral transformation \cite{Ginsparg:1981bj, Luscher:1998pqa}: \begin{align} q \to e^{i\alpha\gamma_5(1-aD_{ov})}q,\;\;\;\bar{q} \to \bar{q}e^{i\alpha\gamma_5}. \end{align} But the fermion measure transforms as \begin{align} Dq\bar{q} \to \exp\left[2i\alpha {\rm Tr}(\gamma_5+\gamma_5(1-aD_{ov}))/2\right]D q\bar{q}, \end{align} which reproduces the axial $U(1)$ anomaly. Moreover, the AS index can be defined as \begin{align} \label{eq:latticeAS} I_{\rm AS}(D_{ov})={\rm Tr}\gamma_5\left(1-\frac{aD_{ov}}{2}\right), \end{align} and it can reach the continuum value even at a finite lattice spacing \cite{Hasenfratz:1993sp}, when the gauge link variables are smooth enough. But how to formulate the APS index has not been known. 
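As a concrete illustration of the lattice AS index (\ref{eq:latticeAS}), the following is a minimal numerical sketch (added here for illustration only, not taken from the references) for two-dimensional $U(1)$ gauge fields with topological charge $Q$ on an $L\times L$ torus; the overlap operator is built from the Hermitian Wilson Dirac operator $H_W=\gamma_5(D_W-1/a)$, whose role is made explicit below. Up to an overall sign fixed by the $\gamma_5$ and charge conventions, the output should reproduce $Q$ even at finite lattice spacing.
\begin{verbatim}
import numpy as np

L, Q = 8, 2            # L x L lattice, topological charge Q
r, M = 1.0, 1.0        # Wilson parameter and 1/a (lattice units, a = 1)

# 2d gamma matrices and the chirality matrix gamma_5
g = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex)]
g5 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# U(1) links with constant field strength 2*pi*Q/L^2 (total flux 2*pi*Q)
U = np.ones((2, L, L), dtype=complex)
for n1 in range(L):
    for n2 in range(L):
        U[0, n1, n2] = np.exp(-2j * np.pi * Q * n2 / L**2)
        if n2 == L - 1:
            U[1, n1, n2] = np.exp(2j * np.pi * Q * n1 / L)

# Massless Wilson-Dirac operator as a dense (2 L^2) x (2 L^2) matrix
V = L * L
D = np.zeros((2 * V, 2 * V), dtype=complex)
for n1 in range(L):
    for n2 in range(L):
        i = (n1 * L + n2) * 2
        D[i:i+2, i:i+2] += 2 * r * I2               # = r * d, with d = 2
        for mu, (d1, d2) in enumerate(((1, 0), (0, 1))):
            f = (((n1 + d1) % L) * L + (n2 + d2) % L) * 2   # forward site
            b = (((n1 - d1) % L) * L + (n2 - d2) % L) * 2   # backward site
            D[i:i+2, f:f+2] += -0.5 * (r * I2 - g[mu]) * U[mu, n1, n2]
            Ub = np.conj(U[mu, (n1 - d1) % L, (n2 - d2) % L])
            D[i:i+2, b:b+2] += -0.5 * (r * I2 + g[mu]) * Ub

G5 = np.kron(np.eye(V), g5)
HW = G5 @ (D - M * np.eye(2 * V))                   # H_W = g5 (D_W - 1/a)

# Overlap operator and the index Tr g5 (1 - a D_ov / 2)
w, v = np.linalg.eigh(HW)
sgnHW = (v * np.sign(w)) @ v.conj().T               # H_W / sqrt(H_W^2)
Dov = np.eye(2 * V) + G5 @ sgnHW
index = np.trace(G5 @ (np.eye(2 * V) - 0.5 * Dov)).real
print("Q =", Q, "  lattice index =", round(index))
\end{verbatim}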
In continuum theory, the APS boundary condition is imposed by separating the Dirac operator into its normal and tangent parts with respect to the boundary and requiring the components of the fermion fields along the positive eigenmodes of the tangent Dirac operator to vanish at the boundary. On the lattice, however, such a separation is difficult, as is clear from the explicit form of the overlap Dirac operator, \begin{align} \label{eq:Dov} D_{ov} = \frac{1}{a}\left(1+\gamma_5\frac{H_W}{\sqrt{H_W^2}}\right), \end{align} where $H_W=\gamma_5(D_W-1/a)$ is the Hermitian Wilson Dirac operator. Even if we managed to impose a lattice version of the APS boundary condition, the Ginsparg-Wilson relation or the definition of the index would no longer be guaranteed. In fact, an alternative direction is hidden in the index of the overlap Dirac operator. If we substitute the explicit form (\ref{eq:Dov}) into the definition of the index (\ref{eq:latticeAS}), we end up with the $\eta$ invariant of a Wilson Dirac operator with a very large negative mass, \begin{align} I_{\rm AS}(D_{ov})= - \frac{1}{2}{\rm Tr}\frac{H_W}{\sqrt{H_W^2}} = - \frac{1}{2}\eta(\gamma_5(D_W-1/a)). \end{align} Interestingly, the lattice index ``knew'' that 1) the AS index can be written as the $\eta$ invariant of a massive Dirac operator\footnote{ This fact was known, for example, in \cite{Itoh:1987iy, Adams:1998eg} but its mathematical importance was not discussed, as far as we know. }, and 2) chiral symmetry is not important: the Wilson Dirac operator is enough to define it\footnote{ The issue was recently revisited by some mathematicians \cite{Yamashita:2020nkf,Kubota:2020tpr}. }. The situation can be summarized in two tables. With the conventional massless Dirac operator, we need a significant effort to formulate the index theorems. For the APS index we need the unphysical boundary condition. For the lattice Atiyah-Singer index, we need the overlap fermion. The lattice APS index is not even known yet. However, the $\eta$ invariant of a massive Dirac operator on a closed manifold gives a unified treatment of the index theorems which is easy to handle. The APS index in continuum theory is given by a kink structure in the mass term, and the lattice AS index is given by the Wilson Dirac operator. Now we can easily speculate that the lattice APS index must be given by the eta-invariant of the Wilson Dirac operator with a domain-wall mass, $-\frac{1}{2}\eta(\gamma_5(D_W-\varepsilon M))$. \begin{table}[hbt] \caption{The standard formulation of the index with massless Dirac operator} \label{tab:massless} \centering \begin{tabular}{|c|c|c|} \hline & continuum & lattice\\\hline AS & ${\rm Tr}\gamma_5e^{D^2/M^2}$ & ${\rm Tr}\gamma_5(1-aD_{ov}/2)$\\\hline APS & ${\rm Tr}\gamma_5e^{D^2/M^2}$ w/ APS b.c.
& not known.\\\hline \end{tabular} \vspace{1cm} \caption{The $\eta$ invariant of massive Dirac operator} \centering \begin{tabular}{|c|c|c|} \hline & continuum & lattice\\\hline AS & $-\frac{1}{2}\eta(\gamma_5(D- M))^{reg.}$ & $-\frac{1}{2}\eta(\gamma_5(D_W-M))$\\\hline APS & $-\frac{1}{2}\eta(\gamma_5(D-\varepsilon M))^{reg.}$ & $-\frac{1}{2}\eta(\gamma_5(D_W-\varepsilon M))$ \\\hline \end{tabular} \label{tab:massive} \end{table} In \cite{Fukaya:2019myi}, we have perturbatively shown on a four-dimensional Euclidean flat lattice that $-\frac{1}{2}\eta(\gamma_5(D_W-\varepsilon M))$ in the classical continuum limit is \begin{align} \label{eq:latAPS} -\frac{1}{2}\eta(\gamma_5(D_W-\varepsilon M))=& \displaystyle{\frac{1}{32\pi^2} \int_{0<x_4<L_4} d^4x \epsilon^{\mu\nu\rho\sigma} {\rm tr} F_{\mu\nu}F_{\rho\sigma}} -\frac{1}{2}\eta(iD^{3\mathrm{D}})|_{x_4=0}+\frac{1}{2}\eta(iD^{3\mathrm{D}})|_{x_4=L_4}, \end{align} which coincides with the APS index on $T^3\times I$ with $I=[0,L_4]$. Here we have put two domain-walls at $x_4=a/2$ and $x_4=L_4-a/2$ and set $M=1/a$. Since the left-hand side of (\ref{eq:latAPS}) is always an integer, we expect that this definition is non-perturbatively valid. \section{Summary} In this work, we have shown that the massive domain-wall fermion is physicist-friendly: the APS index can be formulated (even on a lattice) without any unphysical boundary conditions. Moreover, it is mathematically rich: the $\eta$ invariant of the massive Dirac operator on a closed manifold unifies the index theorems. The author thanks M.~Furuta, N.~Kawai, S.~Matsuo, Y.~Matsuki, M.~Mori, K.~Nakayama, T.~Onogi, S.~Yamaguchi and M.~Yamashita for the fascinating collaborations. This work was supported in part by JSPS KAKENHI Grant Number JP18H01216 and JP18H04484.
\section*{Acknowledgements} I am very much indebted to Professors P.~S.~Letelier, A.~Wang, I.~Shapiro and E.~Mielke for helpful discussions on the subject of this paper. Thanks are also due to CNPq (Brazilian Government Agency) and to UERJ for financial support. Special thanks go to Prof.~G.~E.~Volovik for sending me many of his reprints dealing with torsion strings and domain walls in superfluids.
\section*{\large \bf References} \parskip=.5ex plus 1.0pt \def\bibitem{\par \noindent \hangindent\parindent \hangafter=1}} \def\par \endgroup{\par \endgroup} \begin{document} \rightline{UMN-TH-1727/98} \rightline{TPI-MINN-98/22} \rightline{astro-ph/9811183} \rightline{November 1998} \title{The Evolution of \li6 in Standard Cosmic-Ray Nucleosynthesis} \author {Brian D. Fields} \affil {Department of Astronomy\\ University of Illinois\\ Urbana, IL 61801, USA} \and \author{Keith A. Olive} \affil{Theoretical Physics Institute\\ School of Physics and Astronomy\\ University of Minnesota\\ Minneapolis, MN 55455, USA} \begin{abstract} We review the Galactic chemical evolution of \li6 and compare these results with recent observational determinations of the lithium isotopic ratio. In particular, we concentrate on so-called standard Galactic cosmic-ray nucleosynthesis in which Li, Be, and B are produced (predominantly) by the inelastic scattering of accelerated protons and $\alpha$'s off of CNO nuclei in the ambient interstellar medium. If O/Fe is constant at low metallicities, then the \li6 vs Fe/H evolution--as well as Be and B vs Fe/H--has difficulty in matching the observations. However, recent determinations of Population II oxygen abundances, as measured via OH lines, indicate that O/Fe increases at lower metallicity; {\em if} this trend is confirmed, then the \li6 evolution in a standard model of cosmic-ray nucleosynthesis is consistent with the data. We also show that another key indicator of \li6BeB origin is the \li6/Be ratio which also fits the available data if O/Fe is not constant at low metallicity. Finally we note that \li6 evolution in this scenario can strongly constrain the degree to which \li6 and \li7 are depleted in halo stars. \end{abstract} \keywords{cosmic-rays -- Galaxy :abundances -- nuclear reactions, nucleosynthesis, abundances} \newpage \section{Introduction} Both of the stable isotopes of lithium, \li6 and \li7, have a special nucleosynthetic status; however, their origins are quite different. The story of \li7 is perhaps better-known, as this nuclide dominates by far the Li production in big bang. Starting in the 1980's, observations of extreme Pop II stars (Spite \& Spite 1982) revealed a constant Li abundance versus Fe, the ``Spite plateau.'' This discovery has been confirmed many times over, and demonstrates that Li is primordial. Furthermore, the mean value along the plateau, ${\rm Li/H} = (1.6 \pm 0.1) \times 10^{-10}$ (Molaro, Primas \& Bonifacio, 1995; Bonifacio \& Molaro, 1996) agrees well with the inferred primordial values of the other light elements, in dramatic confirmation of big bang nucleosynthesis theory (Walker {\rm et al.}~ 1991; Fields {\rm et al.}~ 1996). In Pop I stars, the Li abundance rises by factor of $\sim 10$ from its primordial level and various stellar production mechanisms have been suggested to explain this (e.g., Matteucci, d'Antona, \& Timmes 1995). Here, we will focus on the Pop II behavior of both \li7 and \li6. Unlike \li7, the less abundant \li6 has long been recognized as a nucleosynthetic ``orphan''--along with Be and B, \li6 is made neither in the the big bang nor in stars. That is, primordial nucleosynthesis does produces some \li6, but the abundance is unobservably small (\S \ref{sect:prod}); moreover, stellar thermonuclear processes destroy \li6, whose low binding energy renders this nucleus thermodynamically unfavorable. Thus, \li6 can only be produced in Galactic, non-equilibrium processes. 
Just such a process was identified by Reeves, Fowler, \& Hoyle (1970), in the form of Galactic cosmic ray interactions. Specifically, the propagation of cosmic rays (mostly protons and $\alpha$'s) through the interstellar medium (ISM) inevitably leads to spallation reactions on CNO nuclei (e.g., $p+{\rm O} \rightarrow \li6$) and fusion reactions on interstellar He ($\alpha+\alpha \rightarrow \li6$). Furthermore, Reeves, Fowler, \& Hoyle (1970) and Meneguzzi, Audouze, \& Reeves (1971) showed that cosmic ray interactions do indeed yield solar system abundances of these nuclides over the lifetime of the Galaxy. This mechanism, the so-called ``Standard GCR Nucleosynthesis'' (GCRN), was thus seen to be viable. Although lingering questions remained (standard GCRN alone is unable to reproduce either the solar \li7/\li6BeB ratio or the \b11/\b10 ratio), this process was viewed as the conventional source of solar system \li6BeB until the late 1980's. This simple picture of the origin of Li has met with several complications over the past decade or so. There is a nagging uncertainty as to whether or not the observed \li7 abundance in the plateau stars is in fact representative of the primordial abundance or was partially depleted by non-standard stellar processes. In other words, does the Spite plateau measure the primordial Li abundance, or should one apply an upward correction to offset the effects of depletion? Stellar evolution models have predicted depletion factors which differ widely, ranging from essentially no depletion in standard models (for stars with $T \ga 5500$ K) to a large depletion (Deliyannis {\rm et al.}~ 1990, Charbonnel {\rm et al.}~ 1992). Depletion occurs when the base of the convection zone sinks down and is exposed to high temperatures, $\sim 2 \times 10^6$ K for \li7 and $\sim 1.65 \times 10^6$ K for \li6 (Brown \& Schramm 1988). Indeed, in standard stellar models, the depletion of \li7 is always accompanied by the depletion of \li6 (though the converse is not necessarily true). Thus any observation of \li6 has important consequences for the question of \li7 depletion. The issue of depletion affects not only \li6 and BBN, but also \li6BeB and their evolution; indeed, in this paper we will make use of this connection. Another complication has emerged to apparently overthrow the picture of standard GCRN. Namely, measurements of Be and B in halo stars have shown that Be and B both have logarithmic slopes versus iron which are near 1. However, in standard GCRN, the rates of Be and B production depend on the CNO target abundances. This implies that Be and B should be ``secondary,'' with abundances which vary quadratically with metallicity; i.e., the Be and B log slopes should be 2 versus Fe. While neutrino-process nucleosynthesis (Woosley {\rm et al.}~ 1990; Olive {\rm et al.}~ 1994) can help explain the linearity of B vs Fe/H, it is the slope of [Be/H] vs. [Fe/H] which has focused the attention of modelers of cosmic-ray nucleosynthesis. For example, to explain the observed ``primary'' slope, new, metal-enriched cosmic ray components have been proposed as the dominant LiBeB nucleosynthesis agents in the early Galaxy (Cass\'{e}, Lehoucq, \& Vangioni-Flam 1995; Ramaty, Kozlovsky, \& Lingenfelter 1995; Vangioni-Flam {\rm et al.}~ 1996; Vangioni-Flam {\rm et al.}~ 1998a; Ramaty, Kozlovsky, Lingenfelter, \& Reeves 1997). 
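To make the ``secondary'' scaling explicit, a minimal sketch (assuming only that the cosmic-ray flux $\phi$ tracks the supernova rate, and hence roughly the oxygen enrichment rate) is
\begin{displaymath}
\frac{d{\rm Be}}{dt} \propto \phi\,{\rm O} \propto \frac{d{\rm O}}{dt}\,{\rm O}
\qquad \Longrightarrow \qquad {\rm Be} \propto {\rm O}^2 ~~,
\end{displaymath}
so that the logarithmic slope of Be (and similarly \b10) versus O is 2; the same slope follows versus Fe only if O/Fe is constant. This scaling argument is merely illustrative of why the observed slopes near 1 versus Fe appear problematic at first sight; the full chemical evolution treatment appears in the models cited above.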
In the last few years, key new observations have begun to address questions concerning the Galactic evolution and stellar depletion of the lithium isotopes, with the first observations of the \li6/\li7 ratio in a few very old halo stars (Smith {\rm et al.}~ 1993, Hobbs and Thorburn 1994, 1997). The lithium isotopic ratio has been measured several times in HD 84937, and most recently by Cayrel {\rm et al.}~ (1998). The weighted average of the available measurements yields \li6/Li = 0.054 $\pm$ 0.011 at [Fe/H] $\simeq$ -2.3. Recently, Smith {\rm et al.}~ (1998) have reported one other positive detection in BD $26\hbox{${}^\circ$} 3578$ with \li6/Li = 0.05 $\pm$ 0.03, at about the same metallicity. For the other halo stars examined, only upper limits are available. The rarity of \li6 detection in halo stars is well-understood (Brown \& Schramm 1988) in terms of \li6 depletion in stellar atmospheres. Namely, \li6 is rapidly burned at relatively low temperatures. Thus, \li6 survives only in halo stars with shallow convective zones and high effective temperatures, $T \ga 6300$ K. The data bear out this picture: both \li6 detections have been in hot halo stars, and only upper limits have been set in cooler stars. Combined with the present ISM abundance of \li6, the halo \li6 measurements can be an important tool for testing models of Galactic cosmic-ray nucleosynthesis. Here, we will restrict our attention to standard models in which accelerated protons and $\alpha$'s produce the LiBeB elements through spallation, as well as through $\alpha-\alpha$ fusion in the case of the lithium isotopes. We show that as in the case of the evolution of Be and B, the general accord between standard GCRN and the observational data depends crucially on the behavior of the O/Fe ratio in Pop II (Fields \& Olive 1998). The model is described in detail below (\S \ref{sect:prod}); the basic idea is that for spallation production, the slope vs O (rather than Fe) is the better indicator of nucleosynthesis origin. If O/Fe is constant, this distinction is irrelevant. However, new O/Fe data for Pop II reveal an evolving O/Fe ratio; if true, then it follows that the Be and B slopes versus Fe are not equal to the respective slopes vs O. As pointed out by Fields \& Olive (1998), {\em if} O/Fe does vary as suggested by Israelian et al.\ (1998) and by Boesgaard {\rm et al.}~ (1998), then GCRN-produced BeB can have a roughly ``primary'' slope versus Fe yet a secondary slope vs O. Within the observational errors, the LiBeB data vs O and Fe are consistent with a ``neoclassical'' or ``revised standard'' cosmic ray nucleosynthesis consisting of (1) GCRN which makes LiBeB, without additional metal-enriched cosmic rays, and (2) the $\nu$-process in SNe, required to fit the solar \b11/\b10 ratio as well as the different B and Be slopes in Pop II. As we show below, the effect of a varying O/Fe ratio will also soften the slope of \li6/H vs. Fe/H, though to a lesser extent. It is important to note that while the revised standard GCRN is allowed by the present data, the uncertainties in the observations are large enough that primary models for BeB are also allowed. Fortunately, this ambiguity will not persist: the two classes of models give very different predictions for LiBeB and O/Fe evolution, and can thus be tested with more and better observations. As described in detail below, \li6 evolution and the \li6/Be ratio provide some of the best discriminators between the different scenarios. 
Below we show in detail the predictions of the secondary (standard) model regarding \li6, compare them with the current data, and suggest future observations. The evolution of \li6 in a primary model of cosmic-ray nucleosynthesis is discussed in Vangioni-Flam {\rm et al.}~ (1998b). We also discuss the possibility of using this model to constrain the degree to which both Li isotopes are depleted in halo stars (\S \ref{sect:ev}). Following Steigman et al.\ (1993) and Lemoine et al.\ (1995), we compute the maximum difference between the initial predicted \li6 and the observed \li6, and find that little \li6 depletion is allowed. That implies that the \li7 depletion is at least as small and probably negligible, so that the observed Spite plateau value of Li/H should reflect the true primordial Li abundance. \section{The \li6 Data} \label{sect:data} The determination of the \li6 abundance in halo dwarf stars is extremely difficult and requires high-resolution, high signal-to-noise spectra because the isotopic splitting between the lines of the two lithium isotopes is only $\sim 0.16$ \AA. The observational challenge arises in part because the line splitting is not seen as a distinct doublet, but rather the narrowly shifted lines are thermally broadened so that one sees only a single, anomalously wide absorption feature; fits to the width of this feature are sensitive to the \li7/\li6 ratio. The first indication of a positive result was reported by Andersen, Gustafsson, and Lambert (1984) in the star HD211998 with \li6/\li7 = 0.07 (though they caution that the detection was not certain), consistent with the upper limit of $0.1$ reported by Maurice, Spite, \& Spite (1984). Given the relatively low surface temperature of this star, $T \simeq 5300$ K, standard stellar models (e.g. Deliyannis, Demarque, \& Kawaler, 1990) would predict that this star is severely depleted in \li6. Indeed, the star is depleted in \li7 ([\li7] $\simeq$ 1.22), thus lying well below the Spite plateau. \begin{table}[ht] \centerline {\sc{\underline{Table 1:} Hot Halo Dwarf Stars}} \vspace {0.1in} \begin{center} \begin{tabular}{|lcccc|} \hline \hline Star & Temperature (K) & [Fe/H] & [Li] & Method \\ \hline G 64-12 & $6356 \pm 75$ & -3.03 & $2.32 \pm 0.07$ & B \\ & $6468 \pm 87$ & -3.35 & $2.40 \pm 0.07$ & IRFM \\ G 64-37 & $6364 \pm 75$ & -2.6 & $2.03 \pm 0.07$ & B \\ & $6432 \pm 70$ & -2.51 & $2.11 \pm 0.06$ & IRFM \\ LP 608 62 & $6435 \pm 52$ & -2.51 & $2.28 \pm 0.06$ & B \\ & $6313 \pm 80$ & -2.81 & $2.21 \pm 0.08$ & IRFM \\ BD 9\hbox{${}^\circ$} 2190 & $6452 \pm 60$ & -2.05 & $2.20 \pm 0.11$ & B \\ & $6333 \pm 89$ & -2.89 & $2.15 \pm 0.07$ & IRFM \\ BD 72\hbox{${}^\circ$} 94 & $6347 \pm 88$ & -1.3 & $-$ & B \\ BD 36\hbox{${}^\circ$} 2165 & $6349 \pm 84$ & -1.15 & $-$ & B \\ HD 83769 & $6678 \pm 97$ & -2.66 & $-$ & IRFM \\ HD 84937 & $6330 \pm 83$ & -2.49 & $2.27 \pm 0.07$ & IRFM \\ BD 26\hbox{${}^\circ$} 3578& $6310 \pm 81$ & -2.58 & $2.24 \pm 0.06$ & IRFM \\ G 4-37 & $6337 \pm 92$ & -3.31 & $2.16 \pm 0.08$ & IRFM \\ G 9-16 & $6776 \pm 84$ & -1.31 & $-$ & IRFM \\ G 37-37 & $6304 \pm 112$ & -2.98 & $ - $ & IRFM \\ G 201-5 & $6328 \pm 111$ & -2.64 & $2.27 \pm 0.09$ & IRFM \\ BD 20\hbox{${}^\circ$} 3603 & $6441 \pm 76$ & -2.05 & $2.41 \pm 0.07$ & IRFM \\ \hline \end{tabular} \end{center} \end{table} \indent \li6 can only be realistically expected to be observed in stars with both high surface temperatures and intermediate metallicities, [Fe/H] between about -2.5 and -1.3. 
Brown and Schramm (1988) determined that only in stars with surface temperatures greater than about 6300 K will \li6 survive in the observable surface layers of the star. At metallicities much lower than [Fe/H] $\sim -2.5$, the \li6 abundance is expected to be lower due to the short timescales available for GCRN production. At metallicities [Fe/H] $\ga -1.3$, even higher effective temperatures would be required to preserve \li6. In Table 1, we show the stellar parameters and \li7 abundances of a set of stars with $T > 6300$ K from Molaro, Primas \& Bonifacio (1995) who use the Balmer line method of Fuhrmann, Axer \& Gehren (1994) for determining the surface temperatures (labeled B) and from Bonifacio \& Molaro (1996) who use the Infrared Flux Method (IRFM) of Blackwell {\rm et al.}~ (1990) and Alonso {\rm et al.}~ (1996). Included in this set of stars is the well observed HD 84937. In an early report, Pilachowski, Hobbs, \& De Young (1989) were able to determine an upper limit of \li6/\li7 $< 0.1$ for this hot halo dwarf star. The \li7 abundance for BD 72\hbox{${}^\circ$} 94 was determined by Rebolo, Molaro, \& Beckman (1988) and by Pilachowski, Sneden, \& Booth (1993) using other parameter choices to be [Li] = 2.22 $\pm$ 0.09. In a now seminal paper, Smith, Lambert, and Nissen (1993) reported the first detection of a small amount of \li6 in the halo dwarf HD 84937 at the level of $R \equiv \li6/(\li6+\li7) = 0.05 \pm 0.02$. In a slightly cooler star, HD 19445, they found only an upper limit of $R < 0.02$. This observation was confirmed by Hobbs \& Thorburn (1994, hereafter HT94) who found \li6/\li7 $ = 0.07 \pm 0.03$ for HD 84937. HT94 observed 5 additional stars, only one of which is in Table 1, HD 338529 also known as BD $26\hbox{${}^\circ$} 3578$, in which they found the upper limit $R < 0.1$. In their sample, they list BD $3\hbox{${}^\circ$} 740$ as a very hot star at 6400 K; however, both the B and IRFM methods yield lower temperatures (6264 K and 6110 K respectively). In contrast, HT94 did report a positive detection in HD 201891 with \li6/\li7 $ = 0.05 \pm 0.02$, in this relatively cool (5900 K) star. In Hobbs \& Thorburn (1997, hereafter HT97), a reanalysis of HD 84937 yielded \li6/\li7 $ = 0.08 \pm 0.04$ and converted the detection of \li6 in HD 201891 to an upper limit $R < 0.05$. In addition, HT97 found upper limits in 5 other cooler halo stars. In more recent work, Smith {\rm et al.}~ (1998) observe nine halo stars, three of which appear in Table 1. In two cases, HD 84937 and BD 26\hbox{${}^\circ$} 3578, they claim a positive detection for \li6 with $R = 0.06 \pm 0.03$ and $R = 0.05 \pm 0.03$ for the two stars, respectively. In the third star, BD 20\hbox{${}^\circ$} 3603, \li6 was not detected and an upper limit $R < 0.02$ was established. In the remaining six stars, no \li6 was detected with certainty. In addition, it is interesting to note that Smith {\rm et al.}~ (1998) confirm the small scatter in the \li7 abundances seen by Molaro {\rm et al.}~ (1995), Spite {\rm et al.}~ (1996), and Bonifacio \& Molaro (1997), far below that expected if \li7 were depleted in these halo stars. Recently, Cayrel {\rm et al.}~ (1998) have performed new very high signal-to-noise measurements of HD 84937 and 2 other stars, BD 36\hbox{${}^\circ$} 2165 and BD 42\hbox{${}^\circ$} 2667, the former of which is hot enough to appear in Table 1. They report a very accurate determination of \li6/\li7 = 0.052 $\pm$ 0.018 in HD84937. 
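For reference, the weighted mean quoted below is formed from the individual determinations $x_i \pm \sigma_i$ in the standard way,
\begin{displaymath}
\bar{x} = \frac{\sum_i x_i/\sigma_i^2}{\sum_i 1/\sigma_i^2}~, \qquad
\sigma_{\bar{x}} = \Bigl(\sum_i 1/\sigma_i^2\Bigr)^{-1/2}~,
\end{displaymath}
a minimal prescription assuming independent measurements with Gaussian errors.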
The weighted mean of all measurements of the lithium isotopic ratio in HD 84937 is \li6/\li7 = $ 0.057 \pm 0.012$, corresponding to $R = 0.054 \pm 0.011$. Cayrel {\rm et al.}~ (1998) also report a possible detection of \li6 in the cooler of the other two stars but a lower S/N for this observation precludes a definite detection claim. BD 42\hbox{${}^\circ$} 2667 was also observed by Smith {\rm et al.}~ (1998) with no detection reported. In the third star, Cayrel {\rm et al.}~ (1998) state that no \li6 was detected. \section{ The Production of \li6} \label{sect:prod} While primordial \li7 is produced at observable levels, \li6 production is orders of magnitude smaller (Thomas {\rm et al.}~ 1993; Delbourgo-Salvador \& Vangioni-Flam 1994). For values of the baryon-to-photon ratio $\eta$ consistent with \he4 and \li7, $\eta = (1.8-5) \times 10^{-10}$ (see e.g. Fields {\rm et al.}~ 1996), \li6 lies in the range $\li6/{\rm H} \simeq (1.5 - 7.5) \times 10^{-14}$, far below the levels measured in halo stars ($\sim 8 \times 10^{-12}$). One thus infers that Li in halo stars should be dominated by the primordial component of \li7, and the data are completely consistent with this expectation. Furthermore, it follows that the \li6 observed in Pop II is due to Galactic processes. The BBN production of \li6 was recently reinvestigated in Nollett {\rm et al.}~ (1997) comparing several measurements of the important D$(\alpha,\gamma)$\li6 reaction. By and large they found similar results to those in Thomas {\rm et al.}~ (1993), showing that the uncertainty in this reaction could allow for perhaps a factor of 2 more \li6. As noted in the introduction, there have been several models advanced to explain Pop II \li6 as well as Be and B; all of these involve spallation/fusion of accelerated particles. In this paper we focus on the case for standard GCRN as the source of Pop II \li6, in the picture of Fields \& Olive (1998), which includes both GCRN and the $\nu$-process. In contrast to ``primary'' models (at least in their simplest forms), different \li6BeB nuclides have strongly different evolution in standard GCRN. Namely, in Pop II the Li isotopes and \b11 ($\approx$ B) are mostly primary, due to $\alpha+\alpha$ interactions and the $\nu$-process, respectively. On the other hand, \be9 and \b10 are secondary (versus oxygen). Consequently, the ratios of primary to secondary nuclides---e.g., B/Be, \li6/Be, and $\b11/\b10$---all vary strongly with metallicity and are good tests if measured accurately. We note, however, that the Li/Be ratio is highly model dependent as shown in Fields, Olive, \& Schramm (1994). The GCRN model is described in detail elsewhere (Fields \& Olive 1998); here, we only summarize essential cosmic ray inputs: (1) Galactic cosmic rays are assumed to be accelerated with the (time-varying) composition of the ambient ISM. Consequently, interactions between cosmic ray $p,\alpha$ particles and interstellar HeCNO dominate the LiBeB production. (2) The injection energy spectrum is that measured for the present cosmic rays $\propto (E+m_p)^{-2.7}$; thus the cosmic rays are essentially relativistic, with no large flux increase at low energies. (3) The cosmic ray flux strength is scaled to the SN rate. On the basis of this model, production rates are computed. The (time-integrated) LiBeB outputs are normalized to the solar \li6, Be, and \b10 abundances, the isotopes that are of exclusively GCR origin. 
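Schematically, the instantaneous production rate (per unit volume) of a \li6BeB nuclide $L$ in this picture is
\begin{displaymath}
\frac{dn_L}{dt} \simeq \sum_{i,j} n_i \int \phi_j(E)\,\sigma_{ij \rightarrow L}(E)\,dE ~~,
\end{displaymath}
where $n_i$ is the number density of the interstellar targets ($i = $ H, He, C, N, O), $\phi_j(E)$ is the cosmic-ray flux of projectile species $j$ ($j = p, \alpha$), and $\sigma_{ij \rightarrow L}$ is the appropriate spallation or fusion cross section. This is only a sketch of the ingredients listed above; the full calculation (Fields \& Olive 1998) also includes cosmic-ray transport, escape, and energy losses.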
We adopt the solar abundances and isotopic ratios of Anders \& Grevesse (1989) for all but elemental B, which is taken from the more recent determination of Zhai \& Shaw (1994). The scaling factor effectively measures the average Galactic flux today, and is calculated from an unweighted average of each of the scalings for \li6, Be, and \b10. To follow the LiBeB evolution in detail, the production rates are incorporated into a simple Galactic chemical evolution code. Closed and open box models are both able to give good results; here, we will adopt a closed box for simplicity. The model has an initial mass function $\propto m^{-2.65}$, and uses supernova yields due to Woosley \& Weaver (1995). In this model, O/Fe indeed varies in Pop II. However, with the Woosley \& Weaver (1995) yields, the model cannot reproduce a slope for O/Fe vs.\ Fe/H as steep as that observed by Israelian {\rm et al.}~ (1998) and Boesgaard {\rm et al.}~ (1998).\footnote{ Note that Chiappini, Matteucci, Beers, \& Nomoto (1998) also find a changing O/Fe in Pop II. Their results are roughly consistent with ours when using the Woosley \& Weaver (1995) yields, and they indeed find a steeper [O/Fe] variation when using the Thielemann, Nomoto, \& Hashimoto (1996) yields. } Since the O-Fe behavior is crucial, and the Fe yields are the more uncertain (due to, e.g., the dependence on the Type II supernova mass cut as well as the inclusion of Type Ia supernova yields), we use the O trends as calculated in the model, but scale Fe from the O outputs and the observed O/Fe logarithmic slope. For comparison, we will present results from a model with the na\"{\i}ve scaling Fe $\propto$ O, to show the effect of variations in the Pop II slope of O/Fe. In both cases, we use the observed Pop I relation $[{\rm O}/{\rm Fe}] \simeq -0.5 [{\rm Fe}/{\rm H}]$ over the range $[{\rm Fe}/{\rm H}] > -1$. \section {Galactic evolution of \li6} \label{sect:ev} Before presenting model results, some discussion is in order regarding how to compare the theory with the data. Since stellar \li6 abundances may have suffered some depletion, the observed \li6 represents a firm lower limit on the initial abundance. For an evolution model to be acceptable, it must therefore predict a \li6 abundance which lies at or above the observed levels (within errors). If a model is viable by these criteria, then the difference between the theory and the data quantifies the possible depletion. Model results for \li6 vs Fe appear in Figure \ref{fig:li6-fe}, for [O/Fe]-[Fe/H] Pop II slope $\omega_{\rm O/Fe} = -0.31$ (the proposed GCRN model) and $\omega_{\rm O/Fe} = 0$ for comparison. We see that GCRN does quite well in reproducing both solar and Pop II \li6 when O/Fe is allowed to evolve in Pop II. On the other hand, if O/Fe is constant, then the \li6-Fe slope is steeper and the model underproduces the Pop II \li6. Clearly, the O/Fe behavior in Pop II is crucial to determine accurately. Note that because of the large uncertainty in [Fe/H] for BD 26\hbox{${}^\circ$} 3578, this star does not at this time provide a stringent constraint. Another test of the GCRN model is to compare primary versus secondary nuclides, e.g., \li6/Be. Figure \ref{fig:li6_beb-fe} plots \li6/Be and \li6/B vs Fe for the two O/Fe models. We see that while the model with changing O/Fe is consistent with \li6/Be for solar and Pop II metallicities, the uncertainties in the data do not sufficiently discriminate between this and the model with constant Pop II O/Fe. 
However, in purely primary models (with constant O/Fe), one expects \li6/Be to be approximately constant with respect to [Fe/H] in Pop II, and that is clearly disfavored by the data, although there is only one star with both \li6 and Be determined. While there is no positive detection of B/\li6 in a halo star, the figure makes clear that this ratio is also a good test of the model. In Pop II, both ${\rm B} \approx \b11$ and \li6 are dominated by primary processes ($\nu$-process and $\alpha+\alpha$, respectively), and thus the \li6/B ratio changes much less strongly than does \li6/Be. Note that in our scenario, \li6 and Be are pure cosmic ray products. Consequently, the \li6/Be curve is a particularly clean prediction of the model, free of any normalization between different sources; indeed, even the normalization of the present-day cosmic ray flux strength drops out as a common factor. By contrast, the B/\li6 ratio does depend on the relative normalization between the GCRN and $\nu$-process yields (which is fixed so that \b11/\b10 equals the solar ratio 4.05 at [Fe/H] = 0). The results shown above indicate that the standard cosmic-ray origin for \li6BeB is in fact consistent with the data. This is contrary to the conclusions of Smith, Lambert \& Nissen (1998), who argued, on the basis of the solar ratio $\li6/{\rm Be}_\odot = 5.9$ and the value of this ratio for HD84937, $\li6/{\rm Be} \simeq 80 \gg \li6/{\rm Be}_\odot$, that an additional source of \li6 was necessary. This conclusion assumes the observed linear evolution of [Be/H] vs. [Fe/H] and the expected linear evolution of \li6 as a primary element due to $\alpha-\alpha$ fusion. In this case one would expect the \li6/Be ratio to be constant, which from a simple examination of Figure \ref{fig:li6_beb-fe} is clearly not the case. In standard GCRN (with constant O/Fe at low metallicities), \be9 is a secondary isotope, and given the linearity of [\li6], one should expect that \li6/Be is inversely proportional to Fe/H (i.e., to have a log slope of -1). However, if we take [O/Fe] = $\omega_{\rm O/Fe}$[Fe/H], then we would expect, up to additive constants (Fields \& Olive 1998), \begin{equation} [{\rm Be}] = 2 (1 + \omega_{\rm O/Fe}) \ {\rm [Fe/H]} \end{equation} and \begin{equation} [\li6] = (1 + \omega_{\rm O/Fe}) \ {\rm [Fe/H]} \end{equation} so that \begin{equation} [\li{6}/{\rm Be}] = - (1 + \omega_{\rm O/Fe}) \ {\rm [Fe/H]}~~. \end{equation} Now for the Israelian et al.\ (1998) value of $\omega_{\rm O/Fe} = -0.31$, we would predict a [\li{6}/{\rm Be}] logarithmic slope of $-0.69$ rather than $-1$, a dependence which is consistent with the data as shown in Figure \ref{fig:li6_beb-fe}. (The Boesgaard {\rm et al.}~ (1998) value is very similar, $\omega_{\rm O/Fe} = -0.35$.) While the case for a nonzero O/Fe Pop II slope has not been conclusively made, it is nevertheless striking that the reported $\omega_{\rm O/Fe}$ can explain all of the observed \li6, Be, and B evolution within a simple (and canonical!) model. The data as shown in the figures and compared to the models do not take into account any depletion of \li6. To be sure, there is still a great deal of uncertainty in the amount of depletion for both \li6 and \li7 as well as the relative depletion factor, $D_6/D_7$ (Chaboyer 1994, Vauclair \& Charbonnel 1995, Deliyannis {\rm et al.}~ 1996, Chaboyer 1998, Pinsonneault {\rm et al.}~ 1992, Pinsonneault {\rm et al.}~ 1998). 
In the remainder of this paper we examine to what extent the data (present or future) can tell us about the degree to which the lithium isotopes have been depleted and hence the implications for the primordial abundance of \li7. In addition, we will show that future data on the lithium isotopic ratio may go a long way in resolving some of the key uncertainties in GCRN. The observed lithium abundance can be expressed as \begin{equation} {\rm Li}_{\rm Obs} = D_7 ( {\rm \li7_{BB}} + {\rm \li7_{CR}}) + D_6 ( {\rm \li6_{BB}} + {\rm \li6_{CR}}) \label{li} \end{equation} where the $D_{6,7} < 1$ are the \li{6,7} depletion factors. Ignoring the depletion factors for the moment, we see that lithium (and in particular \li7) has two components, due to big bang and cosmic ray production. In principle, given a model of cosmic-ray nucleosynthesis, one could use the observed Be abundances in halo stars along with the model predictions of Be/Li and \li6/\li7 to extract a cosmic-ray contribution to \li7 and through (\ref{li}) the big bang abundance of \li7 (Walker {\rm et al.}~ 1993, Olive \& Schramm 1992). Unfortunately this procedure is very model-dependent since Li/Be can vary between 10 and $\sim 300$ depending on the details of the cosmic-ray sources and propagation--e.g., source spectra shapes, escape pathlength magnitude and energy dependence, and kinematics (Fields, Olive \& Schramm 1994). On the other hand, the \li6/\li7 ratio is a relatively model-independent prediction of cosmic-ray nucleosynthesis. With more data, one could use this model-independence to great advantage, as follows. Given enough \li6 Pop II data, one could use the observed \li6 evolution (1) to infer $\li7_{\rm CR}$ and thus $\li7_{\rm BB}$, and (2) to measure \li6/Be and thereby constrain in more detail the nature of early Galactic cosmic rays. In standard stellar models, Brown \& Schramm (1988) have argued that $D_6 \sim D_7^\beta$ with $\beta \approx 60$. Clearly in this case any observable depletion of \li7 would amount to the total depletion of \li6. Hence the observation of \li6 in HD84937 has served as a basis to limit the total amount of \li7 depletion (Steigman {\rm et al.}~ 1993, Lemoine {\rm et al.}~ 1997, Pinsonneault {\rm et al.}~ 1998). There are, however, many models based on diffusion and/or rotation which call for the depletion of \li6 and \li7 even in hot stars. The weakest constraint comes from assuming that depletion occurs entirely due to mixing, so the destruction of the Li isotopes is the same despite the greater fragility of \li6. Because \li6/\li7 $\sim 1$ in cosmic-ray nucleosynthesis, the observation of \li6 does exclude any model with extremely large \li6 depletion on the basis of the Spite plateau for \li7 up to [Fe/H] = -1.3 (Pinsonneault {\rm et al.}~ 1998, Smith {\rm et al.}~ 1998). However, barring an alternative source for the production of \li6, the data are in fact much more restrictive. At the 2$\sigma$ level, the model used to produce the evolutionary curve in Figure \ref{fig:li6-fe} would only allow a depletion of \li6 by 0.15 dex ($D_6 > 0.7$); since $D_7 \ge D_6$, this is also a lower limit to $D_7$. We note that with improved data on BeB as well, and knowing that $D_{\rm B} \ge D_{\rm Be} \ge D_7 \ge D_6$, one can further limit the degree of depletion in the lighter isotopes. Further constraints on $D_7$ become available if we adopt a model which relates \li6 and \li7 depletion. 
E.g., if we use $\log D_6 = -0.19 + 1.94 \log D_7$ as discussed in Pinsonneault {\rm et al.}~ (1998), the data in the context of the given model would not allow for any depletion of \li7. Of course there is uncertainty in the model as well. Using the Balmer line stellar parameters, we found (Fields \& Olive 1998) $\omega_{\rm O/Fe} = -0.46 \pm 0.15$. Using the value of $-0.46$, we determine at the 2$\sigma$ level (with respect to the \li6 data) that $\log D_6 > -0.32$, which would still limit $\log D_7 > -0.07$. Even under what most would assume is an extreme O/Fe dependence of $\omega_{\rm O/Fe} = -0.61$, \li6 depletion is limited to a factor of 3.5, corresponding to an upper limit on the depletion of \li7 of 0.2 dex. This is compatible with the upper limit in Lemoine {\rm et al.}~ (1997) though the argument is substantially different. It should be clear at this point that improved ($\equiv$ more) data on \li6 in halo stars can have a dramatic impact on our understanding of cosmic-ray nucleosynthesis and the primordial abundance of \li7. Coupled with improved data on the O/Fe ratio in these stars, we would be able to critically examine these models on the basis of their predictions of \li6 and \be9. \section{Conclusion} \label{sect:conclude} We have considered the evolution of \li6 in the context of standard Galactic cosmic ray nucleosynthesis. In this scenario, \li6 and \li7 have a primary origin, due to the dominance of $\alpha+\alpha$ in Pop II, while Be and \b10 are secondary (with \b11 primary due to the neutrino process). \li6 thus provides an excellent diagnostic of LiBeB origin, both by itself and in ratio to Be and B. We find that if O/Fe has a nonzero slope versus [Fe/H] in Pop II, as suggested by Israelian et al.\ (1998) and Boesgaard {\rm et al.}~ (1998), then standard GCRN provides a good fit to \li6/H and \li6/Be for both Pop II and solar data. On the other hand, a model with constant O/Fe in Pop II does poorly, illustrating the need to determine the O/Fe trend accurately. Given the evolution scheme proposed here, one can constrain both \li6 and \li7 depletion in halo stars. The predictions here are in good agreement with the observed \li6 data, uncorrected for depletion; it follows that in our model, the \li6 depletion cannot be very large: the abundance is reduced by a factor of 3.5 at the extreme, and more likely a factor of $<2$. Using the model discussed in Pinsonneault {\rm et al.}~ (1998), this leads to an upper limit on \li7 depletion of 0.2 dex. It is interesting to note the robustness of our conclusions regarding \li6 evolutionary constraints on Pop II Li depletion. As noted above, the GCRN model is not the only possible scenario for LiBeB production allowed by the current data. A class of sharply different scenarios is also viable, in which all of LiBeB are primary products through new mechanisms in addition to the standard GCRN. The \li6 evolution in one such model is considered in detail by Vangioni-Flam et al.\ (1998b), in an analysis very similar to our own. Interestingly, the two very different models yield similarly strong constraints. Thus the basic conclusion, that viable LiBeB evolution models imply small \li6 depletion, is quite robust. We wish to re-emphasize the utility of, and need for, more and better observations of \li6, Be, and B in Pop II. 
The ambiguity of the putative ``primary'' versus ``standard GCRN'' scenarios can be resolved with careful observations, which will also pave the way for sharper tests of Li depletion, a better knowledge of the primordial Li abundance, and a better understanding of early Galactic cosmic rays. Finally, as this volume celebrates the life and science of David Schramm, it is particularly fitting to point out his major role in the study of LiBeB origin and evolution. A single example of his impact is the prescient work of Reeves, Audouze, Fowler, \& Schramm (1973), which sweepingly laid out a paradigm for the origin of the light elements. The LiBeB origin proposed by Reeves et al.\ combined contributions from primordial \li7, cosmic-ray-produced \li6, Be, and \b10, and an additional stellar \li7 and \b11 source. This basic picture has served as the starting point for all subsequent work in the field, including the model presented here. \acknowledgements We are grateful to Elisabeth Vangioni-Flam and Michel Cass\'{e} for many useful discussions and comments on an earlier version of this work. We would also like to dedicate this work in the memory of David Schramm, an advisor to us both. He brought to the area of cosmic-ray nucleosynthesis the same insight and unparalleled enthusiasm he showed for all of his many research interests. This work was supported in part by DoE grant DE-FG02-94ER-40823 at the University of Minnesota. \begingroup \section*{\large \bf References} \bibitem Anders, E. \& Grevesse, N. 1989, Geochim. Cosmochem. Acta, 53, 197 \bibitem Andersen, J., Gustafsson, B., \& Lambert, D.L. 1984, A \& A, 136, 75 \bibitem Alonso, A., Arribas, S., \& Martinez-Roger, C. 1996, A \& AS, 117, 227 \bibitem Blackwell, D.E., {\rm et al.}~ 1990, A \& A, 232, 396 \bibitem Boesgaard, A.M., King, J.R., Deliyannis, C.P., \& Vogt, S.S. 1998, AJ, submitted \bibitem Bonifacio, P. \& Molaro, P. 1997, MNRAS, 285, 847 \bibitem Brown, L. \& Schramm, D.N. 1988, ApJ, 329, L103 \bibitem Cass\'{e}, M., Lehoucq, R., \& Vangioni-Flam, E. 1995, Nature, 373, 318 \bibitem Cayrel, R., Spite, M., Spite, F., Vangioni-Flam, E., Cass\'e, M., \& Audouze, J. 1998, A \& A, submitted \bibitem Chaboyer, B. 1994, ApJ, 432, L47 \bibitem Chaboyer, B. 1998, submitted, astro-ph/9803106 \bibitem Charbonnel, C., Vauclair, S. \& Zahn, J.P. 1992, AA, 255, 191 \bibitem Chiappini, C., Matteucci, F., Beers, T.C., \& Nomoto, K. 1998, ApJ, in press (astro-ph/9810422) \bibitem Delbourgo-Salvador, P. \& Vangioni-Flam, E. 1994, in ``Origin and Evolution of Elements'', eds.\ Prantzos {\rm et al.}~, Cambridge University Press, p. 52 \bibitem Deliyannis, C.P., Demarque, P., \& Kawaler, S.D. 1990, ApJS, 73, 21 \bibitem Deliyannis, C.P., King, J.R., \& Boesgaard, A.M. 1996, BAAS, 28, 916 \bibitem Fields, B.D., Kainulainen, K., Olive, K.A. \& Thomas, D. 1996, New Astronomy, 1, 77 \bibitem Fields, B.D., \& Olive, K.A. 1998, ApJ, submitted (astro-ph/9809277) \bibitem Fields, B.D., Olive, K.A., \& Schramm, D.N. 1994, ApJ, 435, 185 \bibitem Fuhrmann, K., Axer, M., \& Gehren, T. 1994, A \& A, 285, 585 \bibitem Israelian, G., Garc\'{\i}a-L\'{o}pez, R.J., \& Rebolo, R. 1998, ApJ, 507, 805 \bibitem Hobbs, L.M. \& Thorburn, J.A. 1994, ApJ, 428, L25 (HT94) \bibitem Hobbs, L.M. \& Thorburn, J.A. 1997, ApJ, 491, 772 (HT97) \bibitem Lemoine, M., Schramm, D.N., Truran, J.W., \& Copi, C.J. 1997, ApJ, 478, 554 \bibitem Matteucci, F., d'Antona, F. \& Timmes, F.X. 1995, A\&A, 303, 460 \bibitem Maurice, E., Spite, F., \& Spite, M. 1984, A \& A, 132, 278 \bibitem Meneguzzi, M., Audouze, J. 
\& Reeves, H. 1971, A\&A, 15, 337 \bibitem Molaro, P., Primas, F., \& Bonifacio, P. 1995, A \& A, 295, L47 \bibitem Nollett, K.M., Lemoine, M. \& Schramm, D.N. 1997, Phys. Rev. C56, 1144 \bibitem Olive, K.A., Prantzos, N., Scully, S., \& Vangioni-Flam, E. 1994, ApJ, 424, 666 \bibitem Olive, K.A. \& Schramm, D.N. 1992, Nature, 360, 439 \bibitem Pilachowski, C.A., Hobbs, L.M., \& De Young, D.S. 1989, ApJ, 345, L39 \bibitem Pilachowski, C.A., Sneden, C., \& Booth, J. 1993, ApJ, 407, 699 \bibitem Pinsonneault, M.H., Deliyannis, C.P. \& Demarque, P. 1992, ApJS, 78, 181 \bibitem Pinsonneault, M.H., Walker, T.P., Steigman, G., \& Narayanan, V.K. 1998, ApJ, submitted \bibitem Ramaty, R., Kozlovsky, B., \& Lingenfelter, R.E. 1995, ApJ, 438, L21 \bibitem Ramaty, R., Kozlovsky, B., Lingenfelter, R.E., \& Reeves, H. 1997, ApJ, 488, 730 \bibitem Rebolo, R., Molaro, P., \& Beckman, J.E. 1988, A \& A, 192, 192 \bibitem Reeves, H., Audouze, J., Fowler, W.A., \& Schramm, D.N. 1973, ApJ, 177, 909 \bibitem Reeves, H., Fowler, W.A., \& Hoyle, F. 1970, Nature, 226, 727 \bibitem Smith, V.V., Lambert, D.L., \& Nissen, P.E. 1993, ApJ, 408, 262 \bibitem Smith, V.V., Lambert, D.L., \& Nissen, P.E. 1998, ApJ, 506, 405 \bibitem Spite, F. \& Spite, M. 1982, AA, 115, 357 \bibitem Spite, M., Francois, P., Nissen, P.E., \& Spite, F. 1996, A \& A, 307, 172 \bibitem Steigman, G., Fields, B.D., Olive, K.A., Schramm, D.N., \& Walker, T.P. 1993, ApJ, 415, L35 \bibitem Thielemann, F.-K., Nomoto, K., \& Hashimoto, M. 1996, ApJ, 460, 408 \bibitem Thomas, D., Schramm, D.N., Olive, K.A. \& Fields, B.D., 1993, ApJ, 406, 569 \bibitem Vangioni-Flam, E., Cass\'e, M., Cayrel, R., Audouze, J., Spite, M., \& Spite, F. 1998b, New Astronomy, submitted \bibitem Vangioni-Flam, E., Cass\'e, M., Olive, K.A., \& Fields, B.D. 1996, ApJ, 468, 199 \bibitem Vangioni-Flam, E., Ramaty, R., Olive, K.A., \& Cass\'e, M. 1998a, A \& A, 337, 714 \bibitem Vauclair, S. \& Charbonnel, C. 1995, A \& A, 295, 715 \bibitem Walker, T.P., Steigman, G., Schramm, D.N., Olive, K.A., \& Fields, B.D. 1993, ApJ, 413, 562 \bibitem Walker, T.P., Steigman, G., Schramm, D.N., Olive, K.A., \& Kang, K. 1991, ApJ, 376, 51 \bibitem Woosley, S.E., Hartmann, D., Hoffman, R., \& Haxton, W. 1990, ApJ, 356, 272 \bibitem Woosley, S.E. \& Weaver, T.A. 1995, ApJS, 101, 181 \bibitem Zhai, M., \& Shaw, D. 1994, Meteoritics, 29, 607 \par \endgroup \newpage \begin{figure}[htb] \epsfysize=7.5truein \epsfbox{Li6Fe_2.ps} \caption{The \li6 evolution as a function of [Fe/H]. {\it Solid line}: the ``revised standard'' GCRN model. Here Fe is scaled from the calculated O to fit the observed [O/Fe]--[Fe/H] slope. {\it Dashed line}: the GCRN model with Fe $\propto$ O in Pop II. The error bars on the points are 2 sigma errors, and the spread in the points connected by lines shows the uncertainty due to stellar parameter choices.} \label{fig:li6-fe} \end{figure} \begin{figure}[htb] \epsfysize=7.5truein \epsfbox{Li6BeB_2.ps} \caption{The evolution of the \li6/Be and \li6/B ratios. Models are as in Figure \protect\ref{fig:li6-fe}. The error bars on the points are 2 sigma errors, and the spread in the points connected by lines shows the uncertainty due to stellar parameter choices.} \label{fig:li6_beb-fe} \end{figure} \end{document} 
\section{Introduction} \label{SecIntro} Heavy fermion (HF) and related intermediate valence (IV) behaviors are ubiquitous in metallic $f$-electron systems containing lanthanide or actinide ($\equiv M$) atoms with unstable valence.\cite{HFIV} The HF materials are typically intermetallic compounds containing Ce, Yb or U ions and are characterized at the lowest temperatures $T$ by a large and nearly $T$-independent spin susceptibility $\chi(T \rightarrow 0) \sim 10^{-2}$\,cm$^3$/(mol~$M$), and an extraordinarily large nearly $T$-independent electronic specific heat coefficient $\gamma(T \rightarrow 0) \sim 1$\,J/(mol~$M$)\,K$^2$, where $\gamma (T) \equiv C_{\rm e}(T)/T$ and $C_{\rm e}(T)$ is the electronic contribution to the measured specific heat at constant pressure $C_{\rm p}(T)$. Large quasiparticle effective masses $m^*$ of $\sim 100$--1000 electron masses $m_{\rm e}$ have been inferred from $\gamma(0)$ for the HF compounds and smaller values for the IV materials. The normalized ratio of $\chi(0)$ to $\gamma(0)$, the Sommerfeld--Wilson ratio\cite{Wilson1975} $R_{\rm W}$, is on the order of unity in HF and IV materials as in conventional metals, and is given by $R_{\rm W} = \pi^2 k_{\rm B}^2 \chi(0)/3\mu_{\rm eff}^2 \gamma(0)$, where $k_{\rm B}$ is Boltzmann's constant and $\mu_{\rm eff}$ is the effective magnetic moment of the Fermi liquid quasiparticles. For quasiparticles with (effective) spin $S = 1/2$, one obtains \begin{equation} R_{\rm W} = \frac{4 \pi^2 k_{\rm B}^2 \chi(0)}{3 g^2 \mu_{\rm B}^2 \gamma(0)}~~, \label{EqRW} \end{equation} where $g$ is the $g$-factor of the quasiparticles and $\mu_{\rm B}$ is the Bohr magneton. Since $R_{\rm W} \sim 1$ in many of the HF and IV compounds, $\chi$ and $C_{\rm e}$ at low temperatures are both probing the same low-energy heavy quasiparticle spin excitations. With increasing $T$ in the heaviest-mass systems, $\chi(T)$ crosses over to local-moment behavior and $\gamma$ decreases rapidly, on a temperature scale of $\sim 0.3$--30\,K\@. Heavy fermion behaviors are not expected for $d$-electron compounds because of the much larger spatial extent of $d$ orbitals than of $f$ orbitals and the resulting stronger hybridization with conduction electron states. Recently, however, in collaboration with other researchers, we have documented HF behaviors, characteristic of those of the heaviest mass $f$-electron HF systems, in the metallic\cite{Rogers1967} transition metal oxide compound LiV$_2$O$_4$ using $C_{\rm p}(T)$,\cite{Kondo1997} $\chi(T)$,\cite{Kondo1997,Kondo1998} $^7$Li and $^{51}$V NMR,\cite{Kondo1997,Mahajan1998} muon spin relaxation ($\mu$SR),\cite{Kondo1997,Merrin1998} and 4--295\,K crystallography\cite{Kondo1997,Kondo1998,Chmaissem1997} measurements. Independent crystallography and $\chi(T)$ measurements\cite{Ueda1997,Onoda1997} and NMR measurements\cite{Onoda1997,Fujiwara1997,Fujiwara1998} were reported nearly simultaneously by other groups, with similar results. LiV$_2$O$_4$ has the face-centered-cubic normal-spinel structure (space group $Fd{\bar 3}m$),\cite{Reuter1960} and is formally a $d^{1.5}$ system. The Li atoms occupy tetrahedral holes and the V atoms octahedral holes in a nearly cubic-close-packed oxygen sublattice, designated as Li[V$_2$]O$_4$. 
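As a rough numerical illustration of Eq.~(\ref{EqRW}) (an order-of-magnitude sketch only, assuming $S = 1/2$ quasiparticles with $g = 2$), the representative heavy fermion values quoted above, $\chi(0) \sim 10^{-2}$\,cm$^3$/(mol~$M$) and $\gamma(0) \sim 1$\,J/(mol~$M$)\,K$^2$, give
\begin{displaymath}
R_{\rm W} = \frac{\pi^2 k_{\rm B}^2 \chi(0)}{3 \mu_{\rm B}^2 \gamma(0)} \approx 0.7~~,
\end{displaymath}
i.e., of order unity, as stated above.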
The $C_{\rm e}(T)$ is extraordinarily large for a transition metal compound, $\gamma(1\,{\rm K})\approx 0.42$\,J/mol\,K$^2$, decreasing rapidly with $T$ to $\sim 0.1$\,J/mol\,K$^2$ at 30\,K.\cite{Kondo1997} As discussed extensively in Refs.~\onlinecite{Kondo1997} and~\onlinecite{Kondo1998}, from $\sim 50$--100\,K to 400\,K, $\chi(T)$ shows a Curie-Weiss-like [$\chi = C/(T - \theta)$] behavior corresponding to antiferromagnetically coupled ($\theta = -30$ to $-60$\,K) vanadium local magnetic moments with $S = 1/2$ and $g\approx 2$, but static magnetic ordering does not occur above 0.02\,K in magnetically pure LiV$_2$O$_4$, and superconductivity is not observed above 0.01\,K\@. To our knowledge, in addition to LiV$_2$O$_4$ the only other stoichiometric transition metal spinel-structure oxide which is metallic to low temperatures is the normal-spinel compound LiTi$_2$O$_4$.\cite{Johnston1976,Deschanvres1971,Cava1984,Dalton1994} In contrast to LiV$_2$O$_4$, this compound becomes superconducting at $T_{\rm c} \leq 13.7$\,K (Refs.~\onlinecite{Johnston1976,Johnston1973}) and has a comparatively $T$-independent and small $\chi(T)$ from $T_{\rm c}$ up to 300\,K. \cite{Johnston1976,Harrison1985,Heintz1989,Tunstall1994} The resistivity of thin films at 15\,K is (4.3--8.8)$\times 10^{-4}\,\Omega$\,cm.\cite{Inukai1982} The spinel system Li$_{1+x}$Ti$_{2-x}$O$_4$ with cation occupancy Li[Li$_x$Ti$_{2-x}$]O$_4$ exists from $x = 0$ to $x = 1/3$;\cite{Johnston1976,Deschanvres1971,Dalton1994} for $x = 1/3$,\cite{Bertaut1953} the oxidation state of the Ti is $+4$ and the compound is a nonmagnetic insulator. A zero-temperature superconductor-insulator transition occurs at $x \sim 0.1$--0.2.\cite{Johnston1976,Harrison1985,Heintz1989} In this paper, we report the details of our $C_{\rm p}(T)$ measurements on LiV$_2$O$_4$ and of the data analysis and theoretical modeling. We have now obtained data to 108\,K, which significantly extends our previous high-temperature limit of 78\,K.\cite{Kondo1997} We also present complementary linear thermal expansion $\alpha(T)$ measurements on this compound from 4.4 to 297\,K\@. We will assume that $C_{\rm p}(T)$ can be separated into the sum of electronic and lattice contributions, \begin{mathletters} \label{EqCsum:all} \begin{equation} C_{\rm p}(T) = C_{\rm e}(T) + C^{\rm lat}(T)~~, \label{EqCsum:a} \end{equation} \begin{equation} C_{\rm e}(T) \equiv \gamma(T)\,T~~. \label{EqCsum:b} \end{equation} \end{mathletters} In Ref.~\onlinecite{Kondo1997}, we reported $C_{\rm p}(T)$ measurements up to 108\,K on Li$_{4/3}$Ti$_{5/3}$O$_4$ which were used to estimate $C^{\rm lat}(T)$ in LiV$_2$O$_4$ so that $C_{\rm e}(T)$ could be extracted according to Eq.~(\ref{EqCsum:a}). In the present work, we report $C_{\rm p}(T)$ up to 108\,K for LiTi$_2$O$_4$, compare these data with those for Li$_{4/3}$Ti$_{5/3}$O$_4$, and obtain therefrom what we believe to be a more reliable estimate of $C^{\rm lat}(T)$ for LiV$_2$O$_4$. The experimental details are given in Sec.~\ref{SecExpDet}. An overview of our $C_{\rm p}(T)$ data for LiV$_2$O$_4$, LiTi$_2$O$_4$ and Li$_{4/3}$Ti$_{5/3}$O$_4$ is given in Sec.~\ref{SecOverview}. Detailed analyses of the data for the Li$_{1+x}$Ti$_{2-x}$O$_4$ compounds and comparisons with literature data are given in Sec.~\ref{SecLiTiO}, in which we also estimate $C^{\rm lat}(T)$ for LiV$_2$O$_4$. The $C_{\rm e}(T)$ and electronic entropy $S_{\rm e}(T)$ for LiV$_2$O$_4$ are derived in Sec.~\ref{SecLiVO}. 
The $\alpha(T)$ measurements are presented in Sec.~\ref{SecThermExp} and compared with the $C_{\rm p}(T)$ results and lattice parameter data versus temperature obtained from neutron diffraction measurements by Chmaissem {\it et al.}\cite{Chmaissem1997} From the combined $\alpha(T)$ and $C_{\rm p}(T)$ measurements on the same sample, we derive the Gr\"uneisen parameter from 4.4 to 108\,K and estimate \widetext \begin{table} \caption{Lattice parameter $a_0$ and structural [$f_{\rm imp}$ (Str)] and magnetic [$f_{\rm imp}$ (Mag)] impurity concentrations for the LiV$_2$O$_4$ samples studied in this work.\protect\cite{Kondo1998}} \begin{tabular}{cccccc} Sample No.& 2 & 3 & 4A & 5 & 6 \\ \hline Lattice parameter (\AA) & 8.23997(4) & 8.24100(15) & 8.24705(29) & 8.24347(25) & 8.23854(11)\\ Impurity phase& V$_2$O$_3$ & pure & V$_2$O$_3$ & V$_2$O$_3$ & V$_3$O$_5$ \\ $f_{\rm imp}$ (Str) (mol\,\%) & 1.83 & $< 1$ & 1.71 & $< 1$ & 2.20 \\ $f_{\rm imp}$ (Mag) (mol\,\%) & 0.22(1) & 0.118(2) & 0.77(2) & 0.472(8) & 0.0113(6) \\ \end{tabular} \label{TableLiVO} \end{table} \narrowtext \noindent the value at $T = 0$. Theoretical modeling of the $C_{\rm e}(T)$ data for LiV$_2$O$_4$ is given in Sec.~\ref{SecTheory}. Since the electrical resistivity data for single crystals of LiV$_2$O$_4$ indicate metallic behavior from 4\,K to 450\,K,\cite{Rogers1967} we first discuss the Fermi liquid description of this compound and derive the effective mass and other parameters for the current carriers at low temperatures in Sec.~\ref{SecFL}. This is followed by a more general discussion of the FL theory and its application to LiV$_2$O$_4$ at low $T$. In Sec.~\ref{SecMillis} we compare the predictions of Z\"ulicke and Millis\cite{Zulicke1995} for a quantum-disordered antiferromagnetically coupled metal with our $C_{\rm e}(T)$ results for LiV$_2$O$_4$. The isolated $S = 1/2$ impurity Kondo model predicts FL behavior at low temperatures and impurity local moment behavior at high temperatures. Precise predictions for the $\chi(T)$ and $C_{\rm e}(T)$ have been made for this model, and we compare our $C_{\rm e}(T)$ data with those predictions in Sec.~\ref{SecS1/2Kondo}. In Sec.~\ref{SecHTS} we consider a local moment model in which the magnetic specific heat of the $B$ sublattice of the $A$[$B_2$]O$_4$ spinel structure for spins $S = 1/2$ and $S = 1$ per $B$ ion is given by a high-temperature series expansion and the predictions are compared with the $C_{\rm e}(T)$ data for LiV$_2$O$_4$. A summary and concluding remarks are given in Sec.~\ref{SecConcl}. Unless otherwise noted, a ``mol'' refers to a mole of formula units. \section{Experimental Details} \label{SecExpDet} Polycrystalline LiV$_2$O$_4$ samples were prepared using conventional ceramic techniques described in detail elsewhere, where detailed sample characterizations and magnetic susceptibility results and analyses are also given.\cite{Kondo1998} A few of these results relevant to the present measurements, analyses and modeling are given in Table~\ref{TableLiVO}. Polycrystalline LiTi$_2$O$_4$ and Li$_{4/3}$Ti$_{5/3}$O$_4$ samples were synthesized using solid-state reaction techniques.\cite{Johnston1976} TiO$_2$ (Johnson Matthey, 99.99\,\%) was dried under a pure oxygen stream at 900\,$^\circ$C before use. This was mixed with Li$_2$CO$_3$ (Alfa, 99.999\,\%) in an appropriate ratio to produce either Li$_{4/3}$Ti$_{5/3}$O$_4$ or a precursor ``LiTiO$_{2.5}$" for LiTi$_2$O$_4$. 
The mixtures were then pressed into pellets and heated at 670\,$^\circ$C in an oxygen atmosphere for $\approx$ 1 day. The weight loss due to release of CO$_2$ was within 0.04 wt.\% of the theoretical value for LiTiO$_{2.5}$. However, for Li$_{4/3}$Ti$_{5/3}$O$_4$ additional firings at higher temperatures (up to 800\,$^\circ$C), after being reground and repelletized, were necessary. LiTi$_2$O$_4$ was prepared by heating \newpage \noindent pressed pellets of a ground mixture of the LiTiO$_{2.5}$ precursor and Ti$_2$O$_3$ in an evacuated and sealed quartz tube at 700\,$^\circ$C for one week and then air-cooling. The Ti$_2$O$_3$ was prepared by heating a mixture of TiO$_2$ and titanium metal powder (Johnson Matthey) at 1000\,$^\circ$C for one week in an evacuated and sealed quartz tube. Powder x-ray diffraction data were obtained using a Rigaku diffractometer (Cu K$\alpha$ radiation) with a curved graphite crystal monochromator. Rietveld refinements of the data were carried out using the program ``Rietan 97 (`beta' version)''.\cite{Izumi1993} The x-ray data for our sample of Li$_{4/3}$Ti$_{5/3}$O$_4$ showed a nearly pure spinel phase with a trace of TiO$_2$ (rutile) impurity phase. The two-phase refinement, assuming the cation distribution Li[Li$_{1/3}$Ti$_{5/3}$]O$_4$, yielded the lattice $a_0$ and oxygen $u$ parameters of the spinel phase 8.3589(3)\,\AA\ and 0.2625(3), respectively; the concentration of TiO$_2$ impurity phase was determined to be 1.3\,mol\%. The LiTi$_2$O$_4$ sample was nearly a single-phase spinel structure but with a trace of Ti$_2$O$_3$ impurity. A two-phase Rietveld refinement assuming the normal-spinel cation distribution yielded the spinel phase parameters $a_0$ = 8.4033(4)\,\AA\ and $u$ = 0.2628(8), and the Ti$_2$O$_3$ impurity phase concentration $<1$\,mol\%. Our crystal data are compared with those of Cava~{\it et al.}\cite{Cava1984} and Dalton~{\it et al.}\cite{Dalton1994} in Table~\ref{TableI}. The $C_{\rm p}(T)$ measurements were done on samples from four different batches of LiV$_2$O$_4$ using a conventional heat-pulse calorimeter, with Apiezon-N grease providing contact between the sample and the copper tray.\cite{Swenson1996} Additional $C_{\rm p}(T)$ data were obtained up to 108\,K on 0.88\,g of the isostructural nonmagnetic insulator spinel compound Li$_{4/3}$Ti$_{5/3}$O$_4$, containing only maximally oxidized Ti$^{+4}$, and 3.09\,g of the isostructural superconductor LiTi$_2$O$_4$ to obtain an estimate of the background lattice contribution. A basic limitation on the accuracy of these $C_{\rm p}$ data, except for LiV$_2$O$_4$ below 15\,K, was the relatively small (and sample-dependent) ratios of the heat capacities of the samples to those associated with the tray (the \widetext \begin{table} \caption{Characteristics of LiTi$_2$O$_4$ and Li$_{4/3}$Ti$_{5/3}$O$_4$ samples. 
Abbreviations: $a_0$ is the lattice parameter, $u$ the oxygen parameter, $\gamma$ the electronic specific heat coefficient, $\theta_0$ the zero-temperature Debye temperature, $T_{\rm c}$ and $\Delta T_{\rm c}$ the superconducting transition temperature and transition width, and $\Delta C_{\rm p}$ the specific heat jump at $T_{\rm c}$.} \begin{tabular}{lldcdddr} $a_0$ & $u$ & $\gamma$ & $\theta_0$ & $T_{\rm c}$ & $\Delta T_{\rm c}$ & $\Delta C_{\rm p}/\gamma T_{\rm c}$ & Ref.\\ (\AA) & & (mJ/mol\,K$^2$) & (K) & (K) & (K) & & \\ \hline &&&LiTi$_2$O$_4$\\ \hline 8.4033(4) & 0.2628(8) & 17.9(2) & 700(20) & 11.8 & $\lesssim$0.2 & 1.75(3) & This Work\\ 8.4033(1) & 0.26275(5) &&&&&&\onlinecite{Cava1984}\\ 8.41134(1) & 0.26260(4) &&&&&&\onlinecite{Dalton1994}\\ 8.407 & & 21.4 & 685 & 11.7 & 1.2 & 1.59 & \onlinecite{McCallum1976}\\ & & 22.0 & 535 & 12.4 & 0.32 & 1.57 &\onlinecite{Heintz1989}\\ & 0.26290(6) (300\,K)&&&&&&\onlinecite{Tunstall1994}\\ & 0.26261(5) (6\,K)&&&&&&\onlinecite{Tunstall1994}\\ \hline &&&Li$_{4/3}$Ti$_{5/3}$O$_4$\\ \hline 8.3589(3) & 0.2625(3) & 0. & 725(20) &&&&This Work\\ 8.35685(2) & 0.26263(3) &&&&&&\onlinecite{Dalton1994}\\ 8.359 & & 0. & 610 &&&&\onlinecite{McCallum1976}\\ & & 0.05 & 518 &&&&\onlinecite{Heintz1989}\\ \end{tabular} \label{TableI} \end{table} \narrowtext \noindent ``addenda''). For LiV$_2$O$_4$ sample 6, this ratio decreased from 40 near 1\,K to 1.0 at 15\,K to a relatively constant 0.2 above 40\,K\@. For the superconducting LiTi$_2$O$_4$ sample, this ratio was 0.45 just above $T_{\rm c}$ ($= 11.8$\,K), and increased to 0.65 at 108\,K\@. For the nonmagnetic insulator Li$_{4/3}$Ti$_{5/3}$O$_4$ sample, this ratio varied from 0.03 to 0.12 to 0.2 at 8, 20 and 108\,K, respectively. These factors are important since small ($\pm 0.5$\%) systematic uncertainties in the addenda heat capacity can have differing effects on the $C_{\rm p}(T)$ measured for the different samples, even though the precision of the raw heat capacity measurements (as determined from fits to the data) is better than 0.25\%. The linear thermal expansion coefficient of LiV$_2$O$_4$ sample~6 was measured using a differential capacitance dilatometer.\cite{Swenson1996,Swenson1998} All data were taken isothermally ($T$ constant to 0.001\,K). The absolute accuracy of the measurements is estimated to be better than 1\%. \section{Specific Heat Measurements} \label{SecExpRes} \subsection{Overview} \label{SecOverview} An overview of our $C_{\rm p}(T)$ measurements on LiV$_2$O$_4$ sample 2, run 2 (1.26--78\,K), sample~6 (1.16--108\,K), and LiTi$_2$O$_4$ and Li$_{4/3}$Ti$_{5/3}$O$_4$ up to 108\,K, is shown in plots of $C_{\rm p}(T)$ and $C_{\rm p}(T)/T$ in Figs.~\ref{FigCpSumm}(a) and~(b), respectively. Our data for LiTi$_2$O$_4$ and Li$_{4/3}$Ti$_{5/3}$O$_4$ are generally in agreement with those of McCallum {\em et al.}\cite{McCallum1976} which cover the range up to $\sim 25$\,K\@. For LiTi$_2$O$_4$ above $T_{\rm c} = 11.8$\,K (see below) and for Li$_{4/3}$Ti$_{5/3}$O$_4$, one sees from Fig.~\ref{FigCpSumm}(a) a smooth monotonic increase in $C_{\rm p}$ up to 108\,K\@. From Fig.~\ref{FigCpSumm}(b), the $C_{\rm p}$ of the nonmagnetic insulator Li$_{4/3}$Ti$_{5/3}$O$_4$ is smaller than that of metallic LiTi$_2$O$_4$ up to $\sim 25$\,K, is larger up to $\sim 45$\,K and then becomes smaller again at higher temperatures. 
Since $C_{\rm e} = 0$ in Li$_{4/3}$Ti$_{5/3}$O$_4$ and $C_{\rm e}(T)$ in LiTi$_2$O$_4$ cannot be negative, \begin{figure} \epsfxsize=3.1 in \epsfbox{Fig01.eps} \vglue 0.1in \caption{Overview of the molar specific heat $C_{\rm p}$ (a) and $C_{\rm p}/T$ (b) {\it vs.}\ temperature $T$ for LiV$_2$O$_4$ samples~2 ($\bullet$) and~6 ($\circ$) and the reference compounds LiTi$_2$O$_4$ ($\bullet$), a metallic superconductor, and Li$_{4/3}$Ti$_{5/3}$O$_4$ ($\circ$), a nonmagnetic insulator. The solid curves are polynomial fits to the data for LiV$_2$O$_4$ sample~6 and LiTi$_2$O$_4$. The dashed curve in (b) is the inferred normal state $C_{\rm p}/T$ below $T_{\rm c}$ for LiTi$_2$O$_4$.\label{FigCpSumm}} \end{figure} \noindent it follows from Eq.~(\ref{EqCsum:a}) that $C^{\rm lat}(T)$ and hence the lattice dynamics are significantly different in LiTi$_2$O$_4$ compared with Li$_{4/3}$Ti$_{5/3}$O$_4$. The data for LiV$_2$O$_4$ in Fig.~\ref{FigCpSumm}(b) are shifted upwards from the data for the Ti spinels, with a strong upturn in $C_{\rm p}(T)/T$ below $\sim 25$\,K\@. These data indicate a very large $\gamma(T\rightarrow 0)$. Comparison of $C_{\rm p}(T)/T$ for LiV$_2$O$_4$ and LiTi$_2$O$_4$ at the higher temperatures $> 30$\,K indicates that a large $\gamma(T)$ persists in LiV$_2$O$_4$ up to our maximum measurement temperature of 108\,K\@. In the following, we begin our analyses with the data for the Li$_{1+x}$Ti$_{2-x}$O$_4$ compounds because we extract a lattice specific heat from these materials as a reference for LiV$_2$O$_4$. \subsection{Li$_{1+x}$Ti$_{2-x}$O$_4$} \label{SecLiTiO} In the present paper, our $C_{\rm p}(T)$ data for LiTi$_2$O$_4$ and Li$_{4/3}$Ti$_{5/3}$O$_4$ are most important for determining the lattice contribution $C^{\rm lat}(T)$ to $C_{\rm p}(T)$ of LiV$_2$O$_4$. At low temperatures, the $C_{\rm p}(T)$ of a conventional nonmagnetic, nonsuperconducting material is\cite{Gopal1966} \begin{equation} C_{\rm p}(T) = A_1 T + A_3 T^3 + A_5 T^5 + A_7 T^7 + \cdots~~, \label{EqCLoT} \end{equation} where $\gamma \equiv A_1$ and $\beta \equiv A_3$. From Eqs.~(\ref{EqCsum:all}), the first term in Eq.~(\ref{EqCLoT}) is $C_{\rm e}(T)$, the second corresponds to the ideal Debye lattice contribution $C^{\rm lat}(T\rightarrow 0)$ and the following terms represent dispersion in the lattice properties.\cite{Barron1980} The zero-temperature Debye temperature $\theta_0$ is given by\cite{Gopal1966} $\theta_0 = (1.944 \times 10^6 r/\beta)^{1/3}$, where $r$ is the number of atoms per formula unit ($r = 7$ here) and $\beta$ is in units of mJ/mol\,K$^4$. Equation~(\ref{EqCLoT}) suggests the commonly used plot of $C_{\rm p}/T$ versus $T^2$ to obtain the parameters $\gamma$ and $\beta$. Unfortunately, the very small heat capacity of the small Li$_{4/3}$Ti$_{5/3}$O$_4$ sample and the occurrence of the superconducting transition in LiTi$_2$O$_4$ at 11.8\,K complicate the use of this relation to determine $C^{\rm lat}(T)$ for these presumably similar materials below $\approx 12$\,K\@. The $C_{\rm p}(T)/T$ of LiTi$_2$O$_4$ below 20\,K is plotted versus $T$ and $T^2$ in Fig.~\ref{FigC/TLi1}(a) and (b), respectively. The superconducting transition at $T_{\rm c} = 11.8$\,K is seen to be pronounced and very sharp ($\Delta T_{\rm c} \lesssim 0.2$\,K). 
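For orientation, the conversion between $\beta$ and $\theta_0$ used in the analysis below can be illustrated with a worked example: for $r = 7$ and $\theta_0 = 700$\,K (the LiTi$_2$O$_4$ value obtained below), the above Debye relation gives
\begin{displaymath}
\beta = \frac{1.944\times 10^{6}\,r}{\theta_0^{3}} = \frac{1.944\times 10^{6}\times 7}{(700)^{3}} \approx 0.040~{\rm mJ/mol\,K^4}~~.
\end{displaymath}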
The dotted line extrapolation of the normal state ($T > 11.8$\,K) data to $T = 0$ shown in Figs.~\ref{FigCpSumm}(b) and~\ref{FigC/TLi1} uses Eq.~(\ref{EqCLoT}), equality of the superconducting and normal state entropy at $T_{\rm c}$, $S(11.8\,{\rm K}) = 241(1)$\,mJ/mol\,K, and continuity considerations with $C_{\rm p}(T)/T$ above $T_{\rm c}$, from which we also obtain estimates of $\gamma$ and $\beta$. Although we cannot rule out a $T$-dependence of $\gamma$, we assume here that $\gamma$ is independent of $T$. While $\gamma$ [$= 17.9$(2)\,mJ/mol\,K$^2$] appears to be quite insensitive to addenda uncertainties, $\theta_0$ [$= 700$(20)\,K] is less well-defined. Our value for $\gamma$ is slightly smaller than the values of 20--22\,mJ/mol\,K$^2$ reported earlier for LiTi$_2$O$_4$,\cite{Heintz1989,McCallum1976} as shown in Table~\ref{TableI}. From the measured superconducting state $C_{\rm p}(T_{\rm c}) = 684(2)$\,mJ/mol\,K and normal state $C_{\rm p}(T_{\rm c}) = 315(1)$\,mJ/mol\,K, the discontinuity in $C_{\rm p}$ at $T_{\rm c}$ is given by $\Delta C_{\rm p}/T_{\rm c} = 31.3(3)$\,mJ/mol\,K$^2$, yielding $\Delta C_{\rm p}/\gamma T_{\rm c} = 1.75(3)$ which is slightly larger than previous estimates in Table~\ref{TableI}. According to Eqs.~(\ref{EqCsum:all}), the lattice specific heat of LiTi$_2$O$_4$ above $T_{\rm c}$ is given by $C^{\rm lat}(T) = C_{\rm p}(T) - \gamma T$. The $C^{\rm lat}(T)$ derived for LiTi$_2$O$_4$ below 12\,K is consistent within experimental uncertainties with the measured $C^{\rm lat}(T)$ of Li$_{4/3}$Ti$_{5/3}$O$_4$ in the same temperature range after accounting for the formula weight difference. The low-$T$ $C_{\rm p}(T)/T = C^{\rm lat}(T)/T$ for Li$_{4/3}$Ti$_{5/3}$O$_4$ is plotted in Figs.~\ref{FigC/TLi1}. The $\theta_0 = 725$(20)\,K found for Li$_{4/3}$Ti$_{5/3}$O$_4$ is slightly larger than that for LiTi$_2$O$_4$, as expected. A polynomial fit to the $C_{\rm p}(T)$ of Li$_{4/3}$Ti$_{5/3}$O$_4$ above 12\,K is shown by the dashed curves in Figs.~\ref{FigC/TLi1}. The uncertainties in the data and analyses for the Ti spinels have little effect on the analyses of $C_{\rm p}(T)$ for LiV$_2$O$_4$ in the following Sec.~\ref{SecLiVO}, since as Figs.~\ref{FigCpSumm} suggest, $C^{\rm lat}(T)$ for LiV$_2$O$_4$ is small compared to $C_{\rm e}(T)$ of this compound at low temperatures. To quantify the difference above $\sim 12$\,K between the $C^{\rm lat}(T)$ of LiTi$_2$O$_4$ and Li$_{4/3}$Ti$_{5/3}$O$_4$ noted above in \begin{figure} \epsfxsize=3.3 in \epsfbox{Fig02.eps} \vglue 0.1in \caption{Expanded plots below 20\,K of the molar specific heat divided by temperature $C_{\rm p}/T$ {\it vs.}\ temperature $T$ of LiTi$_2$O$_4$ and Li$_{4/3}$Ti$_{5/3}$O$_4$ from Fig.~\protect\ref{FigCpSumm}. The solid curves are polynomial fits to the data for LiTi$_2$O$_4$, whereas the dotted curve is the inferred normal state behavior below $T_{\rm c} = 11.8$\,K\@. The dashed curve is a polynomial fit to the data for Li$_{4/3}$Ti$_{5/3}$O$_4$ above 12\,K\@.\label{FigC/TLi1}} \end{figure} \noindent Sec.~\ref{SecOverview}, in Fig.~\ref{FigDClatt} is plotted the difference $\Delta C^{\rm lat}(T)$ between the measured $C_{\rm p}(T)$ of Li$_{4/3}$Ti$_{5/3}$O$_4$ and $C^{\rm lat}(T)$ of LiTi$_2$O$_4$. The shape of $\Delta C^{\rm lat}(T)$ in Fig.~\ref{FigDClatt} below $\sim 30$\,K is similar to that of an Einstein specific heat, but such a specific heat saturates to the Dulong-Petit limit at high $T$ and does not decrease with $T$ as the data do above 40\,K\@.
These observations suggest that intermediate-energy phonon modes in LiTi$_2$O$_4$ at some energy $k_{\rm B} T_{\rm E2}$ split in Li$_{4/3}$Ti$_{5/3}$O$_4$ into higher ($k_{\rm B} T_{\rm E3}$) and lower ($k_{\rm B} T_{\rm E1}$) energy modes, resulting from the Li-Ti atomic disorder on the octahedral sites in Li$_{4/3}$Ti$_{5/3}$O$_4$ and/or from the difference in the metallic character of the two compounds. Following this interpretation, we model the data in Fig.~\ref{FigDClatt} as the difference $\Delta C^{\rm lat}_{\rm Einstein}$ between the Einstein heat capacities of two Einstein modes with Einstein temperatures of $T_{\rm E1}$ and $T_{\rm E2}$ (neglecting the modes at high energy $k_{\rm B} T_{\rm E3}$), given by\cite{Gopal1966} \begin{equation} \Delta C^{\rm lat}_{\rm Einstein} = 3rR\Bigg[\frac{x_1(T_{\rm E1}/2T)^2}{\sinh^2(T_{\rm E1}/2T)} - \frac{x_2(T_{\rm E2}/2T)^2}{\sinh^2(T_{\rm E2}/2T)}\Bigg]~~, \label{EqDClattFit} \end{equation} where $R$ is the molar gas constant, $r$ = 7 atoms/formula unit and $x_1$ and $x_2$ are the fractions of the total number of phonon modes shifted to $T_{\rm E1}$ and away from $T_{\rm E2}$, respectively. A reasonable fit of the data by Eq.~(\ref{EqDClattFit}) was obtained with the parameters $x_1$ = 0.012, $T_{\rm E1} = 110$\,K, $x_2 = 0.018$ and $T_{\rm E2} = 240$\,K; the fit is shown as the solid curve in Fig.~\ref{FigDClatt}. The model then predicts that a fraction $(x_2 - x_1)/x_2 \sim 0.3$ of the modes removed at energy $k_{\rm B} T_{\rm E2}$ are moved to an energy $k_{\rm B} T_{\rm E3} \gg k_{\rm B} T_{\rm E2}$. \begin{figure} \epsfxsize=3.1 in \epsfbox{Fig03.eps} \vglue 0.1in \caption{The difference $\Delta C^{\rm lat}$ between the lattice specific heats of Li$_{4/3}$Ti$_{5/3}$O$_4$ and LiTi$_2$O$_4$ {\it vs}.\ temperature $T$. The solid curve is a fit to the data by the difference between two Einstein specific heats in Eq.~(\protect\ref{EqDClattFit}), whereas the dashed curve is the Schottky specific heat of a two-level system in Eq.~(\protect\ref{EqDClattSchFit}). The error bars represent $\pm$1\% of the measured $C_{\rm p}(T)$ for Li$_{4/3}$Ti$_{5/3}$O$_4$.\label{FigDClatt}} \end{figure} An alternative parametrization of the experimental $\Delta C^{\rm lat}(T)$ data can be given in terms of the specific heat of a two-level system, described by the Schottky function\cite{Gopal1966} \begin{equation} \Delta C^{\rm lat}_{\rm Schottky} = xrR\bigg({g_0\over g_1}\bigg)\bigg({\delta\over T}\bigg)^2\frac{{\rm e}^{\delta/T}}{\big[1 + (g_0/g_1){\rm e}^{\delta/T}\big]^2}~~, \label{EqDClattSchFit} \end{equation} where $x$ is the atomic fraction of two-level sites, $g_0$ and $g_1$ are respectively the degeneracies of the ground and excited levels and $\delta$ is the energy level splitting in temperature units. Fitting Eq.~(\ref{EqDClattSchFit}) to the data in Fig.~\ref{FigDClatt}, we find $g_1/g_0 = 4$, $x = 0.012$ and $\delta = 117$\,K\@. The fit is shown as the dashed curve in Fig.~\ref{FigDClatt}. The accuracy of our $\Delta C^{\rm lat}(T)$ data is not sufficient to discriminate between the applicability of the Einstein and Schottky descriptions. \subsection{LiV$_2$O$_4$} \label{SecLiVO} Specific heat $C_{\rm p}(T)$ data were obtained for samples from four batches of LiV$_2$O$_4$. Our first experiment was carried out on sample~2 (run~1) with mass 5~g. 
The $C_{\rm p}(T)$ was found to be so large at low $T$ (the first indication of heavy fermion behavior in this compound from these measurements) that the correspondingly long thermal relaxation times limited our measurements to the 2.23--7.94\,K temperature range. A smaller piece of sample~2 (0.48~g) was then measured (run~2) from 1.16 to 78.1\,K\@. Data for samples from two additional batches (sample~3 of mass 0.63~g, 1.17--29.3\,K, and sample~4A of mass 0.49~g, 1.16--39.5\,K) were also obtained. Subsequent to the theoretical modeling of the data for sample~3 described below in Sec.~\ref{SecTheory}, we obtained a complete data set from 1.14 to 108\,K for sample~6 with mass 1.1\,g from a fourth batch. A power series fit to the $C_{\rm p}(T)$ data for sample~6 is shown as solid curves in Fig.~\ref{FigCpSumm}. We have seen above that $C^{\rm lat}(T)$ of LiTi$_2$O$_4$ is significantly different from that of Li$_{4/3}$Ti$_{5/3}$O$_4$. Since LiV$_2$O$_4$ is a metallic normal-spinel compound with cation occupancies Li[V$_2$]O$_4$ as in Li[Ti$_2$]O$_4$, and since the formula weight of metallic LiTi$_2$O$_4$ is much closer to that of LiV$_2$O$_4$ than is that of the insulator Li$_{4/3}$Ti$_{5/3}$O$_4$, we expect that the lattice dynamics and $C^{\rm lat}(T)$ of LiV$_2$O$_4$ are much better approximated by those of LiTi$_2$O$_4$ than of Li$_{4/3}$Ti$_{5/3}$O$_4$. Additionally, more precise and accurate $C_{\rm p}(T)$ data were obtained for LiTi$_2$O$_4$ than for Li$_{4/3}$Ti$_{5/3}$O$_4$ because the measured mass of the former compound was a factor of three larger than that of the latter. Therefore, we will assume in the following that the $C^{\rm lat}(T)$ of LiV$_2$O$_4$ from 0--108\,K is identical with that given above for LiTi$_2$O$_4$. We do not attempt to correct for the influence of the small formula weight difference of 3.5\% between these two compounds on $C^{\rm lat}(T)$; this difference would only be expected to shift the Debye temperature by $\lesssim 1.8$\%, which is on the order of the accuracy of the high temperature $C_{\rm p}(T)$ data. The $C_{\rm e}(T)$ of LiV$_2$O$_4$ is then obtained using Eq.~(\ref{EqCsum:a}). The $C_{\rm e}(T)$ data for samples~2 (run~2) and~6 of LiV$_2$O$_4$, obtained using Eqs.~(\ref{EqCsum:all}), are shown up to 108\,K in plots of $C_{\rm e}(T)$ and $C_{\rm e}(T)/T$ {\it vs.}\ $T$ in Figs.~\ref{FigCe}(a) and (b), respectively. An expanded plot of $C_{\rm e}(T)$ below 9\,K for LiV$_2$O$_4$ is shown in Fig.~\ref{FigCe2}(a), where data for sample~2 (run~1) and sample~3 are also included. The data for samples~2 and~3 are seen to be in agreement to within about 1\%. However, there is a small positive curvature in the data for sample~2 below $\sim 3$\,K, contrary to the small negative curvature for sample~3. This difference is interpreted to reflect the influence of the larger magnetic defect concentration present in sample~2 as compared with that in sample~3, see Table~\ref{TableLiVO}.\cite{Kondo1998} We therefore believe that the $C_{\rm e}(T)$ data for sample~3 more closely reflect the intrinsic behavior of defect-free LiV$_2$O$_4$ than do the data for sample~2; all fits of theoretical models to the $C_{\rm e}(T)$ of LiV$_2$O$_4$ below 30\,K presented in Sec.~\ref{SecTheory} below are therefore done using the data for sample~3. As seen \begin{figure} \epsfxsize=3.3 in \epsfbox{Fig04.eps} \vglue 0.1in \caption{Electronic specific heat $C_{\rm e}$ (a) and $C_{\rm e}/T$ (b) {\it vs.}\ temperature $T$ for LiV$_2$O$_4$ samples~2 (run~2) and~6.
The error bars in (a) represent $\pm$1\% of the measured $C_{\rm p}(T)$ for LiV$_2$O$_4$.\label{FigCe}} \end{figure} \noindent in Fig.~\ref{FigCe2}(a), the $C_{\rm e}(T)$ data for sample~6 lie somewhat higher than the data for the other samples below about 4\,K but are comparable with those for the other samples at higher temperatures. This difference is also reflected in the magnetic susceptibilities $\chi(T)$,\cite{Kondo1998} where $\chi(T)$ for sample~6 is found to be slightly larger than those of other samples. To obtain extrapolations of the electronic specific heat to $T = 0$, the $C_{\rm e}(T)/T$ data in Fig.~\ref{FigCe2} from 1 to 10\,K for samples~3 and~6 were fitted by the polynomial \begin{equation} {C_{\rm e}(T)\over T} = \gamma(0) + \sum_{n=1}^5 C_{2n} T^{2n}~~, \label{EqCeFit} \end{equation} yielding \begin{mathletters} \label{EqGam(0):all} \begin{equation} \gamma(0) = 426.7(6)\,{\rm mJ/mol\,K^2}~~{\rm (sample\ 3)}~,\label{EqGam(0):a} \end{equation} \begin{equation} \gamma(0) = 438.3(5)\,{\rm mJ/mol\,K^2}~~{\rm (sample\ 6)}~.\label{EqGam(0):b} \end{equation} \end{mathletters} \begin{figure} \epsfxsize=3.1 in \epsfbox{Fig05.eps} \vglue 0.1in \caption{Expanded plot below 9\,K of the $C_{\rm e}/T$ {\it vs.}\ $T$ data for LiV$_2$O$_4$ samples~2, 3 and~6. The solid and dashed curves are polynomial fits to the 1.1--10\,K data for samples~3 and~6, respectively.\label{FigCe2}} \end{figure} \noindent The fits for samples~3 and~6 are respectively shown by solid and dashed curves in Fig.~\ref{FigCe2}. The $\gamma(0)$ values are an order of magnitude or more larger than typically obtained for transition metal compounds, and are about 23 times larger than found above for LiTi$_2$O$_4$. The $T$-dependent electronic entropy $S_{\rm e}(T)$ of LiV$_2$O$_4$ was obtained by integrating the $C_{\rm e}(T)/T$ data for sample 6 in Fig.~\ref{FigCe}(b) with $T$; the extrapolation of the $C_{\rm e}(T)/T$ {\it vs.}\ $T$ fit for sample~6 in Fig.~\ref{FigCe2} from $T = 1.16$\,K to $T=0$ yields an additional entropy of $S_{\rm e}($1.16\,K) = 0.505\,J/mol\,K\@. The total $S_{\rm e}(T)$ is shown up to 108\,K in Fig.~\ref{FigSe}; these data are nearly identical with those of sample~2 (run~2) up to the maximum measurement temperature of 78~K for that sample (not shown). The electronic entropy at the higher temperatures is large. For example, if LiV$_2$O$_4$ were to be considered to be a strictly \begin{figure} \epsfxsize=3.1 in \epsfbox{Fig06.eps} \vglue 0.1in \caption{Electronic entropy $S_{\rm e}$ of LiV$_2$O$_4$ sample~6 versus temperature $T$ ($\bullet$), obtained by integrating the $C_{\rm e}/T$ data for sample~6 in Fig.~\protect\ref{FigCe}(b) with $T$.\label{FigSe}} \end{figure} \begin{figure} \epsfxsize=3.2 in \epsfbox{Fig07.eps} \vglue 0.1in \caption{Measured specific heat divided by temperature $C_{\rm p}/T$ {\it vs.}\ temperature $T$ for LiV$_2$O$_4$ sample~4A; corresponding data for sample~2 from Fig.~\protect\ref{FigCpSumm} are shown for comparison. The lines are guides to the eye.\label{FigSam4A}} \end{figure} \noindent localized moment system with one spin $S = 1/2$ per V atom, then the maximum electronic (spin) entropy would be 2$R\ln(2)$, which is already reached at about 65\,K as shown by comparison of the data with the horizontal dashed line in Fig.~\ref{FigSe}. Our $C_{\rm p}(T)$ data for one sample (sample~4A) of LiV$_2$O$_4$ were anomalous. These are shown in Fig.~\ref{FigSam4A} along with those of sample~2 (run~2) for comparison. 
Contrary to the $C_{\rm p}(T)/T$ data for sample~2, the data for sample~4A show a strong upturn below $\sim 5$\,K and a peak at about 29\,K\@. We have previously associated the first type of effect with significant ($\sim 1$\,mol\%) concentrations of paramagnetic defects.\cite{Kondo1997} Indeed, Table~\ref{TableLiVO} shows that this sample has by far the highest magnetic impurity concentration of all the samples we studied in detail. The anomalous peak at 29\,K might be inferred to be due to small amounts of impurity phases (see Table~\ref{TableLiVO}). However, the excess entropy $\Delta S$ under the peak is rather large, $\Delta S \sim 0.9\,{\rm J/mol\,K} \approx 0.16R$ln(2). We also note that the height of the anomaly above ``background'' is at least an order of magnitude larger than would be anticipated due to a few percent of V$_4$O$_7$ or V$_5$O$_9$ impurity phases which order antiferromagnetically with N\'eel temperatures of 33.3 and 28.8\,K, respectively.\cite{Khattak1978} It is possible that the 29\,K anomaly is intrinsic to the spinel phase in this particular sample; in such a case Li-V antisite disorder and/or other types of crystalline defects would evidently be involved. As seen in Table~\ref{TableLiVO}, this sample has by far the largest room temperature lattice parameter of all the samples listed, which may be a reflection of a slightly different stoichiometry and/or defect distribution or concentration from the other samples. Although these $C_{\rm p}(T)$ data for sample~4A will not be discussed further in this paper, the origin of the anomaly at 29\,K deserves further investigation. \begin{figure} \epsfxsize=3.2 in \epsfbox{Fig08.eps} \vglue 0.1in \caption{Linear thermal expansion coefficient $\alpha$ (left-hand scales) and $\alpha/T$ (right-hand scales) versus temperature $T$ for LiV$_2$O$_4$ sample~6 from 4.4 to 297\,K (a) and 4.4 to 50\,K (b). The solid curves are the fit to the $\alpha(T)$ data by a polynomial.} \label{FigAlpha3} \end{figure} \section{Thermal Expansion Measurements} \label{SecThermExp} The linear thermal expansion coefficient $\alpha(T)$ of LiV$_2$O$_4$ sample~6 was measured between 4.4 and 297\,K\@. Figure~\ref{FigAlpha3}(a) shows $\alpha(T)$ and $\alpha(T)/T$ over this $T$ range, and Fig.~\ref{FigAlpha3}(b) shows expanded plots below 50\,K\@. At 297\,K, $\alpha = 12.4 \times 10^{-6}$\,K$^{-1}$, which may be compared with the value $\alpha \approx 15.6 \times 10^{-6}$\,K$^{-1}$ obtained for LiTi$_2$O$_4$ between 293 and 1073\,K from x-ray diffraction measurements.\cite{Roy1977} Upon cooling from 297\,K to about 25\,K, $\alpha$ of LiV$_2$O$_4$ decreases as is typical of conventional metals.\cite{Barron1980} However, $\alpha(T)$ nearly becomes negative with decreasing $T$ at about 23\,K\@. This trend is preempted upon further cooling below $\sim 20$\,K, where both $\alpha(T)$ and $\alpha(T)/T$ exhibit strong increases. The strong increase in $\alpha(T)$ below 20\,K was first observed by Chmaissem~{\it et~al}.\cite{Chmaissem1997} from high-resolution neutron diffraction data, which motivated the present $\alpha(T)$ measurements. We fitted our $\alpha(T)$ data by a polynomial in $T$ over three contiguous temperature ranges and obtained the fit shown as the solid curves in Figs.~\ref{FigAlpha3}. From the fit, we obtain \begin{figure} \epsfxsize=3.2 in \epsfbox{Fig09.eps} \vglue 0.1in \caption{Lattice parameter $a_{\rm o}$ versus temperature $T$ from 4 to 297\,K (a) and an expanded plot from 4 to 100\,K (b) for LiV$_2$O$_4$. 
The filled circles are the neutron diffraction measurements of sample~5 by Chmaissem~et~al.\protect\cite{Kondo1997,Chmaissem1997} The solid curve is the linear thermal dilation obtained from our capacitance dilatometer measurements of sample~6, assuming $a_{\rm o}(0) = 8.22670$\,\AA.} \label{FigAlpha2} \end{figure} \noindent $\lim_{T\to 0}\alpha(T)/T = 2.00 \times 10^{-7}\,{\rm K}^{-2}$. Shown as the solid curve in Fig.~\ref{FigAlpha2}(a) is the linear thermal dilation expressed in terms of the lattice parameter $a_{\rm o}(T) = a_{\rm o}(0)[1 + \int_0^T \alpha(T)\,{\rm d}T]$, where we have used our polynomial fit to the $\alpha(T)$ data to compute $a_{\rm o}(T)$ and have set $a_{\rm o}(0) = 8.22670$\,\AA. The $a_{\rm o}(T)$ determined from the neutron diffraction measurements by Chmaissem~et~al.\cite{Chmaissem1997} for a different sample (sample~5) are plotted as the filled circles in Fig.~\ref{FigAlpha2}. The two data sets are in overall agreement, and both indicate a strong decrease in $a_{\rm o}(T)$ with decreasing $T$ below 20\,K\@. There are differences in detail between the two measurements at the lower temperatures as illustrated below 100\,K in Fig.~\ref{FigAlpha2}(b), suggesting a possible sample dependence. Our measurement of $\alpha(T)/T$ for sample~6 is compared with the measured $C_{\rm p}(T)/T$ for the same sample in Fig.~\ref{FigAlpha1}(a), where the temperature dependences of these two quantities are seen to be similar. We infer that the strong increase in $\alpha(T)/T$ with decreasing $T$ below $\sim 20$\,K is an electronic effect associated with the crossover to heavy fermion behavior. For most materials, \begin{figure} \epsfxsize=3.3 in \epsfbox{Fig10.eps} \vglue 0.1in \caption{(a) Comparison of the linear thermal expansion coefficient divided by temperature $\alpha(T)/T$ (left-hand-scale) with the measured specific heat $C_{\rm p}(T)/T$ (right-hand-scale) for LiV$_2$O$_4$ sample~6. (b) Gr\"uneisen parameter $\Gamma$ versus $T$, computed using Eq.~(\protect\ref{EqGrunEq}) and the assumed value of the bulk modulus $B$ given in the figure.} \label{FigAlpha1} \end{figure} \noindent the volume thermal expansivity $\beta = 3\alpha$ and $C_{\rm p}$ are related through the dimensionless Gr\"uneisen parameter $\Gamma$, with\cite{Barron1980} \begin{equation} \beta = \frac{\Gamma C_{\rm p}}{B_{\rm s}V_{\rm M}}~~, \label{EqGrunEq} \end{equation} where $B_{\rm s}$ is the adiabatic bulk modulus and $V_{\rm M}$ is the molar volume. In this model, $\Gamma = -{\rm d\,ln}\Phi/{\rm d\,ln}V$ where $\Phi(V)$ is a characteristic energy of the system. If independent contributions to $C_{\rm p}$ can be identified, as assumed in Eq.~(\ref{EqCsum:a}), a relation similar to Eq.~(\ref{EqCsum:a}) exists for the thermal expansivity, with an independent $\Gamma$ for each contribution \begin{equation} \beta = \beta_{\rm e} + \beta^{\rm lat} = \frac{\Gamma_{\rm e}C_{\rm e} + \Gamma^{\rm lat}C_{\rm p}^{\rm lat}}{B_{\rm s}V_{\rm M}}~~, \label{EqBeta} \end{equation} where $C_{\rm e}$ is understood to refer to measurements under constant pressure. For a metal, $\Gamma_{\rm e} = {\rm d\,ln}{\cal D}^*(E_{\rm F})/{\rm d\,ln}V = {\rm d\,ln}\gamma(0)/{\rm d\,ln}V$, and $\Gamma^{\rm lat} = -{\rm d\,ln}\theta_0/{\rm d\,ln}V$. Here ${\cal D}^*(E_{\rm F})$ is the mass-enhanced quasiparticle density of states at the Fermi energy and the volume dependence of the electron-phonon interaction is neglected. Thus $\Gamma_{\rm e}$ is a direct measure of the volume dependence of ${\cal D}^*(E_{\rm F})$. 
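A minimal numerical sketch (Python) of the $T\rightarrow 0$ limit of Eq.~(\ref{EqGrunEq}) is given below; it combines $\lim_{T\to 0}\alpha/T$ quoted above with the low-temperature $\gamma(0)$, the molar volume and the assumed bulk modulus quoted elsewhere in this paper, and anticipates the extrapolated zero-temperature Gr\"uneisen parameter discussed in the following.
\begin{verbatim}
# Sketch: zero-temperature Grueneisen parameter from Eq. (EqGrunEq),
# beta_v = 3*alpha = Gamma*C_p/(B_s*V_M), so that as T -> 0
#   Gamma(0) = 3*(alpha/T)*B_s*V_M/(C_p/T).
alpha_over_T = 2.00e-7     # K^-2, limit quoted above for sample 6
gamma0       = 0.438       # J/(mol K^2), low-T C_p/T ~ C_e/T for sample 6
B_s          = 200e9       # Pa, assumed bulk modulus 200(40) GPa (see below)
V_M          = 41.92e-6    # m^3/mol, molar volume of LiV2O4

Gamma0 = 3.0 * alpha_over_T * B_s * V_M / gamma0
print(f"Gamma(0) ~ {Gamma0:.1f}")   # ~11.5
\end{verbatim}
The $\pm 20$\% uncertainty assumed for $B_{\rm s}$ propagates directly into $\Gamma(0)$.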
For a free electron gas, $\Gamma_{\rm e} = 2/3$. For most real metals $\Gamma_{\rm e} = \pm 3(2)$; {\it e.g.}, $\Gamma_{\rm e} = 0.92$ (Cu), 1.6 (Au), 1.6 (V), $-4.4$ (Sr), $-0.2$ (Ba), 2.22 (Pd).\cite{Barron1980} We have computed $\Gamma(T)$ for LiV$_2$O$_4$ from Eq.~(\ref{EqGrunEq}) using the polynomial fit to our $C_{\rm p}$ data for sample~6 and using the experimental $\alpha(T)$ data in Fig.~\ref{FigAlpha3} for this sample. The molar volume of LiV$_2$O$_4$ at low temperatures is given in Table~\ref{TabLivoParams}. The bulk modulus is assumed to be $B = 200(40)$\,GPa, which is the range found\cite{Birch1966} for the similar compounds Fe$_2$O$_3$, Fe$_3$O$_4$, FeTiO$_3$, MgO, TiO$_2$ (rutile), the spinel prototype MgAl$_2$O$_4$,\cite{Kruger1997} and MgTi$_2$O$_5$.\cite{Hazen1997} The $\Gamma$ obtained by substituting these values into Eq.~(\ref{EqGrunEq}) is plotted versus temperature as the filled circles in Fig.~\ref{FigAlpha1}(b). Interpolation and extrapolation of $\Gamma(T)$ is obtained from the polynomial fit to the $\alpha(T)$ data, shown by the solid curve in Fig.~\ref{FigAlpha1}(b). From Fig.~\ref{FigAlpha1}(b), $\Gamma\approx 1.7$ at 108\,K and decreases slowly with decreasing $T$, reaching a minimum of about 0.1 at 23\,K\@. With further decrease in $T$, $\Gamma$ shows a dramatic increase and we obtain an extrapolated $\Gamma(0)\approx 11.4$. A plot of $\Gamma$ vs.\ $T^2$ obtained from our experimental data points is linear for $T^2 < 30$\,K$^2$, and extrapolates to 11.50 at $T = 0$, to be compared with 11.45 as calculated from the smooth fitted relations for $\alpha(T)$ and $C_{\rm p}(T)$; this justifies the (long) extrapolation of $\alpha(T)$ to $T = 0$. An accurate determination of the magnitude of $\Gamma$ must await the results of bulk modulus measurements on LiV$_2$O$_4$. Our estimated $\Gamma(0) \equiv \Gamma_{\rm e}(0)$ is intermediate between those of conventional nonmagnetic metals and those of $f$-electron heavy fermion compounds such as UPt$_3$ ($\Gamma_{\rm e} = 71$), UBe$_{13}$ (34) and CeCu$_6$ (57) with $\gamma(0) = 0.43,\ 0.78,$ and 1.67\,J/mol\,K$^2$, respectively.\cite{deVisser1989} From the expression\cite{Gopal1966} relating $C_{\rm p}$ to the specific heat at constant volume $C_{\rm v}$, and using our $\alpha(T)$ data and the estimate for $B$ above, $C_{\rm v}(T)$ of LiV$_2$O$_4$ can be considered identical with our measured $C_{\rm p}(T)$ to within both the precision and accuracy of our measurements up to 108\,K\@. \section{Theoretical Modeling: Electronic Specific Heat of L\lowercase{i}V$_2$O$_4$} \label{SecTheory} \subsection{Single-Band Spin $S = 1/2$ Fermi Liquid} \label{SecFL} As mentioned in Sec.~\ref{SecIntro}, the high-temperature $\chi(T)$ of LiV$_2$O$_4$ indicated a vanadium local moment with spin $S = 1/2$ and $g \sim 2$. In the low-temperature Fermi liquid regime, for a Fermi liquid consisting of a single parabolic band of quasiparticles with $S = 1/2$ and $N_{\rm e}$ conduction electrons per unit volume $V$,\cite{Kittel1971,Pethick1973,Pethick1986,Baym1991} the Fermi wavevector $k_{\rm F}$ of LiV$_2$O$_4$ assuming $N_{\rm e} = 1.5$ conduction electrons/V atom is given in Table~\ref{TabLivoParams}. 
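For completeness, the itinerant carrier density and Fermi wavevector entering Table~\ref{TabLivoParams} follow from the free-particle relation $k_{\rm F} = (3\pi^2N_{\rm e}/V)^{1/3}$ quoted in the caption of that table; a minimal sketch (Python) of this bookkeeping, using the low-temperature lattice parameter and $Z = 8$ formula units per cubic cell, is:
\begin{verbatim}
# Sketch: carrier density and Fermi wavevector for LiV2O4 assuming
# N_e = 1.5 conduction electrons per V atom (3 per formula unit).
import math

a0 = 8.2269e-8          # cm, lattice parameter at 12 K
Z  = 8                  # formula units per cubic unit cell
electrons_per_fu = 3.0  # 1.5 electrons/V atom x 2 V atoms per formula unit

n  = Z * electrons_per_fu / a0**3               # electrons per cm^3
kF = (3.0 * math.pi**2 * n) ** (1.0 / 3.0)      # cm^-1

print(f"N_e/V = {n:.3e} cm^-3")                 # ~4.31e22 cm^-3
print(f"k_F   = {kF / 1e8:.4f} A^-1")           # ~1.085 A^-1
\end{verbatim}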
In terms of the mass-enhanced density of states at the Fermi energy $E_{\rm F}$ for both spin directions ${\cal D}^*(E_{\rm F})$, the $\gamma(0)$ (neglecting electron-phonon interactions) and $\chi(0)$ are given by \begin{mathletters} \label{EqDOS:all} \begin{equation} \gamma(0) = \frac{\pi^2k_{\rm B}^2}{3} {\cal D}^*(E_{\rm F})~~,\label{EqDOS:a} \end{equation} \begin{equation} \chi(0) = \frac{g^2\mu_{\rm B}^2}{4} \frac{{\cal D}^*(E_{\rm F})}{1 + F_0^{\rm a}}~~, \label{EqDOS:b} \end{equation} \end{mathletters} where $F_0^{\rm a}$ is a Landau Fermi liquid parameter and $1/(1 + F_0^{\rm a}) = 1 - A_0^{\rm a}$ is the Stoner enhancement factor. The Fermi liquid scattering amplitudes $A_\ell^{\rm a,s}$ are related to the Landau parameters $F_\ell^{\rm a,s}$ by $A_\ell^{\rm a,s} = F_\ell^{\rm a,s}/[1 + F_\ell^{\rm a,s}/(2\ell + 1)]$. The superscripts ``a'' and ``s'' refer to spin-asymmetric and spin-symmetric interactions, respectively. Using Eq.~(\ref{EqDOS:a}) and the $k_{\rm F}$ value in Table~\ref{TabLivoParams}, the experimental value of $\gamma(0)$ for LiV$_2$O$_4$ in Eq.~(\ref{EqGam(0):a}) yields the effective mass $m^*$, Fermi velocity $v_{\rm F}$, $E_{\rm F}$, Fermi temperature $T_{\rm F}$ and ${\cal D}^*(E_{\rm F})$ for LiV$_2$O$_4$ given in Table~\ref{TabLivoParams}. From Eqs.~(\ref{EqRW}) and~(\ref{EqDOS:all}), the Wilson ratio\cite{Wilson1975} $R_{\rm W}$ is expressed as \begin{equation} R_{\rm W} = \frac{1}{1 + F_0^{\rm a}} = 1 - A_0^{\rm a}~~. \label{EqWilRat} \end{equation} Substituting the experimental $\chi(0.4$--2\,K) = 0.0100(2) cm$^3$/mol (Ref.~\onlinecite{Kondo1997}) and $\gamma(0)$ in Eq.~(\ref{EqGam(0):a}) for LiV$_2$O$_4$ into Eq.~(\ref{EqRW}) assuming $g = 2$ yields \begin{equation} R_{\rm W} = 1.71(4)~~. \label{EqRexp} \end{equation} This $R_{\rm W}$ value is in the range of those found for many conventional as well as $f$-electron HF and IV compounds.\cite{HFIV} The $R_{\rm W}$ value in Eq.~(\ref{EqRexp}) yields from Eq.~(\ref{EqWilRat}) \begin{equation} F_0^{\rm a} = -0.42,~~~~A_0^{\rm a} = -0.71~~. \label{EqLanParams2} \end{equation} In Fermi liquid theory, a temperature dependence is often computed for $C_{\rm e}$ at low temperatures having the form\cite{Pethick1973,Pethick1986,Baym1991} \begin{table} \caption{Parameters for LiV$_2$O$_4$. 
Abbreviations: formula weight FW; lattice parameter $a_0$;\protect\cite{Kondo1997,Chmaissem1997} (formula units/unit cell) $Z$; theoretical mass density $\rho^{\rm calc}$; molar volume $V_{\rm M}$, itinerant electron concentration $N_{\rm e}/V$; Fermi wavevector $k_{\rm F} = (3\pi^2N_{\rm e}/V)^{1/3}$; effective mass $m^*$ (free electron mass $m_{\rm e}$); Fermi velocity $v_{\rm F} = \hbar k_{\rm F}/m^*$; Fermi energy $E_{\rm F} = \hbar^2 k_{\rm F}^2/2 m^*$; Fermi temperature $T_{\rm F} = E_{\rm F}/k_{\rm B}$; mass-enhanced density of states at $E_{\rm F}$ for both spin directions, ${\cal D}^*(E_{\rm F}) = 3 N_{\rm e}/ (2 E_{\rm F}) = m^* k_{\rm F} V/(\pi^2\hbar^2)$.} \begin{tabular}{ld} Property & value \\ \hline FW & 172.82 g/mol \\ $a_0$(12\,K) & 8.2269\,\AA \\ $Z$ & 8 \\ $\rho^{\rm calc}(12\,{\rm K})$ & 4.123\,g/cm$^3$ \\ $V_{\rm M}$ & 41.92\,cm$^3$/mol \\ $N_{\rm e}/V$ & 4.310\,$\times 10^{22}\,{\rm cm^{-3}}$ \\ $k_{\rm F}$ & 1.0847\,\AA$^{-1}$ \\ $m^*/m_{\rm e}$ & 180.5 \\ $v_{\rm F}$ & 6.96\,$\times 10^5$\,cm/s \\ $E_{\rm F}$ & 24.83\,meV \\ $T_{\rm F}$ & 288.1\,K \\ ${\cal D}^*(E_{\rm F})$ & 90.6\,states/eV(V\,atom) \\ \end{tabular} \label{TabLivoParams} \end{table} \begin{equation} C_{\rm e}(T) = \gamma(0)T + \delta T^3 \ln\Big(\frac{T}{T_0}\Big) + {\cal O}(T^3)~~, \label{EqCeFL} \end{equation} where $\gamma(0)$ is given by Eq.~(\ref{EqDOS:a}) and $T_0$ is a scaling or cutoff temperature. Engelbrecht and Bedell\cite{Engelbrecht1995} considered a model of a single-band Fermi liquid with the microscopic constraint of a local (momentum-independent) self-energy, where the interactions are mediated by the quasiparticles themselves (in the small momentum-transfer limit). They find that only $s$-wave ($\ell = 0$) Fermi-liquid parameters can be nonzero and that the $\delta$ coefficient in Eq.~(\ref{EqCeFL}) is \begin{equation} \delta_{\rm EB} = \frac{3\pi^2}{5}\frac{\gamma(0)}{T_{\rm F}^2}\bigl(A_0^{\rm a}\bigr)^2\biggl(1 - \frac{\pi^2}{24}A_0^{\rm a}\biggr)~~, \label{EqDeltaEB} \end{equation} where $|A_0^{\rm a,s}| \leq 1$ and $-\frac{1}{2} \leq F_0^{\rm a,s} < \infty$. Within their model, neither ferromagnetism nor phase-separation can occur. For $F_0^{\rm a} < 0$, the only potential instability is towards antiferromagnetism and/or a metal-insulator transition; in this case they find $1 \leq R_{\rm W} \leq 2$. For $F_0^{\rm a} > 0$, a BCS superconducting state is possible and $R_{\rm W} < 1$. The value of $F_0^{\rm a}$ for LiV$_2$O$_4$ in Eq.~(\ref{EqLanParams2}) is within the former range of this theory. Auerbach and Levin\cite{Auerbach1986} and Millis {\it et al.}\cite{Millis1987,Millis1987b} formulated a Fermi-liquid theory of heavy electron compounds at low temperatures on the basis of a microscopic Kondo lattice model. The large enhancement of $m^*$ arises from the spin entropy of the electrons on the magnetic-ion sites ({\it i.e.}, spin fluctuations).\cite{Millis1987} The Wilson ratio is $R_{\rm W} \sim 1.5$ and a $T^3 \ln T$ contribution to $C_{\rm e}(T)$ is found. The origin of this latter term is not ferromagnetic spin fluctuations (``paramagnons''),\cite{Auerbach1986} but is rather electron density fluctuations and the screened long-range Coulomb interaction.\cite{Millis1987} The coefficient $\delta_{\rm M}$ of the $T^3 \ln T$ term found by Millis\cite{Millis1987} is $\delta_{\rm M} = \pi^2k_{\rm B}^4V(1 - \pi^2/12)/5(\hbar v_{\rm F})^3$, which may be rewritten as \begin{equation} \delta_{\rm M} = \frac{3\pi^2\gamma(0)}{20T_{\rm F}^2}\Big(1 - \frac{\pi^2}{12}\Big)~~. 
\label{EqDeltaMillis} \end{equation} Using the values $\gamma(0) = 427$\,mJ/mol\,K$^2$ [Eq.~(\ref{EqGam(0):a})], $T_{\rm F}$ = 288\,K (Table~\ref{TabLivoParams}) and $A_0^{\rm a}$ in Eq.~(\ref{EqLanParams2}), Eqs.~(\ref{EqDeltaEB}) and~(\ref{EqDeltaMillis}) respectively predict \begin{equation} \delta_{\rm EB} = 0.0199\,\frac{\rm mJ}{\rm mol\,K^4}~,~~~~\delta_{\rm M} = 0.00135\,\frac{\rm mJ}{\rm mol\,K^4}~~. \label{EqDelCalcs} \end{equation} We have fitted our low-temperature $C_{\rm e}(T)/T$ data for LiV$_2$O$_4$ sample~3 by the expression \begin{equation} \gamma(T) \equiv \frac{C_{\rm e}(T)}{T} = \gamma(0) + \delta\,T^2\ln \biggl(\frac{T}{T_0}\biggr) + \varepsilon T^3~~, \label{EqFLCT2} \end{equation} initially with $\varepsilon = 0$. The fit parameters $\gamma(0),\ \delta$ and $T_0$ were found to depend on the fitting temperature range above 1\,K chosen, and are sensitive to the precision of the data. The parameters obtained for 1--3\,K and 1--5\,K fits were nearly the same, but changed when the upper \begin{figure} \epsfxsize=3.2 in \epsfbox{Fig11.eps} \vglue 0.1in \caption{Electronic specific heat $C_{\rm e}$ divided by temperature $T$ for LiV$_2$O$_4$ samples~2 (run~2) and~3 {\it vs.}\ $T$. The dashed curves are fits to the 1--5\,K, 1--10\,K and 1--15\,K data for sample~3 by the spin-fluctuation Fermi liquid model, Eq.~(\protect\ref{EqFLCT2}) with $\varepsilon = 0$, whereas the solid curve is a 1--30\,K fit assuming $\varepsilon \neq 0$.\label{FigCTFit}} \end{figure} \noindent limit to the fitting range was increased to 10 and 15\,K\@. The fits for the 1--5\,K, 1--10\,K and 1--15\,K fitting ranges are shown in Fig.~\ref{FigCTFit}, along with the $C_{\rm e}(T)/T$ data for sample~2 (run~2). As a check on the fitting parameters, we have also fitted the $C_{\rm e}(T)/T$ data for sample~3 by Eq.~(\ref{EqFLCT2}) with $\varepsilon$ as an additional fitting parameter. The fit for the 1--30\,K range is plotted as the solid curve in Fig.~\ref{FigCTFit}. Since the fits for the smaller $T$ ranges with $\varepsilon = 0$ and for the larger ranges with $\varepsilon \neq 0$ should give the most reliable parameters, we infer from the fit parameters for all ranges that the most likely values of the parameters and their error bars are \begin{equation} \gamma(0) = 428(2)\,{\rm \frac{mJ}{mol\,K^2}},~~~\delta = 1.9(3) {\rm \frac{mJ}{mol\,K^4}}~~. \label{EqCTParams} \end{equation} The parameters in Eq.~(\ref{EqCTParams}) are very similar to those obtained using the same type of fit to $C_{\rm p}(T)/T$ data for the heavy fermion superconductor UPt$_3$ with $T_{\rm c} = 0.54$\,K,\cite{Stewart1984} for which $\gamma(0) = 429$--450\,mJ/mol\,K$^2$ and $\delta = 1.99$\,mJ/mol\,K$^4$.\cite{Stewart1984,Trinkl1996} Our $T_{\rm F}$ and $m^*/m_{\rm e}$ values for LiV$_2$O$_4$ (288\,K and 181, Table~\ref{TabLivoParams}) are also respectively very similar to those of UPt$_3$ (289\,K and 178).\cite{Pethick1986} The experimental $\delta$ value in Eq.~(\ref{EqCTParams}) is a factor of $\sim 10^2$ larger than $\delta_{\rm EB}$ and $\sim 10^3$ larger than $\delta_{\rm M}$ predicted in Eq.~(\ref{EqDelCalcs}). A similar large [${\cal O}(10^2$--$10^3)$] discrepancy was found by Millis for the $\delta$ coefficient for UPt$_3$.\cite{Millis1987} As explained by Millis,\cite{Millis1987} the large discrepancy between his theory and experiment may arise because the calculations are for a single parabolic band, an assumption which may not be applicable to the real materials. 
However, he viewed the most likely reason to be that his calculations omit some effect important to the thermodynamics such as antiferromagnetic spin fluctuations.\cite{Millis1987} In this context, it is possible that the magnitude of $\delta$ predicted by one of the above two theories is correct, but that terms higher order in $T$ not calculated by the theory are present which mask the $T^3 \ln T$ contribution over the temperature ranges of the fits;\cite{Ishigaki1996a} in this case the large experimental $\delta$ value would be an artifact of force-fitting the data by Eq.~(\ref{EqFLCT2}). Indeed, we found that the fits were unstable, {\it i.e.}, depended on the temperature range fitted ({\it cf.}\ Fig.~\ref{FigCTFit}). In addition, the applicability of the theory of Millis\cite{Millis1987} to LiV$_2$O$_4$ is cast into doubt by the prediction that the Knight shift at a nucleus of an atom within the conduction electron sea (not a ``magnetic'' atom) ``would be of the same order of magnitude as in a normal metal, and would not show the mass enhancement found in $\chi$.''\cite{Millis1987b} In fact, the Knight shift of the $^7$Li nucleus in LiV$_2$O$_4$ for $T \sim 1.5$--10\,K is about 0.14\%,\cite{Kondo1997,Mahajan1998,Onoda1997,Fujiwara1997} which is about 6000 times larger than the magnitude (0.00024\%) found\cite{Dalton1994} at room temperature for the $^7$Li Knight shift in LiTi$_2$O$_4$. Similarly, the $^7$Li 1/$T_1 T$ from 1.5 to 4\,K in the highest-purity LiV$_2$O$_4$ samples is about 2.25\,s$^{-1}$K$^{-1}$,\cite{Kondo1997,Mahajan1998} which is about 6000 times larger than the value of $3.7 \times 10^{-4}$\,s$^{-1}$K$^{-1}$ found\cite{Dalton1994} at 160\,K in LiTi$_2$O$_4$, where $T_1$ is the $^7$Li nuclear spin-lattice relaxation time. \subsection{Quantum-Disordered Antiferromagnetically Coupled Metal} \label{SecMillis} The antiferromagnetic (AF) Weiss temperature of LiV$_2$O$_4$ from $\chi(T)$ measurements is $|\theta| = 30$--60\,K, yet the pure system exhibits neither static antiferromagnetic AF nor spin-glass order above 0.02\,K.\cite{Kondo1997,Kondo1998} A possible explanation is that the ground state is disordered due to quantum fluctuations. We consider here the predictions for $C_{\rm e}(T)$ of one such theory. A universal contribution to the temperature dependence of $C_{\rm e}$ of a three-dimensional (3D) metal with a control parameter $r$ near that required for a zero-temperature AF to quantum-disordered phase transition, corresponding to dynamical exponent $z = 2$, was calculated by Z\"ulicke and Millis,\cite{Zulicke1995} which modifies the Fermi liquid prediction in Eq.~(\ref{EqCeFL}). Upon increasing $T$ from $T = 0$ in the quantum-disordered region, the system crosses over from the quantum disordered to a classical regime. The same scaling theory predicts that the low-$T$ spin susceptibility is given by $\chi(T) = \chi(0) + A T^{3/2}$, where the constant $A$ is not determined by the theory.\cite{Ioffe1995} Z\"ulicke and Millis found the electronic specific heat to be given by\cite{Zulicke1995} \begin{mathletters} \label{EqGamCalcMillis:all} \begin{equation} \frac{C_{\rm e}}{T} = \gamma_0 - \frac{\alpha R N_0\sqrt{r}}{6T^*}\,F\Big(\frac{T}{rT^*}\Big)~~, \label{EqGamCalcMillis:a} \end{equation} \begin{equation} F(x) = \frac{3\sqrt{2}}{\pi^2}\int_0^\infty { dy\,\frac{y^2}{\sinh^2y}\sqrt{1 + \sqrt{1 + 4 x^2 y^2}} }~~. 
\label{EqGamCalcMillis:b} \end{equation} \end{mathletters} Here, $\gamma_0$ is the (nonuniversal) electronic specific heat coefficient at $T = 0$ in the usual Fermi liquid theory [$\gamma(0)$ above], $T^*$ is a characteristic temperature and $N_0$ is the number of components of the bosonic order parameter which represents the ordering field: $N_0$ = 3, 2, 1 for Heisenberg, {\em XY} and Ising symmetries, respectively. The number $\alpha$ is not determined by the scaling theory but is expected to be on the order of the number of conduction electrons per formula unit; thus for LiV$_2$O$_4$, we expect $\alpha \sim 3$. We have defined $F(x)$ such that $F(0) = 1$. The variable $r$ is expected to be temperature dependent, but this temperature dependence cannot be evaluated without ascertaining the value of an additional parameter $u$ in the theory from, {\it e.g.}, measurements of the pressure dependence of $C_{\rm e}(T)$; here, we will assume $r$ to be a constant.\cite{Assump} From Eq.~(\ref{EqGamCalcMillis:a}), the $T=0$ value of $\gamma$ in the absence of quantum fluctuations is reduced by these fluctuations, and the measured $\gamma(0)$ is \begin{equation} \gamma(0) = \gamma_0 - \frac{\alpha R N_0\sqrt{r}}{6T^*}~~. \label{EqGamMeas} \end{equation} We fitted our $C_{\rm e}/T$ {\em vs.}\ $T$ data for LiV$_2$O$_4$ sample~3 by Eqs.~(\ref{EqGamCalcMillis:all}), assuming $N_0$ =~3. The fitting parameters were $\gamma_0,\ \alpha,\ r$ and~$T^*$; the $\gamma(0)$ value is then obtained from Eq.~(\ref{EqGamMeas}). The 1--20\,K and larger ranges did not give acceptable fits. The fits for the 1--5, 1--10 and 1--15\,K fitting ranges are shown in Fig.~\ref{FigMillisFit}. From these fits, we infer the parameters and errors \begin{eqnarray} \gamma_0 & = & 800(50)\,\frac{\rm mJ}{\rm mol\,K^2}~,~~~\alpha = 2.65(9)~,~~~r = 0.40(6)~,\nonumber\\ T^* & = & 18.9(4)\,{\rm K}~,~~~\gamma(0) = 430(1)\,{\rm mJ/mol\,K^2}~. \label{EqMillisParams} \end{eqnarray} Within the context of this theory, quantum fluctuations reduce the observed $\gamma(0)$ by about a factor of two compared with the value $\gamma_0$ in the absence of these fluctuations. The value of $\alpha$ is close to the nominally expected value $\sim 3$ mentioned above. The relatively large value of $r$ indicates that LiV$_2$O$_4$ is not very close to the quantum-critical point, and therefore predicts that long-range AF order will not be induced by small changes in external conditions (pressure) or composition. The former prediction cannot be checked yet because the required experiments under pressure have not yet been done. The latter expectation is consistent with the data available so far. Magnetic defect concentrations on the order of 1\% do induce static magnetic ordering below $\sim 0.8$\,K, but this ordering is evidently of the short-range spin-glass type.\cite{Kondo1997} Substitution of Zn for Li in Li$_{1-x}$Zn$_x$V$_2$O$_4$ induces spin-glass ordering for $0.2 \lesssim x \lesssim 0.9$ but long-range AF ordering does not occur until $0.9 \lesssim x \leq 1.0$.\cite{Ueda1997} Finally, two caveats regarding the fits and discussion in this section are in order.
The first is that (unknown) corrections of order $(T/T^*)^2$ and $r^1$ to the theory of Z\"ulicke and Millis\cite{Zulicke1995} exist but have not been included in the prediction in Eqs.~(\ref{EqGamCalcMillis:all}); incorporating these corrections may alter the parameters obtained from fits to experimental data.\cite{Millis1997} The second caveat is that the theory \begin{figure} \epsfxsize=3.3 in \epsfbox{Fig12.eps} \vglue 0.1in \caption{Electronic specific heat divided by temperature $C_{\rm e}/T$ {\it vs.}\ $T$ for LiV$_2$O$_4$ samples~2 (run~2) and~3. Fits to the data for sample~3 by the theory of Z\"ulicke and Millis,\protect\cite{Zulicke1995} Eqs.~(\protect\ref{EqGamCalcMillis:all}), are shown for the fitting ranges 1--5\,K (long-dashed curve), 1--10\,K (solid curve) and 1--15\,K (short-dashed curve).\label{FigMillisFit}} \end{figure} \noindent may need modification for compounds such as LiV$_2$O$_4$ in which geometric frustration for AF ordering exists in the structure.\cite{Millis1997} \subsection{Spin-1/2 Kondo Model} \label{SecS1/2Kondo} Calculations of the impurity spin susceptibility $\chi(T)$ and/or impurity electronic contribution $C_{\rm e}(T)$ to the specific heat for the $S = 1/2$ Kondo model were carried out by Wilson\cite{Wilson1975} and others\cite{Krishna1980a,Krishna1980b,Oliveira1981,Rajan1982,Rajan1983,Desgranges1982,Tsvelick1983,Jerez1997} using different techniques. Both $\chi(T)$ and $C_{\rm e}(T)$ depend only on the scaling parameter $T/T_{\rm K}$, where $T_{\rm K}$ is the Kondo temperature (here, we use Wilson's definition\cite{Wilson1975}). The impurity $\chi(T)$ is predicted to be Curie-Weiss-like at temperatures high compared with $T_{\rm K}$, and to level out at a constant high value for $T \lesssim 0.1\,T_{\rm K}$ due to the formation of a singlet ground state. In the limit of zero temperature, one has\cite{Rajan1983} \begin{equation} \gamma(T = 0) = \frac{\pi W N k_{\rm B}}{6 T_{\rm K}}~~, \label{EqGam00} \end{equation} where $N$ is the number of impurity spins. The Wilson number\cite{Wilson1975} $W$ is given by\cite{Andrei1981,Rasul1984} \begin{equation} W = \gamma{\rm e}^{1/4}\pi^{-1/2} \approx 1.290\,268\,998~~, \label{EqW} \end{equation} where $\ln\gamma \approx 0.577\,215\,664\,902$ is Euler's constant. Setting $N = N_{\rm A}$, Avogadro's number, one obtains from Eqs.~(\ref{EqGam00}) and~(\ref{EqW}) the electronic specific heat coefficient per mole of impurities \begin{equation} \gamma(0) = \frac{\pi WR }{6T_{\rm K}} = \frac{5.61714\,{\rm J/mol\,K}}{T_{\rm K}}~~. \label{EqGam0Calc} \end{equation} To characterize the $T$ dependence of $C_{\rm e}$, we utilized accurate numerical calculations using the Bethe ansatz by Jerez and Andrei.\cite{Jerez1997} The calculated $C_{\rm e}(T)$ shows a maximum, max[$C_{\rm e}(T)/Nk_{\rm B}] = 0.177275$, which occurs at $T^{\rm max}/T_{\rm K} = 0.6928$. The calculations were fitted by the expressions \begin{mathletters} \label{EqC(T):all} \begin{equation} \frac{C_{\rm e}(T)}{N k_{\rm B}} = f(t)~~,\label{EqC(T):a} \end{equation} \begin{equation} \frac{C_{\rm e}(T)}{N k_{\rm B} T/T_{\rm K}} = g(t) \equiv \frac{f(t)}{t}~~,\label{EqC(T):b} \end{equation} \begin{equation} f(t) = \bigg(\frac{\pi W}{6}\bigg) \frac{t(1 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4)}{1 + a_5 t + a_6 t^2 + a_7 t^3 + a_8 t^4 + a_9 t^5}~~,\label{EqC(T):c} \end{equation} \end{mathletters} where $t \equiv T/T_{\rm K}$ and the coefficients $a_n$ for the two types of fits are given in Table~\ref{TableVI} for the fitting range $0.001 \leq t \leq 100$. 
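As a quick numerical cross-check of Eqs.~(\ref{EqW}) and~(\ref{EqGam0Calc}) (a sketch only, with the molar gas constant and Euler's constant as the sole inputs):
\begin{verbatim}
# Sketch: Wilson number W and the S = 1/2 Kondo-model prefactor pi*W*R/6
# of Eq. (EqGam0Calc), per mole of impurity spins.
import math

euler = 0.577215664902    # Euler's constant (ln gamma in the notation above)
W = math.exp(euler) * math.exp(0.25) / math.sqrt(math.pi)
R = 8.31446               # J/(mol K)

print(f"W        = {W:.6f}")                               # ~1.290269
print(f"pi*W*R/6 = {math.pi * W * R / 6:.4f} J/(mol K)")   # ~5.617
\end{verbatim}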
Equations (\ref{EqC(T):all}) incorporate the zero-temperature limit in Eqs.~(\ref{EqGam00}--\ref{EqGam0Calc}). The maximum (rms) deviations of the $C_{\rm e}(T)$ fit from the calculated numerical data are 0.011\% (0.0035\%) for $0 \leq t \leq 3$ and 0.031\% (0.021\%) for $3 \leq t \leq 10$ but then progressively deteriorate to 0.48\% (0.14\%) in the region $10 \leq t \leq 92$. The corresponding deviations for the $C_{\rm e}(T)/T$ fit are 0.0044\% (0.00091\%), 0.031\% (0.017\%) and 5.1\% (1.6\%). The experimental $C_{\rm e}(T)/T$ data for LiV$_2$O$_4$ sample~3 were least-square fitted from 1.2 to 5\,K by Eqs.~(\ref{EqC(T):b}) and~(\ref{EqC(T):c}),\cite{DigitizedC} yielding $T_{\rm K}$, and then $\gamma(0)$ from Eq.~(\ref{EqGam0Calc}): \begin{equation} T_{\rm K} = 26.4(1)\,{\rm K}~,~~~\gamma(0) =~426(2)\,{\rm mJ/mol\,K^2}~. \end{equation} The fit is shown in Fig.~\ref{FigCpFit1} as the solid curves. For comparison, also shown in Fig.~\ref{FigCpFit1}(a) are the predictions for $T_{\rm K}$~= 25\,K and 28\,K\@. Unfortunately, despite the good agreement of the theory for $T_{\rm K} = 26.4$\,K with our measured $C_{\rm e}(T)$ at low $T$, the $S = 1/2$ Kondo model prediction for $\chi(T)$ qualitatively disagrees with the observed temperature dependence at low $T$.\cite{Kondo1998} This difficulty of self-consistently fitting the $C_{\rm e}(T)$ and $\chi(T)$ data is a problem \begin{table} \caption{Coefficients $a_n$ in Eq.~(\protect\ref{EqC(T):c}) in the fits to the theoretical prediction for the specific heat vs.\ temperature of the $S = 1/2$ Kondo model by Jerez and Andrei.\protect\cite{Jerez1997}} \begin{tabular}{cdd} $a_n$ & $C(T)$ Fit & $C(T)/T$ Fit\\ \hline $a_1$ & 9.1103933 & 6.8135534\\ $a_2$ & 30.541094 & 21.718636\\ $a_3$ & 2.1041608 & 2.3491812\\ $a_4$ & 0.0090613513 & 0.017533911\\ $a_5$ & 9.1164094 & 6.8158433\\ $a_6$ & 36.143206 & 27.663307\\ $a_7$ & 67.91795 & 48.229552\\ $a_8$ & 53.509135 & 40.216156\\ $a_9$ & 1.7964377 & 2.4863342\\ \end{tabular} \label{TableVI} \end{table} \begin{figure} \epsfxsize=3.2 in \epsfbox{Fig13.eps} \vglue 0.1in \caption{(a) Electronic specific heat divided by temperature $C_{\rm e}(T)/T$ data for LiV$_2$O$_4$ samples~2 (run~2) and~3 below 30\,K (open symbols) and a fit (solid curve) of the data for sample~3 by the $S = 1/2$ Kondo model, Eqs.~(\protect\ref{EqC(T):b}) and~(\protect\ref{EqC(T):c}), for a Kondo temperature $T_{\rm K}$ = 26.4\,K\@. Shown for comparison are the predictions for $T_{\rm K}$ = 25.0\,K (long-dashed curve) and 28.0\,K (short-dashed curve). (b) The same data and the fit with $T_{\rm K}$ = 26.4\,K in a plot of $C_{\rm e}$ {\it vs.}\ $T$ up to 80\,K\@.\label{FigCpFit1}} \end{figure} \noindent we have encountered in all our attempts so far to fit our data for both measurements over any extended temperature range by existing theory (see also the next section). 
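A minimal sketch (Python) of the fit function in Eq.~(\ref{EqC(T):c}), using the $C(T)$-fit coefficients of Table~\ref{TableVI}, is given below; it reproduces the quoted maximum of $C_{\rm e}/Nk_{\rm B}$ and, with $N$ taken as two moles of V per mole of LiV$_2$O$_4$ (the normalization that reproduces the value quoted above), the $\gamma(0)$ corresponding to $T_{\rm K} = 26.4$\,K.
\begin{verbatim}
# Sketch: S = 1/2 Kondo-model specific heat from the Pade-type fit,
# C_e/(N k_B) = f(t), t = T/T_K, with the C(T)-fit coefficients of Table VI.
import numpy as np

W = 1.290268998
a = [9.1103933, 30.541094, 2.1041608, 0.0090613513,          # a1..a4
     9.1164094, 36.143206, 67.91795, 53.509135, 1.7964377]   # a5..a9

def f(t):
    num = t * (1 + a[0]*t + a[1]*t**2 + a[2]*t**3 + a[3]*t**4)
    den = 1 + a[4]*t + a[5]*t**2 + a[6]*t**3 + a[7]*t**4 + a[8]*t**5
    return (np.pi * W / 6.0) * num / den

t = np.linspace(0.01, 3.0, 3000)
y = f(t)
i = np.argmax(y)
print(f"max C_e/(N k_B) = {y[i]:.4f} at T/T_K = {t[i]:.3f}")  # ~0.1773 at ~0.693

R, T_K = 8.314, 26.4
gamma0_per_mol_V = np.pi * W * R / (6.0 * T_K)     # J/(mol V K^2)
print(f"gamma(0) = {2e3 * gamma0_per_mol_V:.1f} mJ/(mol K^2)")  # ~425.5, cf. 426(2)
\end{verbatim}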
\subsection{Local Moment High-Temperature Description} \label{SecHTS} As discussed above, the $\chi(T)$ data for LiV$_2$O$_4$ suggest that at high temperatures a local moment description in which the moments are antiferromagnetically coupled with Weiss temperature $\theta \sim -30$ to $-60$\,K may be applicable.\cite{Kondo1997,Kondo1998} Accordingly, we have calculated the magnetic specific heat $C_{\rm m}(T)$ for localized moments on the octahedral ({\it B}) sublattice of the {\it A}[{\it B}$_2$]O$_4$ spinel structure assuming nearest-neighbor AF Heisenberg interactions using the general high-temperature series expansion (HTSE) results of Rushbrooke and Wood.\cite{Rushbrooke1958} The Hamiltonian is ${\cal H} = J \sum_{<ij>} \bbox{S}_i\cdot \bbox{S}_j$, where the sum is over all exchange bonds and the exchange constant $J > 0$ corresponds to AF interactions. In terms of this Hamiltonian, $\theta = -zJS(S + 1)/3$, where $z = 6$ is the coordination number for the {\it B} sublattice of the spinel structure. The above range of $\theta$ then gives $J/k_{\rm B} = 20$--40\,K assuming $S = 1/2$. The general HTSE prediction is\cite{Rushbrooke1958} \begin{equation} \frac{C_{\rm m}(T)}{Nk_{\rm B}} = \frac{z[S(S + 1)]^2}{6t^2}\bigg[1 + \sum_{n = 1}^{n^{\rm max}}{\frac{c_n(S)}{t^n}\bigg]}~~, \label{EqHTS} \end{equation} where $t \equiv k_{\rm B}T/J$ and the coefficients $c_n$ depend in general on the spin-lattice structure in addition to $S$. The coefficients $c_n$ for the {\it B} sublattice of the spinel structure with $S = 1/2$ and $S = 1$ up to the maximum available $n^{\rm max} = 5$ are given in Table~\ref{TableVII}. The predictions for $C_{\rm m}$ versus scaled-temperature $k_{\rm B}T/[JS(S + 1)]$ with $n^{\rm max}$ = 5 are very similar for $S = 1/2$ and $S = 1$. A comparison \begin{figure} \epsfxsize=3.1 in \epsfbox{Fig14.eps} \vglue 0.1in \caption{Comparison of the high temperature series expansion prediction for the magnetic specific heat $C_{\rm m}(T)$ of the {\it B} sublattice of the {\it A}[{\it B}$_2$]O$_4$ spinel structure assuming $S = 1/2$, $J/k_{\rm B} = 20$\,K and $n^{\rm max} = 5$, given by Eq.~(\protect\ref{EqHTS}) with $c_n$ coefficients in Table~\protect\ref{TableVII}, with the experimental $C_{\rm e}(T)$ data for LiV$_2$O$_4$ sample~2 (run~2) and sample~6 from Fig.~\protect\ref{FigCe}(a).\label{FigHTSFit}} \end{figure} \begin{table} \caption{Coefficients $c_n$ in Eq.~(\protect\ref{EqHTS}) for the high temperature series expansion for the magnetic specific heat of the {\it B} sublattice of the spinel structure, for the indicated values of spin $S$.} \begin{tabular}{ccc} $n$ & $S = 1/2$ & $S = 1$\\ \hline 1 & $-1/2$ & $-$13/6\\ 2 & $-23/16$ & $-$3\\ 3 & $65/48$ & 715/36\\ 4 & $1183/768$ & $-$4421/324\\ 5 & $-18971/7680$ & $-$670741/6480\\ \end{tabular} \label{TableVII} \end{table} \noindent of the $C_{\rm m}(T)$ predictions for $n^{\rm max}$ = 0 to 5 indicates that the calculations for $n^{\rm max} = 5$ are accurate for $k_{\rm B}T/[JS(S+1)] \gtrsim 2.5$, a $T$ range with a lower limit slightly above the temperatures at which broad maxima occur. In Fig.~\ref{FigHTSFit} the HTSE prediction of $C_{\rm m}(T)$ for the {\it B} sublattice of the spinel structure with $n^{\rm max} = 5$, $S = 1/2$ and $J/k_{\rm B} = 20$\,K in Eq.~(\ref{EqHTS}) is compared with the experimental $C_{\rm e}(T)$ data for LiV$_2$O$_4$ samples~2 and~6 from Fig.~\ref{FigCe}(a). The HTSE $C_{\rm m}(T)$ has a much lower magnitude than the data and a qualitatively different temperature dependence. 
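A minimal numerical sketch (Python) of Eq.~(\ref{EqHTS}) with the $S = 1/2$ coefficients of Table~\ref{TableVII}, $z = 6$ and $J/k_{\rm B} = 20$\,K makes the low magnitude explicit; the conversion to J/(mol\,K) assumes two moles of V spins per mole of LiV$_2$O$_4$.
\begin{verbatim}
# Sketch: HTSE magnetic specific heat (Eq. EqHTS) for the B sublattice of the
# spinel structure with S = 1/2, z = 6, using the c_n coefficients of Table VII.
c = [-1.0/2, -23.0/16, 65.0/48, 1183.0/768, -18971.0/7680]   # c_1 ... c_5
z, S, R = 6, 0.5, 8.314

def Cm(T, J_over_kB=20.0):
    # returns C_m in J/(mol LiV2O4 K); 2 moles of V spins per formula unit
    t = T / J_over_kB                                    # t = k_B T / J
    series = 1.0 + sum(cn / t**(n + 1) for n, cn in enumerate(c))
    per_spin = z * (S * (S + 1))**2 / (6.0 * t**2) * series
    return 2.0 * R * per_spin

for T in (50.0, 60.0, 80.0, 100.0):   # range where the n_max = 5 series is reliable
    print(f"T = {T:5.1f} K   C_m = {Cm(T):.2f} J/(mol K)")
# ~1 J/(mol K) or less, far below the measured C_e(T) at the same temperatures.
\end{verbatim}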
From Eq.~(\ref{EqHTS}), changing $J$ just scales the curve with $T$. Thus the local moment picture is in severe disagreement with our $C_{\rm e}(T)$ measurements, despite the excellent agreement between the corresponding HTSE $\chi(T)$ prediction and the $\chi(T)$ data from 50--100\,K to 400\,K\@.\cite{Kondo1997,Kondo1998} \section{Summary and Concluding Remarks} \label{SecConcl} We have presented $C_{\rm p}(T)$ data for LiV$_2$O$_4$ sample~6 which extend our previous measurements\cite{Kondo1997} up to 108\,K\@. We have also presented $C_{\rm p}(T)$ data for the isostructural superconducting compound LiTi$_2$O$_4$ ($T_{\rm c} = 11.8$\,K) up to 108\,K which complement our earlier data\cite{Kondo1997} on the isostructural nonmagnetic insulator Li$_{4/3}$Ti$_{5/3}$O$_4$. We concluded here that the lattice contribution $C^{\rm lat}(T)$ to $C_{\rm p}(T)$ for LiTi$_2$O$_4$ provides the most reliable estimate of the $C^{\rm lat}(T)$ for LiV$_2$O$_4$, and we then extracted the electronic contribution $C_{\rm e}(T)$ to $C_{\rm p}(T)$ of LiV$_2$O$_4$ from 1.2 to 108\,K\@. Inelastic neutron scattering measurements of the lattice dynamics and spin excitations would be very useful in interpreting the measurements presented here. It will be important to determine whether or not there exist significant differences in the lattice dynamics of LiV$_2$O$_4$ and LiTi$_2$O$_4$; in our data analyses and modeling, we have assumed that these compounds are similar in this respect. For two high-magnetic-purity LiV$_2$O$_4$ samples~3 and~6, the electronic specific heat coefficients $\gamma(T) \equiv C_{\rm e}(T)/T$ were found to be $\gamma(1\,$K$) = 0.42$ and 0.43 J/mole\,K$^2$, respectively. To our knowledge, these values are significantly larger than previously reported for any metallic transition metal compound.\cite{Ballou1996} For LiTi$_2$O$_4$, we found $\gamma = 0.018$\,J/mole\,K$^2$. $\gamma(T)$ of LiV$_2$O$_4$ decreases rapidly with increasing temperature from 4 to 30\,K and then decreases much more slowly from a value of 0.13\,J/mol\,K$^2$ at 30\,K to 0.08\,J/mol\,K$^2$ at 108\,K\@. Even these latter two $\gamma$ values are exceptionally large for a metallic $d$-electron compound. The temperature dependences of $\gamma$, $\chi$, the low-$T$ resistivity and the $^7$Li NMR properties are remarkably similar to those of the heaviest mass $f$-electron heavy fermion compounds.\cite{HFIV} In a plot of $\chi(0)$ versus $\gamma(0)$, the data point for LiV$_2$O$_4$ sits amid the cluster formed by the $f$-electron heavy fermion and intermediate valent compounds as shown in Fig.~\ref{FigChiVsGam},\cite{Lee1987} where several \begin{figure} \epsfxsize=3.3 in \epsfbox{Fig15.eps} \vglue 0.1in \caption{Log-log plot of the magnetic susceptibility $\chi(0)$ versus electronic specific heat coefficient $\gamma(0)$ at zero temperature for a variety of $f$-electron heavy fermion and intermediate valence compounds compiled from the literature (after Ref.~\protect\onlinecite{Lee1987}). The plot also includes data for several elemental and/or $d$-electron metals and our data point for LiV$_2$O$_4$. Here, a ``mol'' in the axis labels refers to a mole of transition metal atoms for the $d$-metal compounds, and to a mole of $f$-electron atoms for compounds containing lanthanide or actinide atoms. 
The straight line corresponds to a Wilson ratio $R_{\rm W} = 1$ for quasiparticles with spin $S = 1/2$ and $g$-factor $g = 2$, which is also the Wilson ratio for a free-electron Fermi gas.\label{FigChiVsGam}} \end{figure} \noindent data for elemental metals, the A-15 superconductor V$_3$Si ($T_{\rm c} = 17$\,K),\cite{Junod1971,Maita1972} and superconducting and/or metallic $d$-metal oxides LiTi$_2$O$_4$ ($T_{\rm c} \leq 13.7$\,K),\cite{Johnston1976} Sr$_2$RuO$_4$ ($T_{\rm c}$ = 1\,K),\cite{Maeno1997} and (V$_{0.95}$Ti$_{0.05})_2$O$_3$,\cite{McWhan1971} are also included for comparison. From our theoretical modeling in Sec.~\ref{SecTheory}, Fermi liquid models and the $S = 1/2$ Kondo model (with a Fermi liquid ground state) are capable of describing our $C_{\rm e}(T)$ data for LiV$_2$O$_4$ from 1\,K up to $\sim 10$\,K, although the magnitudes of the derived parameters remain to be understood theoretically. The localized moment model in Sec.~\ref{SecHTS} failed both qualitatively and quantitatively to describe the data. None of the models we used can account for the additional contribution to $C_{\rm e}(T)$ at higher temperatures, from $\sim 10$\,K up to our high temperature limit of 108\,K, which appears to be distinct from the contribution beginning at much lower $T$ and could arise from orbital,\cite{Takigawa1996,Bao1997} charge and/or spin\cite{Silva1996,Andrei1995} excitations. The crystalline electric field and/or the spin-orbit interaction may produce some energy level structure which is thermally accessible within our temperature range.\cite{Radwanski1998} Conventional band structure effects cannot give rise to our results.\cite{Harmon1998} As is well-known for conventional metals, the electron-phonon interaction increases $\gamma$ by the factor $(1 + \lambda)$, where $\lambda$ is the electron-phonon coupling constant, but does not affect $\chi$; {\it i.e.}, ${\cal D}^*(E_{\rm F}) \rightarrow {\cal D}^*(E_{\rm F}) (1 + \lambda)$ in Eq.~(\ref{EqDOS:a}). One can correct the observed Wilson ratio for electron-phonon interactions by multiplying the observed value by $(1 +\lambda)$.\cite{Fulde1988} The electron-phonon interaction is not taken into account in any of the analyses or modeling we have done. This correction would have had a significant quantitative impact on our analyses if we used, {\it e.g.}\ $\lambda \approx 0.7$ as in LiTi$_2$O$_4$ (Refs.\ \onlinecite{Heintz1989,McCallum1976}); most previous analyses of the specific heats of other ($f$-electron) HF compounds also did not take the electron-phonon interaction into account.\cite{HFIV} From our combined specific heat and thermal expansion measurements on the same sample~6 of LiV$_2$O$_4$ from 4.4 to 108\,K, we derived the Gr\"uneisen parameter $\Gamma(T)$ which shows a dramatic enhancement below $\sim 25$\,K as the compound crosses over from the quasilocal moment behavior at high temperatures to the low-temperature Fermi liquid regime, confirming the discovery of Chmaissem {\it et al.}\ from neutron diffraction measurements.\cite{Chmaissem1997} Our estimated extrapolated value of the electronic Gr\"uneisen parameter $\Gamma_{\rm e}(0)$ is about 11.4, which is intermediate between values for conventional metals and for $f$-electron heavy fermion compounds. This large value indicates a much stronger dependence of the mass-enhanced density of states on the volume of the system than simply due to the decrease in the Fermi energy with increasing volume as in the quasi-free electron picture. 
In the $f$-electron HF systems, the large $\Gamma_{\rm e}(0)$ values are thought to arise from a strongly volume dependent hybridization of the $f$-electron orbitals with those of the conduction electrons.\cite{deVisser1989,Edelstein1983} In the present case of LiV$_2$O$_4$, the origin of the large $\Gamma_{\rm e}(0)$ is unclear. It is conceivable that the same mechanism is responsible for the heavy fermion behavior in LiV$_2$O$_4$ as in the $f$-electron heavy fermion systems if one of the 1.5 $d$-electrons/V atom is localized on each V atom due to electron correlation effects and crystalline electric field orbital energy level structure,\cite{Goodenough1997} and if the orbital occupied by the localized electron is hybridized only weakly with the conduction electron states. That such localization can occur in similar systems is supported by calculations for the $d^1$ compound NaTiO$_2$.\cite{Ezhov1998} Additional scenarios for the heavy fermion behavior mechanism(s) are given by Kondo {\it et al.}\cite{Kondo1997,Kondo1998} involving the geometric frustration for AF ordering within the V sublattice and/or low-lying coupled dynamical orbital-charge-spin excitations. Further experimental and theoretical investigations of the physical properties of LiV$_2$O$_4$ may thus reveal interesting new physics which may also allow a deeper understanding of the $f$-electron heavy fermion class of materials. \section*{Acknowledgments} We are indebted to F.~Izumi for helpful communications regarding the Rietveld analyses and to A.~Jerez and N.~Andrei for providing high-accuracy numerical values\cite{Jerez1997} of the magnetic susceptibility and specific heat of the $S = 1/2$ Kondo model. We thank V.~Antropov, F.~Borsa, O.~Chmaissem, J.~B.~Goodenough, R.~J.~Gooding, B.~N.~Harmon, J.~D.~Jorgensen, M.~B.~Maple and A.~J.~Millis for helpful discussions and correspondence, and V.~Antropov and B.~N.~Harmon for communicating to us the results of their unpublished band structure calculations for LiV$_2$O$_4$. Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No.\ W-7405-Eng-82. This work was supported by the Director for Energy Research, Office of Basic Energy Sciences.
\section{Introduction} In this paper we will outline phenomenology of nonperturbative short-distance physics in QCD based on introduction of a tachyonic gluon mass. \\ To set up a theoretical framework, we will consider physical observables characterized by a large mass scale $Q$, $Q^2>> \Lambda_{QCD}^2$ where $\Lambda_{QCD}$ is the position of the Landau pole in the running coupling: \begin{equation} \alpha_s(Q^2)\simeq \frac{1}{b_0\ln{\left( {Q^2}/{\Lambda_{QCD}^2}\right)}}, \end{equation} where: \begin{equation}\label{beta} b_0\equiv -\frac{\beta_1}{2\pi}=\frac{1}{4\pi}\left( 11-\frac{2}{3}n_f\right) \end{equation} for $n_f$ flavours, is the first coefficient of the $\beta$ function. Then in the limit of $Q^2\rightarrow\infty$, QCD reduces to the parton model while at finite $Q^2$ there are corrections of order $(1/\ln Q)^k$ which are nothing else but perturbative expansions and power corrections, $(\Lambda_{QCD}/Q)^n$ which are nonperturbative in nature. \\ To be more specific, concentrate on the correlators of currents $J$ with various quantum numbers: \begin{equation} \Pi_J (Q^2)~=~i\int e^{iqx}\langle 0|T\{J(x),J(0)\}|0\rangle d x \, ,\label{corr} \end{equation} where $Q^2\equiv-q^2$ and we suppress for the moment the Lorentz indices. Furthermore, to stick to the standard notations \cite{svz,snb} we introduce the Borel/ Laplace transformed $\Pi(M^2)$ where: \begin{equation} \Pi_J (M^2)~\equiv~{ Q^{2n}\over (n-1)!}\left({-d\over dQ^2}\right)^n\Pi_J (Q^2) \end{equation} in the limit where both $Q^2$ and $n$ tend to infinity so that their ratio $M^2\equiv Q^2/n$ remains finite. Then a standard representation for correlators (\ref{corr}) at large $Q^2$ goes back to the QCD sum rules \cite{svz,snb} and somewhat schematically reads as: \begin{equation} \Pi_J(M^2)\approx (parton~model)\cdot \left( 1+{a_J\over \ln\left( {M^2}/{\Lambda_{QCD}^2}\right)}+{c_J\over M^4} +{\cal O}\left( \ln^{-2}\left({M^2}/{\Lambda_{QCD}^2}\right), M^{-6}\right)\right) \label{st}\end{equation} where the constants $a_J,c_J$ depend on the channel, i.e. on the quantum numbers of the current $J(x)$, and we assumed that the currents have zero anomalous dimensions. Moreover the terms of order $1/\ln M^2$ and $M^{-4}$ are associated with the first perturbative correction and the gluon condensate, respectively. \\ One of the central points about Eq. (\ref{st}) is the absence of $M^{-2}$ corrections. The reason is that Eq. (\ref{st}) utilises the standard operator product expansion (OPE) and there are no gauge invariant operators of dimension $d=2$ which could have vacuum-to-vacuum matrix elements. \\ We will consider a modified phenomenology accounting for a correction due to a (postulated) non-vanishing gluon mass squared, $\lambda^2$. In this approach Eqs (\ref{st}) are replaced by \begin{equation} \Pi_j(M^2)~\approx~(parton~model)\cdot\left( 1+{a_J\over \ln\left( {M^2}/{\Lambda_{QCD}^2}\right)}+{b_J\over M^2}+{c_J\over M^4} +{\cal O}\left( \ln^{-2}\left({M^2}/{\Lambda_{QCD}^2} \right), M^{-6}\right)\right)\label{b} \end{equation} where $b_J$ are calculable in terms of $\lambda^2$ and depend on the channel considered. As we shall explain later, the introduction of the gluon mass is a heuristic way to account for possible existence of small-size strings. Moreover, it is easy to realize that in this case we expect a tachyonic gluon mass, $\lambda^2< 0$. Also, we have to confine ourselves to the cases when the $\lambda^2$ correction is associated with short distances. 
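\\ As an elementary numerical sketch of the running coupling above together with Eq. (\ref{beta}) (the value of $\Lambda_{QCD}$ below is an assumed illustrative input, and only the leading-logarithmic expression is evaluated):
\begin{verbatim}
import math

def b0(nf):
    # first beta-function coefficient, (11 - 2*nf/3)/(4*pi)
    return (11.0 - 2.0 * nf / 3.0) / (4.0 * math.pi)

def alpha_s(Q2, Lam=0.375, nf=3):
    # one-loop coupling; Lam (in GeV) is an assumed illustrative value
    return 1.0 / (b0(nf) * math.log(Q2 / Lam**2))

for Q in (1.0, 2.0, 5.0, 10.0):       # GeV
    print("alpha_s(Q = %4.1f GeV) = %.2f" % (Q, alpha_s(Q**2)))
\end{verbatim}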
\section{Power corrections beyond the OPE} The validity of the OPE beyond perturbation theory cannot be proven without identifying the nonperturbative mechanisms more precisely. One may discuss both infrared- and ultraviolet-sensitive nonperturbative contributions. As was emphasised rather recently \cite{grunberg}, the general arguments based on dispersion relations alone cannot rule out extra $1/Q^2$ pieces. One can substantiate the point by the simple observation that the removal of the Landau pole from the running coupling: \begin{equation}\label{alfauncov} {\alpha_s}(Q^2)\simeq {1\over b_0\ln{\left( Q^2/\Lambda_{QCD}^2\right)}}~ \rightarrow~{1\over b_0\ln{\left( Q^2/\Lambda_{QCD}^2\right)}}- {\Lambda_{QCD}^2\over b_0(Q^2-\Lambda_{QCD}^2)}\end{equation} does introduce a $\Lambda_{QCD}^2/Q^2$ term at large $Q^2$. Note that such an ad hoc modification of the coupling in the IR region cannot be accommodated by the OPE. This simple example emphasises that the standard OPE is a dynamical framework based on a particular analysis of a generic Feynman graph, with large $Q$ flowing in and out, and allowing some of the lines to become soft \cite{svz,snb}. In this paper we accept the standard OPE to treat the infrared-sensitive contributions. \\ As for the ultraviolet-sensitive contributions, an insight into power corrections is provided by renormalons (for a review and further references see \cite{review}). Renormalons are a particular set of perturbative graphs which result in a divergent perturbative series, $\sum_na_n\alpha_s^n (Q^2)$ with $a_n\sim n!(-b_0)^n$. Starting with $n=N_{cr}\equiv 1/b_0\alpha_s(Q^2)$ the product $|a_n\alpha_s^n|$ grows with $n$ despite the smallness of $\alpha_s(Q^2)$. The most common assumption is that the rising branch of the perturbative expansion is summed up a la Borel: \begin{equation} \sum_{N_{cr}}^{\infty}n!(-b_0)^n\alpha_s^n(Q^2)\rightarrow \int dt~{(-b_0\alpha_s(Q^2)\cdot t)^{N_{cr}}\exp(-t)\over 1 +b_0\alpha_s(Q^2)\cdot t}\sim {\Lambda_{QCD}^2\over Q^2}. \end{equation} Although some further arguments can be given to support this assumption \cite{beneke1}, the Borel summation still looks like an arbitrary prescription and the theory is basically undefined on the level of $\Lambda_{QCD}^2/Q^2$ terms. \\ Hence, one may argue that there should be nonperturbative terms of the same order, $\Lambda_{QCD}^2/Q^2$, which make the theory uniquely defined. Note that this kind of logic is very common in the case of infrared renormalons (see, e.g., \cite{review}). There were a few attempts to develop a phenomenology of unconventional $\Lambda_{QCD}^2/Q^2$ corrections \cite{yamawaki}. Since there is no means to evaluate the nonperturbative terms in various channels directly, the crucial ingredient here is the assumption about the nature of the nonperturbative terms. In particular, a chiral-invariant quark mass and dominance of four-fermionic operators a la Nambu-Jona-Lasinio have been tried for phenomenology \cite{yamawaki}. \\ Here we will explore the phenomenology of the $1/Q^2$ corrections assuming that at short distances they can be effectively described by a tachyonic gluon mass. The assumption is of an openly heuristic nature and can be motivated in the following way. Consider the static potential $V(r)$ of heavy quarks and represent it as: \begin{equation} V(r)~\approx~-{4\alpha_s(r)\over 3r}+kr\label{potential} .\end{equation} Such a form of the potential is quite common at large distances $r$, with $k\approx 0.2$ GeV$^2$ representing the string tension.
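\\ \noindent Anticipating the discussion below, it is useful to see numerically how small the $kr$ piece of Eq. (\ref{potential}) is compared to the Coulomb term at short distances. A fixed $\alpha_s=0.3$ is assumed here for simplicity, ignoring its running; this is only an illustrative sketch:
\begin{verbatim}
# Relative size of the linear term in V(r) = -(4/3) alpha_s/r + k r.
# alpha_s is frozen at 0.3 (an assumption made only for illustration).
hbarc   = 0.1973          # GeV*fm, converts fm to GeV^-1
k       = 0.2             # GeV^2, string tension quoted above
alpha_s = 0.3

for r_fm in (0.05, 0.10, 0.20):
    r = r_fm / hbarc                      # r in GeV^-1
    coulomb = 4.0 * alpha_s / (3.0 * r)   # GeV
    linear  = k * r                       # GeV
    print("r = %.2f fm :  k*r / |Coulomb term| = %.2f" % (r_fm, linear / coulomb))
\end{verbatim}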
At {\it short} distances one expects that the linear correction to the Coulomb-like potential is replaced by a $r^2$ term \cite{balitsky}. The vanishing of the linear correction at short distances is a reflection of the absence of the $Q^{-2}$ correction in the current correlators (see above). And vice versa, if the coupling $\alpha_s(Q^2)$ has an unconventional $Q^{-2}$ piece then the potential $V(r)$ has a linear correction at short distances as well \cite{az2}. \\ The power-like behaviour of the gluon propagator was a subject of intense theoretical studies both by means of the lattice simulations (see, e.g.,~\cite{Leinweber} and references therein) and by means of the Schwinger-Dyson equation (see, e.g.,~\cite{Buttner:1995hg} and references therein). However, these studies refer to large distances. As for the short distances, the smallest distances where the measurements of the potential on the lattice are available are of order 0.1fm \cite{bali}. Remarkably, there is no sign of the vanishing of the $kr$ term so far. Although it is not a proof of the presence of the linear term at $r\rightarrow 0$ since the measurements were not specifically dedicated to short distances, such a possibility cannot at all be ruled out and one is free to speculate that the linear term does not vanish at short distances. Thus, let us assume that the form (\ref{potential}) remains indeed true as $r\rightarrow 0$. There is still no obvious way to evaluate $1/Q^2$ corrections to other variables in terms of the same $k$ . Note, however, that as far as the $kr$ term is treated as a small correction at short distances, it can be reproduced by a tachyonic gluon mass $\lambda^2$ where \cite{vz2} \begin{equation} -{2 \alpha_s\over 3}\lambda^2~\approx~ k\label{mass} {}. \end{equation} In this framework $\lambda^2$ is intended to represent the effect of small-size strings. From a calculational point of view, we simply replace the gluon propagator \begin{equation} D_{\mu\nu}^{ab}(k^2)~=~{\delta^{ab}\delta_{\mu\nu}\over k^2}~\rightarrow~ \delta^{ab}\delta_{\mu\nu}\left({1\over k^2}+{\lambda^2\over k^4} \right)\label{simple} \end{equation} and check that the integral is saturated by large momenta, $k^2\sim Q^2$. One should notice that, to the approximation we are working, our analysis is gauge invariant. (An attempt to generalise the substitution in Eq. (\ref{simple}) to higher loops can be found in Ref. \cite{anishetty}). \\ There are obvious limitations to this approach. In particular, we do not include terms of order $\lambda^4$ and do not go beyond one loop. Moreover, to use the Eq. (\ref{simple}) consistently at all distances one should have evaluated the anomalous dimension of the string tension $k$ which goes beyond the scope of the present note (see, however, \cite{anishetty}). Also, the effect of strings at large distances cannot be reproduced by any modified propagator \cite{gubarev} and this is one more reason to stick to short distances. Finally, the procedure described above assumes that at short distances the $kr$ piece in the potential is associated with a vector exchange while at large distances it corresponds to a scalar exchange and there is no direct evidence to support such a change \cite{bali}. \\ With these reservations in mind, it is clear that we would better concentrate first of all on the qualitative features of the approach with $\lambda^2\neq 0$. Let us emphasise therefore from the very beginning that there are qualitative effects around. Indeed, the estimate in Eq. 
(\ref{mass}) implies a large numerical value for $\lambda^2$. Applying Eq. (\ref{mass}), for example, at distances corresponding to $\alpha_s=0.3$ we would find $\lambda^2\approx -1$ GeV$^2$. Although this estimate cannot be trusted to an accuracy better than, say, a factor of 2, such a big gluon mass might seem to be easily ruled out since it would produce too much distortion of the standard phenomenology. We shall argue that this is actually not the case and that $\lambda^2\approx -0.5$ GeV$^2$ may well be admissible. \section{Puzzles to be resolved} Turning to the phenomenology, let us first note that not all the $1/Q^2$ corrections in QCD are associated with short distances. Correspondingly, not all the terms proportional to $\lambda^2$ can be used to mark short-distance contributions. For example, in the case of DIS the $1/Q^2$ corrections come from the IR region and are perfectly consistent with the OPE. Thus, the class of theoretical objects for which an observation of the $1/Q^2$ corrections would signify breaking of the OPE is limited. One example was the potential $V(r)$ discussed above. Other examples are the correlator functions (\ref{corr}) where the power corrections start with $Q^{-4}$ terms. In terms of the expansion in $\lambda^2$, we are then guaranteed that terms proportional to $\lambda^2$ are associated with short distances. If such terms arose from the infrared region, then the final result would contain terms nonanalytic in $\lambda^2$, of the type $\lambda^2\ln\lambda^2$ \cite{braun}, and we shall see that this is indeed not the case. \\ Even this limited set of variables actually exhibits a remarkable variety of scales which are not explained by the standard OPE \cite{novikov}. We will briefly review these puzzles since it is an important question whether the novel phenomenology we are going to explore can resolve the puzzles of the existing one. \\ One of the basic quantities to be determined from the theory is the scale at which the parton model predictions for the correlators in Eq. (\ref{corr}) get considerably violated by the power corrections. To quantify the scales inherent to various correlators in Eq. (\ref{corr}) one may introduce the notion of $M^2_{crit}$, which is defined as the value of $M^2$ at which the power corrections reach 10 per cent of the unit term \cite{novikov}. While choosing just 10 per cent is a pure convention, the meaning of $M^2_{crit}$ is that at lower $M^2$ the power corrections blow up. Moreover, the values of $M^2_{crit}$ can be evaluated using either experimental or theoretical input. In the latter case the calculation is usually under control only as long as the power correction is relatively small, and that is why $M^2_{crit}$ is chosen to refer to a 10 per cent, not a 100 per cent, correction. Consider first the best-studied $\rho$-channel. On the theoretical side, $\Pi_{\rho}$ is represented as: \begin{equation} \Pi_{\rho}(M^2)~\approx~(parton~model)\cdot \left(1+{\alpha_s(M^2)\over \pi}+ {\pi^2\over 3M^4}\langle{\alpha_s\over \pi} (G^a_{\mu\nu})^2\rangle+...\right) \end{equation} The value of $M^2_{crit}$ is then related to the value of the gluon condensate, $\langle{\alpha_s\over \pi}(G^a_{\mu\nu})^2\rangle$: \begin{equation} M^2_{crit}(\rho-channel)~\approx~\sqrt{10{\pi\over 3} \langle\alpha_s G^2 \rangle} ~\approx~(0.6~-~0.8)~{\mbox{GeV}}^2\label{normal} \end{equation} where the factor of 10 in the r.h.s. reflects the 10 per cent convention introduced above.
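\\ \noindent Both of the numbers just quoted are easy to check explicitly. In the following sketch the string tension, the coupling, and the gluon condensate are the representative inputs mentioned in the text (the condensate values are chosen to bracket the range which reproduces Eq. (\ref{normal})); nothing here is a new determination:
\begin{verbatim}
import math

# lambda^2 from the string tension:  -(2/3) alpha_s lambda^2 ~ k
k       = 0.2                      # GeV^2 (string tension quoted above)
alpha_s = 0.3                      # representative short-distance coupling
print("lambda^2 ~ %.1f GeV^2" % (-3.0 * k / (2.0 * alpha_s)))     # ~ -1 GeV^2

# M^2_crit in the rho channel from the gluon condensate (10 per cent rule)
for G2 in (0.04, 0.06):            # <alpha_s G^2> in GeV^4 (assumed inputs)
    print("<alpha_s G^2> = %.2f GeV^4  ->  M^2_crit(rho) ~ %.2f GeV^2"
          % (G2, math.sqrt(10.0 * math.pi / 3.0 * G2)))
\end{verbatim}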
Independent information on $M^2_{crit}$ in this channel can be obtained by using the dispersion relations for $\Pi_{\rho}(M^2)$ and integrating over the corresponding experimental cross section of $e^+e^-$ annihilation into hadrons, or by optimising the $M^2$-dependence of the Borel/Laplace sum rules. The value of $M^2_{crit}$ is then most sensitive to the parameters of the $\rho$-meson and comes out at about the same $0.6~{\mbox{GeV}}^2$ as indicated by Eq. (\ref{normal}). Since the gluon condensate parametrises the contribution of the infrared region, one may say that at least in the $\rho$-channel the breaking of asymptotic freedom is due to infrared phenomena. Recent detailed experimental studies of $\tau$-decays did not contradict this picture \cite{aleph}. \\ If one proceeds to other channels, in particular to the $\pi$-channel and to the $0^{\pm}$-gluonium channels, nothing special happens to the $M^2_{crit}$ associated with the infrared-sensitive contribution parametrised by the gluon condensate. However, it was determined from independent arguments \cite{novikov} that the actual values of $M^2_{crit}$ do vary considerably in these channels: \begin{eqnarray} M^2_{crit}(\pi-channel)~\ge~ 2~{\mbox{GeV}}^2\label{pion}\\ M^2_{crit}(0^{\pm}-gluonium~channel)~\approx 15~{\mbox{GeV}}^2\label{gluonium}. \end{eqnarray} In the pion channel, the lower bound on $M^2_{crit}$ can be determined by evaluating the pion contribution to the dispersive integral and equating it to the parton-model result. Since all other hadronic states give positive-definite contributions, at lower $M^2$ the parton model is certainly violated. In more detail, one gets: \begin{equation}\label{pion1} M^2_{crit}(\pi-channel)~\geq~\sqrt{{16\pi^2\over 3} {f^2_{\pi}m_{\pi}^4\over (\bar{m}_u+\bar{m}_d)^2}} \approx 1.8~\rm{GeV}^2,\end{equation} where the running light quark masses are evaluated at the scale $M^2$, and $f_\pi$ = 93 MeV. A detailed $M^2$ optimisation analysis of the pion sum rule gives a value \cite{snb}: \begin{equation}\label{pion2} M^2_{crit}(\pi-channel)\approx (2\sim 2.5)~\rm{GeV}^2. \end{equation} Although the difference of a factor of (3-4) in the values of $M^2_{crit}$ in the $\rho$- and $\pi$-channels might seem not so big, it cannot be matched by the standard IR contributions. Note also that the difference in $M^2_{crit}$ in the $\rho$- and $\pi$-channels was confirmed by direct measurements of the corresponding correlators on the lattice \cite{chu}. These measurements also provide independent support for the idea that asymptotic freedom is violated sharply at moderate $M^2\sim 1~\rm{GeV}^2$, due to power corrections. This is most important in the case of channels with large perturbative corrections, a fact which might otherwise give rise to doubts about the relevance of the notion of $M^2_{crit}$. \\ In the case of the scalar-gluonium channel, the value of $M^2_{crit}$ follows \cite{novikov} from a low-energy theorem for the correlator associated with the current $\alpha_s(G^2_{\mu\nu})^2$. This low-energy theorem can be translated into a correction to the parton model predictions at large $M^2$: \begin{equation} \Pi_G(M^2)\approx (parton ~model)\cdot\left(1+\left(\frac{4}{b_0}\right) \left(\frac{\pi}{\alpha_s}\right)^2 \frac{ \langle\alpha_s G^2 \rangle}{M^4} +...\right), \label{huge} \end{equation} where $b_0$ is defined in Eq. (\ref{beta}). Thus, the low-energy theorem brings in a large numerical factor $12/(b_0\pi)\left( \pi^2/\alpha_s^2\right)\approx 400$ in front of the $M^{-4}$ correction as compared with the $\rho$-channel.
This dramatically changes the estimate of $M^2_{crit}$ to: \begin{equation}\label{glue2} M^2_{crit}(0^+-gluonium)~\approx~ 20M^2_{crit} (\rho-channel)~\approx~15~\rm{GeV}^2~, \end{equation} and of the resonance properties, respectively (see Ref.\cite{nva,nv}), in rough agreement with the $M^2$-stability analysis of the scalar \cite{nva,nv} and pseudoscalar \cite{shore,nv} gluonia sum rules. Moreover, the huge power correction (\ref{huge}) remains of a uniquely large scale and is not supported by any other large-scale correction, which makes it very difficult to interpret the breaking of asymptotic freedom in terms of resonances \cite{novikov}. \\ Also, the phenomenology built on the IR power corrections cannot resolve the problem of the $\eta^{'}$-meson. As is known from general considerations \cite{thooft}, a dynamical resolution of this problem is not possible without accounting for field configurations of nontrivial topology. On the other hand, the standard OPE is based on standard, i.e., perturbative, Feynman graphs and cannot account for a nontrivial topology. Hence, one invokes direct instantons, whereby the currents $J$ interact at large $Q^2$ with small-size instantons \cite{novikov}. The instanton-induced contributions to the correlators in Eq. (\ref{corr}) were extensively studied in the literature (for a review and references see \cite{shuryak}). We shall further comment on this point later. \section{Some phenomenological implications} Now, we add a new term proportional to $\lambda^2$ to the theoretical side of $\Pi_J(M^2)$, see Eq. (\ref{b}). It is convenient to consider the effect of $\lambda^2 \neq 0$ channel by channel. \subsection{Constraint on $\lambda^2$ from the $\rho$-channel} Phenomenologically, in the $\rho$-channel there are severe restrictions \cite{narison} on the new term $b_{\rho}$: \begin{equation} b_{\rho}~\approx~-~(0.03-0.07)~{\rm GeV}^2\label{constr} .\end{equation} To find out the implications of this constraint for the gluon mass, we turn to the correlator of two vector currents $J^\mu_V(x)\equiv \bar{\psi}_i\gamma^\mu\psi_j(x)$, viz. \begin{eqnarray} \Pi^{\mu\nu}_{V}(q) &\equiv& i \int d^4x ~e^{iqx} \langle 0\vert {\cal T} J^\mu_{V}(x) \left( J^\nu _{V}\right)^\dagger (0) \vert 0 \rangle \nonumber\\ &=& -(g^{\mu\nu}q^2-q^\mu q^\nu)\Pi^{(1)}_{V}(Q^2) + q^\mu q^\nu\Pi^{(0)}_{V}(Q^2). \label{vec.corr.def} \end{eqnarray} Here the indices $i,j$ correspond to quark flavours; $m_i$ is the mass of the quark $i$. To first order in $\alpha_s$ and expanding in $m_{i,j}$ we obtain: \footnote{The results described in Eqs.~(\ref{Pi1m2GGv1},\ref{mqq},\ref{pis5:res}, \ref{scalarGG:res}) below have been obtained with the help of program packages MATAD \cite{MATAD} and MINCER \cite{MINCER} written in FORM \cite{FORM}.} \begin{equation} \label{Pi1full} \Pi^{(1)}_{V} = \Pi^{(1)}_{V,con} + \Pi^{(1)}_{V,NO} {}\, , \end{equation} where \begin{equation} \label{Pi1con} \Pi^{(1)}_{V,con} = \frac{m_j\langle \bar{\psi_i} \psi_i \rangle + m_i \langle \bar{\psi_j} \psi_j \rangle}{Q^4} \end{equation} and \begin{eqnarray} \lefteqn{(16\pi^2)\Pi^{(1)}_{V,NO} = } \nonumber\\ &{+}& \left[ \frac{20}{3} +6 \frac{m_{-}^2}{Q^2} -6 \frac{m_{+}^2}{Q^2} +4 l_{\mu Q} +6 \frac{m_{-}^2}{Q^2}l_{\mu Q} \right] \nonumber\\ &{+}& \frac{\alpha_s}{\pi} \left[ \frac{55}{3} -16 \,\zeta(3) +\frac{107}{2} \frac{m_{-}^2}{Q^2} -24 \,\zeta(3)\frac{m_{-}^2}{Q^2} -16 \frac{m_{+}^2}{Q^2} \right. \nonumber \\ &{}& \left.
\phantom{+ \frac{\alpha_s}{\pi}} +4 l_{\mu Q} +22 \frac{m_{-}^2}{Q^2}l_{\mu Q} -12 \frac{m_{+}^2}{Q^2}l_{\mu Q} +6 \frac{m_{-}^2}{Q^2}l_{\mu Q}^2 \right] \nonumber\\ &{+}& \frac{\alpha_s}{\pi}\frac{\lambda^2}{Q^2} \left[ -\frac{128}{3} +32 \,\zeta(3) -\frac{46}{3} \frac{m_{-}^2}{Q^2} +16 \,\zeta(3)\frac{m_{-}^2}{Q^2} -18 \frac{m_{+}^2}{Q^2} \right. \nonumber \\ &{}& \left. \phantom{+ \frac{\alpha_s}{\pi}\frac{\lambda^2}{Q^2} } +6 \frac{m_{-}^2}{Q^2}l_{iQ} -6 \frac{m_{+}^2}{Q^2}l_{iQ} +6 \frac{m_{-}^2}{Q^2}l_{jQ} -6 \frac{m_{+}^2}{Q^2}l_{jQ} \right] {}. \label{Pi1m2GGv1} \end{eqnarray} The above result is in $\overline{\mbox{MS}}$ scheme and the notations are as follows: \[ m_{\pm}=m_i\pm m_j, \ \ \ l_{ \mu Q}= \log(\frac{\mu^2}{Q^2}),\ \ \ \mbox{\rm and} \ \ \ l_{ i Q}= \log(\frac{m_i^2}{Q^2}). \] The normal-ordered quark condensates in Eq.~\re{Pi1full} do not receive by definition any perturbative corrections and are displayed explictly only to make easier the discussion below. Note that the terms of order $\lambda^2/Q^2$ in Eq.~(\ref{Pi1m2GGv1}) are $\mu$ independent and, thus, do not depend on the way how the overall UV subtraction of the correlator in Eq. (\ref{vec.corr.def}) has been fixed. Eq.~(\ref{Pi1m2GGv1}) are written for the case of normal ordered quark condensate, that is the reason why quark mass logs appear there. As is well-known these are of long-distance nature and can (and even should) be absorbed into the quark condensate \cite{BroadGen84,CheSpi88,Jamin:1993se}. To do this we should use normal non-ordered condensates, which means that perturbative theory contributions should {\em not} be automatically subtracted from the latter. Instead, these contributions are to be minimally renormalized. \\ In order $\alpha_s$ there are two diagrams (see Fig. 1) leading to nonzero vacuum expectation value of $ \langle \bar{\psi_i} \psi_i \rangle $ in perturbation theory. A simple calculation gives \begin{eqnarray} \langle \bar{\psi_i} \psi_i \rangle = &{}&\frac{3 m_i^3}{4\pi^2} \left[ 1 + \ln (\frac{\mu^2}{m_i^2}) + 2 \frac{\alpha_s}{\pi} \left( \ln^2(\frac{\mu^2}{m_i^2}) + \frac{5}{3} \ln (\frac{\mu^2}{m_i^2}) + \frac{5}{3} \right) \right] \nonumber \\ &+& \frac{ m_i \lambda^2}{4\pi^2} \frac{\alpha_s}{\pi} \left( -5 + 6\ln \frac{\mu^2}{m_i^2} \right) {}. \label{mqq} \end{eqnarray} Now in order to find the polarization operator $\Pi^{(1)}_V$ in the case of normal non-ordered condensates one proceed by equating $\Pi^{(1)}_{V,NO}$ to the combination $\Pi^{(1)}_{V,NON} + \Pi^{(1)}_{V,con}$ with the condensate values taken from Eq.~(\ref{mqq}). The result of the treatment indeed does not contain any mass logs and reads \begin{eqnarray}\label{Pi1m2GGv2} (16\pi^2)\Pi^{(1)}_{V,NON} &=& \Bigg{[} \frac{20}{3} +6 \frac{m_{-}^2}{Q^2} -6 \frac{m_{+}^2}{Q^2} +4 l_{\mu Q} +6 \frac{m_{-}^2}{Q^2}l_{\mu Q} \Bigg{]} \nonumber\\ &{+}& \frac{\alpha_s}{\pi} \Bigg{[} \frac{55}{3} -16 \,\zeta(3) +\frac{107}{2} \frac{m_{-}^2}{Q^2} -24 \,\zeta(3)\frac{m_{-}^2}{Q^2} -16 \frac{m_{+}^2}{Q^2}\nonumber\\ &+& 4 l_{\mu Q} +22 \frac{m_{-}^2}{Q^2}l_{\mu Q} -12 \frac{m_{+}^2}{Q^2}l_{\mu Q} +6 \frac{m_{-}^2}{Q^2}l_{\mu Q}^2 \Bigg{]} \nonumber\\ &{+}& \frac{\alpha_s}{\pi}\frac{\lambda^2}{Q^2} \Bigg{[} -\frac{128}{3} +32 \,\zeta(3) -\frac{76}{3} \frac{m_{-}^2}{Q^2} +16 \,\zeta(3)\frac{m_{-}^2}{Q^2}\nonumber\\ &-& 8 \frac{m_{+}^2}{Q^2} +12 \frac{m_{-}^2}{Q^2}l_{\mu Q} -12 \frac{m_{+}^2}{Q^2}l_{\mu Q} \Bigg{]} {}. 
\end{eqnarray} \noindent As follows from Eq.~\re{mqq}, the quark condensate obeys the non-trivial RGE: \begin{equation} \mu^2\frac{d}{d\mu^2} m_q \langle \overline{\psi}_{{\rm i}} \psi_{{\rm i}} \rangle = \frac{3 m_q m_i \lambda^2}{2\pi^2} \frac{\alpha_s}{\pi} {}, \label{RGVEV} \end{equation} where we have neglected all quartic quark mass terms. For further analysis, we shall choose the subtraction point $\mu$ equal to the sum rule scale $M$, as discussed in \cite{chk}. In the light-quark case relevant to the $\rho$-channel we can neglect the $m^2$ terms and $\Pi_{\rho}(M^2)$ simplifies greatly: \begin{equation} \Pi_{\rho}(M^2) = (parton ~model)\cdot \left( 1+ \left(\frac{{\alpha_s}}{\pi}\right) \left[1 + \frac{\lambda^2}{M^2} \left( -\frac{32}{3} + 8\zeta(3) \right) \right] \right) \label{Bor.Pi1.male} {}, \end{equation} or, in numerical form, \begin{equation} \Pi_{\rho}(M^2) = (parton ~model)\cdot \left( 1 +\left(\frac{{\alpha_s}}{\pi}\right) \left[ 1 - 1.05\frac{\lambda^2}{M^2} \right] \right) \label{Bor.Pi1.male.num} {}. \end{equation} This result was actually obtained earlier \cite{ball}. It is interesting that terms of order $M^{-2}$ were first looked for \cite{narison} in connection with the proposal (see the first paper in Ref. \cite{yamawaki}) that UV renormalons can be imitated by a nonperturbative quark mass. In terms of the quark mass the correction is substantially larger and is $-{6m^2/ M^2}$ (see Eq. (\ref{Pi1m2GGv2})). The conclusion of the analysis \cite{narison} was that the data allow only for an {\it imaginary} quark mass and quite a small one: \begin{equation} m_q^2\simeq -(71-114)^2 ~\rm{MeV}^2 \end{equation} (further analysis favoured rather the lower end of the band \cite{narison1}). \\ Now that we are interpreting a possible $M^{-2}$ correction in terms of the gluon mass, the conclusion \cite{narison} that a nonperturbative quark mass could only be imaginary fits our scheme with a tachyonic gluon mass very well. Moreover, the value of the quark mass, say, $m_q^2=-0.01~\rm{GeV}^2$, is translated into a much larger gluon mass, $\lambda^2\approx -0.5~\rm{GeV}^2$. \\ To summarize our discussion of the $\rho$-channel, we find an important independent indication that the present data could be interpreted in terms of a tachyonic gluon mass: \begin{equation} \lambda^2 (1~\rm{GeV})\approx -(0.2-0.5)~\rm{GeV}^2~, \end{equation} where we have used $(\alpha_s/\pi)(1~\rm{GeV})\simeq 0.17\pm 0.02$. This result indicates that even relatively large masses, say, $\lambda^2\approx -0.5$ GeV$^2$, cannot be ruled out. A further improvement of the (axial-) vector spectral function from $e^+e^-$ and/or $\tau$ decay data would improve the present constraint on the eventual existence of such a term. \subsection{Alternative estimate of $\lambda^2$ from the $\pi$-channel} We shall be concerned with the (pseudo)scalar two-point correlator: \begin{equation} \psi_{(5)}(Q^2) \equiv i \int d^4x ~e^{iqx} \ \langle 0\vert {\cal T} J_{(5)}(x) J^\dagger _{(5)}(0) \vert 0 \rangle , \end{equation} built from the current bilinear in the light quark fields: \begin{equation} \label{scalar} J_{(5)}(x)=(m_i+(-)m_j)\bar q_i(i\gamma_5)q_j, \end{equation} having the quantum numbers of the pseudoscalar $(0^{-+})$ $\pi$ or scalar $(0^{++})~a^0/\delta$ mesons.
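\\ \noindent Before analysing the pion channel in detail, the two numbers just used in the $\rho$-channel estimate are easily checked. In the sketch below the value of $\alpha_s/\pi$ is the one quoted above, and the translation simply equates the magnitudes of the two dimension-two corrections, $-6m_q^2/M^2$ and $-1.05\,(\alpha_s/\pi)\lambda^2/M^2$; within the quoted uncertainty on $\alpha_s$ it reproduces the band $\lambda^2(1~\rm{GeV})\approx -(0.2-0.5)~\rm{GeV}^2$ given above:
\begin{verbatim}
zeta3 = sum(1.0 / n**3 for n in range(1, 100001))
coeff = -32.0 / 3.0 + 8.0 * zeta3
print("-32/3 + 8*zeta(3) = %.3f" % coeff)          # ~ -1.05

a_s = 0.17                                          # alpha_s/pi at 1 GeV
for m_q in (0.071, 0.114):                          # GeV; m_q^2 is negative
    lam2 = 6.0 * (-m_q**2) / (abs(coeff) * a_s)
    print("m_q^2 = %+.4f GeV^2  ->  lambda^2 ~ %+.2f GeV^2"
          % (-m_q**2, lam2))
\end{verbatim}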
In the chiral limit ($m_u\simeq m_d=0$), the QCD expression of the absorptive part of the correlator reads: \begin{equation} \frac{1}{\pi}{\rm Im}\psi_{(5)}(s)\simeq (m_i+(-)m_j)^2\frac{3}{8\pi^2}s\Bigg{[}1+ \left(\frac{{\alpha_s}}{\pi}\right)\left( -2\log\left(\frac{s}{\mu^2}\right) +\frac{17}{3}-4\frac{\lambda^2}{s}\right)\Bigg{]}~, \label{pis5:res} \end{equation} where one should notice that the coefficient of the $\lambda^2$ term: \begin{equation} b_{\pi}\approx 4b_{\rho}~.\end{equation} \\ In order to see the effect of this term, we work with the Borel/Laplace sum rule: \begin{equation}\label{laplace} {\cal L}(\tau) = \int_{t_\leq}^{\infty} {dt}~\mbox{e}^{-t\tau} ~\frac{1}{\pi}~\mbox{Im} \psi_{(5)}(t), \end{equation} {and} the corresponding ratio of moments: \begin{equation}\label{ratio} {\cal R}(\tau) \equiv -\frac{d}{d\tau} \log {{\cal L}(\tau)}, \end{equation} where $t_\leq$ is the hadronic threshold. \begin{table*}[hbt] \begin{center} \caption{Ratio of moments ${\cal R}(\tau)$ and value of $\lambda^2$} \begin{tabular}{c c c c c} \hline &&&& \\ $\tau\equiv 1/M^2$[GeV$^{-2}$]&${\cal L}_{exp}/(2f_\pi^2m_\pi^4)$&$R_{exp}$& $R^{\lambda^2=0}_{QCD}$&$-(\alpha_s/\pi)\lambda^2$[GeV]$^2$\\ &&&&\\ \hline &&&& \\ 1.4&1.20&$0.27\pm 0.06$&$0.66^{+0.50}_{-0.31}$&$0.09^{+0.08}_{-0.06}$ \\ &&&&\\ 1.2&1.29&$0.36\pm 0.07$&$0.79^{+0.42}_{-0.28}$&$0.10^{+0.09}_{-0.05}$\\ &&&&\\ 1.0&1.42&$0.46\pm 0.10$&$0.94^{+0.34}_{-0.25}$&$0.11^{+0.07}_{-0.06}$\\ &&&&\\ 0.8&1.61&$0.58\pm 0.12$&$1.10^{+0.26}_{-0.21}$&$0.12\pm 0.06$\\ &&&&\\ 0.6&1.89&$0.73\pm 0.15$&$1.24^{+0.20}_{-0.17}$&$0.12\pm{0.06}$\\ &&&&\\ 0.4&2.29&$0.89\pm 0.16$&$1.33^{+0.15}_{-0.13}$&$0.10\pm 0.06$\\ &&&&\\ \hline &&&&\\ \end{tabular} \end{center} \end{table*} \noindent The QCD expression of the sum rule can be written as: \begin{eqnarray} {\cal L}(\tau)&=& \frac{3}{8\pi^2}\Big{[}\overline{m}_u(\tau)+\overline{m}_d(\tau)\Big{]}^2 \tau^{- 2} \Bigg{[} 1+\delta_0+(\delta_2+\delta_\lambda)\tau +\delta_4\tau^2+\delta_6\tau^3\Bigg{]} \nonumber\\ -\frac{d{\cal L}}{d\tau}&=&2\frac{3}{8\pi^2}\Big{[}\overline{m}_u(\tau)+ \overline{m}_d(\tau) \ Big{]}^2\tau^{-3}\Bigg{[} 1+\delta'_0+\frac{1}{2}\left( \delta_2+\delta_\lambda\right)\tau-\frac{1}{2}\delta_6 \tau^3\Bigg{]} \nonumber\\ {\cal R}(\tau)&\equiv&-\frac{d{\cal L}}{d\tau}\Big{/}{\cal L}~, \end{eqnarray} where \cite{becchi}--\cite{CHET}: \begin{eqnarray} \delta_0&=&4.821a_s+28.953a^2_s\nonumber\\ \delta'_0&=&6.821a_s+57.026a^2_s \nonumber\\ \end{eqnarray} while: \begin{equation} \delta_\lambda=-4a_s\lambda^2~. \end{equation} By keeping the leading linear and quadratic mass terms, one has \cite{svz,snb}: \begin{eqnarray} \delta_2&=&-2(\overline{m}^2_u+\overline{m}^2_d-\overline{m}_u \overline{m}_d) \tau ,\nonumber\\ \delta_4&=&\frac{8\pi^2}{3}\left\{ \frac{1}{8\pi}\langle \alpha_s G^2\rangle-\left( m_d-\frac{m_u}{2}\right) \langle \bar uu\rangle+(u\leftrightarrow d)\right\} ,\nonumber\\ \delta_6&=&k\frac{896}{27}\pi^3\alpha_s\langle \bar uu\rangle ^2 . \end{eqnarray} $\overline{m}_i(\tau)$ and $a_s\equiv \overline{\alpha}_s(\tau)/\pi$ are respectively the running quark masses and QCD coupling. 
The expression of the running coupling\index{running coupling} to two-loop accuracy can be parametrized as ($\nu^2\equiv \tau^{-1}$): \begin{eqnarray} a_s(\nu)\equiv \left(\frac{\bar{\alpha_s}}{\pi}\right) =a_s^{(0)}\Bigg\{ 1-a_s^{(0)} \frac{\beta_2}{\beta_1}\log \log{\frac{\nu^2}{\Lambda^2}}+{\cal{O}}(a_s^2)\Bigg\}~, \end{eqnarray} with: \begin{equation} a_s^{(0)}\equiv \frac{1}{-\beta_1\log\left(\nu/\Lambda\right)}~, \end{equation} and $\beta_i$ are the ${\cal{O}}(a_s^i)$ coefficients of the $\beta$ function\index{$\beta$ function} in the $\overline{\mbox{MS}}$-scheme for $n_f$ flavours, which, for three flavours, read: \begin{equation} \beta_1=-9/2~,~~~~~\beta_2=-8~. \end{equation} $\Lambda$ is a renormalization group invariant\index{Renormalization Group Invariant (RGI)} scale but is renormalization scheme\index{renormalization scheme} dependent. The expression of the running quark mass in terms of the invariant mass\index{invariant mass} $\hat{m}_i$ is \cite{snb}: \begin{eqnarray} \overline{m}_i(\nu)&=&\hat{m}_i\left( -\beta_1 a_s(\nu)\right)^{-\gamma_1/\beta_1} \Bigg\{1+\frac{\beta_2}{\beta_1}\left( \frac{\gamma_1}{\beta_1}- \frac{\gamma_2}{\beta_2}\right) a_s(\nu)+{\cal{O}}(a_s^2)\Bigg\}~ \end{eqnarray} where $\gamma_i$ are the ${\cal{O}}(a_s^i)$ coefficients of the quark-mass anomalous dimension\index{anomalous dimension}. For three flavours, we have: \begin{equation} \gamma_1=2~,~~~~\gamma_2=91/12~. \end{equation} We shall use in our numerical analysis, the QCD parameters compiled in \cite{snb}: \begin{eqnarray} \Lambda&=& (375\pm 50)~\rm{ MeV}\nonumber\\ \langle \alpha_s G^2\rangle&=& (0.07\pm 0.01)~\rm {GeV}^4 \nonumber\\ \langle \bar uu\rangle^{1/3}(1~\rm {GeV})&=& -(229\pm 9)~\rm{MeV} \nonumber\\ k&\simeq& 2-3\nonumber \\ M^2_0&=&(0.8\pm 0.1)~\rm{GeV}^2 \end{eqnarray} where $k$ indicates the deviation from the vacuum saturation assumption of the four-quark condensates, while $M_0^2$ parametrize the mixed quark-gluon condensate: \begin{equation} \langle\bar u G^a_{\mu\nu}\frac{\lambda_a}{2} u \rangle= M^2_0\langle\bar uu\rangle~. \end{equation} One can notice that the ratio of moments ${\cal R}(\tau)$ is not affected by the quark masses to leading order of chiral symmetry breaking terms. Therefore, we shall use it for extracting $\lambda^2$. We parametrize the pseudoscalar spectral function as usual by the pion pole plus the $3\pi$ contribution.The $3\pi$ threshold has been calculated using lowest order chiral perturbation theory, while two $\pi'$ resonances are introduced with the following widths and masses in units of MeV \cite{RAFAEL}: \begin{equation} M_1= 1300\pm 100 ~~~\Gamma_1=400\pm 200 ~~~~\rm{and}~~~~ M_2= 1770\pm 30 ~~~\Gamma_2=310\pm 50 \end{equation} Including finite width corrections, one obtains: \begin{equation} \frac{1}{\pi}{\rm Im}\psi_5(s)=2f^2_\pi m^4_\pi \Bigg{[}\delta(s-m^2_\pi)+ \theta(s-9m^2_\pi) \frac{1}{(16\pi^2f_\pi^2)^2}\frac{s}{18}\rho_{had.}(s)\Bigg{]} \end{equation} where $\rho_{had.}(t)$ is given in \cite{RAFAEL}. We use, in our analysis, the largest range of predictions given by the two different parametrisations of the hadronic spectral function proposed in \cite{RAFAEL}, which correspond to the two values $\xi=-0.23 +i~0.65$ (best duality with QCD) and $\xi=0.23+i~0.1$ (best fit to the experimental curve for the observed $\pi'(1770)$ in hadronic reactions) of the phenomenological interference complex parameter $\xi$ between the two $\pi'$ states. 
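\\ \noindent For completeness, the running coupling and running mass expressions above can be transcribed directly (a minimal sketch with $\Lambda$ fixed to the central value just quoted; no error analysis is attempted):
\begin{verbatim}
import math

beta1, beta2   = -9.0 / 2.0, -8.0      # three flavours
gamma1, gamma2 = 2.0, 91.0 / 12.0
Lam            = 0.375                 # GeV, central value quoted above

def a_s(nu):
    # two-loop alpha_s/pi as parametrized in the text
    a0 = 1.0 / (-beta1 * math.log(nu / Lam))
    return a0 * (1.0 - a0 * (beta2 / beta1) * math.log(math.log(nu**2 / Lam**2)))

def m_run(m_hat, nu):
    # running mass in terms of the invariant mass m_hat
    a = a_s(nu)
    return m_hat * (-beta1 * a) ** (-gamma1 / beta1) * \
           (1.0 + (beta2 / beta1) * (gamma1 / beta1 - gamma2 / beta2) * a)

for nu in (1.0, 1.5, 2.0):             # GeV
    print("nu = %.1f GeV :  alpha_s/pi = %.3f ,  m(nu)/m_hat = %.3f"
          % (nu, a_s(nu), m_run(1.0, nu)))
\end{verbatim}
At $\nu = 1$ GeV this reproduces $(\alpha_s/\pi)\simeq 0.17$, the value used in the $\rho$-channel estimate above.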
The result of this analysis corresponding to ${\cal R}_{exp}(\tau)$ is summarized in Table 1, where we have also used $t_c\simeq 2.5$ GeV$^2$ as fixed from the duality analysis in \cite{RAFAEL}. Therefore, we can deduce to a good approximation, at the $\tau-$stability region ($\tau\approx 0.8$ GeV$^{-2})$: \begin{equation} (\alpha_s/\pi)\lambda^2\simeq \frac{1}{4}\Big{[} {\cal R}_{exp}-{\cal R}^{\lambda^2=0}_{QCD}\Big{]} \simeq -(0.12\pm 0.06)~\rm{GeV}^2. \end{equation} One can notice, by comparing this result with the one from the $\rho$-channel, that, to leading order in $\lambda^2$, the two estimates lead to the consistent value: \begin{equation}\label{lambda2} (\alpha_s/\pi)\lambda^2\simeq -(0.06-0.07)~\rm{GeV}^2~~~~\Longrightarrow~~~~~ \lambda^2 (\tau\approx 0.8~\rm{GeV}^{-2}) \simeq -(0.43\pm 0.09)~\rm{GeV}^2~. \end{equation} One can also notice in Table 1 that the introduction of $\lambda$ has enlarged the region of QCD duality \cite{RAFAEL} or $\tau$ stability \cite{snb,snmass} to a lower value of $M^2\equiv 1/\tau$ of about $M_\rho$. \\ To summarize, the pseudoscalar channel can be considered as a success for the phenomenology with $\lambda^2\neq 0$. \subsection{Effects of $\lambda^2$ on $\overline{m}_{u,d}$ and $\overline{m}_s$ from the (pseudo)scalar channels} Re-analysing the ${\cal L}$ sum rule by the inclusion of this new $\lambda$ term, one can also notice that the presence of such a term tends to decrease only slightly the prediction for the sum $(\overline{m}_{u}+\overline{m}_{d})$ of the running light quark masses by about: \begin{equation} \delta_{ud}\approx -5.6\% \end{equation} Applying this effect to the recent estimates in \cite{RAFAEL,snmass}, one obtains: \begin{equation} (\overline{m}_u + \overline{m}_d)(1 ~{\rm{GeV}})=(11.3\pm 2.4)~{\rm{MeV}}~~~~ \Longrightarrow ~~~~ (\overline{m}_u + \overline{m}_d)(2 ~\rm{GeV})=(8.6\pm 2.1)~\rm{MeV}~. \end{equation} For the estimate of the strange quark mass from the kaon channel, one also obtains a decrease of: \begin{equation} \delta_{us}\approx -5\% \end{equation} which is slightly lower than the one of the pion channel due to the partial cancellation of the $\lambda$ effect by the $m_s^2$ one.\\ \subsection{ $\lambda^2$ and the value of $\alpha_s(M_\tau)$ from tau decays} A natural extension of the analysis is to test the effect of $\lambda^2$ on the value of $\alpha_s$ extracted from tau-decays. For this purpose, we re-do the analysis from $R_\tau$ given for the first in \cite{bnp}, by adding the new $\lambda^2$ term. The modified expression of the tau hadronic width is \cite{bnp}: \begin{equation} R_\tau= 3\left( (|V_{ud}|^2+|V_{us}|^2\right) S_{EW}^\tau\Big{\{} 1+\delta^\tau_{EW}+\delta^\tau_{PT}+ \delta^\tau_2+ \delta^\tau_{\lambda}+\delta^\tau_{NP}\Big{\}}. \end{equation} where: $|V_{ud}|=0.9751\pm 0.0006$, $|V_{us}| =0.221\pm 0.003$ are the weak mixing angles; $S^\tau_{EW}=1.0194$ and $\delta^\tau_{EW} =0.0010$ are the electroweak corrections; $\delta_2^\tau$ is a small correction from the running light quark masses; $\delta^\tau_{NP}$ are non-perturbative effects due to operators of dimension $\geq$ 4. 
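\\ \noindent Returning for a moment to the Table-1 extraction above before evaluating the effect on $\alpha_s(M_\tau)$: the quoted values of $(\alpha_s/\pi)\lambda^2$ are reproduced to within about 0.01 GeV$^2$ by the approximate relation given there. A minimal transcription, using a few of the tabulated central values:
\begin{verbatim}
# (alpha_s/pi)*lambda^2 ~ (1/4) * [ R_exp - R_QCD(lambda^2 = 0) ]
table = {            # tau [GeV^-2] : (R_exp, R_QCD with lambda^2 = 0)
    1.0: (0.46, 0.94),
    0.8: (0.58, 1.10),
    0.6: (0.73, 1.24),
}
for tau in sorted(table):
    R_exp, R_qcd = table[tau]
    print("tau = %.1f GeV^-2 :  (alpha_s/pi)*lambda^2 ~ %+.2f GeV^2"
          % (tau, 0.25 * (R_exp - R_qcd)))
\end{verbatim}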
\begin{table*}[hbt] \begin{center} \caption{Reduction of $\alpha_s(M_\tau)$ for different values of $\lambda^2$} \begin{tabular}{c c} \hline & \\ $ -\lambda^2$ [GeV$^2$]&$-\delta^{\alpha_s}_\lambda [\%]$\\ &\\ \hline &\\ 0.16& $4\pm 1$\\ &\\ 0.25&$7\pm 1$ \\ &\\ 0.34& $9\pm 1$\\ &\\ 0.43& $11\pm 2$\\ &\\ 0.52& $14\pm 2$\\ &\\ 0.61& $16\pm 3$\\ &\\ \hline \end{tabular} \end{center} \end{table*} \noindent The correction due to the new dimension-2 term is: \begin{equation} \delta^\tau_\lambda= 2\frac{b_\rho}{M^2_\tau}, \end{equation} where, from Eq. (\ref{Bor.Pi1.male.num}), $b_\rho=-1.05a_s\lambda^2$ and $\lambda^2$ is given in Eq. (\ref{lambda2}). At fixed order of perturbation theory, one has \cite{Gorishny91,bnp}: \begin{equation} \delta^\tau_{PT}= a_s+5.2023a_s^2+26.366a_s^3+{\cal O}(a_s^4)~, \end{equation} where we have truncated the series at the last completely computed coefficient. We use the values of the QCD non-perturbative parameters given in \cite{narison1} and the experimental value \cite{aleph} $R_\tau=3.484\pm 0.024$. By comparing the extracted value of $\alpha_s(M_\tau)$ obtained using $\lambda^2=0$ and $\lambda^2\neq 0$, we deduce in Table 2, using the previous approximations, the reduction of the $\alpha_s(M_\tau)$-value for different values of $\lambda^2$. The error given in Table 2 reflects the uncertainty in the eventual scale-dependence of the quantity $a_s\lambda^2$ or $\lambda^2$. The quoted central value is the average of the results corresponding to the freedom of taking either of these two quantities as the input in the analysis. \\ \noindent $\bullet$ If we use the value of $\lambda^2$ as given in Eq. (\ref{lambda2}), then we get to leading order in $\lambda^2$: \begin{equation} \delta^{\alpha_s}_\lambda \simeq -(11\pm 3)\%\label{thirteen}. \end{equation} On the other hand, a comparison, at the $\tau$-mass, of the value of $\alpha_s$ from the $Z$ and/or the world average \cite{PDG} with the one from $\tau$ decays, where the latter includes $\alpha_s^3$ corrections, gives a difference of about \cite{menke}: \begin{equation}\label{five} \delta^{\alpha_s}_\lambda \simeq -(5\pm 10)\%, \end{equation} which is of the same sign and compatible with the correction in Eq. (\ref{thirteen}). One should also bear in mind that there are obvious limitations on the accuracy of the estimate in Eq. (\ref{thirteen}), due to the lack of control so far over terms proportional to $\lambda^2$ and higher powers of $\alpha_s(M^2_{\tau})$, including the anomalous dimension of the tachyon mass. These unknown effects might be taken into account by enlarging the error in Eq. (\ref{thirteen}) by about a factor of 2.\\ \noindent $\bullet$ Inversely, one can improve the previous determinations of $\lambda^2$ from the $\rho$ and $\pi$ channels, either by adding, in the different experimental fits \cite{aleph} of the moments of the $\tau$-decays \cite{bnp,ledi}, the new $\lambda^2$ term with the extra (compared with previous experimental fits) constraint that the sign of its contribution is unambiguously fixed, or by attributing the eventual small deviation of the values of $\alpha_s$ from $\tau$ and from e.g. $Z-$decays as being due to this new term. If one proceeds in the latter way and uses the result in Eq. (\ref{five}), one would obtain from Table 2: \begin{equation} \lambda^2 (M_\tau) \simeq -(0.2\pm 0.4)~\rm{GeV}^2~, \end{equation} which is less conclusive than the result in Eq. (\ref{lambda2}).
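\\ \noindent The size of the reductions listed in Table 2 can also be reproduced by a rough numerical sketch: since $R_\tau$ is fixed by experiment, adding $\delta^\tau_\lambda$ must be compensated by a decrease of $\delta^\tau_{PT}(a_s)$. The reference value of $\alpha_s(M_\tau)$ used below is an assumption, and only the fixed-order series quoted above is used, so the outcome is indicative only:
\begin{verbatim}
import math

M_tau2 = 1.777**2                       # GeV^2
a0     = 0.32 / math.pi                 # assumed alpha_s(M_tau)/pi for lambda^2 = 0

def delta_PT(a):
    return a + 5.2023 * a**2 + 26.366 * a**3

def solve_a(target, a=0.05):            # Newton iteration for delta_PT(a) = target
    for _ in range(50):
        a -= (delta_PT(a) - target) / (1.0 + 10.4046 * a + 79.098 * a**2)
    return a

for lam2 in (-0.25, -0.43, -0.61):      # GeV^2
    delta_lam = 2.0 * (-1.05 * a0 * lam2) / M_tau2
    a_new = solve_a(delta_PT(a0) - delta_lam)
    print("lambda^2 = %+.2f GeV^2 :  relative shift of alpha_s(M_tau) = %.0f%%"
          % (lam2, 100.0 * (a_new - a0) / a0))
\end{verbatim}
The shifts come out close to the corresponding entries of Table 2, as they should.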
\\ \noindent $\bullet$ Since the introduction of the gluon mass still presumably gives better control over the $1/M^2_{\tau}$ terms, which have so far been introduced ad hoc and are the major sources of theoretical uncertainties \cite{sntau}--\cite{neubert}, one expects, within the present approach, an improvement of the theoretical errors in the determination of $\alpha_s(M_\tau)$ by about a factor of 2. \\ \noindent $\bullet$ Furthermore, by comparing our previous results with some other approaches which resum the higher order terms of the perturbative series \cite{sntau}--\cite{neubert}, the effect of the $1/M^2$ term becomes manifest when one extends the analysis of the tau hadronic widths to a lower hypothetical $\tau$-lepton mass \cite{bnp,narison1,aleph}. We have therefore checked that, down to $M_\tau\approx 1.3$ GeV, the agreement of the prediction with the recent data \cite{aleph} remains quite good. \\ \noindent $\bullet$ Finally, one should notice that a na\"{\i}ve redefinition of $\alpha_s$ by a coupling which implicitly contains a $\lambda^2/M^2$ term in the expression of the physical hadronic width $R_\tau$, but not on the level of the propagator in Eq. (\ref{simple}), will lead to inconsistencies. \subsection{Effects of $\lambda^2$ on the value of $\overline{m}_s$ from $e^+e^-$ and $\tau$ decays} In the determinations of $m_s$ based on the ratio \cite{snmass} or on the difference \cite{chen} of the non-strange and strange quark channels, the leading terms which are flavour independent will not contribute. Taking the leading term in $m^2_s\lambda^2$, which can be obtained from the previous expression of $\Pi^{(1)}_V$, one can deduce that the presence of the $\lambda$ term implies a slight decrease of the value of $m_s$, by about (1--3)\%, which is, however, smaller than the quoted errors (15\%) in its determination.\\ In the case of the determinations of $m_s$ from the $\Delta S=1$ part of the inclusive $\tau$ decay rate alone \cite{pich}, one can check, after using the resummed series given there, and the expressions of the different corrections in the massless case given in the previous subsection, that the presence of $\lambda$ tends to increase by about (8--9)\% the value of $m_s$ obtained from this method. Again, this effect is still smaller than the large uncertainties (about 30\%) of this determination. \\ However, the effect of $\lambda$ tends to improve the agreement between the two different determinations. \subsection{The quest for the scalar $a^0/\delta$ and $K^*_0$ channels} Consideration of the correlator of the scalar $J_{\delta}$ current (or of its strange analogue $J_{K^*_0}$), where \begin{equation} J_{\delta}={1\over \sqrt{2}}\left(\bar{u}u-\bar{d}d\right) \end{equation} raises most interesting problems and challenges for the phenomenology with $\lambda^2\neq 0$. \\ Indeed, in the limit of massless quarks, the (ordinary) QCD expressions of the pion and scalar sum rules differ only in the contributions of the dimension-six (and higher) condensates, where now: \begin{equation} \delta_6=-k\frac{1408}{81}\pi^3\alpha_s\langle \bar uu\rangle ^2, \end{equation} such that the value of $M^2_{crit}$, or the optimization scale of about 2 GeV$^2$ \cite{snb}, is obviously the same in both channels, i.e. much larger than the one of the $\rho$-channel.
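\\ \noindent For comparison with these ordinary-OPE scales, the scales generated by the $\lambda^2$ term itself (to which we turn next) follow directly from the channel-dependent coefficients of the $\lambda^2/M^2$ corrections derived above. A one-line numerical illustration, with the representative values $\lambda^2=-0.5$ GeV$^2$ and $\alpha_s/\pi=0.17$ used earlier and the same 10 per cent convention:
\begin{verbatim}
# M^2_crit from the lambda^2 term alone (10 per cent convention), taking
# lambda^2 = -0.5 GeV^2 and alpha_s/pi = 0.17 as representative inputs.
a_s, lam2 = 0.17, -0.5
for name, c in (("rho", 1.05 * a_s), ("pi / a0-delta", 4.0 * a_s)):
    print("M^2_crit(%-13s) ~ %.1f GeV^2" % (name, c * abs(lam2) / 0.10))
\end{verbatim}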
However, one should also notice that, in this case of ordinary OPE ($\lambda^2=0$ and neglect of the instanton effects), the scalar sum rules can reproduce within a good accuracy the experimental mass of the $a^0/\delta$ meson, while the predicted decay constant leads to a width consistent with the present data \cite{snb}. In the same way, the estimate of the running light quark masses from the scalar sum rules \cite{paver,snb,JAMIN,CHET} within the ordinary OPE gives a prediction consistent with the analysis from other methods \cite{PDG} or from the $e^+e^-$ \cite{snmass}, $\tau$ decay data \cite{pich} or a global sum rule extraction of the light quark condensate \cite{dosch}. \\ It is also obvious that $M^2_{crit}$ associated with $\lambda^2\neq 0$ is the same in the pion and $\delta $-channels: \begin{equation} M^2_{crit} (\pi-channel)~\approx~ M^2_{crit}(\delta-channel). \end{equation} What may be even a more drastic prediction is that the sign of the leading $M^{-2}$ correction is the same in the two channels. Moreover, a detailed analysis of the $a_0/\delta$ sum rule in the case $\lambda^2\neq 0$ gives acceptable phenomenological results, and moves the scale of optimization to lower values of $M^2_{crit}$ like the case of the pion. \\ On the other hand, direct instantons predict the same $M^2_{crit}$ in the both channels but the opposite signs for the leading correction \cite{shuryak}. Thus, the scalar channel might be the right place to discuss in more details the choice between the direct instantons and the $M^{-2}$ corrections. \\ On pure theoretical grounds, both direct instantons and nonperturbative $M^{-2}$ corrections seem to be indispensable parts of the QCD phenomenology. Indeed, instantons are necessary to resolve the $\eta^{'}$-problem. The nonperturbative $M^{-2}$ terms are necessary to match the $M^{-2}$ uncertainty of the perturbative series due to the UV renormalons. \\ All these qualitative arguments cannot fix, however, which correction (if any of the two) becomes important first at large $M^{2}$. The signatures of the direct instantons and of the tachyonic mass are different in the $\pi$- and $\delta$-channels. Namely, the tachyonic mass gives the same-sign correction while direct instantons result in opposite signs for the deviations from the asymptotic freedom. The existing lattice data \cite{chu} indicate, to our mind, that it is rather a mixture of the two mechanisms which works. Indeed, the corrections in the two channels are rather of opposite signs but the correction in the pion channel is substantially stronger. In the $\delta$-channel the correction is much smaller than it should be if the instanton-model parameters are normalized to the $\pi$-channel data. \\ Further data with better accuracy could be helpful. For example, it is not ruled out that in the $\delta$-channel the correction is first positive because of the gluon mass and becomes negative at lower $M^2$ because of the effect of the direct instantons. \subsection{The gluonia channels} We shall first be concerned with the two-point correlator: \begin{equation} \Pi_G(Q^2)~ \equiv~ i \int d^4x ~e^{iqx} \ \langle 0\vert {\cal T} J_G(x) \left( J_G(0) \right)^\dagger\vert 0 \rangle \end{equation} associated to the scalar gluonium current: \begin{equation} J_G={\beta(\alpha_s)\over \alpha_s}(G^a_{\mu\nu})^2. \end{equation} Its evaluation leads to: \begin{equation} \label{GG_scalar_res} \Pi_G(M^2) = (parton~ model) \left(1-{3\lambda^2\over M^2}+...\right) {}. 
\label{scalarGG:res} \end{equation} Thus, one can expect that the $\lambda^2$ correction in this channel is relatively much larger since it is not proportional to an extra power of $\alpha_s$. With $\lambda^2 \approx-0.5$ GeV$^2$ we obtain, by using the same 10 per cent convention as in section 3: \begin{equation}\label{mcglue} M^2_{crit}(0^+-gluonium)~\approx~15 ~\rm{GeV}^2 \end{equation} in amusing agreement with the independent estimate in Eq. (\ref{glue2}). \\ \noindent Exactly the same phenomenon is observed for the case of the two point correlator of the pseudoscalar gluonium currents: the relative strength of the $\lambda^2/M^2$ term added to the parton result coincides with that for the scalar gluonium displayed in Eq.~\re{GG_scalar_res}. \\ \noindent However, one should notice that a more quantitative analysis based on $\tau$-stability of the corresponding (pseudo)scalar sum rules \cite{nv}--\cite{shore} leads to a lower value of $M^2_{crit}\approx (3-5)$ GeV$^2$, but still much larger than the scale of the $\rho$ meson.\\ \noindent Let us consider now the case of the tensor gluonium with the correlator: \begin{eqnarray} &{}& \psi^T_{\mu\nu\rho\sigma}(q)\equiv i\int d^4x~ e^{iqx} \langle 0| {\cal T} \theta^g_{\mu\nu}(x) \theta^g_{\rho\sigma}(0)^\dagger|0\rangle\nonumber \nonumber \\ &=& \psi^T_4 \left( q_\mu q_\nu q_\rho q_\sigma -\frac{q^2}4 (q_\mu q_\nu g_{\rho\sigma} + q_\rho q_\sigma g_{\mu\nu}) +\frac{q^4}{16}(g_{\mu\nu}g_{\rho\sigma} ) \right) \nonumber \\ &+&\psi^T_2 \left( \frac{q^2}{4} g_{\mu\nu} g_{\rho\sigma} - q_\mu q_\nu g_{\rho\sigma} - q_\rho q_\sigma g_{\mu\nu} + q_{\mu}q_{\sigma} g_{\nu\rho} + q_{\nu}q_{\sigma} g_{\mu\rho} + q_{\mu}q_{\rho} g_{\nu\sigma} + q_{\nu}q_{\rho} g_{\mu\sigma} \right) \nonumber \\ &+&\psi^T_0 \left( g_{\mu\sigma} g_{\nu\rho} + g_{\mu\rho}g_{\nu\sigma} - \frac{{1}}{{2}} g_{\mu\nu} g_{\rho\sigma} \right) {}\, , \label{tensor_def1} \end{eqnarray} where \begin{equation} \label{tensor_def} \theta^g_{\mu\nu}= -G_{\mu}^{\alpha}G_{\nu\alpha} + \frac{1}{4} g_{\mu\nu} G_{{\alpha_\beta}} G^{\alpha\beta} {}\, . \end{equation} A direct calculation gives the following results for the structure functions $\psi^T_i$ and their respective Borel/Laplace transforms: \begin{eqnarray} \label{psi4} \pi^2 \psi^T_4 &=& \frac{l_{\mu Q}}{15} + \frac{17}{450} - \lambda^2 \frac{1}{3 Q^2} {} =\!=\!\Longrightarrow \frac{1}{15} \left( 1 - 5 \frac{\lambda^2}{M^2} \right) {}\, , \\ \label{psi2} \pi^2 \psi^T_2 &=& \frac{Q^2 l_{\mu Q}}{20} +\frac{9 Q^2}{200} +\lambda^2 \left( \frac{l_{\mu Q}}{6} -\frac{2}{9} \right) =\!=\!\Longrightarrow -\frac{M^2}{20} \left( 1 - \frac{10}{3}\frac{\lambda^2}{M^2} \right) {}\, , \\ \label{psi0} \pi^2 \psi^T_0 &=& \frac{Q^4 l_{\mu Q}}{20} +\frac{9 Q^4}{200} +\lambda^2 Q^2 \left( \frac{l_{\mu Q}}{4} -\frac{1}{12} \right) =\!=\!\Longrightarrow \frac{M^4}{20} \left( 1 - \frac{5}{2}\frac{\lambda^2}{M^2} \right) {}\, . \end{eqnarray} If, instead of considering $\theta_{\mu\nu}^g$, we would introduce the total energy-momentum tensor of interacting quarks and gluons $\theta_{\mu\nu}$, then various functions components of $\psi_{\mu\nu\rho\sigma}$ are related to each other because of the energy-momentum conservation. Indeed, requiring that \[ \psi^T_{{\mu\nu\rho\sigma}} q_\mu \equiv 0 \] we immediately obtain: \[ \psi^T_2 = \frac{3}{4} Q^2 \psi^T_4 \ \ \ \mbox{and} \ \ \ \psi^T_0 = \frac{3}{4}Q^4 \psi^T_4 {}\, , \] and, a consequence, the following representation of the function in Eq. 
\re{tensor_def1}: \begin{eqnarray} \psi^T_{\mu\nu\rho\sigma}(q) &=& \left( \eta_{\mu\rho}\eta_{\nu\sigma}+\eta_{\mu\sigma}\eta_{\nu\rho} - \frac{2}{3} \eta_{\mu\nu}\eta_{\rho\sigma}\right) \psi^T(Q^2), \label{temsor_def1} \end{eqnarray} where: \begin{equation}\label{psiT} \psi^T(Q^2) \equiv Q^4 \frac{3}{4}\psi^T_4(Q^2), \ \ \ \eta_{\mu\nu}\equiv g_{\mu\nu}-\frac{q_\mu q_\nu}{q^2} {}. \end{equation} To compare the sum rules based on $\lambda^2\neq 0$ with the existing ones with $\lambda^2=0$ \cite{nv,narison3}, one should bear in mind that the results in \cite{nv} come from the sum rules corresponding to the correlator $\psi^T$ defined in Eq. (\ref{psiT}). Literally, these sum rules are not affected by the new $\lambda^2$ terms because, in terms of the imaginary parts of the structure functions, the $\lambda^2$ correction exhibited in Eq. (\ref{psi4}) is proportional to $\lambda^2\delta(s)$, which vanishes once multiplied by an extra power of $s$. However, further analysis of once more subtracted sum rules, as well as of the $\tau-$stability \cite{nv}, would now reveal a large mass scale. \noindent To summarize, we conclude from Eqs. (\ref{psi4})--(\ref{psiT}) above that the Borel/Laplace transforms of the functions $\psi^T_i$ $(i=0,2,4)$, as well as of $\psi^T$, will have a scale analogous to that of the (pseudo)scalar gluonium channels. This feature might reveal that all gluonia channels have a universal scale, in distinction from the instanton model \cite{shuryak}, where the tensor-gluonium channel is expected to be insensitive to the largest scale exhibited in the spin-0 gluonium channel. Measurements of the corresponding correlators could therefore provide an interesting test of the theory. \subsection{Heavy quarks: $\bar{Q}Q$ and $\bar{Q}q$ channels} Introduction of the gluon mass would result in a substantial change in the QCD phenomenology of heavy quark interactions. However, we expect that it would rather be a reshuffling of the parameters than a decisive test. Indeed, there are successful descriptions of the quarkonium states with the linear potential in Eq. (\ref{potential}) extrapolated down to $r=0$, see, e.g., \cite{tye}. We plan to come back to these points in a future publication. \section{Summary and conclusions} To summarize our discussions of the various channels, the introduction of the tachyonic gluon mass allows one to explain in a very simple and unified way the variety of scales at which asymptotic freedom is violated at moderate $Q^2$ in the various channels. More specifically, the phenomenology based on the introduction of $\lambda^2\simeq -0.5$ GeV$^2$ leads to the following successful predictions: \\ \noindent $\bullet$ The correct sign and order of magnitude of the $M^{-2}$ correction in the $\rho$-channel.\\ \noindent $\bullet$ The new $M^{-2}$ correction in the pion channel breaks asymptotic freedom in this channel at a mass scale $M^2_{crit}$ which is about a factor of 4 higher than in the $\rho$-meson channel. This scale falls close to the value of $M^2_{crit}$ determined independently from the values of $f_{\pi}$ and the quark masses. The sign of the correction due to the tachyonic mass is also the one which is needed to bring QCD into agreement with phenomenology. \\ \noindent $\bullet$ A natural explanation of $M^2_{crit}$ in the scalar-gluonium channel (Eq. (\ref{mcglue})), which is much larger than $M^2_{crit}$ in the $\rho$-channel and which was found independently from a low energy theorem (Eq. (\ref{glue2})).
\\ \noindent $\bullet$ Accounting for the $M^{-2}$ correction lowers by $(11\pm 3)\%$ the value of $\alpha_s(M_{\tau})$ determined from $\tau$-decays, and may improve the accuracy of its determination by a factor of about 2. It also decreases by about 5$\%$ the value of the $u,~d$ and $s$ running quark masses from the pseudoscalar channels. The presence of this term slightly decreases, by about (1--3)\%, the value of $m_s$ obtained from $e^+e^-$ \cite{snmass} and $\tau$ decay data \cite{chen}. It increases by about (8--9)\% the result obtained in \cite{pich} from the individual $\Delta S=1$ inclusive $\tau$ decay channel. This effect, then, improves the agreement between the two different determinations. \\ \noindent To recapitulate the logic of our analysis, we were motivated to introduce a tachyonic gluon mass by the data on the $Q\bar{Q}$ potential at short distances (see section 2). In this case $\lambda^2\neq 0$ brings in a linear term $kr$ at short distances which is, however, only a small correction to the Coulombic potential. Any precise measurement of this correction would require a subtraction of large perturbative contributions, and as a result there are no error bars available on the value of $k$ at short distances. Thus, the idea was to embed this (hypothetical) linear term in the potential into a relativistic framework and consider further consequences of this extension. Hence the introduction of the tachyonic gluon mass. This extension allowed us to analyse the current correlators. As a result, the bounds on $\lambda^2$ narrow substantially. In particular, $\lambda^2\approx -1$ GeV$^2$, which would naively correspond to the known value of $k$, does not pass phenomenological hurdles, which is not so surprising. What is much more amusing, the value $\lambda^2\approx -0.5$ GeV$^2$ fits the data well (see above).\\ Most remarkably, the introduction of $\lambda^2\neq 0$ brought large qualitative effects, namely a variety of the mass scales $M^2_{crit}$ at which asymptotic freedom is broken in the various channels. The various $M^2_{crit}$ may differ by a factor of about 20. Further crucial checks of the phenomenology based on $\lambda^2\neq 0$ could be provided by measurements of the correlators on the lattice. The $a^0/\delta$-scalar channel and the gluonia channels appear most interesting from this point of view. \\ However, the extension of the standard QCD sum rules by introducing $\lambda^2\neq 0$ cannot resolve all the phenomenological problems. Namely, the $\eta^{'}$-problem cannot be solved in this way and calls for the introduction of instantons or instanton-like configurations. The existing data on lattice simulations might indicate that in the $a^0/\delta$-scalar channels, direct instantons become important at about the same $M^2$ as the effects of the gluon mass do. Further measurements of the current correlators on the lattice could provide checks of the phenomenology explored in the present paper. \section*{Acknowledgements} The work by K.G. Chetyrkin has been supported by DFG under Contract Ku 502/8-1 and INTAS under Contract INTAS-93-744-ext.\\ S. Narison wishes to thank M. Davier and S. Menke for useful communications, and A. Pich for comments. \\ V.I. Zakharov gratefully acknowledges the hospitality of the Centre National de la Recherche Scientifique (CNRS) during his stay at the University of Montpellier where part of this work was done. He is also thankful to R. Akhoury, G. Bali, and L. Stodolsky for valuable discussions. \newpage
\section{Introduction} Recent proper motion studies of infrared-luminous stars in the Galactic Center (GC) (\cite{EG96,Genzel97,Ghez98}) have convincingly demonstrated the existence of a compact $\sim2.6\,10^6\ifmmode M_\odot \else $M_\odot$ \fi$ dark mass in the dynamical center of the GC, which is located within $0.1^{\prime\prime}$ of the radio source $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$ (\cite{Ghez98}) ($1^{\prime\prime}=0.039\,$pc at 8\,kpc). Lower bounds on the compactness of this mass concentration, together with dynamical considerations, argue against the possibility of a massive cluster, and point towards a super-massive black hole as the likely alternative (\cite{Genzel97}; \cite{Maoz98}). This conclusion has important implications for basic issues such as the prevalence of massive black holes in the nuclei of normal galaxies, and the nature of the accretion mechanism that makes $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$ so much fainter than typical active galactic nuclei (\cite{Melia94}; \cite{NYM95}). Wardle \& Yusef-Zadeh (1992) considered several gravitational lensing effects that could be induced by a massive black hole in the GC. In particular, Wardle \& Yusef-Zadeh pointed out that gravitational lensing would occasionally lead to the amplification and splitting of the stellar images of stars which happen to move behind the black hole, and that such events would typically last from months to years, depending on the distance of such stars from the black hole. Wardle \& Yusef-Zadeh also estimated that an optical depth of order unity for such events requires an observed central surface density $>1000$ stars arcsec$^{-2}$, and that this in turn would require an angular resolution of $\la 0.01^{\prime\prime}$ to individually separate the lensed images from the crowded background of faint stars. However, the photometric sensitivities and spatial resolutions required for such observations are far beyond current observational capabilities. Presently, the deepest near-infrared $K$-band (2.2 $\mu$m) images of the central arcsec reach down to $K=17^{\rm m}$ (\cite{Ghez98}). As we will show, at this limiting magnitude the expected central surface density is $\sim 20$ stars arcsec$^{-2}$ (\cite{Davidge97}). The highest spatial resolutions obtained so far are the diffraction limited resolutions of $\sim 0.15^{\prime\prime}$ (\cite{Eckart95}; \cite{Davidge97}) and $\sim 0.05^{\prime\prime}$ (\cite{Ghez98}) achieved in the proper motion surveys. In this paper we investigate a different possibility, namely {\it microlensing} amplification of faint stars by the central black hole. Such events occur when the amplified but {\it unresolved} images of faint stars rise above the detection threshold and then fade again as such stars move behind the black hole close to the line-of-sight to $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$. We present a quantitative study of such microlensing events, and we compute in detail the microlensing rates as functions of the event durations, for a wide range of assumed detection thresholds. In addition, we also consider the possible amplification of sources which lie above the detection thresholds, and we re-examine the limit in which the two lensed images can actually be resolved. Our study is motivated by several recent developments. 
Deep infrared star counts in the inner GC (\cite{Blum96}; \cite{Davidge97}) and infrared and optical star counts in Baade's Window (\cite{Tiede95}; \cite{Holtzman98}) now make it possible to reliably model the infrared stellar luminosity function in the vicinity of the Galactic Center. The on-going proper motion monitoring campaigns of the inner few arcsec of the GC record both the positions and fluxes of individual stars in the field at a sampling rate of 1--2 observing runs per year. As a by-product, these measurements can be used to search for microlensing events, albeit at a low sampling rate. Such events would appear as time varying sources very close to $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$. It is therefore intriguing that one or two variable IR sources may have already been detected close to, or coincident with, the position of $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$ (\cite{Genzel97}; \cite{Ghez98}). These sources may have brightened to $K\sim15^{\rm m}$ before fading from view, and appear to have varied on a timescale of $\sim 1$ yr. As we will argue, this behavior is consistent with the expected behavior of bright long-duration microlensing events. However, we will also argue that the probability that a single such event could have been detected during the course of the recent proper-motion monitoring campaigns is small ($\sim 0.5\%$). The detection probabilities will increase considerably as the observational sensitivities improve (e.g. with the advent of adaptive optics), and if $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$ is monitored more frequently than has been done so far. The structure of our paper is as follows. In \S\ref{s:rate} we set up the formalism required for our computations. Some of the more technical aspects are discussed in the appendix. In \S\ref{s:gc} we discuss our treatment of the stellar densities, velocity fields, and $K$-band luminosity functions, which enter into the computation of the lensing rates. We present our results in \S\ref{s:results}, where we provide computations of the lensing rates as functions of the event durations for a wide range of assumed detection thresholds. In \S\ref{s:discuss} we present a discussion and summary. \section{Lensing by the super-massive black hole} \label{s:rate} The effective size of a gravitational lens at the lens plane is set by the Einstein radius, $\ifmmode R_{\sc e} \else $R_{\sc e}$ \fi$, \begin{equation} \ifmmode R_{\sc e} \else $R_{\sc e}$ \fi = \left(\frac{4GM_\bullet}{c^2}\frac{Dd}{D+d}\right)^{1/2} \sim 2.2\,10^{15} (M_{2.6} d_1)^{1/2}\,{\rm cm}\,, \label{e:Re} \end{equation} where $G$ is the gravitational constant, $c$ is the speed of light, $M_\bullet$ is the lens mass (here the black hole mass), $D$ and $d$ are the observer--lens and lens--source distances, respectively (see e.g. review by Bartelmann \& Narayan 1998). We assume, as will be justified below, that $d \ll D$, and define $M_{2.6} = M_\bullet/2.6\,10^6\ifmmode M_\odot \else $M_\odot$ \fi$ and $d_1 = d/1\,$pc. The effective size of the lens at the source plane is $\ifmmode R_{\sc s} \else $R_{\sc s}$ \fi = \ifmmode R_{\sc e} \else $R_{\sc e}$ \fi(D+d)/D\sim\ifmmode R_{\sc e} \else $R_{\sc e}$ \fi$. 
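The numbers in Eq.~(\ref{e:Re}) are easily reproduced numerically; the following short \texttt{Python} sketch (an illustrative cross-check only, with constants and a function name of our own choosing) evaluates $\ifmmode R_{\sc e} \else $R_{\sc e}$ \fi$ for the fiducial values $M_\bullet = 2.6\,10^6\ifmmode M_\odot \else $M_\odot$ \fi$, $d=1$\,pc and $D=8$\,kpc.
\begin{verbatim}
# Illustrative check of Eq. (Re): the Einstein radius of the black-hole lens.
import math

G, c = 6.674e-8, 2.998e10          # cgs units
Msun, pc = 1.989e33, 3.086e18

def einstein_radius(M_lens, d, D):
    """R_E in cm, for a lens mass M_lens (g), lens-source distance d and
    observer-lens distance D (both in cm)."""
    return math.sqrt(4.0 * G * M_lens / c**2 * D * d / (D + d))

print(einstein_radius(2.6e6 * Msun, 1.0 * pc, 8000.0 * pc))   # ~2.2e15 cm
\end{verbatim}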
The angular size of the Einstein radius, $\ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi$, is \begin{equation} \ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi \sim 0.018^{\prime\prime} D_8^{-1}(M_{2.6}d_1)^{1/2}\,, \label{e:Oe} \end{equation} where $8D_8\,{\rm kpc}$ is the Sun's galactocentric distance (\cite{Carney95}). In our study we assume that any IR-extinction associated with an accretion disk or an `atmosphere' near the event horizon is negligible. We begin by showing that in our problem the stars can be treated as point sources and that the linear approximation (small light-bending angle approximation) holds. First, when $d\ll D$, a star can be approximated as a point source as long as its radius, $R_\star$, is much smaller than $\ifmmode R_{\sc e} \else $R_{\sc e}$ \fi/A$, where $A$ is the required amplification to observe the lensed source above the detection threshold (see \S\ref{s:mlrate}). As we show below (\S\ref{s:plklf}), the faintest stars that contribute significantly to the lensing are low-mass stars with $R_\star \la 10^{11}$\,cm, which require amplification factors of $A \la 100$. Brighter stars require progressively smaller amplification factors, down to $A\sim 1$, which for detection thresholds $K\sim 17^{\rm m}$ correspond to stars with $R_\star < 10^{12}$\,cm. Assuming a mean stellar mass of $\sim1\ifmmode M_\odot \else $M_\odot$ \fi$, a central mass density of $\rho\sim4\,10^6\,\ifmmode M_\odot \else $M_\odot$ \fi$pc$^{-3}$ in the GC (\cite{Genzel96}) implies a mean stellar separation of $\delta\sim 0.005\,$pc. Even at such a small distance from the black hole, $\ifmmode R_{\sc e} \else $R_{\sc e}$ \fi\sim 1.5\,10^{14}\,$cm, so that the condition $R_\star\ll\ifmmode R_{\sc e} \else $R_{\sc e}$ \fi/A$ is generally satisfied for all relevant stars in the system. We thus conclude that the point source approximation holds over the entire range of interest. Second, the linear approximation holds as long as the Einstein radius is much larger than the Schwarzschild radius of the black hole, $R_\bullet$, \begin{equation} \ifmmode R_{\sc e} \else $R_{\sc e}$ \fi/R_\bullet \sim c\left(d/GM\right)^{1/2} \sim 2.8\,10^3(d_1/M_{2.6})^{1/2}\,, \label{e:Rsch} \end{equation} which even for $d=\delta$ is as high as $\sim200$. A point lens produces two images, one within and one outside the Einstein radius. The angular separation between the two images is \begin{equation} \Delta\theta = \ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi\sqrt{u^2+4}\,, \label{e:Dtheta} \end{equation} and their mean angular offset from the optical axis is $u\ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi$, where $u\ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi$ is the angular separation between the unlensed source and the lens. As will be shown in \S\ref{s:gc}, $u\ll1$ for amplification above present-day detection thresholds in the GC, so that $\Delta\theta \sim 2\ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi$. Three angular scales in the problem determine the way the lensing will appear to the observer: the Einstein angle $\ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi(d)$, the FWHM angular resolution of the observations, $\phi$, and the mean projected angular separation between the observed stars, $\Delta p(K_0)$, where $K_0$ is the detection threshold. The lensed images have to be detected against the background of a very dense stellar system. 
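These angular scales can be checked in the same illustrative way; the sketch below (again purely for orientation) evaluates $\ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi$, the ratio $\ifmmode R_{\sc e} \else $R_{\sc e}$ \fi/R_\bullet$ of Eq.~(\ref{e:Rsch}), and the image separation of Eq.~(\ref{e:Dtheta}) for a source 1\,pc behind the black hole.
\begin{verbatim}
# Illustrative check of Eqs. (Oe), (Rsch) and (Dtheta).
import math

G, c = 6.674e-8, 2.998e10                  # cgs units
Msun, pc = 1.989e33, 3.086e18
arcsec = math.pi / (180.0 * 3600.0)        # radians per arcsec

M_bh, D, d = 2.6e6 * Msun, 8000.0 * pc, 1.0 * pc
R_E = math.sqrt(4.0 * G * M_bh / c**2 * D * d / (D + d))

theta_E = R_E / D / arcsec                 # ~0.018 arcsec (valid for d << D)
R_sch = 2.0 * G * M_bh / c**2              # Schwarzschild radius of the lens
u = 0.1                                    # a small impact parameter, u << 1
print(theta_E, R_E / R_sch, theta_E * math.sqrt(u*u + 4.0))
\end{verbatim}
The detectability of the lensed images against this dense stellar background is quantified next.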
This background will be low as long as the central surface density of observed stars, $S_0 =\Delta p^{-2}$, is small enough so that $\pi \phi^2 S_0 < 1$. Thus, for a given detection threshold, an angular resolution of at least \begin{equation} \phi = \Delta p(K_0)/\sqrt{\pi} \label{e:phi} \end{equation} is required to detect the lensed star. Lower resolutions correspond to the regime of `pixel lensing' (\cite{Crotts92}), which we do not consider here. When $\ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi < \phi$, the two images will not be resolved\footnote{We are assuming here that, generally, a separation of at least $2\phi$ is required to resolve the two microlensed images. The exact value of the minimal separation probably lies in the range $\phi$ to $2\phi$, and depends on the details of the procedure for faint source separation in the crowded inner $1^{\prime\prime}$ of the GC.}, and the lensed source will appear as a {\em microlensing} event. For a given angular resolution, there is a maximal lens--source distance $d_\mu$ for microlensing, \begin{equation} d_\mu = \frac{D_8^2}{M_{2.6}}\left(\frac{\phi}{0.018^{\prime\prime}}\right)^2\, {\rm pc}\,. \label{e:dmu} \end{equation} More distant stars will have $\ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi > \phi$ and their two images will be separately resolved. \subsection{The microlensing rate} \label{s:mlrate} The unresolved images of a faint star at $d<d_\mu$ close to the line of sight will appear as a `new source' at the position of $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$, to within the observational resolution. The total source amplification, $A$, is related to $u$ by (\cite{Paczynski86}) \begin{equation} u^2 = 2A/\sqrt{A^2-1}-2\,, \end{equation} and for small $u$, $A \sim 1/u$. The maximal amplification along the star's trajectory is reached when its projected position is closest to the lens. A star of stellar type $s$ with an absolute $K$ magnitude $K_s$, which is observed on a line of sight with an extinction coefficient $A_K$ and distance modulus $\Delta$, will be detected above the flux threshold $K_0$ if it is amplified by at least $A_s=10^{-0.4(K_0-K_s-\Delta-A_K)}$. This corresponds to a maximal impact parameter of $u_{0,s}$. As the minimal required amplification approaches 1, $u_0$ diverges. This, and the fact that in practice the threshold is not sharply defined, leads us to introduce a cutoff at $u_0=1$, which implies a minimal amplification of $A=1.34$. The basic quantity that is required for predicting the lensing properties is the differential lensing rate as a function of stellar type, event duration, amplification, and source distance from the black hole. We derive an explicit expression for the differential lensing rate in the appendix. Here it is instructive to discuss the relations between these properties by considering the simpler case of the total lensing rate, regardless of the event duration. The total integrated rate for microlensing amplification of background stars above the detection threshold is \begin{equation} \Gamma(K_0) = 2\langle u_0 \rangle \int_{r_1}^{r_2} \ifmmode R_{\sc e} \else $R_{\sc e}$ \fi \bar{v}_2 n_\star\,dr\,, \label{e:rate} \end{equation} where $r_1$ is the inner radius of the central stellar cluster, $r_2$ is the maximal radius for producing unresolved lensed images, $\bar{v}_2$ is the transverse 2D stellar velocity averaged over the velocity distribution function, and $n_\star$ is the total number density of stars. 
Here and below, the notation $\langle\ldots\rangle$ designates the average of the bracketed quantity over the stellar types with $K_s>K_0$, weighted by their fraction in the stellar population, $f_s$, where it is assumed that $f_s$ does not depend on $r$. The properties of the stellar population enter the integrated rate only through the mean impact parameter $\langle u_0 \rangle$, which for $u_0 \ll 1$ is simply \begin{eqnarray} \langle u_0 \rangle & = & \sum_{\{s | K_s>K_0\}} f_s u_{0,s} \nonumber\\ & \simeq & \sum_{\{s | K_s>K_0\}} f_s A_s^{-1} = \langle F_K\rangle/F_0\,, \label{e:avu} \end{eqnarray} where $F_K$ is the observed (dust extinguished) stellar $K$-band flux and $F_0$ is the detection threshold flux corresponding to $K_0$. We characterize the microlensing time-scale for stars of type $s$ as the average time they spend {\em above the detection threshold}, \begin{equation} \bar{\tau}(K_0) = \frac{\pi}{2}\frac{u_{0,s}\ifmmode R_{\sc s} \else $R_{\sc s}$ \fi}{\bar{v}_2}\,, \label{e:t0} \end{equation} where a $\pi/4$ factor comes from averaging over all impact parameters with $u\le u_{0,s}$. The lensing rate, amplification and event duration are inter-related. For $u_0 \ll 1$, the {\em median} value of the distribution of maximal amplifications is simply $A(u_0/2) \simeq 2A(u_0)$. Note, however, that the {\em mean} maximal amplification, $\langle A_{\rm max}\rangle \simeq \int_0^{u_0} (u_0 u)^{-1}\,du$, diverges logarithmically. Generally, a fraction $x$ of the events will have a maximal amplification of $A_0/x$ or more (i.e. a peak magnitude above the threshold of $\Delta K = 2.5\log x$ or less). The rate of such events is smaller, \begin{equation} \Gamma(x) = x\Gamma(K_0)\,. \label{e:qc} \end{equation} The time-scale of events amplified by a factor of $1/x$ above the threshold is somewhat longer than the average time-scale (Eq.~\ref{e:t0}), since they must cross closer to the line of sight, \begin{equation} \bar{\tau}(x) = \frac{2}{\pi}\left(\sqrt{1-x^2}+\frac{\sin^{-1}x}{x}\right)\bar{\tau}(K_0)\,, \label{e:t0c} \end{equation} which approaches $4\bar{\tau}(K_0)/\pi$ for large amplifications. Conversely, for a given $v_2$ and $u_{0,s}$, even a small increase in $\bar{\tau}$ is associated with a considerable increase in the maximal amplification. \subsection{Resolved lensed images} When $\ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi > \phi$ at $d > d_\mu$, the two images can be resolved. As we show in \S\ref{s:results}, $d_\mu$ is already quite small for present-day angular resolutions, and will become smaller still as the resolution improves. This implies that there may be a non-negligible contribution to the lensing rate from regions beyond $d_\mu$. We therefore have to consider also the case of resolved images. For $u \ll 1$, the two images will appear at an offset of $\sim \ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi$ from $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$, on opposite sides of it. The amplifications of the individual images are related by $A = A_1+A_2$ and $A_2 = A_1+1$, which for the high amplifications that are required in the GC can be approximated as $A_1 \approx A_2 = A/2$. The formalism used for calculating the lensing of unresolved images can therefore be applied in this case simply by raising the effective detection threshold by a factor of two ($0.75^{\rm m}$). This makes resolved images harder to detect, but on the other hand, if both images are observed, the identification of the event as lensing is much more certain. 
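Before proceeding, the amplification and duration relations of \S\ref{s:mlrate} can be verified with a short illustrative sketch (the function names below are ours; the relations themselves are Eq.~\ref{e:t0c} and the amplification--impact-parameter relation quoted above).
\begin{verbatim}
# Illustrative check of the A(u) relation and of Eq. (t0c).
import math

def amplification(u):
    """Point-lens amplification; the inverse of u^2 = 2A/sqrt(A^2-1) - 2."""
    return (u*u + 2.0) / (u * math.sqrt(u*u + 4.0))

print(amplification(1.0))                # ~1.34, the adopted cutoff at u0 = 1

def tau_factor(x):
    """tau_bar(x)/tau_bar(K0) from Eq. (t0c), for a fraction x of the events."""
    return 2.0 / math.pi * (math.sqrt(1.0 - x*x) + math.asin(x) / x)

print(tau_factor(1.0), tau_factor(1e-3)) # 1 for all events; -> 4/pi for rare events
\end{verbatim}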
\subsection{Lensing of observed bright stars} The formalism for describing unresolved and resolved lensing of stars from below the detection threshold can also be extended to cases where the microlensed source is an observed bright ($K<K_0$) star. At the current detection threshold, the observed central surface density is $\sim10$ arcsec$^{-2}$ (\cite{Genzel97}; \cite{Ghez98}). For such a small number of stars, whose orbits can be tracked individually, a statistical treatment is not very useful. However, our calculations indicate that a ten-fold improvement in the detection threshold will yield an observed central surface density of at least $\sim100$ arcsec$^{-2}$ (Eq.~\ref{e:Dp}). For such a large sample, a statistical treatment is more meaningful. We therefore present below, for completeness, results for microlensing amplification of stars fainter than the detection threshold (`faint-star lensing') as well as for stars brighter than the detection threshold (`bright-star lensing'). For bright-star microlensing we assume that the effective minimum magnification factor for detection is $A=1.34$ ($u_0=1$). \section{Modeling the stellar population in the Galactic Center} \label{s:gc} Three basic quantities enter into the computation of the microlensing rate: the stellar number density distribution, the stellar velocity field, and the $K$-band luminosity function (KLF). The stellar population in the central $\sim 100$ pc appears to consist of a mixture of old bulge stars and `central cluster' stars which may have been produced in various star-formation episodes during the lifetime of the Galaxy (\cite{GHT94}; \cite{SM96}). Evidence for recent star formation in the GC has been provided by Krabbe et al. (1995), who found a concentration of luminous early-type stars within a few arcsec of $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$, implying a recent ($\la 10^7$ yr) starburst in which $\sim 10^{3.5}$ stars were formed. Additional young stellar systems, the `Arches' and `Quintuplet' clusters, also exist close to the GC and contain large numbers of massive stars (\cite{SSF98}). A further indication of on-going star-formation in the GC is the fact that the KLF in the central cluster extends to more luminous stars than the KLF of the Galactic bulge as observed via Baade's window (\cite{LR87}; \cite{Tiede95}; \cite{Blum96}; \cite{Davidge97}). Serabyn \& Morris (1996) suggested that continuous star formation in the central cluster is maintained by molecular clouds in the GC, and that the $\sim r^{-2}$ radial light profile of the central cluster reflects the distribution of the star-forming molecular clouds. In our analysis we make the simplifying assumption that the discrete young clusters in the GC can be modeled by a volume averaged and smoothly distributed stellar population. In particular, we do not consider here lensing events that might be associated with the innermost stellar cusp, which has been identified by Eckart \& Genzel (1997) and Ghez et al. (1998) in the immediate vicinity of $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$. An over-density of stars above the smoothed distribution very near the black hole may contribute to very short duration microlensing events. However, the total stellar mass and luminosity function of the stars that are associated with this `$\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$ cusp' are poorly constrained at the present time, so that computations of the lensing rates require more extensive modeling and analysis. 
We will present such computations elsewhere. Because the lensing time-scales increase with distance, long duration events, which are relevant for the low sampling rates of the current proper motion studies, tend to be associated with more distant stars. We therefore consider in our analysis both stars in the central star-forming cluster and more distant old-population bulge stars. We now proceed to discuss the stellar densities, velocities and KLFs of these two components. \subsection{Stellar densities and velocities} Genzel et al. (1996, 1997) derived density and velocity models for the central cluster based on fits to the observed star counts, stellar radial velocities and proper motions in the inner few pc. Their mass density distribution is parameterized by a softened isothermal distribution \begin{equation} \rho_{\rm core}(r) = \frac{\rho_c}{1+3(r/r_c)^2} \label{e:nstar} \end{equation} where $r_c$ is the core radius and $\rho_c$ is the central density, with best fit values of $r_c = 0.38\,$pc and $\rho_c = 4\,10^6\,\ifmmode M_\odot \else $M_\odot$ \fi{\rm pc}^{-3}$. The 1D velocity dispersion of the late type stars in the core, which are dynamically relaxed, is modeled by (\cite{Genzel96}) \begin{equation} \sigma_{\rm core} = (55^2+103^2(r/r_{10})^{-2\alpha})^{1/2} \,\,{\rm km\,s^{-1}}\,, \end{equation} where $\alpha = 0.6$ and $r_{10}$ is the projected distance corresponding to $10^{\prime\prime}$. In the inner GC, the mass density is strongly constrained by the dynamics, whereas the $K$ luminosity density $\nu$ ($L_{K\odot}\,{\rm pc}^{-3}$) is not well defined due to the patchy nature of the extinction. The situation is reversed on scales larger than 1\,kpc, where the observed rotation curve and velocity dispersion are harder to interpret, but $\nu$ can be de-projected from the observed surface brightness. Kent (1992) proposed that both the rotation curve and the surface brightness along the major axis of the Galactic bulge in the mid-plane of the galaxy at $r\ga1\,$kpc can be described as a superposition of bulge and disk components with a mass-to-light ratio\footnote{Following Kent (1992), we define the mass-to-light ratio as $\Upsilon = (M/\ifmmode M_\odot \else $M_\odot$ \fi)/(L_K/L_{K\odot})$, where the solar monochromatic luminosity at $2.2\mu$m is $L_{K\odot} = 2.154\,10^{32}\,$erg\,s$^{-1}$\,$\mu$m$^{-1}$. Note that Genzel et al. (1996) define this quantity as $\Upsilon^\prime = (M/\ifmmode M_\odot \else $M_\odot$ \fi)/(\lambda L_\lambda/\ifmmode L_\odot \else $L_\odot$ \fi)$, with $\lambda = 2.2\mu$m. The two definitions are related by $\Upsilon^\prime = 8.07\Upsilon$.} $\Upsilon = 1\Upsilon_\odot$ and a luminosity density \begin{eqnarray} \nu(r) & = & \nu_{\rm bulge}+\nu_{\rm disk}\\ \nonumber & = & 3.53K_0\left(\frac{r}{r_b}\right) + 3\exp\left(-\frac{r}{r_d}\right)\,\,L_{K\odot}{\rm pc}^{-3}\,, \end{eqnarray} where here $K_0$ is a modified Bessel function (not to be confused with the detection threshold), $r_b = 667\,$pc and $r_d = 3001\,$pc. Kent also suggested a $\nu \propto r^{-1.85}$ extrapolation of $\nu$ towards the center based on observations of the intensity variation in the inner 10 pc. We replace this extrapolation with an updated and non-divergent model by adopting the Genzel et al. (1996) mass density model (Eq.~\ref{e:nstar}) for the core and by assuming $\Upsilon = 1\Upsilon_\odot$ for the bulge and disk, so that $\rho_{\rm bulge} = \nu_{\rm bulge}$ and $\rho_{\rm disk}= \nu_{\rm disk}$. 
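For orientation, the density components defined above can be tabulated directly; the sketch below is illustrative only (it assumes \texttt{scipy} for the modified Bessel function, and takes $\Upsilon=1\Upsilon_\odot$ so that the bulge and disk mass densities equal the luminosity densities).
\begin{verbatim}
# Illustrative evaluation of the core mass density (Eq. nstar) and of the
# Kent (1992) bulge+disk K luminosity density used above.
import math
from scipy.special import k0 as bessel_k0

rho_c, r_c = 4.0e6, 0.38          # Msun pc^-3, pc
r_b, r_d = 667.0, 3001.0          # pc

def rho_core(r):                  # softened isothermal core
    return rho_c / (1.0 + 3.0 * (r / r_c)**2)

def nu_bulge_disk(r):             # L_Ksun pc^-3 (= Msun pc^-3 for Upsilon = 1)
    return 3.53 * float(bessel_k0(r / r_b)) + 3.0 * math.exp(-r / r_d)

for r in (0.1, 1.0, 10.0, 100.0):                  # radii in pc
    print(r, rho_core(r), nu_bulge_disk(r))
\end{verbatim}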
We further assume that the bulge is axisymmetric, and model the mass density over the entire distance range by \begin{equation} \rho = \rho_{\rm core} + \rho_{\rm bulge}+\rho_{\rm disk}\,. \label{e:rho} \end{equation} Figure \ref{f:rho} shows our mass density model, and in particular the emergence of the bulge component at $\sim 100$ pc. The central cluster completely dominates the total mass density in the inner 10 pc, but then falls to $\sim85$\% of the total at 50\,pc and to $\sim65$\% at $100\,$pc. The velocity field on this scale includes both the bulk rotation and the random motion. We approximate Kent's models for the galactic rotation and the velocity dispersion in the inner 300\,pc along the major axis of the bulge by log-linear fits. We assume that the rotation velocity is perpendicular to the line of sight, and that its contribution to the transverse velocity is \begin{equation} v_{\rm rot} = 80+20\log_{10}(r/{\rm 1pc})\,\,{\rm km\,s^{-1}}\,, \end{equation} and that the 1D velocity dispersion in the bulge is \begin{equation} \sigma_{\rm bulge} = 60.9+18.9\log_{10}(r/{\rm 1pc})\,\,{\rm km\,s^{-1}}\,. \end{equation} We model the 1D velocity dispersion over the entire distance range by \begin{equation} \sigma = \max(\sigma_{\rm core},\sigma_{\rm bulge})\,. \end{equation} Note that both the proper motion of $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$, which is $\la 20\,{\rm km\,s}^{-1}$ (\cite{Backer96}), and the Sun's galactocentric rotation have a negligible contribution to the relative proper motion of the source and lens because of the high stellar velocities near the dynamic center and because $d\ll D$. \subsection{$K$ luminosity function} In our analysis we consider a model for the KLF which is based on the observed portions of the KLFs in the central cluster and in the bulge, and on a theoretical KLF computed in a population synthesis model for the central cluster. \subsubsection{The power-law KLF} \label{s:plklf} The KLF of the bulge has been observed through Baade's window, which at $l=1.0^\circ$, $b=-3.9^\circ$, is $\sim0.6\,$kpc from the GC at the tangential point. The bulge KLF approximately follows a single power-law $d\log N_\star/d\log L_K = \beta$, with $\beta = 1.695$ (\cite{Tiede95}), from $M_K \sim -7.5^{\rm m}$ to $M_K \sim 2^{\rm m}$ \footnote{$M_K = -2.5\log L_K +84.245$ mag for $L_K$ in erg\,s$^{-1}$\,$\mu{\rm m}^{-1}$.}, assuming a distance modulus $\Delta=14.5$ and $A_K\sim0$. The faintest observed stars are at the detection threshold, and the power-law KLF likely extends, or steepens slightly, to lower luminosity stars near the main-sequence turn-off point. Recent {\em HST} observations of the optical luminosity function in Baade's Window (\cite{Holtzman98}) probe it down to very low stellar masses ($\sim 0.3\, \ifmmode M_\odot \else $M_\odot$ \fi$), well below the main-sequence turn-off point. The $V$-band luminosity function (VLF) presented by Holtzman et al. (1998) shows that a sharp break occurs at the turn-off point $M_V \sim 4.5$--$5^{\rm m}$, corresponding to $\sim 1\,\ifmmode M_\odot \else $M_\odot$ \fi$ stars. We extrapolated the power-law KLF of Tiede et al. (1995) down to $M_K = 3.5^{\rm m}$ (assuming $V$-$K \sim 1.5$ for $1\,\ifmmode M_\odot \else $M_\odot$ \fi$ stars) and compared it to the Holtzman et al. (1998) VLF at $M_V = 5^{\rm m}$. We find that the predicted $K$ star-counts somewhat underestimate the $V$ star-counts, but agree with them to within a factor of 2. 
We thus conclude that the power-law KLF can be extended down to at least $M_K = 3.5^{\rm m}$. The observed VLF can be used to determine the behavior of the KLF at even lower luminosities by using the $V$-to-$K$ conversions for low mass stars presented by Henry \& McCarthy (1993). Doing this shows that the KLF likely flattens at a break point close to $M_K=3.5^{\rm m}$, with $\beta \sim 1.5$ in the range $3.5^{\rm m}\la M_K \la 6.5^{\rm m}$, and turns over at $M_K\sim7^{\rm m}$. As we have discussed above, the stellar population in the central cluster appears to be consistent with continuous star-formation throughout the Galaxy lifetime (see e.g. Serabyn \& Morris 1996). However, despite the many differences between the populations in the bulge and the central cluster, the observed KLF in the central $12.4^{\prime\prime} \times 11.9^{\prime\prime}$ of the GC follows a power-law similar to that of the bulge KLF (\cite{Davidge97}), but extends to more luminous stars (\cite{Blum96}). The exact upper luminosity cutoff is not well-defined due to statistical fluctuations in the counts and the contribution of asymptotic giant branch (AGB) stars, which do not follow the power-law. The central cluster's KLF is not known beyond the current detection threshold of $K_0 \sim 17^{\rm m}$, which is the region of interest for the microlensing calculations. However, the similarity of the KLF power-law indices in the central cluster and bulge KLFs suggests that the two KLFs are similar, apart from having different upper luminosity cutoffs. The much lower extinction in Baade's window, which is only $A_K=0.14^{\rm m}$ as compared to $3.4^{\rm m}$ in the GC (\cite{RRP89}), makes it possible to extend the KLF in the GC to lower luminosities. Indeed, Blum et al. (1996) find that the bulge KLF, after correcting for the $A_K$ difference, can be smoothly joined to the GC LF in the central $2^{\prime} \times 2^{\prime}$. Their resulting composite LF has a best fit power-law index $\beta = 1.875$, and extends from $M_K\sim-10^{\rm m}$ down to $M_K = 2^{\rm m}$. Our theoretical model for the central cluster, which we discuss below, indicates that this power-law character is further maintained down to $M_K \sim 3.5^{\rm m}$. We therefore adopt the same $\beta = 1.875$ power-law index for both the central cluster and bulge KLFs. For the central cluster KLF we set the upper luminosity cutoff at $K_h=8^{\rm m}$, and for the bulge KLF at $K_h=10.5^{\rm m}$. We set an effective low luminosity cutoff for both at $K_l=21.5^{\rm m}$, which corresponds to stars with masses $\sim 1\ifmmode M_\odot \else $M_\odot$ \fi$. As we show in \S\ref{s:fit}, the lensing rates are insensitive to the exact choice of the low luminosity cutoff. We note that the observed bulge KLF is better fitted by a somewhat flatter power-law than $\beta = 1.875$, which implies fewer faint-star lensing candidates. On the other hand, the comparison with the optical luminosity function suggests that the KLF is a conservative estimate of the total number of stars. In addition, the flatter power-law fails to account for the strong excess of horizontal branch (HB) stars above the power-law (\cite{Tiede95}). These are important potential microlensing sources, as they lie in a magnitude range that is just below the threshold if the bulge population is observed through the GC extinction of $A_K=3.4^{\rm m}$. 
We therefore consider these small discrepancies in slope and number to be within the limitations of the power-law approximation and the observational uncertainties. \subsubsection{The theoretical KLF} \label{s:popsynt} As a check on the empirically based pure power-law KLF, we have also computed a series of theoretical KLFs using our `population synthesis' code (see e.g. \cite{Sternberg98}). In our models we used the Geneva stellar evolutionary tracks (\cite{Schaerer93}), and concentrated on stellar models with twice-solar metallicities, as is indicated by the enhanced abundances of the gas in the GC (\cite{MS96}). However, recent measurements of stellar spectra of cool luminous stars in the GC point to solar metallicities (\cite{Ramirez98}). We therefore considered also such stellar models, and verified that our results do not depend strongly on the assumed metallicity. We computed the stellar $K$ luminosities using the empirical bolometric corrections and $V$-$K$ colors for dwarfs, giants and supergiants compiled by Schmidt-Kaler (1982), and Tokunaga (1998). The Geneva tracks for intermediate mass stars ($\sim 2$--$7 \ifmmode M_\odot \else $M_\odot$ \fi$) do not extend beyond the end of the early AGB phase. We extended these tracks to include also the more luminous thermal-pulsing AGB phases following prescriptions described by Charlot \& Bruzual (1991) and Bedijn (1988). In our models we assume explicitly that stellar remnants and stars less massive than $0.8\ifmmode M_\odot \else $M_\odot$ \fi$ are negligible sources of $K$-band luminosity. This low-mass luminosity cutoff roughly corresponds to the low luminosity cutoff of the empirically based power-law KLF. We constructed theoretical models for a range of cluster parameters assuming continuous star-formation lasting for 10\,Gyr. We considered models with pure power-law IMFs, and the Miller-Scalo IMF (\cite{MS79}; \cite{Scalo86}) for a range of lower- and upper-mass cutoffs. We selected the model which yields a KLF which best matches the observed KLF for the central cluster measured by Davidge et al. (1997). We find that a model with a Miller-Scalo IMF ranging from $0.1$ to $120\,\ifmmode M_\odot \else $M_\odot$ \fi$ provides the best overall fit to the data. In Figure~\ref{f:klf} we compare our model KLF for the central cluster with the $\beta = 1.875$ power-law fit of Blum et al. (1996) to their composite KLF. The overall agreement between the theoretical KLF and the power-law KLF is remarkable, for both stellar models with solar and twice-solar metallicities. The model successfully reproduces both the power-law character and slope of the observed KLF of the central cluster. Furthermore, the model shows that the power-law KLF likely extends down to at least $K=21.5^{\rm m}$. In this model, the most probable lensed sources are $K\sim 21^{\rm m}$ stars with $M_\star \sim 1\,\ifmmode M_\odot \else $M_\odot$ \fi$, $R_\star\sim 10^{11}$\,cm just off the main sequence, which require a magnification of order $A\sim 50$ to be detected above the threshold. Our population synthesis model predicts a mass-to-light ratio $\Upsilon = 0.24\Upsilon_\odot$, in excellent agreement with $\Upsilon \sim 0.25\,\Upsilon_\odot$ measured in the central cluster by Genzel et al. (1996). \subsubsection{Normalizing the KLF} \label{s:fit} As shown by Eq.~\ref{e:rate}, the lensing rate depends on $n_\star$, the total number density of stars which are effective sources of $K$-band luminosity. 
We refer to such stars as `K-emitting' stars, and define \begin{equation} n_\star = \frac{f_\star}{\bar{m}}\rho\,, \label{e:norm-rho} \end{equation} where $\rho$ is the total dynamical mass density, $\bar{m}$ is the mean stellar mass of the K-emitters, and $f_\star$ is the fraction of the total dynamical mass contained in $K$-emitting stars, i.e. excluding low-mass stars and remnants (i.e. objects which lie below the effective low-luminosity cutoff of the KLF) and gas clouds. The observed star counts, $dN^{\rm obs}_\star/dL_K$, within an area $S$, can be used to obtain a best fit value of $f_\star/\bar{m}$ given the constraint \begin{equation} \frac{dN^{\rm obs}_\star}{dL_K} = \frac{df}{dL_K} \frac{f_\star}{\bar{m}}\int\rho\,dS\,dr\,, \label{e:norm} \end{equation} where the KLF $df/dL_K$ is normalized to 1. The mass-to-light ratio over the integration volume used in Eq.~\ref{e:norm} can then be deduced from the fit by \begin{equation} \Upsilon = \frac{M}{L_K} = \frac{\bar{m}}{f_\star\bar{L}_K}\,, \label{e:mtol} \end{equation} where $\bar{L}_K$ is the average $K$-band luminosity of the K-emitting stars. The power-law KLF we adopt in our computations ($df/dL_K\propto L_K^{-\beta}$) is characterized by $1<\beta<2$, $L_l \ll L_u$ and $L_l\ll L_0 \ll L_u$, where $L_l$ and $L_u$ are the effective lower and upper luminosity cutoffs to the KLF, and $L_0$ is the $K$ luminosity corresponding to the detection threshold. By approximating $u \sim 1/A$ (where $A$ is the required amplification) the mean stellar luminosity and mean impact parameter for such KLFs are given by the simple approximate relations \begin{eqnarray} \bar{L}_K & \approx & \frac{(\beta-1)}{(2-\beta)}\frac{L_u^{2-\beta}}{L_l^{1-\beta}}\,,\nonumber\\ \langle u_0 \rangle & \approx & \frac{(\beta-1)}{(2-\beta)}\left(\frac{L_l}{L_0}\right)^{\beta-1}\,. \end{eqnarray} It also follows that $n_\star$, $\Upsilon$ and the lensing rate $\Gamma$ scale as \begin{eqnarray} n_\star & \sim & f_\star/\bar{m} \sim \frac{1}{\beta-1}L_l^{1-\beta}\,,\nonumber\\ \Upsilon & \sim & (2-\beta)L_u^{\beta-2}\,,\nonumber\\ \Gamma & \sim & \langle u_0 \rangle n_\star \sim \frac{1}{2-\beta}L_0^{1-\beta} \,. \label{e:plrho} \end{eqnarray} Eq.~\ref{e:plrho} shows that the total lensing rate is {\it insensitive to the low-luminosity cutoff} of the KLF. This simply reflects the fact that an increase in the number of $K$-emitting stars as $L_l$ decreases is offset by a correspondingly smaller mean lensing impact parameter for the stellar system, and vice-versa. This important property allows us to robustly compute the lensing rate even if the effective lower luminosity cutoff of the power-law KLF is not well determined\footnote{We note that if $\beta>2$ then the rate does depend on $L_l$ with $\Gamma \sim L_l^{2-\beta}$. However, as we have discussed, the observations of Holtzman et al. (1998) strongly suggest that the KLF flattens, rather than steepens, below our assumed value for $L_l$.}. We note that since the differential contribution of stars with luminosity $L_{K,s}$ to the mean impact parameter $\langle u_0 \rangle$ scales like $(L_{K,s}/L_0)L_{K,s}^{-\beta} \sim L_{K,s}^{1-\beta}$, the integrated contribution of stars from a $1^{\rm m}$-wide bin is, for $\beta=1.875$, a very weakly decreasing function of the bin's $K$ magnitude. We therefore expect that the lensed stars will exhibit a wide range of intrinsic $K$ magnitudes, with a weak trend towards lensing by stars close to the detection threshold. 
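The insensitivity of the rate to the faint-end cutoff is easy to verify numerically; the sketch below (illustrative only, in arbitrary luminosity units with $L_0=1$) evaluates the approximate scalings of Eq.~(\ref{e:plrho}) for two very different choices of $L_l$.
\begin{verbatim}
# Illustrative check of Eq. (plrho): for 1 < beta < 2 the product <u0> * n_star,
# and hence the lensing rate, does not depend on the faint-end cutoff L_l.
beta, L0 = 1.875, 1.0             # threshold luminosity in arbitrary units

def mean_u0(L_l):                 # <u0> ~ (beta-1)/(2-beta) * (L_l/L0)^(beta-1)
    return (beta - 1.0) / (2.0 - beta) * (L_l / L0)**(beta - 1.0)

def n_star(L_l):                  # n_star ~ L_l^(1-beta)/(beta-1), up to a constant
    return L_l**(1.0 - beta) / (beta - 1.0)

for L_l in (1e-2, 1e-4):
    print(L_l, mean_u0(L_l) * n_star(L_l))    # the same value for both cutoffs
\end{verbatim}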
It is uncertain at which radius the stellar population makes the transition from a population that is characteristic of a star-forming cluster to a more bulge-like population. However, as we argued above, the observed properties of the KLFs in the GC and the bulge suggest that they are very similar for $K>10.5^{\rm m}$. Since the normalization of the KLF does not depend strongly on the upper luminosity cutoff (Eq.~\ref{e:plrho}), and since the very high luminosity tail of the KLF is not relevant for the lensing calculations (the $8^{\rm m}<K<10.5^{\rm m}$ stars have a negligible contribution to the lensing rate of observed stars), we infer the value of $f_\star/\bar{m}$ using the star-counts observed in the core, and we adopt this normalization for the entire inner 300 pc. In carrying out this procedure\footnote{The volume integration in Eq.~\ref{e:norm} was carried out by approximating the rectangular field with a circular field of projected radius $p$ having the same area. The integration was done over a cylinder of radius $p$, centered on the black hole and extending along the line of sight $300\,$pc in each direction, a distance large enough to ensure that the surface density reaches its asymptotic value.} we used the observed number counts in the $K=14^{\rm m}$ bin (stars per mag per arcsec$^{-2}$), averaged over the $12.4^{\prime\prime}\times 11.9^{\prime\prime}$ field observed by Davidge et al. (1997). The star counts in this luminosity range are likely complete, and this range is also well separated from the regions that are affected by AGB and HB stars, which cause deviations from the power-law behavior (\cite{Tiede95}). Using Eq.~\ref{e:norm}, we then find that $f_\star/\bar{m} = 0.2\,\ifmmode M_\odot \else $M_\odot$ \fi^{-1}$. This value for $f_\star/\bar{m}$ can be reproduced, for example, by a choice of $f_\star = 0.2$ and $\bar{m} = 1\ifmmode M_\odot \else $M_\odot$ \fi$, which is comparable with the values $f_\star=0.22$ and $\bar{m} = 0.84\ifmmode M_\odot \else $M_\odot$ \fi$ of our population synthesis model for the central cluster. For the central cluster power-law KLF, $\bar{L}_K=22\,L_{K\odot}$, so that Eq.~\ref{e:mtol} yields $\Upsilon \sim 0.22\,\Upsilon_\odot$, again in excellent agreement with $\Upsilon \sim 0.25\,\Upsilon_\odot$ inferred by Genzel et al. (1996) for the central cluster. \section{Results} \label{s:results} We have carried out detailed computations of the lensing event rates using the formalism described in \S\ref{s:rate} and in the appendix, and using the stellar number and velocity distributions and the power-law $K$-band luminosity function, as discussed in \S\ref{s:gc}. The integrations were carried out from a minimum distance $r_1 = 0.005\,$pc, equal to the mean central stellar separation, to a distance $r_2 = 300\,$pc, where the integrated lensing rates approach their asymptotic values. The normalized central cluster KLF allows us to estimate the stellar surface density as a function of the detection threshold, and from it derive the mean angular separation of the stars, $\Delta p$, the required angular resolution of the observations $\phi$, and the maximal distance for observing (unresolved) microlensed stars $d_\mu$ (as given by Eqs.~\ref{e:phi} and \ref{e:dmu}). We find that for the central cluster KLF \begin{eqnarray} \log \Delta p & = & 2.31-0.175 K_0\,,\nonumber\\ \log S_0 & = & -4.62+0.35 K_0\,, \label{e:Dp} \end{eqnarray} for $\Delta p$ in arcsec and $S_0$ in arcsec$^{-2}$. 
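Evaluating these scalings together with Eqs.~(\ref{e:phi}) and (\ref{e:dmu}) for a few representative thresholds gives the numbers used below (an illustrative sketch; $D_8=M_{2.6}=1$ is assumed, and small rounding differences with respect to the values quoted in the text are to be expected).
\begin{verbatim}
# Illustrative evaluation of Eqs. (Dp), (phi) and (dmu) for several detection
# thresholds K0 (with D_8 = M_2.6 = 1).
import math

for K0 in (16.5, 17.0, 19.0):
    dp = 10.0**(2.31 - 0.175 * K0)        # mean angular separation (arcsec)
    S0 = 10.0**(-4.62 + 0.35 * K0)        # central surface density (arcsec^-2)
    phi = dp / math.sqrt(math.pi)         # required angular resolution (arcsec)
    d_mu = (phi / 0.018)**2               # max distance for unresolved images (pc)
    print(K0, round(S0, 1), round(dp, 3), round(phi, 3), round(d_mu, 1))
\end{verbatim}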
Thus, for a detection threshold of $K_0 = 17^{\rm m}$ we expect a central surface density of 21 stars per arcsec$^{-2}$ for complete counts\footnote{Genzel et al. (1997) and Ghez et al. (1998) reported $S_0\sim20$ and 15 stars arcsec$^{-2}$, respectively. However, the star counts are incomplete close to the detection thresholds, and the measured $S_0$ actually indicate a significant density enhancement (the `$\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$ cusp') relative to the immediate surroundings.}. We present our results in a way which makes it possible to flexibly estimate the actual detection probabilities for a broad range of observing strategies. We consider the generic monitoring campaign to consist of a series of $n$ very short observing runs which are carried out during a total time period $T$ (typically $T \sim$ several years), with a mean interval $\Delta T$ between each observing run (typically $\Delta T \sim$ a month to a year). We now wish to distinguish between `long' and `short' events, where we define the event duration as the time the source spends above the detection threshold. Long events are those with durations $\tau > \Delta T$, and would appear as time varying sources that brighten and fade during the course of several observing runs. The microlensing origin of long events could be verified, in principle, from the shape of the light curve and its achromatic behavior. Short events are those with durations $\tau < \Delta T$, and would be observed as a single `flash' provided they occur within a time $\tau$ of any one of the $n$ observing runs. In the limit of small event rates the detection probability may be written as \begin{equation} P = P_{\rm short}+P_{\rm long} = n\bar{\tau}_{\rm short} \Gamma_{\rm short} + T\Gamma_{\rm long}\,, \label{e:P} \end{equation} where $\Gamma_{\rm short}$ and $\Gamma_{\rm long}$ are the rates of short and long events, and $\bar{\tau}_{\rm short}$ is the rate-averaged lensing duration for short events. In the (ideal) limit of continuous monitoring, $\Gamma_{\rm long}$ approaches the total event rate and $P = T\Gamma_{\rm total}$. The results of our computations are displayed in Figs.~\ref{f:cumr} and \ref{f:dK}. In Fig.~\ref{f:cumr} we plot the cumulative rates, $\Gamma_{\rm long} (\tau>\Delta T)$, for all lensing events with durations $\tau$ longer than the timescale $\Delta T$, as functions of $\Delta T$. We present results for events which produce unresolved and resolved images for stars which are either intrinsically below or above the detection thresholds. We present computations for detection thresholds ranging from 16 to 19 magnitudes. The total lensing rates can be read off the plot from the asymptotic values of the curves as $\Delta T \rightarrow 0$. The curves are flat for timescales less than $\sim$ 1 week, which is shorter than most of the event durations. As $\Delta T$ increases, $\Gamma_{\rm long}$ decreases as a smaller fraction of the lensing events satisfy $\tau > \Delta T$. As an example, Fig.~\ref{f:cumr} shows that for $K_0=17^{\rm m}$, the total lensing rate is equal to $3\times 10^{-3}$ yr$^{-1}$, and that for events with durations greater than 1 yr the rate is equal to $1\times 10^{-3}$ yr$^{-1}$. In Fig.~\ref{f:cumr} we also plot the rate-averaged lensing timescale, $\bar{\tau}_{\rm short}$, for events with $\tau <\Delta T$, which is needed to estimate the detection probability of short events. 
The values of $\bar{\tau}_{\rm short}$ are almost independent of $K_0$, since the shape of the cumulative rate function is insensitive to $K_0$. We note also that for small timescales $\bar{\tau}_{\rm short} \approx \Delta T/2$, as would be expected for a rate which is independent of the timescale. The rate of short events is simply $\Gamma_{\rm short}(\Delta T) = \Gamma_{\rm long}(0)-\Gamma_{\rm long}(\Delta T)$. Thus, for $K_0=17^{\rm m}$ the rate of events lasting less than 1 yr is $2 \times 10^{-3}$ yr$^{-1}$, and the average duration of such events is about 3 months. In Fig.~\ref{f:dK} we plot $\Delta K_{\rm long}$, the median value of the maximal amplifications for long events ($\tau>\Delta T$), as a function of $\Delta T$. Long events which amplify sub-threshold stars are associated with stars at greater distances from the black hole, stars at the low velocity tail of the velocity distributions, or stars that cross closer to the line of sight to $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$. Because of the latter effect, $\Delta K_{\rm long}$ is greater than $0.75^{\rm m}$, which is the median value for all the events. This effect is less marked for resolved lensing, which is characterized by longer timescales, and is very weak for lensing of stars which lie above the detection threshold, where $\Delta K_{\rm long}$ approaches the limit $\sim 0.75^{\rm m}$. Figure~\ref{f:dqdr} shows the contributions to the lensing rate from different regions in the GC, for a specific example where $K_0=16.5^{\rm m}$ and $\Delta T = 1\,$yr. Several important features are apparent in the results displayed in Figs.~\ref{f:dqdr} and \ref{f:cumr}. First, it is evident that the cumulative rate $\Gamma_{\rm total}(<r)$ approaches its asymptotic value at $r\la 10\,$pc, and that short lensing events due to stars near the black hole dominate the total rate. For example, Fig.~\ref{f:cumr} shows that the median timescale of unresolved amplifications of sub-threshold stars is $\sim 2$ months, and Fig.~\ref{f:dqdr} shows that the median distance from the black hole is $\sim 0.3\,$pc. The run of the differential rate for long events, $d\Gamma_{\rm long}/dr$, with distance illustrates that the inner regions hardly contribute any long events. Second, unresolved lensing of sub-threshold stars has the shortest timescales. This is because such events are due mainly to stars close to the black hole where the velocities are high and the lensing cross sections are small. Unresolved lensing of stars which lie above the threshold, for which the cross sections are larger, has somewhat longer timescales, and resolved events, which are due to stars at larger distances from the black hole, are longer still. Third, the rates for resolved lensing are about an order of magnitude smaller than those for unresolved lensing events. We now apply our results to estimate the probability that the variable $K$-band source (or sources) reported by Genzel et al. (1997) (source S12) and Ghez et al. (1998) (source S3) was a microlensing event. The monitoring of proper motions in the GC has been going on for about $T = 6\,$yr, at a sampling interval of $\Delta T = 1\,$yr, with a detection threshold of $K_0 = 16.5^{\rm m}$ and a FWHM resolution of $\la0.15^{\prime\prime}$ (\cite{Genzel97}). For this detection threshold, $\Delta p = 0.25^{\prime\prime}$ (which corresponds to a required spatial resolution $\phi = 0.15^{\prime\prime}$), and $d_\mu = 65\,$pc. 
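Before turning to the detailed estimate, it may help to see how Eq.~(\ref{e:P}) is evaluated in practice; the short sketch below is illustrative only, and uses the long-event rate for these survey parameters that is quoted in the next paragraph.
\begin{verbatim}
# Illustrative application of Eq. (P) in the small-rate limit:
# P = n * tau_short * Gamma_short + T * Gamma_long.
T_survey = 6.0                    # yr of monitoring at ~1 yr sampling intervals
Gamma_long = 7.5e-4               # per yr; unresolved sub-threshold events with
                                  # tau > 1 yr at K0 = 16.5 (see the next paragraph)
P_long = T_survey * Gamma_long
print(P_long)                     # ~0.0045, i.e. the ~0.5% quoted below
\end{verbatim}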
We can now use Figs.~\ref{f:cumr} and \ref{f:dK} and Eq.~\ref{e:P} to estimate the detection probability and typical amplification of a lensing event in this experiment. The rate of unresolved and long events which amplify sub-threshold stars is $7.5\times 10^{-4}\,$yr$^{-1}$, with a resulting detection probability $P_{\rm long} \sim 0.5\%$. The median amplification of such events is $\Delta K_{\rm long}\sim 1.5^{\rm m}$ above the detection threshold. The probability for detecting a short amplification of sub-threshold stars is $P_{\rm short}\sim0.2\%$. The probabilities for detecting unresolved events involving above-threshold stars are of the same order of magnitude. The probabilities for detecting long resolved events are an order of magnitude smaller, and the probabilities for detecting short resolved events are negligible in this experiment. Thus, we find that the behavior of the variable source (or sources) at $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$, i.e. a brightening of a previously undetected source to $1.5^{\rm m}$--$2^{\rm m}$ above the threshold with an event duration of $\sim 1$ yr, is the typical behavior that would be expected for a microlensing event. However, we also find that the a-posteriori probability for detecting such an event is only $\sim 0.5\%$. The probability for detecting a lensing event can be increased considerably by carrying out more sensitive observations at higher sampling rates. For example, 10 years of monthly observations with a detection threshold of $K_0 = 19^{\rm m}$ will require a resolution of $\phi = 0.06^{\prime\prime}$, which is already available (\cite{Ghez98}), and will increase the total detection probability of long lensing events to $P_{\rm long}\sim 20\%$. These estimates are uncertain by a factor of a few due to uncertainties in the stellar density distribution, the $K$ luminosity function and the extinction. \section{Discussion and summary} \label{s:discuss} In the past several years, very high resolution, very deep IR observations of the GC have regularly monitored stellar motions within a few arcsec of the radio source $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$. The primary objective of these efforts is to dynamically weigh the central dark mass and set lower bounds on its compactness. As a by-product, these observations can detect and record the light curves of faint time-varying sources in the inner GC over timescales of years. These proper motion studies have recently revealed the possible presence of one or two variable $K$-band sources very close to, or coincident with, the position of $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$ (\cite{Genzel97}; \cite{Ghez98}). The first issue to resolve, when considering results from such technologically challenging observations, is whether these sources are real, or merely artifacts of the complex procedures for obtaining deep diffraction limited images in the crowded GC. Assuming that such a source is indeed real, and that it is not simply a variable star that happens to lie very near to the line of sight to $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$, there are two interesting possibilities to consider, both directly linked to the existence of a super-massive compact object in the GC. 
A time variable IR source, coincident with $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$, may be the IR counterpart of the radio source, with the IR flare resulting from fluctuations in the accretion process, which has up to now eluded detection in any band other than the radio (see e.g. \cite{Backer96}). Another possibility is that the new source is a faint star in the dense stellar cluster in the GC, which is microlensed by the black hole. Such amplification events would appear as time varying sources very close to the position of $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$. We note that a detection of microlensing could provide an independent probe of the compactness of the central dark mass. The innermost observed stars in the GC set an upper limit on the size of the dark mass, $r_{\rm dm}\sim 7\,10^4\,R_\bullet$, where $R_\bullet$ is the Schwarzschild radius (\cite{Ghez98}). Microlensing has the potential, given a well sampled light curve, to improve on the dynamical limit, since for typical values of the lens--source distance, $r_{\rm dm}\ll\ifmmode R_{\sc e} \else $R_{\sc e}$ \fi < 3\,10^4\,R_\bullet$ (see Eq.~\ref{e:Rsch}). If the central mass is not a black hole, but rather an extended object, then the microlensing light curve will deviate from that of a point-mass lens when $r_{\rm dm} \ga \ifmmode R_{\sc e} \else $R_{\sc e}$ \fi$. Because the innermost observed stars already constrain $\theta_{\rm dm} \la 0.1^{\prime\prime}$, only lensed stars in the inner few pc have a small enough $\ifmmode \theta_{\sc e} \else $\theta_{\sc e}$ \fi$ to probe possible structure in the dark mass distribution (Eq.~\ref{e:Oe}). Therefore, the question whether there is a significant chance to detect such events is related to the yet unresolved question of whether there is a strongly peaked stellar cusp in the innermost GC. We will discuss these issues elsewhere. In this paper we investigated the possibility of microlensing amplification of faint stars in the dense stellar cluster in the GC by the super-massive black hole, which is thought to coincide with the radio source $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$. We calculated in detail the rates, durations and amplifications of such events, and considered separately the cases of unresolved and resolved images and of intrinsically faint and bright sources. We presented our results in a general way that can be used to estimate the detection probabilities of microlensing for a wide range of observing strategies. The background stellar surface density increases with the detection threshold $K_0$. This determines the observational angular resolution required to detect a microlensing event, and fixes the maximal distance behind the black hole for which the two lensed images of a star will appear unresolved. This maximal distance occurs before the integrated lensing rate reaches its asymptotic value, even for present-day detection thresholds. We therefore considered two manifestations of the lensing: unresolved microlensing of stars near the black hole, and resolved lensing of stars farther away. We find that short lensing events, due to stars close to the black hole, dominate the total lensing rate. This reflects the fact that the high stellar density and velocities near the black hole more than compensate for the smaller lensing cross-sections there. For this reason, and because unresolved images are on average twice as bright as the resolved images, unresolved microlensing dominates the lensing rate. 
We have also considered the lensing amplification of bright observed stars. The contribution of this type of microlensing to the total rate becomes progressively more important as the detection threshold decreases, and at low sampling rates, which are primarily sensitive to longer events. We find that low sampling rates significantly bias the detection towards high amplitude events. Our predicted lensing rates are small, but not so small as to be negligible. In particular, longer, deeper proper motion monitoring done at higher rates, e.g. 10 years of monthly monitoring with $K_0 = 19^{\rm m}$, may have a significant chance of detecting such an event. Finally, could either of the variable sources reported by Genzel et al. (1997) (source S12) and by Ghez et al. (1998) (source S3) be the amplified microlensed image of a faint star? The lack of evidence for related variability in the radio and X-rays, as would be expected in some accretion scenarios if the $K$-flare were due to fluctuations in the accretion process, argues against the possibility that the new source is the IR counterpart of $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$. Another possibility is that the new sources are variable stars, which are below the detection limit in their low state. Long Period Variable stars (LPVs), which are probably luminous Mira variables, are observed in the GC (\cite{HR89}; \cite{Tamura96}). Typical amplitude variations of $\Delta K\sim 0.15^{\rm m}$--$0.5^{\rm m}$ are observed over $1$--$2\,$ yr, although in some cases the variations are as large as $\Delta K \ga 1^{\rm m}$. Haller \& Rieke (1989) find 12 LPVs in a 4.5 arcmin$^2$ survey of the GC (not including the central $1^{\prime}\times1^{\prime}$) down to $K = 12^{\rm m}$ ($M_K = -5.9^{\rm m}$). Of these, only one exhibited high amplitude variations ($1^{\rm m}$ in 4 months). At $K\sim 15^{\rm m}$, the new variable $K$ source is much fainter than the LPVs observed in this survey. If it is an intrinsically bright LPV, it must lie on a highly extinguished line of sight. Using the observed surface density of $M_K<-5.9^{\rm m}$, $\Delta K\sim 1^{\rm m}$ LPVs, it is possible to make a rough estimate of the probability for finding such a star within $0.15^{\prime\prime}$ of $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$. Even after taking into account a factor $\sim40$ difference in the surface mass density between the dynamical center and the survey area at $\sim2.5^\prime$ from the center, the probability is only $\sim 0.005\%$. This of course does not rule out the possibility of an intrinsically faint but highly variable star. In any case, if the new source is a variable star, future observations should detect continued variability from this source. We can also argue statistically against the possibility that these flares are due to microlensing of a star by another star in the GC, which happens to be close to the line-of-sight to $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$. Because $\bar{m}/M_\bullet\sim10^{-6}$, the typical lensing timescale by a star will be of the order of $1$ hr, $10^3$ times shorter than that of lensing by the black hole (Eqs.~\ref{e:Re} and \ref{e:rate}). Furthermore, it is straightforward to show that the total rate of lensing of a star by a star within an angle $0.1^{\prime\prime}$ of the line-of-sight to $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$ is $\sim 5\,10^{-5}$\, yr$^{-1}$ for $K_0 = 17^{\rm m}$, which is 100 times smaller than the rate of lensing by the black hole. 
Our analysis has shown that the behavior of the variable $K$-band source (or sources) at $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$, in particular a brightening of a previously undetected source to $\sim1.5$--$2^{\rm m}$ above the threshold, on a timescale of $\sim 1$ yr, is the typical behavior that would be expected for a microlensing event. However, we also estimate that the probability that such an event could have been observed during the course of the proper motion studies that have been carried out thus far is only $\sim 0.5\%$. While this probability is small, it is not so small as to rule out this possibility entirely. The probability of detecting a microlensing event at $\ifmmode {\rm SgrA}^\star\, \else SgrA$^\star$\fi$ will increase considerably in the future as the observational sensitivities and the monitoring sampling rates improve. \acknowledgments We thank D. Backer, R. Blandford, A. Eckart, R. Genzel, A. Ghez, I. King, E. Maoz, B. Paczy\'{n}ski and E. Serabyn for helpful discussions and comments. This work was supported in part by a grant from the German-Israeli Foundation (I-551-186.07/97). A. S. thanks the Radio Astronomy Laboratory at U.C. Berkeley, and the Center for Star Formation Studies consortium for their hospitality and support.
\section{Introduction} This paper reports a measurement of the quasielastic $^{12}$C(e,e$'$p) reaction at momentum transfer $q \approx 1000$ MeV/c and two energy transfers, $\omega = 330$ MeV and $\omega = 475$ MeV. After an introductory discussion, we describe the experiment and its analysis. We present a representation of the differential cross section's $\omega$ dependence around each of the two central values, using Legendre polynomials. Finally, we discuss the results of the experiment in terms of single-nucleon knockout, multinucleon knockout, and other processes. The PhD thesis of Morrison\cite{Morrison} presents the experiment in more detail. We define several quantities here: $\omega$ is the energy transferred from the electron to the nuclear system. The 3-momentum transfer is $\bf q$, with magnitude $q$. The momentum transfer four-vector is $Q \equiv (\omega, {\bf q})$, and $Q^2 = q^2 - \omega^2$. The missing energy of the coincidence reaction is $E_m \equiv \omega - T_p$, where $T_p$ is the outgoing proton's kinetic energy. $M$ is the mass of the nucleon. The missing momentum is ${\bf p}_m \equiv {\bf q} - {\bf p}_p$ where ${\bf p}_p$ is the outgoing proton's momentum. At quasielastic kinematics, $\omega \approx Q^2/2M$, interactions with independent nucleons are expected to dominate the nuclear electromagnetic response. However, despite the apparent agreement of non-relativistic Fermi gas calculations\cite{Moniz} with quasielastic (e,e$'$) measurements for a large range of nuclei\cite{Whitney}, measurements of the separated longitudinal and transverse (e,e$'$) cross sections have shown that other processes contribute significantly to the reaction. The longitudinal and transverse reduced response functions, $f_L$ and $f_T$, for $^3$He at $q \approx 500$ MeV/c\cite{Marchand} are equal, in accordance with the predictions of independent particle models. However, $f_L$ is $\approx$40\% smaller than $f_T$ for heavier nuclei including $^4$He, $^{12}$C, $^{40}$Ca, $^{56}$Fe, and $^{238}$U\cite{Reden,Altemus,Finn,Meziani2,Meziani3,Barreau,Blatchley} at $q \approx 500$ MeV/c. This indicates the presence of a non-quasifree process that may depend on the density or number of available nucleons. Yates {\em et al.\/}\cite{Yates} measured a different result on $^{40}$Ca: $f_L$ is less than 20\% smaller than $f_T$. At a larger momentum transfer, $q=1050$ MeV/c, $f_L$ and $f_T$ were comparable for both $^3$He and $^4$He on the low $\omega$ side of the quasielastic peak\cite{Meziani}, but $f_T$ was still significantly larger than $f_L$ at $q = 1050$ MeV/c for $^{56}$Fe\cite{Chen}. Thus there is some experimental ambiguity in the magnitude and momentum-transfer dependence of the transverse-longitudinal ratio. Many different models of inclusive quasielastic electron scattering attempt to treat aspects of the reaction correctly, but no model can explain all of the data. Such older models include $\sigma$-$\omega$ calculations\cite{Serot}, meson exchange currents\cite{vanOrden}, two-particle-two-hole models\cite{Alberico}, modification of the mass and/or the size of the nucleon\cite{Noble,Celenza}, and quark effects\cite{Mulders}. Recent Green's Function monte-carlo (GFMC) calculations by Carlson and Schiavilla\cite{Carlson}, which include pion degrees of freedom, final state interactions, and two-body currents, can reproduce the $^3$He and $^4$He longitudinal and transverse response functions. 
They interpret the PWIA response quenching as due to the charge-exchange component of the nuclear interaction, which shifts the strength to higher excitation energy. The quenching of the transverse response is more than offset by the contribution of two-body currents associated with pion-exchange. This work indicates the necessity of including correlated initial state wave functions, two-body reaction mechanisms, and final state interactions. We expect that more reaction mechanisms, including real pions, deltas, and three-nucleon currents, need to be included for heavier nuclei and higher excitation energies. Unfortunately, no GFMC calculations are possible yet for heavier nuclei. Coincidence (e,e$'$p) electron scattering, in which a knocked out proton is detected in coincidence with the scattered electron, can distinguish among some of the various reaction processes proposed, because different reactions occur at different missing energies. The C(e,e$'$p) cross section was first measured at Saclay\cite{Mougey} out to $E_m \approx 60$ MeV, and more recently by van der Steenhoven\cite{Steenhoven}. The spectrum exhibits a large narrow peak at $E_m \approx 16$ MeV, several small, narrow peaks at larger missing energies, and a broad structure from 25 MeV to 60 MeV. The momentum distributions indicated that the narrow peaks correspond to the knockout of a proton in a p-shell state, while the broad structure results from s-shell proton knockout. The spectroscopic factors were reported as 2.5 for the p-shell peaks, and 1.0 for the s-shell peak.\cite{Mougey} The s-shell peak is broad because the residual nucleus is in an excited state, and decays rapidly. Two-nucleon knockout may also contribute to the strength in the s-shell region as the threshold for this process is at $E_m \approx 27$ MeV. Lapik\'as\cite{Lapikas} has found the strength for valence shell knockout in (e,e$'$p) to be reduced by 20\% for elements throughout the periodic table. Several experiments at Bates have measured the C(e,e$'$p) cross section as a function of missing energy for the following kinematical conditions: The maximum of the quasielastic peak at $q = 400$~MeV/c (an L/T-separation), 585, 775, and 827~MeV/c\cite{Ulmer,Weinstein}; the dip region at $q = 400$~MeV/c\cite{Lourie}, and the delta peak at $q = 400$ and 475~MeV/c\cite{Baghaei2}. These measurements had four major results: \begin{enumerate} \item The cross section for single nucleon (e,e$'$p) knockout is only 40\% to 60\% of that predicted by Distorted Wave Impulse Approximation (DWIA) analysis assuming four p-shell and two s-shell protons. This is consistent with the Saclay results and all other published quasielastic data. In the delta-region measurements, as expected, the single-nucleon-knockout is virtually invisible. \item In stark contrast to the transverse response function, the longitudinal response function measured at $q=400$ MeV/c is consistent with zero for $E_m \ge 50$ MeV. This suggests that single nucleon knockout is minimal beyond $E_m = 50$ MeV. \item A considerable fraction of the cross section occurs at $E_m > 50$ MeV. The separated measurement at $q=400$ MeV/c indicates that this strength is transverse and begins at $E_m \approx 27$ MeV, the threshold for 2-nucleon emission. This ``continuum'' strength is attributed to two- and multi-nucleon knockout. The continuum strength persists in the measurements on the delta peak, and constitutes a large fraction of the total cross section even where pion production is expected to dominate. 
Note that excess transverse cross section was observed on other nuclei at missing energies above the 2-nucleon emission threshold.\cite{Lanen} \item No abrupt change in cross section was seen at pion threshold, $E_m \approx 155$ MeV, for $q = 775$ MeV/c, the only quasielastic measurement so far to probe sufficiently high missing energies. However, an abrupt increase in the cross section was seen in the delta-region measurements. \end{enumerate} Figure~\ref{q:omega} shows the momentum and energy transfer regions of the quasielastic, dip, and $\Delta$ measurements at Bates, including this experiment. Kester {\em et al.}\cite{Kester} have recently measured the $^{12}$C(e,e$'$p) reaction in the dip region at a variety of angles away from parallel kinematics. They find that large-angle cross sections can be explained by meson-exchange-currents and intermediate deltas, while smaller-angle cross sections suggest correlated pair emission. \section{The Experiment} We report two measurements of the $^{12}$C(e,e$'$p) reaction, at $q = 970$ and 990~MeV/c. Both were done in parallel kinematics. The energy transfers were respectively $\omega = 330$ and 475~MeV. The latter point is at the maximum of the C(e,e$'$) quasielastic peak, and extends the investigation of the momentum-transfer dependence of the C(e,e$'$p) reaction cross section measured at $q = 400$, 585, 775, and 827 MeV/c. With both measurements, we investigate how the single-nucleon and continuum cross sections depend on the energy transfer on and below quasielastic kinematics. The specific kinematics are shown in table~\ref{kinematics} and figure~\ref{q:omega}. We performed the experiment at the MIT-Bates Linear Accelerator Center in Middleton, Massachusetts. The recirculated electron beam had an average energy of 696~MeV $\pm$~3~MeV for the $\omega=330$ MeV measurement, and 796~MeV $\pm$~3~MeV for the $\omega=475$ MeV measurement. The beam had a duty factor of approximately 1\%, with 1--20 $\mu$A average (0.1--2 mA peak) current. We used several natural carbon targets with areal density or thickness ranging from 24 mg/cm$^2$ to 410 mg/cm$^2$. We also used a spinning polyethylene target to measure the elastic H(e,e$'$) reaction for normalization, and tantalum and beryllium oxide targets for testing and calibration. We used the magnetic spectrometers MEPS to detect electrons and OHIPS to detect protons. The polarity of OHIPS was reversed to detect electrons during calibration measurements. The spectrometers are described in detail elsewhere\cite{Morrison}. In each spectrometer, a scintillator array detected a particle passing through the spectrometer's focal plane and triggered the readout system. A two-plane vertical drift chamber measured the particle's trajectory at the focal plane. MEPS used an Aerogel \v{C}erenkov counter with an index of refraction of 1.05 to distinguish between electrons and pions. We identified coincidence events by the time elapsed between the electron trigger in MEPS and the proton trigger in OHIPS. The coincidence time resolution was approximately 2~ns FWHM. Accidental events under the timing peak were subtracted, and this subtraction is included in the statistical errors of the spectra. \subsection{Calibrations, Corrections and Efficiencies} We measured H(e,e) in MEPS, elastic C(e,e) in OHIPS and coincidence H(e,ep) at various spectrometer magnetic fields to determine the spectrometer constants and beam energies. The uncertainties are 3 MeV in the beam energy. 
We calculated correction factors to account for losses due to many effects including software track reconstruction, simultaneous events in a wire chamber, more than one event per beam burst, and other software and hardware limitations. The correction factors varied from run to run, ranging from 1.40 to 1.90. Some correction factors were deduced from run-to-run variations and are only valid up to an overall normalization, discussed in the following section. Because the (e,$\pi^-$p) cross section is much larger than the (e,e$'$p) cross section at deep missing energies, we needed to reject pions. We used the $n=1.05$ Aerogel \v{C}erenkov counter in MEPS for this purpose. Electrons passing through the aerogel radiated \v{C}erenkov light, whereas pions with momentum less than 430 MeV/c did not radiate. The electron detection efficiency of the Aerogel \v{C}erenkov counter varied strongly with the MEPS magnetic field. For $\omega=475$ MeV, the electron detection efficiency was 93\% and the pion rejection efficiency was 99.5\%. For $\omega=330$ MeV, the electron detection efficiency was only 60\% and the pion rejection efficiency was 98.5\%. We also determined the electron detection efficiency as a function of focal plane position. To obtain the relative acceptance (including detection efficiency) of the spectrometers as a function of focal plane position (ie: of relative momentum), we measured the quasielastic C(e,e$'$) cross section in MEPS and the C(e,p) cross section in OHIPS. We varied the magnetic field, placing particles with a given momentum at different positions in the focal plane. We deconvoluted the acceptance from the single arm cross section to obtain the focal plane acceptance as a function of relative momentum. We then combined this with the variation in \v Cerenkov counter electron detection efficiency with focal plane position to get the total spectrometer relative efficiency-acceptance product (hereafter called `relative acceptance'). We applied these relative acceptances to all of our data. The absolute normalization of the spectrometers is discussed in the next section. \subsection{Normalizations} To normalize the experiment absolutely, we measured the H(e,e$'$) elastic cross section in MEPS, the H(e,e$'$p) elastic cross section detecting electrons in MEPS and protons in OHIPS, and the C(e,e$'$) elastic cross section in OHIPS. We corrected these measured cross sections for the relative acceptances as a function of momentum (described in the previous section). We then compared the corrected measured H(e,e$'$p) cross section with Simon {\em et al.}'s parametrization of the H(e,e$'$) cross section\cite{Mainz}, and the corrected C(e,e$'$) cross section with the phase-shift calculation of the program ELASTB\cite{Norum}. Ideally, the H(e,e$'$p) measurement would fully normalize the experiment after taking into account relative efficiencies and dead times. However, if the electron from H(e,e$'$p) enters MEPS, kinematics restrict the proton to a small region within OHIPS's solid angle. C(e,e$'$p) protons populate the entire OHIPS solid angle approximately uniformly. Particles entering OHIPS near the edges of OHIPS's collimator may not reach the focal plane. These losses affect the overall normalization, but H(e,e$'$p) alone would not measure them. We measured the elastic C(e,e$'$) cross section in OHIPS to account for those losses, but the electrons from C(e,e$'$) did not cover the OHIPS solid angle uniformly either. 
At 17$^\circ$, the C(e,e$'$) cross section is approximately inversely proportional to the fourth power of the scattering angle. Most electrons entered OHIPS near the front of the angular acceptance. We used the transport program TURTLE\cite{Carey}{} to model the physical characteristics of OHIPS between the entrance near the target and the focal plane, and to estimate the fraction of particles entering the solid angle that reach the focal plane. We used three initial distributions of particles over the solid angle. TURTLE gave the following results for the indicated distribution of entering particles: \begin{itemize} \item 100\% --- Uniform over the restricted H(e,e$'$p) region \item 85\% --- Inversely proportional to $\theta^4$ as we expect for C(e,e$'$) \item 89\% --- Uniform over the entire OHIPS solid angle as we expect for C(e,e$'$p) \end{itemize} The C(e,e$'$) cross section measured in OHIPS was (82 $\pm$ 5)\% of the cross section calculated by ELASTB. After applying the correction functions calculated in the previous section for the \v{C}erenkov counter inefficiency and the spectrometer acceptances as a function of momentum, the H(e,e$'$) and H(e,e$'$p) measured cross sections were the same, indicating that OHIPS had no additional losses. TURTLE's results were consistent with both. The overall normalization factor is the product of the two terms: \begin{itemize} \item The Mainz H(e,e$'$p) cross section calculation divided by the measured H(e,e$'$p) cross section --- 1.06 for $\omega = 330$~MeV, and 1.24 for $\omega = 475$~MeV \item The OHIPS factor from TURTLE and C(e,e$'$), given by $({1\over 0.89})({0.85\over 0.82\pm 0.05}) = 1.16\pm 0.07$. The factor of $({1\over 0.89})$ comes from TURTLE for a uniformly illuminated solid angle. The factor $({0.85\over 0.82\pm 0.05})$ is a small correction to the TURTLE normalization from the measured C(e,e$'$) cross section. \end{itemize} The normalization factors at the center of the focal plane (0\% relative momentum) were 1.23 for $\omega = 330$~MeV, and 1.44 for $\omega = 475$~MeV. Normalization factors at other locations on the focal plane were the product of the focal plane center normalization and the relative acceptance of the other location determined as described in the previous section. The systematic uncertainty in the C(e,e$'$p) cross section is 8\% for the entire missing energy spectrum, primarily due to beam energy uncertainty coupled to the C(e,e$'$) and H(e,e$'$) cross sections and statistical uncertainty in the normalization measurements. In addition, there is a further systematic uncertainty of 4\% in the continuum region ($E_m > 50$ MeV) due to possible residual pion contamination. \subsection{Representation of the Differential Cross Section} \label{Multipole} We measured the coincidence cross section as a function of missing energy for each of the two kinematics, at $\omega = 330$~MeV and 475~MeV, varying only the proton final momentum $p_f$. For each measurement, we represented the $\omega$ dependence of the cross section within the $\omega$ acceptance of the electron spectrometer by expanding the cross section around the central value of $\omega$ using orthogonal polynomials: \begin{equation} {d^4\! \sigma \over d\Omega_e\, d\Omega_p\, d\omega\, dE_m} = \sum_{l=0}^{l_{max}} \alpha_l(E_m) P_l\left({\omega-\omega_0\over\Delta\omega/2}\right) \label{LegendreExpansion} \end{equation} where $P_l(x)$ are Legendre polynomials, $\omega_0$ is the central value, and $\Delta\omega$ is the width of the $\omega$ acceptance. 
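For a single $E_m$ bin, extracting the $\alpha_l$ amounts to a linear least-squares fit in the Legendre basis. The following minimal sketch uses invented cross-section values, an assumed acceptance width, and a simple binned, error-weighted fit, which is an idealization of the actual event-level procedure:
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre

# Illustrative binned cross sections within the omega acceptance
# for one E_m bin (invented numbers).
omega  = np.array([440., 455., 470., 485., 500., 510.])   # MeV
sigma  = np.array([3.1, 3.3, 3.4, 3.5, 3.4, 3.3])         # pb/MeV^2-sr^2
dsigma = np.array([0.2, 0.2, 0.2, 0.2, 0.2, 0.3])         # statistical errors

omega0, domega = 475.0, 80.0           # central value, assumed full width
x = (omega - omega0) / (domega / 2.0)  # map the acceptance onto [-1, 1]

lmax = 3
A = legendre.legvander(x, lmax)        # columns are P_0(x) ... P_lmax(x)
w = 1.0 / dsigma                       # weight by inverse errors
coef, *_ = np.linalg.lstsq(A * w[:, None], sigma * w, rcond=None)
print(coef)                            # alpha_0 ... alpha_3 for this E_m bin
\end{verbatim}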
The experimental coefficients $\alpha_l(E_m)$ are determined from the data using the method described in \cite{Morrison}. For a given $E_m$, the true differential cross section is expected to vary smoothly with $\omega$, so $\alpha_l(E_m)$ should approach zero rapidly as $l$ increases. This expansion of the $\omega$-dependence of the cross section is necessary since we lack sufficient experimental statistics to determine a full two-dimensional $(E_m,\omega)$ spectrum. All $\alpha_l$ have the same units: picobarns/MeV$^2$-sr$^2$. $\alpha_0(E_m)$ is an average of the cross section over the $\omega$ acceptance. The nature of the average depends on the cutoff $l_{max}$. $\alpha_1(E_m)$ multiplies $(\omega-\omega_0)/(\Delta\omega/2)$ in equation~\ref{LegendreExpansion}; it measures the change of the cross section over $\Delta\omega$. The ratio $\alpha_1/\alpha_0$, which measures the relative change of the cross section with $\omega$, may be more relevant in comparing the experiment with theory. Higher order terms ($\alpha_l$ with $l \ge 2$) multiply higher order polynomials of $\omega$, and indicate the curvature of the cross section. The calculation of the coefficients $\alpha_l(E_m)$ depends somewhat on the choice of cutoff $l_{max}$. Values of $\alpha_l$ significantly different from zero are available from the data for $l = 0$, 1, 2, and 3, although $\alpha_0$ and $\alpha_1$ yield the dominant features. We verified that $\alpha_l$ (for $l \leq l_{max}$) was roughly independent of $l_{max}$ for $l_{max}=2$, 3, or 4. $\alpha_0$ calculated using $l_{max}=0$ and using $l_{max}$ = 2, 3, and 4 differ by less than 15\%. For $l_{max}=0$, $\alpha_0$ is the average of the cross section over the $\omega$ acceptance. As $l_{max}$ increases, the variation of the cross section over the $\omega$ acceptance is described by the higher order terms so that $\alpha_0$ becomes the cross section at the center of the $\omega$ acceptance. The calculations we present use $l_{max} = 0$ and 3. The cross sections of the previous experiments at $q=400$, 585, 775, and 827 MeV/c were averaged over the $\omega$ acceptance, corresponding to $\alpha_0$ with $l_{max} = 0$. Therefore, comparisons with previous measurements use the results from $l_{max}=0$. \subsection{Radiative Corrections} We used the prescription of Borie and Drechsel\cite{Borie} to subtract the radiative tails of the p-shell and s-shell peaks from the missing energy spectra. Computing these tails requires knowledge of the coincidence cross section for all values of $\omega$ and $E_m$ less than the experimental values. Lacking this knowledge, we calculated both the peak and radiative tail cross sections using the Plane Wave Impulse Approximation (PWIA) and harmonic oscillator initial state wave functions. We scaled the tail calculation by the ratio of the measured peak cross section to the calculated peak cross section before subtracting the tail from the spectrum. We calculated the Schwinger correction\cite{Schwinger,Maximon}, with a hard photon cutoff of 11.5 MeV. We multiplied the p-shell peak by the Schwinger correction and subtracted the p-shell radiative tail from the s-shell and continuum regions of the spectrum. Then we multiplied the s-shell peak (limited to $E_m = 50$ MeV) by the Schwinger correction using the same cutoff and subtracted the s-shell tail from the continuum region. Finally, we applied the Schwinger correction to the continuum. We did not attempt to calculate continuum tails as we had no satisfactory model for them. 
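The order of operations of this subtraction can be summarized schematically as follows. The sketch treats the Borie--Drechsel tail shapes (already scaled by the measured-to-calculated peak ratio) and the Schwinger factor as externally supplied inputs, and suppresses their kinematic dependence; it only records the bookkeeping described above.
\begin{verbatim}
import numpy as np

def subtract_radiative_tails(Em, sigma, p_mask, s_mask, schwinger,
                             tail_p, tail_s):
    """Schematic bookkeeping of the radiative-tail subtraction.

    Em       : missing-energy bin centres (MeV)
    sigma    : measured cross section in each bin
    p_mask, s_mask : boolean masks for the p- and s-shell regions
    schwinger: Schwinger correction factor (kinematic dependence ignored)
    tail_p, tail_s : scaled PWIA radiative tails of the two peaks
    """
    out = np.asarray(sigma, dtype=float).copy()
    above_p   = Em > Em[p_mask].max()   # s-shell and continuum region
    continuum = Em > 50.0               # region beyond the s-shell

    out[p_mask] *= schwinger            # correct the p-shell peak
    out[above_p] -= tail_p[above_p]     # remove its tail at larger E_m

    out[s_mask] *= schwinger            # correct the s-shell peak (E_m < 50 MeV)
    out[continuum] -= tail_s[continuum] # remove the s-shell tail from the continuum

    out[continuum] *= schwinger         # finally correct the continuum itself
    return out
\end{verbatim}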
\section{Results and Discussion} \subsection{Features of the Spectra} Figures~\ref{spectra:high} and~\ref{spectra:low} show the Legendre expansion of the radiatively corrected cross-section as a function of missing energy ($\alpha_0$ through $\alpha_3$, calculated with $l_{max}= 3$ [see section IIC for a description of the expansion]). (Note the difference in scales among the plots.) We see three features in $\alpha_0$ for both kinematics: \begin{itemize} \item A peak centered at $E_m = 18$ MeV primarily due to single nucleon knockout from the p-shell \item A broader peak out to $E_m \approx 60$ MeV primarily due to knockout from the s-shell, but with possible contribution from the continuum. \item Continuum strength at larger missing energy attributed to two- and multi-nucleon knockout \end{itemize} Ulmer's $R_L/R_T$-separation at $q=400$ MeV/c\cite{Ulmer} indicates that s-shell knockout becomes small at 50 MeV, and that the continuum strength starts at 27 MeV. We note that the ratio of s-shell to p-shell cross section is much smaller at $\omega=330$ MeV than at $\omega=475$ MeV. The continuum strength ($E_m>50$ MeV) extends beyond $E_m = 300$ MeV for $\omega=475$ MeV, but goes to zero at approximately $E_m = 90$ MeV for $\omega=330$ MeV. We do not see any increase in cross section at pion threshold, $E_m \approx 155$ MeV. The $\omega=475$ MeV $\alpha_0$ cross section spectrum appears to have a peak around $E_m = 60$~MeV. The peak does not appear in the spectrum if we use a bin size of 6 MeV instead of the 3 MeV size used in figure~\ref{spectra:high}, and we do not judge it statistically significant. The $\alpha_1$ spectra have features that correspond to the features of the $\alpha_0$ spectra. In the $\omega=330$ spectrum, there is a narrow peak at 18 MeV and a broad peak beyond 25 MeV. These have corresponding peaks in the $\alpha_0$ spectrum, and indicate that the cross section increases strongly across the $\omega$ acceptance. The continuum cross section beyond 50 MeV also has a large $\alpha_1$ relative to $\alpha_0$ indicating that it also increases strongly with $\omega$. In the $\omega=475$ $\alpha_1$ spectrum, the p-shell peak is small and positive, indicating a small average increase in the cross section over the $\omega$ acceptance. The s-shell $\alpha_1$ is zero, indicating that the cross section is on the average constant over the $\omega$ acceptance. At 60 MeV of missing energy, $\alpha_1$ becomes positive, suggesting that the reaction mechanism has changed. This is consistent with the result of the L-T separation at $q=400$ MeV/c\cite{Ulmer} that s-shell single-nucleon knockout becomes small around 50 MeV. Beyond 110 MeV in missing energy, $\alpha_1$ is consistent with zero, indicating no $\omega$ dependence within the acceptance. Although $\alpha_0$ and $\alpha_1$ exhibit the most dominant and statistically significant features, $\alpha_2$ and $\alpha_3$ display some features. For $\omega=475$ MeV, $\alpha_2$ is consistent with zero, but $\alpha_3$ has a statistically significant negative value in the s-shell region and possibly in the p-shell region, indicating a measurable curvature in the cross section as a function of $\omega$. For $\omega=330$ MeV, $\alpha_2$ and $\alpha_3$ are consistent with zero except in the p-shell region, where they are both negative. We offer no interpretation of $\alpha_2$ and $\alpha_3$ in this paper. 
\subsection{Momentum Distributions} \label{momentum:dists} The $\alpha_0$ and $\alpha_1$ spectra for the p and s shells collectively exhibit qualitative features consistent with the momentum distributions expected of p- and s-shell orbitals, as displayed in figure~\ref{mom:dist}. The s-shell momentum distribution has its maximum around zero missing momentum, while the p-shell momentum distribution has its maxima around $\pm$100 MeV/c, and reaches a minimum at zero. In parallel kinematics, the energy transfer is related to the missing momentum by \begin{displaymath} \omega - {Q^2 \over 2M} \approx {{\bf p} \cdot {\bf q} \over M} = {p_m^\parallel q \over M} \end{displaymath} for quasielastic single-nucleon knockout. Choosing $\omega$ determines the central value of the parallel component of the missing momentum. Although the experiment was centered at parallel kinematics, its finite angular and momentum acceptances covered a large range of the missing momentum perpendicular to $\vec q$. The ranges of the parallel and perpendicular components of the missing momentum sampled by the experiment are shown in figure~\ref{mom:dist}. The central parallel missing momenta for the measurements are given in table~\ref{kinematics}. At $\omega=475$ MeV, the parallel component of the missing momentum covers approximately $-30$ MeV/c $< p_m^\parallel < 100$ MeV/c (see figure~\ref{mom:dist}). It is greater for the p-shell than for the s-shell, reflecting the difference in binding energy. The s-shell momentum distribution is near its maximum. Thus the s-shell cross section should be flat in $\omega$ (i.e., $\alpha_1$ should be small). The p-shell cross section should increase slightly with $\omega$. We see these features in the $\alpha_0$ and $\alpha_1$ spectra in figure~\ref{spectra:high}. At $\omega=330$ MeV, the central parallel missing momentum is much larger in magnitude than $-100$ MeV/c (i.e., more negative). The p shell should dominate and both the p- and s-shell cross sections should increase strongly with $\omega$. $\alpha_0$ and $\alpha_1$ in figure~\ref{spectra:low} reflect these traits. The p-shell cross section is much larger relative to the s-shell at $\omega=330$~MeV than at $\omega=475$~MeV. \subsection{Distorted Wave Impulse Approximation} \label{DWIA:section} We compared the observed single-particle knockout strength from each shell with factorized Distorted Wave Impulse Approximation (DWIA) cross section calculations. We integrated the observed cross section over missing energy from 10 MeV to 27 MeV for the p-shell, and 27 MeV to 50 MeV for the s-shell. The factorized DWIA cross section is given by \begin{equation} {d^4\!\sigma \over d\Omega_e\, d\Omega_p\, d\omega\, dE_m} = E_f\,p_f \sigma_{ep} |\phi^D({\bf p}_m,{\bf p}_f)|^2 f(E_m) \label{DWIA:crosssection} \end{equation} where $\sigma_{ep}$ is deForest's CC1 off-shell electron-proton cross section\cite{deForest}; $f(E_m)$ is the missing energy distribution for the shell, normalized to a unit area; and $|\phi^D({\bf p}_m,{\bf p}_f)|^2$ is the effective distorted momentum distribution of the shell. We used a delta function for $f(E_m)$ to describe the p-shell, and a quadratic function between 30 and 50 MeV to describe the s-shell. Giusti and Pacati\cite{GiustiPacati} have calculated the effects of Coulomb distortions of the electron wave function. They find effects of approximately one to two percent for $^{12}$C in parallel kinematics at an electron energy of 350 MeV. They also find that the effects decrease with initial energy. 
Since we performed this experiment at higher energies, we can disregard electron distortions. We calculated $|\phi^D({\bf p}_m,{\bf p}_f)|^2$ using the program PEEPSO, based on the non-relativistic (e,e$'$p) formalism of Boffi\cite{Boffi}. PEEPSO converts the relativistic Dirac optical potential into a Schr\"odinger-equivalent potential including spin-orbit terms, and then solves the Schr\"odinger Equation and calculates the unfactorized (e,e$'$p) cross section for each shell, with a given separation energy, at the center of the spectrometer solid angle acceptances. The effective distorted momentum distribution is this calculated cross section divided by $E_f\, p_f \sigma_{ep}$. We used Woods-Saxon proton wave functions as measured by van der Steenhoven {\em et al.} at NIKHEF \cite{Steenhoven} for the initial bound states. The optical potentials are fit to C(p,p) elastic scattering results for different proton energies. We used the optical potential of Hama {\em et al.}\cite{Hama} for the $\omega=475$ MeV measurement. For the $\omega=330$ MeV point, we calculated cross sections from the Hama potential and also from the parameterization of Meyer {\em et al.}\cite{Meyer}. The Meyer potential is only fit to C(p,p) elastic scattering data for 200 to 300 MeV protons; we extrapolated it using the parametrized expressions. We substituted the momentum distribution derived from PEEPSO into the factorized expression, equation~\ref{DWIA:crosssection}, to obtain the cross section over the entire experimental solid angle and energy ranges. From this we derived theoretical predictions for $\alpha_l(E_m)$ as described in section~\ref{Multipole}, averaged over the solid angle acceptances, using $l_{max}$ equal to 0 and 3 in equation~\ref{LegendreExpansion}. Tables~\ref{DWIA:low} and~\ref{DWIA:high} display the results of the calculations along with the data. The data differ from the calculations; the ratio is the `data-theory-ratio'.\footnote{Other experiments refer to the `data-theory-ratio' as a `spectroscopic factor' and use it to infer properties of the proton initial state wavefunction. The tremendous variation of the data-theory-ratio with $\omega$ in this experiment casts doubt on the theory and precludes our using the term `spectroscopic factor'.} The Hama and Meyer potentials give similar results for the $\omega=330$ MeV p-shell, but less similar results for the s-shell. We used the average of the two results for the calculated s-shell cross section, and assigned half the difference (10\%) as an uncertainty in all the DWIA calculations due to the choice of potential. All other differences between the Hama and Meyer potentials were less than 10\%. We also calculated the DWIA cross sections using a delta-function s-shell distribution in missing energy. The difference between the delta-function s-shell result and the quadratic s-shell result was 10\% for $\omega=330$ MeV and 3\% for $\omega=475$ MeV. This contributed to the overall uncertainty in the s-shell DWIA calculations. We tested the factorization approximation by calculating the distorted momentum distribution (see equation~\ref{DWIA:crosssection}) from the PEEPSO unfactorized cross section at fixed $(E_m, p_m)$ at the center and at the edges of the spectrometer angular acceptances. These differed by 5\% for the $\omega=475$ MeV p-shell and by 1\% for the s-shell and for both shells at $\omega=330$ MeV. 
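Once the DWIA prediction is in hand, forming the data-theory-ratio for a shell reduces to integrating the measured and calculated $\alpha_0(E_m)$ over the same missing-energy window. The sketch below assumes binned spectra and simple trapezoidal integration, which is an idealization of the actual analysis; the array names are hypothetical.
\begin{verbatim}
import numpy as np

def shell_ratio(Em, alpha0_data, alpha0_dwia, lo, hi):
    """Integrate data and DWIA alpha_0(E_m) over one shell's window
    and take the ratio (the 'data-theory-ratio')."""
    sel = (Em >= lo) & (Em < hi)
    return (np.trapz(alpha0_data[sel], Em[sel]) /
            np.trapz(alpha0_dwia[sel], Em[sel]))

# Missing-energy windows quoted in the text:
#   p shell: 10--27 MeV,  s shell: 27--50 MeV
# ratio_p = shell_ratio(Em, alpha0_meas, alpha0_calc, 10.0, 27.0)
# ratio_s = shell_ratio(Em, alpha0_meas, alpha0_calc, 27.0, 50.0)
\end{verbatim}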
The overall uncertainties were 15\% for the $\omega=330$~MeV s-shell calculation, and 11\% for the $\omega=330$~MeV p-shell and both $\omega=475$~MeV shells. We obtained the `data-theory-ratio' for each shell by dividing the measured cross section by the calculation. We used the average of the Hama and Meyer calculations for the $\omega=330$ MeV theory cross section. The `data-theory-ratios' calculated for $l_{max}=0$ and 3 are given in table~\ref{spect:factors}. We use $l_{max}=0$ to compare with results from prior papers. (See section IIC for a description of the Legendre expansion of the cross-section.) Note that these comparisons of data to DWIA calculations are limited to the range of missing energies and missing momenta ($\Delta p_m \approx 200$ MeV/c) sampled by the measurements. No (e,e$'$p) experiment has measured the entire three dimensional missing momentum distribution. At the quasielastic kinematics, $\omega=475$ MeV, the data-theory-ratios are 0.40 for both the p- and s-shells. Figure~\ref{spect:factor:plot} shows these data-theory-ratios, along with those from previous quasielastic and dip measurements. The data-theory-ratios appear to be constant or perhaps decrease slightly with momentum transfer. The s-shell region ($27 < E_m < 50$ MeV) also includes two-nucleon knockout; this greatly increases the uncertainties of the s-shell data-theory-ratios. For $\omega=330$ MeV, the data-theory-ratios are 0.85 for the p-shell and 1.0 for the s-shell, close to the naive expectation. The 3-vector momentum transfer of 970 MeV/c is approximately the same as for $\omega = 475$~MeV ($ q= 990$ MeV/c). The p-shell data-theory-ratio is approximately equal to the s-shell data-theory-ratio for both data sets even though the ratio of p-shell cross section to s-shell cross section increases by a factor of four between $\omega=475$ MeV and $\omega=330$ MeV. This lends credence to the model. Ryckebusch has calculated C($\gamma$,N) and C(e,e$'$p) differential cross sections from models that include two-nucleon knockout\cite{Ryckebusch,Ryckebusch2,Ryckebusch3}. His single-nucleon knockout calculations include meson exchange currents, Delta currents, and Mahaux's prescription for the missing energy spreading of the s-shell. For the data presented in this paper, Ryckebusch's s-shell knockout calculations match the above results; he obtains the same data-theory-ratios of 1 for $\omega=330$ MeV and 0.4 for $\omega=475$ MeV. This also lends credence to the models. This variation in data-theory-ratios from quasielastic kinematics to low-$\omega$ kinematics is qualitatively similar to that observed by van der Steenhoven {\em et al.}\cite{Steenhoven} who also measured a significantly larger ratio of data to DWIA at large negative missing momenta ($\omega \le Q^2/2M$) than at positive missing momenta ($\omega \ge Q^2/2M$). Bernheim\cite{Bernheim} obtained a similar result. The model of the (e,e$'$p) cross section may have to be modified at large negative missing momentum. This is suggested by the measurement of $\alpha_1$ at $\omega=330$ MeV in table~\ref{DWIA:low}. The ratio $\alpha_1/\alpha_0$ is 1.5 times theory for the p-shell, indicating that the cross section is much steeper in $\omega$ or missing momentum than theory predicts. The reverse is true for the s-shell. Penn {\em et al.}\cite{Penn} have measured the C(e,e$'$p) cross section for a similar momentum transfer, but a lower $\omega$ and larger p-shell central missing momentum: $\omega = 235$ MeV and $|{\bf p}_m| = 240$ MeV/c. 
In figure~\ref{mom:dist}, that point would lie farther to the left than the $\omega=330$ MeV measurement. Penn obtained a p-shell data-theory-ratio of $0.45 \pm 0.05$. This is similar to our $\omega = 475$ MeV measurement, but different from $\omega=330$ MeV. However, the ratio $\alpha_1/\alpha_0$ at $\omega=330$ MeV is 1.5 times the DWIA calculation in table~\ref{DWIA:low}. Thus the experimental cross section decreases more rapidly with decreasing $\omega$ than theory predicts, leading us to expect a lower data-theory-ratio at lower $\omega$ using the same model. We recognize limitations in the available DWIA models. In particular, variations due to different optical potentials are already included in our estimate of the uncertainty of the data-theory-ratios. In addition, the code PEEPSO does not include relativistic dynamics. However, the factor of two difference between the $\omega=330$ MeV and the $\omega=475$ MeV data-theory-ratios remains a challenge for nuclear theory. \subsection{Quasielastic C(e,e$'$) Cross Section} We have also measured the single-arm quasielastic $^{12}$C(e,e$'$) cross section for each energy transfer. We used a model by Warren and Weinstein\cite{Warren} to extrapolate the measured coincidence single-proton-knockout cross section of each shell to the entire $4\pi$ steradian nucleon solid angle. We compared the sum of the p- and s-shell extrapolations with the measured single arm cross section. For $\omega=330$ MeV, the extrapolated coincidence cross section was $0.93\pm0.04$ of the single arm cross section. For $\omega=475$ MeV, the extrapolated coincidence cross section was $0.50\pm0.05$ of the single arm cross section. These ratios are consistent with the C(e,e$'$p) data-theory-ratios. \subsection{Multinucleon Knockout and Other Processes} In figure~\ref{spectra:high}, we see extensive cross section beyond $E_m = 50$ MeV at quasielastic kinematics ($\omega=475$ MeV). This strength is approximately constant beyond about 100 MeV, and appears to extend out to the deepest missing energy measured. The strength is similar to that seen in previous quasielastic measurements \cite{Ulmer,Weinstein,Lourie}. Below the quasielastic peak, at $\omega=330$ MeV, the continuum strength is present, but far weaker relative to the single-nucleon cross section, and is consistent with zero beyond $E_m = 90$ MeV. We plot the ratio of the multi-nucleon-knockout cross section (integrated over $E_m > 50$ MeV) to the single-nucleon-knockout cross section (integrated over $E_m < 50$ MeV) for various continuum regions from previous experiments and the $\omega=475$~MeV measurement in figure~\ref{multi:nucleon:ratios}. We estimated the contribution of multi-step processes, such as (e,e$'$N) followed by (N,p), to the continuum cross section, by convolving the PWIA nucleon knockout reaction with two models of (N,p) scattering. The first model uses the intra-nuclear cascade code MECC-7\cite{MECC7} to simulate the propagation of nucleons through the nucleus as a series of independent collisions with other nucleons. The code enforces the Pauli exclusion principle in the collisions. The second model uses C(p,p$'$) data at 300 MeV and 20$^\circ$, and at 500 MeV and 16$^\circ$\cite{Segal}. We multiplied the results from the C(p,p$'$) data by 1.5 to approximately include neutrons, because the (e,e$'$N) cross section is approximately proportional to the square of the magnetic moment, and $(\mu_n/\mu_p)^2 \approx 0.5$. 
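The factor of 1.5 is presumably just $1+(\mu_n/\mu_p)^2$ evaluated with the free-nucleon magnetic moments; a one-line check:
\begin{verbatim}
mu_p, mu_n = 2.793, -1.913        # nucleon magnetic moments (nuclear magnetons)
print(1.0 + (mu_n / mu_p) ** 2)   # ~1.47, i.e. the factor of ~1.5 used above
\end{verbatim}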
The results are given in table~\ref{MECC7}, along with the measured cross sections from this experiment. These calculations can only account for less than 6\% of the data beyond $E_m = 27$ MeV. The MECC-7 calculation produces almost no cross section beyond $E_m = 100$ MeV. The C(p,p$'$) based calculation reaches its maximum at $E_m = 70$--80 MeV, but has a long tail reaching to the deepest missing energy. Half its cross section may lie beyond $E_m=100$ MeV. The cross section out to 90 MeV in missing energy in both the $\omega=330$ and $\omega=475$ MeV measurements has the approximate shape expected from Takaki's model of two-nucleon knockout\cite{Takaki}. However, its magnitude is larger by a factor of 16\cite{Takaki2}. Beyond 90 MeV, the shape at $\omega=475$ MeV is consistent with Takaki's three-nucleon knockout model. At $\omega=330$ MeV, there is no strength beyond 90 MeV; the continuum strength up to 90 MeV should be mostly due to two-nucleon knockout. Both rescattering calculations (MECC-7 and C(p,p$'$)) and Takaki's calculation used harmonic oscillator initial-state momentum distributions. It is unlikely that using bound states derived from realistic Woods-Saxon potentials will change this result at $\omega=475$ MeV where the initial momenta involved are small. Even at $\omega=330$ MeV, the initial momenta of 100 to 250 MeV/c are reasonably small. In addition, the strong decrease of the continuum cross section at large $E_m$ for $\omega=330$ MeV compared to $\omega=475$ MeV indicates that an initial momentum distribution plus rescattering cannot explain the continuum cross sections. However, initial-state correlations could contribute to the cross section at deep missing energy, because two nucleons share the transferred energy and we detect only one nucleon. The C(p,p$'$) rescattering calculation shows a larger tail than the MECC-7 calculation; this may reflect such correlations. If so, those correlations are not strong enough to explain our continuum cross section when they are part of the rescattering picture. However, neither the C(p,p$'$) nor the MECC-7 calculations included such correlations in the initial (e,e$'$N) reaction; the initial nucleon bound state was a simple harmonic oscillator. If the large yield we see at deep missing energy results from strong initial-state correlations, this is very interesting. But this is unlikely to explain the longitudinal response at $q=400$ MeV/c\cite{Ulmer} which is small beyond $E_m = 50$ MeV. The dynamical correlations should influence both the longitudinal and transverse responses. Later in section~\ref{Ryckebusch:Section} of this paper (figure~\ref{Ryckebusch:figure}), we discuss calculations by Ryckebusch using initial state Jastrow correlations. Ryckebusch was unable to generate more than one percent of our $\omega=475$~MeV continuum cross section from the correlations. Furthermore, one could use Ryckebusch's missing energy spectrum as an input to a rescattering calculation. Ryckebusch's calculated s-shell (which does not include correlations) fits our data after renormalization for data-theory-ratios; it should therefore generate a rescattering cross section comparable to our estimates. Ryckebusch's continuum cross section (which includes correlations) is $10^{-3}$ of his s-shell cross section and $10^{-2}$ of our measured continuum cross section. Thus, his continuum cross section cannot generate through rescattering a cross section comparable to our data. We see no increase in strength at pion threshold, $E_m \approx 155$ MeV. 
Baghaei's PWIA $\Delta$-resonance pion-production calculation\cite{Baghaei2,Baghaei} predicts more strength than we see beyond pion threshold. A calculation that we performed based on Nozawa and Lee's pion-production model\cite{Nozawa} involving both non-resonant and resonant production underpredicts the cross section in that region by about half. The calculation also predicts the pion-production cross section to increase with $\omega$, resulting in a positive $\alpha_1$. Basic considerations of pion production occurring at the tail of the $\Delta$ resonance also lead to the same conclusion. The measured $\alpha_1$ and the ratio $\alpha_1/\alpha_0$ are consistent with zero and inconsistent with the pion-production prediction. The results of the pion-production calculations are presented in table~\ref{pion}. We estimate an upper bound on the amount of two-nucleon knockout due to $N-\Delta$ interactions. Pion scattering experiments indicate that the two-nucleon knockout cross section from the reaction $N\Delta\rightarrow 2N$ is comparable to the pion-nucleon production cross section due to $\Delta\rightarrow N\pi$\cite{NDelta}. The latter has to be less than the total integrated cross section above $E_m=155$ MeV. In the $\omega=475$ MeV measurement, if we assume that the cross section for $N\Delta \rightarrow NN$ is less than or equal to the integral of the experimental cross section for $E_m > 155$ MeV and we distribute this strength in missing energy according to Takaki's shape for two nucleon knockout in the region $50 < E_m < 150$ MeV, then $\Delta N \rightarrow NN$ can account for at most one-sixth of the cross section for $50< E_m < 100$ MeV and none of the cross section above 100 MeV. At $\omega=330$ MeV, this can account for none of the cross section. However, one must be cautious; at quasielastic kinematics, many of the $\Delta$s may not have enough mass to decay into a real pion and a real nucleon. The two-nucleon cross section due to $N\Delta$ interactions could be greater than the above estimate. \subsection{Recent Multinucleon Calculations} \label{Ryckebusch:Section} Ryckebusch has calculated C($\gamma$,N) and C(e,e$'$p) differential cross sections from models that include two-nucleon knockout\cite{Ryckebusch,Ryckebusch2,Ryckebusch3}. His single-nucleon knockout calculations include meson exchange currents, Delta currents, and Mahaux's prescription for the missing energy spreading of the s-shell. His two-nucleon knockout cross sections include Jastrow correlations in addition. These calculations fit the shape of the single nucleon knockout part of our data. Using Mahaux's s-shell spreading, these calculations also fit our data out to $E_m \approx 60$ MeV. This is consistent with the experiment reported by Makins \cite{Makins} at $Q^2 = 1$ (GeV/c)$^2$. Their calculations appear to match their data using only single-nucleon-knockout and radiative corrections, but their cross section data extends only out to $E_m = 100$ MeV. (Note that in this paper we use $E_m = 50$ MeV as the starting point for multinucleon knockout since $R_L$ is small beyond that point.) Ryckebusch's calculations of real photon absorption understate the measured C($\gamma$,N) cross sections at forward angles and at high missing energies by about half\cite{Ryckebusch,Ryckebusch2}. His preliminary C(e,e$'$p) calculations\cite{Ryckebusch3} also account for at most half the cross section beyond $E_m = 70$ MeV measured in parallel kinematics at Bates for $q=585$ MeV/c, $\omega=210$ MeV. 
However, his calculations reproduce data taken in non-parallel kinematics at NIKHEF\cite{Kester} far from quasielastic kinematics --- $q=270$ MeV/c, $\omega=212$ MeV, and $\theta_{pq}=42^\circ$. For the data presented in this paper, Ryckebusch's calculated multinucleon knockout cross section is less than one percent of the measured continuum cross section at $\omega=475$ MeV (see figure~\ref{Ryckebusch:figure}). For $\omega=330$ MeV, well below quasielastic kinematics, his calculations are consistent with the measurement beyond $E_m = 100$ MeV, although the measurement is also consistent with zero. Ryckebusch predicts more multinucleon knockout at $\omega=330$ MeV than at $\omega=475$ MeV; we see the opposite effect. Recently, Benhar\cite{Benhar} calculated the continuum cross sections at $E_m > 220$ MeV using a correlated nuclear matter spectral function in PWIA. The magnitude of his calculated cross sections is consistent with the data at $\omega = 475$ MeV and slightly overpredicts the data at $\omega= 330$ MeV. However, his calculated cross section decreases much more rapidly with missing energy than the data do. A calculation using the $^{12}$C spectral function would be very valuable to help us understand the large differences between the $\omega =330$ and $475$ MeV measurements in both the valence knockout and continuum regions. \section{CONCLUSIONS} The different data-theory-ratios at $\omega = 330$~MeV and at $\omega = 475$~MeV are consistent with the different cross sections seen beyond $E_m = 50$ MeV. At $\omega=330$~MeV, we see nearly four p-shell and two s-shell protons, but little continuum cross section. At $\omega=475$~MeV, we see half as many protons, but much more continuum cross section, extending out to the deepest missing energy measured (Figures~\ref{spectra:high} and~\ref{multi:nucleon:ratios}). We associate the cross section at $E_m > 50$ MeV with multinucleon knockout. We infer that some mechanism that increases with $\omega$ transforms some of the single-nucleon knockout into multinucleon knockout. The measurement at $\omega=475$ MeV strongly confirms prior results that the (e,e$'$) reaction at quasielastic kinematics involves strong many-body physics and reactions in addition to quasielastic knockout. These other reactions do not stem from either nucleon rescattering or $\Delta$ interactions. The $\omega=330$ MeV measurement indicates that well below quasielastic kinematics, but above collective phenomena such as giant resonances, the (e,e$'$) reaction is primarily single-nucleon quasielastic knockout. The data-theory-ratios, within large uncertainties, are close to the expected values from the simple shell model. However, there is still some residual many-body physics at that low energy transfer. These data, especially the strength at high missing energies, strongly support the growing realization that the inclusive (e,e$'$) quasielastic peak contains much more many-body physics than was originally thought. This additional complexity persists at large momentum transfer and is not understood. The low $\omega$ side of the quasielastic peak appears to be dominated by the simple single-nucleon knockout process, but some complexity still appears. This work was supported in part by the Department of Energy under contract \mbox{\#DE-AC02-76ERO3069}, and the National Science Foundation.
\section{Introduction} The tendency of stars to form in approximately coeval groups, rather than in isolation, is by now a well established observational fact (Lada et al.~\cite{Lea93}; Zinnecker et al.~\cite{Zea93}; Gomez et al.~\cite{Gea93}; Hillenbrand~\cite{H95},~\cite{H97}; Testi et al.~\cite{Tea97}). A result which sets stringent constraints on the theory is the fact that the ``richness'' of the groups depends on the mass of the most massive star in the cluster: low-mass stars form in loose groups of a few objects per cubic parsec (Gomez et al.~\cite{Gea93}), which in the following we will call ``aggregates'', while high-mass stars are usually found to form in dense clusters with densities up to 10$^4$ objects per cubic parsec in the case of the Trapezium cluster (see e.g. McCaughrean \& Stauffer~\cite{McCS94}; Hillenbrand~\cite{H97}). The transition between these two modes of formation occurs in the mass interval $2{_{<}\atop^{\sim}}\rm M/M_\odot {_{<}\atop^{\sim}} 15$. Herbig Ae/Be stars (Herbig~\cite{H60}) are pre--main-sequence (PMS) stars of intermediate mass located outside complex star forming regions. These stars are sufficiently old (age $\sim0.5-5$~Myr) to be optically visible, but still young enough that any population of lower mass stars born in the same environment has not had time to move away from their birthplace. Thus, the fields around Herbig Ae/Be stars represent ideal targets to study the transition from aggregates to dense clusters and to empirically probe the onset of cluster formation. Multiwavelength studies of the environments of several Herbig Ae/Be stars in the optical, near infrared and millimeter (LkH$\alpha$~101: Barsony et al.~\cite{Bea91}, Aspin \& Barsony~\cite{AB94}; BD$+$40$^\circ$4124: Hillenbrand et al.~\cite{Hea95}, Palla et al.~\cite{Pea95}; MacH12: Plazy \& M\'enard~\cite{PM97}) have shown that the young clusters are still partially embedded in the parent molecular clouds and that near infrared observations, especially at K-band ($2.2\,\mu$m), are best suited to detect the less massive companions to the Herbig Ae/Be star itself. Observations at near infrared wavelengths of a substantial number (16) of fields around Herbig Ae/Be stars have been presented by Li et al.~(\cite{Lea94}). Their study is focused on the detection of bright companions and/or diffuse emission which may affect large beam photometry of the Herbig stars, and, for this reason, it is limited to a small area around each star, much smaller than the extent of the known clusters (e.g. LkH$\alpha$~101, BD$+$40$^\circ$4124). The first statistical study aimed at investigating the properties of star formation around Herbig Ae/Be stars using K-band images has been carried out by Hillenbrand~(\cite{H95}). In spite of the relatively small number of observed fields (17), Hillenbrand has found evidence for a correlation between the mass of the Herbig Ae/Be star and the surface density of K-band stars detected around it. These initial results were confirmed by Testi et al.~(\cite{Tea97}; hereafter Paper~I), who obtained J, H and K images of moderately large fields around 19 Herbig Ae/Be stars. Using various methods to define the richness of the star clusters, in addition to simple K-band star counts, in Paper~I we have shown a clear dependence of the richness of the embedded clusters on the spectral type of the Herbig Ae/Be star. Moreover, our data seem to indicate that the clustered mode of star formation appears at a detectable level only for stars of spectral type B5--B7 or earlier. 
Whether the transition from isolated stars to clusters has a threshold or is a smooth function of stellar type (or, better, mass) is still unclear because of the small statistical significance of the samples studied so far. In order to complete our systematic study of Herbig stars, we have observed in the near infrared a second sample of 26 fields selected from the compilations of Finkenzeller \& Mundt~(\cite{FM84}) and Th\'e et al.~(\cite{Tea94}). Among the observed stars, we included Z~CMa whose spectral type is F5 and whose membership to the Herbig group has been disputed. However, we will not consider it in the analysis of the clustering properties. Together with the 19 stars analysed in Paper~I, our final sample consists of 44 objects covering almost uniformly the whole spectral range from O9 to A7 stars. We have included in our sample 33 out of the 39 stars (85\%) with declination greater than $-11.5^\circ$ listed in Finkenzeller \& Mundt~(\cite{FM84}) and replaced the remaining six objects with 11 stars of similar spectral type taken from the updated catalog of Th\'e et al.~(\cite{Tea94}, Table I of members and candidate members). Thus, we are confident that the final sample gives a fairly complete representation of the whole class of Herbig Ae/Be stars and that the inferences discussed in this paper should have a solid physical and statistical meaning. In a companion paper (Testi et al.~\cite{Tea98}; hereafter Paper~II), we have collected the observational material (images, colour-colour diagrams and stellar density profiles) of each of the combined sample of 44 fields. In this paper, we present the results of the analysis of this large body of observations aimed at the detection, characterization and comparison of the small star clusters around intermediate-mass PMS stars. 
\section{Results} \label{sres} \subsection{Richness Indicators} \begin{table*} \caption[]{\label{trind}Values of the {\it richness indicators}.} \vskip 0.3cm \begin{tabular}{lcccccccccc} \hline Star&Type&Age (Myr)&${\cal N}_{\rm K}$&I$_{\rm C}$&${\cal N}^0_{\rm K}$&${\cal N}^2_{\rm K}$&I$^0_{\rm C}$&I$^2_{\rm C}$&Log($\rho_{{\cal N}_{\rm K}}$)&Log($\rho_{\rm I_{\rm C}}$)\\ \hline V645~Cyg &O7&--&$>$5 &$ 29.5\pm2$ & $>$5& $>$5&$>29.5\pm 2$&$>29.5\pm 2$&2.1& 2.9\\ MWC~297 &O9&--&37 &$ 20.4\pm1$ & 24& 23&$ 14.3\pm 1$&$ 13.3\pm 1$&3.0& 2.8\\ MWC~137 &B0&--&$>$59&$ 76.0\pm9$ & 57& 55&$ 70.1\pm 5$&$ 64.4\pm 4$&3.2& 3.4\\ R~Mon &B0&--&0 &$-12.8\pm3$ & 0& 0&$ -5.1\pm 3$&$ -5.1\pm 3$&0.0& 0.0\\ BHJ~71 &B0&--&4 &$ 4.0\pm3$ & 1& 1&$ -0.4\pm 1$&$ -0.3\pm 1$&2.0& 2.1\\ MWC~1080 &B0&--&$>$9 &$ 31.0\pm3$ & $>$9& $>$9&$>31.0\pm 3$&$>31.0\pm 3$&2.4& 3.0\\ AS~310 &B0&--&$>$37&$ 70.0\pm17$&$>$34&$>$34&$>70.2\pm17$&$>70.2\pm17$&3.0& 3.3\\ RNO~6 &B1&--&$>$11&$ 11.0\pm1$ &$>$11&$>$11&$>11.0\pm 1$&$>11.0\pm 1$&2.5& 2.5\\ HD~52721 &B2&--&10 &$ 20.5\pm4$ & 5& 5&$ 11.1\pm 8$&$ 11.1\pm 8$&2.4& 2.8\\ BD+$65^\circ$1637&B2&--&29 &$ 75.0\pm5$ & 24& 28&$ 58.4\pm 4$&$ 63.4\pm 4$&2.9& 3.3\\ HD~216629 &B2&--&29 &$ 34.0\pm6$ & 23& 22&$ 27.0\pm 6$&$ 25.3\pm 7$&2.9& 3.0\\ BD+$40^\circ$4124&B2&--&19 &$ 11.0\pm3$ & 16& 16&$ 12.6\pm 2$&$ 12.9\pm 2$&2.7& 2.5\\ HD~37490 &B3&--&9 &$ 9.9\pm3$ & 6& 6&$ 3.4\pm 1$&$ 3.8\pm 1$&2.4& 2.5\\ HD~200775 &B3&--&8 &$ 1.9\pm1$ & 7& 7&$ 1.0\pm 1$&$ 1.0\pm 1$&2.3& 1.8\\ MWC~300 &Be&--&$>$2 &$ 21.0\pm8$ & $>$2& $>$2&$>21.9\pm 8$&$>21.9\pm 8$&1.7& 2.8\\ RNO~1B &Be&--&12 &$ 9.7\pm1$ & 12& 12&$ 8.9\pm 1$&$ 9.0\pm 8$&2.5& 2.5\\ HD~259431 &B5&0.05&2 &$ 0.9\pm2$ & 0& 0&$ -1.7\pm 3$&$ -0.2\pm 2$&1.7& 1.4\\ XY~Per &B6&2.0&3 &$ 11.3\pm3$ & 2& 5&$ -0.5\pm 1$&$ -0.1\pm 1$&1.9& 2.5\\ LkH$\alpha$~25 &B7&0.2&11 &$ 14.5\pm5$ & 8& 5&$ 10.3\pm 3$&$ 9.0\pm 3$&2.5& 2.6\\ HD~250550 &B7&0.5&4 &$ 2.2\pm2$ & 2& 2&$ 0.8\pm 1$&$ 0.8\pm 1$&2.0& 1.8\\ LkH$\alpha$~215 &B7&0.1&7 &$ 3.9\pm1$ & 6& 4&$ 5.4\pm 1$&$ 3.6\pm 1$&2.3& 2.1\\ LkH$\alpha$~257 &B8&2.0&15 &$ 5.5\pm6$ & 14& 16&$ 8.3\pm 3$&$ 9.5\pm 4$&2.6& 2.2\\ BD+$61^\circ$154 &B8&0.1&8 &$ -1.4\pm3$ & 4& 4&$ 2.0\pm 1$&$ 2.4\pm 1$&2.3& 0.0\\ VY~Mon &B8&0.05&25 &$ 23.2\pm5$ & 18& 16&$ 14.9\pm 2$&$ 12.9\pm 2$&2.8& 2.8\\ VV~Ser &B9&1.0&24 &$ 16.9\pm5$ & 20& 23&$ 3.1\pm 2$&$ 3.0\pm 2$&2.8& 2.7\\ V380~Ori &B9&1.0&3 &$ -2.0\pm2$ & 3& 3&$ -1.5\pm 3$&$ -1.5\pm 3$&1.9& 0.0\\ V1012~Ori &B9&--&4 &$ 1.9\pm2$ & 4& 4&$ 1.0\pm 1$&$ 1.0\pm 1$&2.0& 1.8\\ LkH$\alpha$~218 &B9&1.0&8 &$ 2.0\pm5$ & 5& 7&$ 1.7\pm 1$&$ 2.0\pm 1$&2.3& 1.8\\ AB~Aur &A0&1.5&$>$3 &$ 3.0\pm6$ & $>$1& $>$3&$ -1.0\pm 2$&$ -2.3\pm 2$&1.9& 2.0\\ VX~Cas &A0&1.0&13 &$ 4.5\pm4$ & 34& 35&$ 3.9\pm 4$&$ 4.4\pm 4$&2.5& 2.1\\ HD~245185 &A2&7.0&10 &$ 4.5\pm5$ & 19& 19&$ 5.0\pm 1$&$ 5.0\pm 1$&2.4& 2.1\\ MWC~480 &A2&7.0&$>$3 &$ 5.0\pm6$ & $>$9& $>$9&$ -2.4\pm 2$&$ -2.4\pm 2$&1.9& 2.2\\ UX~Ori &A2&1.0&0 &$ -0.3\pm1$ & 0& 0&$ -0.1\pm 1$&$ -0.1\pm 1$&0.0& 0.0\\ T~Ori &A3&1.0&5 &$ 1.0\pm2$ & 4& 7&$ 0.4\pm 1$&$ 1.9\pm 2$&2.1& 1.5\\ IP~Per &A3&7.0&3 &$ 5.3\pm4$ & 6& 6&$ -0.7\pm 3$&$ -0.7\pm 3$&1.9& 2.2\\ LkH$\alpha$~208 &A3&1.0&4 &$ 2.2\pm5$ & 3& 2&$ 1.2\pm 1$&$ 2.0\pm 1$&2.0& 1.8\\ MWC~758 &A3&6.0&$>$2 &$ 3.4\pm1$ & $>$3& $>$3&$ -0.3\pm 1$&$ -0.3\pm 1$&1.7& 2.0\\ RR~Tau &A4&0.1&7 &$ 0.8\pm6$ & 2& 2&$ -0.4\pm 4$&$ -3.2\pm 3$&2.3& 1.4\\ HK~Ori &A4&7.5&7 &$ 2.2\pm1$ & 11& 11&$ 2.2\pm 1$&$ 2.2\pm 1$&2.3& 1.8\\ Mac~H12 &A5&--&15 &$ 5.1\pm1$ & 21& 21&$ 5.9\pm 1$&$ 6.6\pm 1$&2.6& 2.2\\ LkH$\alpha$~198 &A5&10.0&6 &$-10.6\pm11$& 14& 14&$ 
-9.4\pm10$&$ -8.4\pm 9$&2.2& 0.0\\ Elias~1 &A6&--&$>$2 &$ 2.0\pm3$ & $>$2& $>$2&$ 1.0\pm 1$&$ 1.0\pm 1$&1.7& 1.8\\ BF~Ori &A7&3.0&4 &$ 1.1\pm1$ & 3& 3&$ 1.1\pm 1$&$ 1.1\pm 1$&2.0& 1.5\\ LkH$\alpha$~233 &A7&4.0&2 &$ 1.0\pm1$ & 4& 5&$ 0.6\pm 1$&$ 1.5\pm 1$&1.7& 1.5\\ \hline \end{tabular} \end{table*} In our previous studies we have identified the most suitable indicators of the richness of embedded stellar clusters that greatly reduce the problem of background/foreground contamination. The two quantities of interest are the number of stars in the K-band image within 0.2 pc of the Herbig star with an absolute K magnitude M$_{\rm K}\leq 5.2$ (\Nk), and the integral over distance of the source surface density profile after subtraction of the average source density measured at the edge of each field (I$_{\rm C}$). The choice of a radius of 0.2 pc for computing \Nk\ is suggested by the typical value of the cluster size determined in Paper~II, i.e. the distance from the Herbig star at which the source surface density profile reaches the background level. As illustrated in Fig.~\ref{fsize}, the distribution of the radius of the stellar density enhancement shows a clear peak around $r\sim$0.2~pc. This result is in good agreement with that found in various young stellar clusters (Hillenbrand~\cite{H95}; Carpenter et al.~\cite{Car97}). It is remarkable that stellar groups with a few to several hundred members share similar sizes, corresponding to the typical dimensions of dense cores in molecular clouds. \begin{figure} \centerline{\psfig{figure=7985f1.ps,width=8.8cm}} \caption[]{Distribution of the cluster radii of the 21 stellar groups detected from the source surface density profiles obtained in Paper~II.} \label{fsize} \end{figure} The values of \Nk\ and I$_{\rm C}$\ computed for the 44 stars of our complete sample are given in Columns~4 and 5 of Table~\ref{trind}, together with the name and spectral type of the Herbig Ae/Be star at the center of each field. The uncertainty in I$_{\rm C}$\ has been calculated by propagating the error in the determination of the background stellar density at large distances from the Herbig star. As in Paper~I, we discuss the dependence of the richness indicators on the spectral type of the Herbig star rather than on its mass, even though the latter is the relevant physical parameter. The main reasons for this choice are the proximity of the Herbig star to the main sequence (hence a close relation between spectral type and mass is ``almost'' well defined) and the much larger uncertainties in the mass estimates based on the location of the stars in the HR diagram and the use of evolutionary tracks. On the other hand, the spectral types of young stars are usually determined with an uncertainty of one or two subclasses, with the exception of a few cases (see the discussion in Paper~II). In particular, RNO~1B and MWC~300 do not have a reliable classification and in the literature they are generically referred to as Be stars. For plotting purposes, we have arbitrarily assigned a B5 classification to both of them. Both stars have values of I$_{\rm C}$\ and \Nk\ typical of B stars, and the assigned spectral type of B5 does not affect our results. The variation of \Nk\ and I$_{\rm C}$ as a function of the spectral type of the Herbig star is shown in Figures~\ref{fncksp} and \ref{ficsp}, respectively. Both figures confirm the initial results found in Paper~I and by Hillenbrand~(\cite{H95}) that higher mass stars tend to be surrounded by richer clusters. 
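To make the use of these two {\it richness indicators} more concrete, the fragment below sketches, in Python, how \Nk\ and I$_{\rm C}$ could be evaluated for a single field from a list of detected sources. The function name, the use of ten equal annuli and the outer radial range adopted for the background estimate are illustrative assumptions only, and do not reproduce the exact procedure of Paper~II.
\begin{verbatim}
import numpy as np

def richness_indicators(MK, r, r_cl=0.2, mk_lim=5.2, r_bg=(0.8, 1.0)):
    """Sketch of the two richness indicators for one field.
    MK   : absolute K magnitudes of the detected sources
    r    : projected distances from the Herbig star (pc)
    r_cl : counting radius for N_K (the typical cluster size, 0.2 pc)
    r_bg : radial range (pc) used to estimate the background density
           (a placeholder for 'the edge of the field')."""
    MK, r = np.asarray(MK), np.asarray(r)
    # N_K: sources within r_cl brighter than the magnitude limit
    N_K = int(np.sum((r <= r_cl) & (MK <= mk_lim)))
    # background surface density from the outer annulus
    area_bg = np.pi * (r_bg[1] ** 2 - r_bg[0] ** 2)
    sigma_bg = np.sum((r >= r_bg[0]) & (r < r_bg[1])) / area_bg
    # I_C: background-subtracted counts integrated over ten annuli
    edges = np.linspace(0.0, r_bg[1], 11)
    I_C = 0.0
    for r_in, r_out in zip(edges[:-1], edges[1:]):
        n_ann = np.sum((r >= r_in) & (r < r_out))
        I_C += n_ann - sigma_bg * np.pi * (r_out ** 2 - r_in ** 2)
    return N_K, float(I_C)
\end{verbatim}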
As expected, the evidence for clusters is more pronounced using I$_{\rm C}$ instead of ${\cal N}_{\rm K}$, but the fact that the same trend is seen in both indicators lends weight to its significance. \begin{figure}[t] \centerline{\psfig{figure=7985f2.ps,width=8.8cm}} \caption[]{${\cal N}_{\rm K}$ as a function of the spectral type of the Herbig Ae/Be star. The two stars with an uncertain spectral type Be have been plotted as B5.} \label{fncksp} \end{figure} \begin{figure}[t] \centerline{\psfig{figure=7985f3.ps,width=8.8cm}} \caption[]{I$_{\rm C}$ versus spectral type for the whole sample. As in Fig.~\ref{fncksp}, the two stars with an uncertain spectral type Be have been plotted as B5.} \label{ficsp} \end{figure} In addition to the evidence for a variation of I$_{\rm C}$ with spectral type, the results of Fig.~\ref{ficsp} suggest the existence of three regimes for the distribution of stars around Herbig stars. In the first one, characterized by I$_{\rm C}{_{>}\atop^{\sim}}$40, the Herbig stars are definitely associated with rich clusters; in the intermediate regime with $10{_{<}\atop^{\sim}}$ I$_{\rm C}{_{<}\atop^{\sim}} 40$ a small cluster may be present; in the third case where I$_{\rm C}{_{<}\atop^{\sim}}$10 only small aggregates or background stars in the field are found, a situation similar to that observed in low-mass star forming regions. Three stars of our sample (MWC~137, AS~310 and BD$+$65$^\circ$1637) belong to the first group. The sizes of the clusters derived for these stars in Paper~II are on the large side of the distribution ($\sim 0.4$~pc); however, at least in two cases (AS~310 and BD$+$65$^\circ$1637) the Herbig star is not at the center of the cluster, and this results in an overestimate of the cluster radius as determined from the radial density profile. Among stars of early spectral types, we find a large spread of values of I$_{\rm C}$. In some cases, the low values of I$_{\rm C}$\ are dubious. For example, V645 Cyg and MWC~1080 have relatively low values of I$_{\rm C}$\ typical of the intermediate regime. However, both stars are embedded in bright nebulosities which may affect the star counts, resulting in a severe underestimate of the actual stellar density. In other cases, however, the low values of I$_{\rm C}$\ may be real. The presence of groups or small clusters is secure around most of the stars belonging to the intermediate regime. A clear central density peak is observed in the density profiles of almost all stars of this group, as shown in Paper~II. Typically, the Herbig star is located at the center of the stellar group, which generally has a round shape. A possible exception is the VY~Mon field, where the stellar aggregate has an elongated structure with an aspect ratio of approximately 4:1 (the Herbig star is at the center). HD~52721 is not associated with molecular gas (Fuente et al.~\cite{Fea98}); it is a rather old star, and the large cluster radius is probably the result of dynamical relaxation. Finally, the large majority ($\sim 65$\%) of the Herbig stars of the sample have very low values of I$_{\rm C}$, showing no enhancements above the background stellar density. All the fields around stars with spectral type later than B9 belong to this group. However, although most of these stars have spectral types later than B7-B8, there are extreme cases of early-type stars that deserve some discussion. 
First, the negative value of I$_{\rm C}$\ found for R~Mon (B0) is probably due to localized extinction around the star, which likely lies on the near side of a molecular clump. Also, the stars BHJ~71 (B0), HD~200775 (B3) and HD~259431 (B5) have anomalously low values of I$_{\rm C}$. The first one is a poorly studied star, and we have not been able to find any information on the molecular gas in the literature (see Paper~II). HD~200775 is in many respects very similar to BD$+$65$^\circ$1637: same spectral type, same molecular gas morphology and similar evolutionary status (Fuente et al.~\cite{Fea98}). The absence of a cluster of young stars around HD~200775, as opposed to the rich cluster around BD$+$65$^\circ$1637, may be the result of dynamical dissipation (see below) or reflect a difference in the environment. HD~259431 appears to be isolated in spite of the large amount of molecular material around it (Hillenbrand~\cite{H95}). Among late-type stars, LkH$\alpha$~198 stands out for its negative value of I$_{\rm C}$\ due to the bright nebulosity and localized extinction. Interesting cases showing an indication of an extended population of embedded sources that is not spatially concentrated around the Herbig star are VX~Cas (A0), HD~245185 (A2) and Mac~H12 (A5). \subsection{Mass sensitivity correction} \label{sms} \begin{figure} \centerline{\psfig{figure=7985f4.ps,width=8.8cm}} \caption[]{${\cal N}^0_{\rm K}$ and ${\cal N}^2_{\rm K}$ as a function of the spectral type of the Herbig Ae/Be star. Open circles denote fields for which we could not compute the age using the Herbig~Ae/Be star parameters (see Paper~II and sect.~\ref{sms} of the main text). As in Fig.~\ref{fncksp}, the two stars with an uncertain spectral type Be have been plotted as B5.} \label{fnckspm} \end{figure} \begin{figure} \centerline{\psfig{figure=7985f5.ps,width=8.8cm}} \caption[]{I$^0_{\rm C}$ and I$^2_{\rm C}$ versus spectral type for the whole sample. Open circles as in Fig.~\ref{fnckspm}. As in Fig.~\ref{fncksp}, the two stars with an uncertain spectral type Be have been plotted as B5.} \label{ficspm} \end{figure} Since young stars change their bolometric luminosity and effective temperature during PMS evolution, the mass of the smallest star detectable ($M_l$) in our K-band images is a function of age ($t_c$), of the absolute completeness K-magnitude (M$^c_K$) and of the extinction along the line of sight (A$_K$). In Paper~II, using the PMS evolutionary tracks for intermediate mass stars of Palla \& Stahler~(\cite{PS93}) and the compilation of stellar parameters from the literature, we have computed in a homogeneous way the ages of most of the stars later than B5. Then, using the PMS evolutionary tracks for low mass stars of D'Antona \& Mazzitelli~(\cite{DM94}), we have translated M$^c_K$ into $M_l$ for two values of the extinction (A$_K=0$ and 2), assuming that all the stars in each group are coeval with the Herbig Ae/Be star. There are some stars (mostly early Be types) for which it is not possible to derive ages from PMS tracks. For these stars, we have estimated the age in the following way. From the results of Paper~II we see that the Herbig Ae systems tend to be 1 to 10~Myr old (the mean value is $\sim 4.6$~Myr), while Be systems tend to be younger than 1~Myr (with a mean age of $\sim 0.7$~Myr). Thus we decided to adopt an age $t_c=0.5$~Myr for the 17 Be systems and $t_c=5$~Myr for the 2 Ae systems without an age determination in Paper~II. 
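The translation of a completeness magnitude into a minimum detectable mass can be illustrated with a simple interpolation on an isochrone, as in the sketch below; the tabulated mass-magnitude pairs are placeholders chosen only for the example and are not taken from the D'Antona \& Mazzitelli~(\cite{DM94}) tracks actually used in Paper~II.
\begin{verbatim}
import numpy as np

# Placeholder isochrone at a fixed age: mass (Msun) versus absolute K
# magnitude.  These numbers are illustrative only; the analysis in the
# text uses the D'Antona & Mazzitelli PMS tracks.
iso_mass = np.array([0.08, 0.1, 0.2, 0.5, 1.0, 2.0])
iso_MK   = np.array([8.0, 7.2, 6.0, 4.6, 3.4, 2.2])

def minimum_mass(K_complete, dist_pc, A_K=0.0):
    """Smallest detectable mass for a field with completeness magnitude
    K_complete, at distance dist_pc, behind an extinction A_K."""
    MK_complete = K_complete - 5.0 * np.log10(dist_pc / 10.0) - A_K
    # fainter (numerically larger) limiting magnitude -> smaller mass
    return float(np.interp(MK_complete, iso_MK[::-1], iso_mass[::-1]))

print(minimum_mass(K_complete=16.5, dist_pc=800.0, A_K=2.0))
\end{verbatim}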
If $A_K\sim$0 mag, all the fields with a good age estimate from Paper~II are complete to less than 0.1 M$_\odot$, with the only exceptions of the two fields around HK~Ori and LkH$\alpha$~198. In this case, \Nk\ and I$_{\rm C}$\ sample the totality of the stellar population in practically all fields. If $A_K\sim$ 2 mag, the minimum mass is generally larger, ranging from $<0.1$ to 0.49 M$_\odot$ in HK~Ori. For each field we have computed the absolute magnitude corresponding to these masses and the corresponding extinctions in the K band; we have then calculated the two {\it richness indicators} discussed in Sect.~\ref{sres} considering, in each field, only the stars brighter than the computed magnitude limit. In this way, under the assumption that the mean extinction is approximately the same in all fields (and for all stars in each field), we have calculated mass-limited {\it richness indicators}. The values of ${\cal N}_{\rm K}$ and I$_{\rm C}$ for A$_K=0$ and 2 (${\cal N}^0_{\rm K}$,${\cal N}^2_{\rm K}$, I$^0_{\rm C}$ and I$^2_{\rm C}$) are reported in Table~\ref{trind}, Columns~6--9. Since some of the fields without an estimate of the age of the Herbig~Ae/Be star have mass sensitivities poorer than our limits (which are based on HK~Ori), some of the values reported are lower limits (indicated by $>$). In Figs.~\ref{fnckspm} and~\ref{ficspm} we show the behaviour of the ``mass sensitivity corrected'' indicators. Fig.~\ref{fnckspm} clearly shows that the trend detected in Fig.~\ref{fncksp} is completely cancelled by the bias introduced by the contamination from field stars. In fact, since the late-type stars are older than the early-type ones, the absolute magnitude corresponding to the same mass limit is higher for the Ae systems than for the Be systems, and since this discrepancy is not compensated by the difference in distance, we are effectively counting more field objects in Ae fields than in Be fields. On the contrary, Fig.~\ref{ficspm} clearly shows the same trend as Fig.~\ref{ficsp}, supporting our conclusion of a dependence of I$_{\rm C}$\ on the spectral type of the Herbig Ae/Be star. Therefore, in the following we will use I$_{\rm C}$, which has a higher signal-to-noise ratio than I$^0_{\rm C}$ and I$^2_{\rm C}$. \section{Discussion} \label{sdis} In Paper~I we suggested the presence of a threshold for the appearance of clusters around Herbig stars of spectral type earlier than $\sim$B7. The results discussed in the previous section confirm the property that Be systems are associated with more conspicuous groups than the Ae systems. Although the identification of a critical spectral type (or mass) is no longer evident in the complete sample, it is worth noting that no stars later than B9 show large groups, i.e. their I$_{\rm C}$ is ${_{<}\atop^{\sim}} 10$. This general result raises some important issues: (1) is this difference between Be and Ae systems related to a different location in the Galaxy? (2) since Ae systems are generally older than Be systems, could the disappearance of the cluster be an evolutionary effect? (3) what is the typical environment in which intermediate mass stars form? (4) does the standard accretion scenario for the formation of Herbig stars need to be replaced by formation in clusters? The first question is easy to answer. We do not find evidence for a dependence of the clustering properties on the Galactic position of the target star, as shown in Paper~II. 
We must caution, however, that most of the observed stars lie in the outer regions of the Galaxy, due to the selection effect introduced by our observing sites in the northern hemisphere. Although we do not expect an opposite result toward the inner Galaxy, we must await the results of a similar study on a southern sample before generalizing our conclusion. \subsection{Age effect} \label{sage} Following the age estimates given in Paper~II, our sample of Herbig~Ae/Be stars covers the range of ages from $\sim 0.05$ to 10~Myr. As noted above, Ae stars tend to be older than Be stars. Thus, it is possible that the variation of the {\it richness indicators} from Be to Ae systems is caused by some evolutionary effect. It is possible that a stellar group composed of a few tens or hundreds of objects could be dispersed on a timescale of a few million years and become undetectable (i.e. confused with the field stars) in older systems. In order to explore this possibility more quantitatively, we show in Fig.~\ref{fage} the run of I$_{\rm C}$ as a function of the age for the 25 fields with an age determination of the Herbig star (see Table~2 of Paper~II). The lack of stars with high values of I$_{\rm C}$ for ages greater than 2~Myr is related to the fact that our sample does not contain Be stars of that age. However, it is worth noting that for Be systems (filled circles in Fig.~\ref{fage}) younger than 2~Myr we find both high and low values of I$_{\rm C}$ at any age. \begin{figure} \centerline{\psfig{figure=7985f6.ps,width=8.8cm}} \caption[]{I$_{\rm C}$ versus age. Only the fields with an age determination from Paper~II are presented. The arrow represents the field around LkH$\alpha$~198 (I$_{\rm C}=-10.6$). Filled circles: Be systems; open circles: Ae systems.} \label{fage} \end{figure} From the figure we see that there does not seem to be a correlation between the age of the Herbig star and the presence of a cluster. The correlation is only with the spectral type (or mass). Stars earlier than B7 may or may not have clusters, independently of their age; stars later than B7 do not have clusters and are on average older than the rest. If we say for convenience that a cluster is detected for $I_{\rm C}{_{>}\atop^{\sim}} 10$, the oldest star surrounded by a cluster is XY~Per with an age of 2 Myr. Since stars of age greater than $\sim$4 Myr do not have clusters, there are two possibilities to account for such a property: either these stars formed in relative isolation, or the attendant cluster has by now disappeared. The former explanation is obvious and does not require any further comment. The second alternative is theoretically more interesting and we have performed several N-body simulations to verify under which conditions a small cluster can evolve so rapidly as to disperse the majority of its members on a time scale of less than $\sim$4~Myr. We have considered models containing $N=$100, 150 and 200 stars starting from different initial conditions (virialized and non-virialized clusters) and with varying frequency distributions of stellar masses (Salpeter, Scalo, Kroupa and uniform mass). The half-mass radius was varied between 0.4 and 0.7 pc and the typical velocity dispersion was 2 km s$^{-1}$. Thus, the clusters are initially richer and larger than those actually observed around Herbig stars characterized by I$_{\rm C}\sim$30-40 and a radius of $\sim$0.2 pc. 
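For orientation, the crossing and two-body relaxation times implied by these initial conditions can be estimated from the standard relations $t_{\rm cross}\simeq 2r_{\rm h}/\sigma$ and $t_{\rm relax}\simeq 0.1\,(N/\ln N)\,t_{\rm cross}$; the sketch below evaluates them for $N=100$, $r_{\rm h}=0.5$~pc and $\sigma=2$~km~s$^{-1}$. These are only the usual order-of-magnitude formulae quoted for reference, not quantities extracted from our N-body runs.
\begin{verbatim}
import numpy as np

def cluster_timescales(N, r_h_pc, sigma_kms):
    """Rough crossing and two-body relaxation times (in yr) for a cluster
    of N stars with half-mass radius r_h_pc (pc) and velocity dispersion
    sigma_kms (km/s), using t_cross ~ 2 r_h / sigma and
    t_relax ~ 0.1 N / ln(N) * t_cross."""
    pc_km = 3.086e13          # km per parsec
    yr_s = 3.156e7            # seconds per year
    t_cross = 2.0 * r_h_pc * pc_km / sigma_kms / yr_s
    t_relax = 0.1 * N / np.log(N) * t_cross
    return t_cross, t_relax

# N = 100 stars, half-mass radius 0.5 pc, velocity dispersion 2 km/s:
print(cluster_timescales(100, 0.5, 2.0))   # roughly 5e5 yr and 1e6 yr
\end{verbatim}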
We have also varied the upper limit of the most massive object from 20 M$_\odot$, corresponding to an O9 star, to 2 M$_\odot$, corresponding to an A5 star. It is well known that the dynamical evolution of cluster models is very dependent on the choice of the IMF, especially as regards the effects of mass segregation at early times (e.g. de la Fuente Marcos 1995, 1997). We have not considered the presence of primordial binaries in the initial population. For most runs, we have found that it is very difficult to lose a significant fraction of stars in the initial few crossing times, as required by the age constraints of the Herbig stars. The only cases where the stellar population decreases to less than half the initial value in about ten crossing times are those characterized by the lowest number of stars ($N=$100 in our models) distributed in mass according to a Salpeter IMF. The first point is a critical one, since the evaporation time increases very rapidly with the number of stars. Even models with $N=$150 do not evolve sufficiently rapidly. As for the IMF, the result is acceptable even though we know that a single power-law approximation must break down at subsolar masses. More important, however, is the sensitivity of the evolution to the upper limit of the mass distribution. Only if the most massive star exceeds 5~M$_\odot$ is the evolution fast enough for the system to lose many members in several crossing times. Otherwise, it is impossible to modify the cluster composition in a few million years. Now, since a mass of 5~M$_\odot$ corresponds to a B6 ZAMS star, it is clear that Herbig stars of later types which were formed in relatively small clusters would have retained the original population of companions at the observed age. {\it Therefore, we conclude that the fact that we do not see evidence for the presence of clusters around stars of type later than about B7-B8 is an imprint of the stellar formation mode rather than the consequence of the dynamical evolution of a presently dispersed cluster. On the other hand, the absence of clusters around some of the early type B stars could result from the dynamical evolution, especially in the case of a relatively small initial population}. \subsection{The transition from loose to dense groups} \label{srho} \begin{figure*} \centerline{\psfig{figure=7985f7a.ps,height=8cm} \hskip 0.3cm \psfig{figure=7985f7b.ps,height=8cm}} \caption[]{\label{frho} Stellar volume densities derived from ${\cal N}_{\rm K}$ ({\it left}) and from I$_{\rm C}$ ({\it right}) versus spectral type of the central star. Stars with I$_{\rm C}<0$ have been excluded. The heavy vertical line at O6 represents the range of stellar densities found in the Trapezium cluster, whereas that at G/K (not to scale) represents the densities of stellar groups in Taurus-Auriga.} \end{figure*} The values of ${\cal N}_{\rm K}$ and I$_{\rm C}$ can be transformed into star (or effective star) volume densities using the average cluster radius of 0.2 pc and assuming spherical clusters. The logarithms of the stellar densities, in sources per cubic parsec, are reported in the last two columns of Table~\ref{trind}. These estimates of the stellar densities are affected by large uncertainties (as high as 30\%; Hillenbrand~\cite{H95}), due to the unknown depth of the clusters along the line of sight and the neglect of concentration effects, but they provide a useful approximation for comparing the various regions. 
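The conversion into volume densities amounts to dividing the number of sources by the volume of a sphere of radius 0.2~pc; as a consistency check, the short snippet below reproduces the tabulated $\log(\rho_{{\cal N}_{\rm K}})\simeq 3.0$ for ${\cal N}_{\rm K}=37$ (MWC~297).
\begin{verbatim}
import numpy as np

def log_density(n_stars, radius_pc=0.2):
    """Logarithm of the stellar volume density (stars per cubic parsec)
    for n_stars placed in a sphere of the average cluster radius."""
    volume = 4.0 / 3.0 * np.pi * radius_pc ** 3
    return np.log10(n_stars / volume)

print(round(log_density(37), 1))   # -> 3.0, the value listed for MWC 297
\end{verbatim}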
The resulting distributions shown in Fig.~\ref{frho} confirm the apparent trend discussed in Sect.~2 and put on firmer ground the suggestion by Hillenbrand (1995) of a physical relationship between cluster density and the maximum mass of the Herbig star. For comparison, we also show in Fig.~\ref{frho} the stellar density range of the T Tauri aggregates found in Taurus-Auriga by Gomez et al.~(\cite{Gea93}) (shown as the solid line at G/K, not to scale), and the same quantity for the Trapezium cluster (McCaughrean \& Stauffer~\cite{McCS94}; Hillenbrand~\cite{H97}). The conclusion is evident: {\it intermediate-mass stars mark the transition from the low-density aggregates of T Tauri stars (${_{<}\atop^{\sim}} 10$ stars per cubic parsec) to the dense clusters (${_{>}\atop^{\sim}} 10^3$ stars per cubic parsec) associated with early-type stars}. \subsection{Implications for star formation} \label{sisf} The conventional theory of protostellar infall successfully accounts for the formation and early stellar evolution of low- and intermediate-mass stars up to about 10 M$_\odot$. The location of the stellar birthline and the distribution of the observed T Tauri and Herbig Ae/Be stars in the HR diagram are in excellent agreement with the theoretical predictions (Palla \& Stahler 1990, 1993). Protostars more massive than about 10 M$_\odot$ burn hydrogen while still in the accretion phase and therefore join the main sequence, implying that stars of even higher mass have no contracting PMS phase. Observations of clusters such as NGC~6611 and the dense regions of the Trapezium cluster support this finding (e.g. Hillenbrand 1993, 1998). However, the accretion scenario for isolated protostars fails to explain the existence of massive stars. Radiation pressure on the infalling gas, exerted by photons produced at the stellar and disk surfaces, becomes significant at about the critical mass of 10-15 M$_\odot$ (e.g. Larson \& Starrfield 1974; Yorke \& Krugel 1977). This limit can be increased by considering variations in the dust properties (abundances and sizes) or in the mass accretion rate (Wolfire \& Cassinelli 1987). But in either case the required conditions are so extreme (dust depletion by at least one order of magnitude and accretion rates of at least 10$^{-3}$~M$_\odot$~yr$^{-1}$) that they cannot reasonably apply in all circumstances. Any departure of the infall from spherical symmetry can also help to shift the critical mass to very massive objects (e.g. Nakano 1989; Jijina \& Adams 1996). The fact that, as we have seen, stars form in groups and clusters rather than in isolation may offer an alternative mechanism to circumvent the problem of the high luminosity of massive stars and the negative feedback on the accreting gas. Accretion processes in clusters have long been advocated to explain the shape of the initial mass function (Zinnecker 1982; Larson 1992; Price and Podsiadlowski 1995), and to account for the location of massive stars in the cluster centers (Bonnell et al. 1997). Very recently, Bonnell et al. (1998) have presented model calculations of the dynamical evolution of rich, young clusters that show that accretion-induced collisions of low- and intermediate-mass stars (formed in the standard accretion mode) can result in the formation of more massive objects on time scales of less than 10$^6$ years. However, the process requires a critical stellar/protostellar density in order for collisions and mergers to be frequent. 
For the formation of a star of mass $\sim$50 M$_\odot$, an initial density of $\rho_\ast>$10$^4$ stars pc$^{-3}$ is required. This is the typical density observed in the central regions of large clusters such as the Orion Nebula cluster, where such massive stars are indeed present, but it is hardly the case for the stars of our sample. As shown in Fig.~\ref{frho}, the highest stellar volume densities are limited to about 2$\times$10$^3$ stars pc$^{-3}$, almost an order of magnitude lower than the minimum value for the efficient collisional build-up of massive stars. These densities are found for stars of spectral type B0-B2, i.e. with ZAMS masses of about 20 to 10 M$_\odot$. Thus, these stars are at the borderline between the conditions of isolated and collisional accretion mechanisms. It would be extremely interesting to extend the Bonnell et al. (1998) models to less extreme conditions on the stellar density in order to explore the dynamical evolution of clusters resembling those observed around Herbig Be stars. Based on the observational evidence, we may conclude that {\it in most cases, the formation of Herbig stars can be understood in terms of the conventional accretion scenario, whereas for the most massive members of this group dynamical interactions in the cluster core and residual gas accretion can be of importance in determining the final mass, even though the observed stellar density is never extremely high}. \section{Conclusions} \label{scon} Using the techniques described in Paper~I, we have analyzed the near-infrared observations of the fields surrounding Herbig Ae/Be stars presented in Paper~II with the goal of identifying and characterizing the presence of (partially) embedded clusters formed around the intermediate-mass stars. We have examined 44 fields around stars ranging in spectral type from A7 to O9. The main results of these studies can be summarized as follows. We have confirmed and extended the correlation between the spectral type (or mass) of the Herbig stars and the number of nearby, lower mass objects, first noted by Hillenbrand (1995). Rich clusters with densities up to 10$^3$ pc$^{-3}$ only appear around stars earlier than B5, corresponding to a mass of about 6 M$_\odot$. Conversely, A-type stars are never accompanied by conspicuous groups and the typical density is less than 50 pc$^{-3}$. However, the transition between formation in loose groups and in clusters does not occur sharply around spectral type B7 as we suggested in Paper~I. The appearance of denser stellar groups is quite smooth moving from Ae to Be systems, thus suggesting that intermediate-mass stars naturally fill the gap between the low-density, low-mass aggregates and the high-density, high-mass clusters. Using a richness indicator, I$_{\rm C}$, based on stellar surface density profiles, we have identified three regimes for the distribution of stars around Herbig Ae/Be stars: rich clusters characterized by values of I$_{\rm C}{_{>}\atop^{\sim}}$40; small clusters or aggregates with $10{_{<}\atop^{\sim}}$I$_{\rm C}{_{<}\atop^{\sim}} 40$; and small aggregates or background stars for lower values of I$_{\rm C}$. Herbig stars with rich clusters are rare: only three stars of the sample definitely belong to the first group, whereas in other cases we have indications that the star counts may severely underestimate the actual stellar density. The typical cluster size is $\sim$0.2 pc irrespective of the richness of the clusters. 
This value is remarkably close to the dimension of dense cores in molecular clouds. The majority of Herbig stars (65\%) are found in the last regime, showing no enhancement above the background stellar density. All the stars with spectral types later than B9 belong to this regime and are generally older than Be stars. This result is not affected by different sensitivities to the lowest masses in different fields. From dynamical considerations, we conclude that the absence of clusters around late-Be and Ae stars is an imprint of stellar formation in relatively isolated cores (as in the case of T Tauri stars), and not the result of rapid cluster evolution and dispersion. The fact that the most massive Herbig stars have clusters strongly supports the notion that their formation has been influenced by dynamical interactions with lower mass stars and/or protostellar cores. It is no coincidence that the onset of clusters manifests at a mass of about 8-10 M$_\odot$ where the conventional accretion scenario for isolated protostars faces severe theoretical problems. Future studies of the formation and evolution of these stars should take into account the observational evidence for high stellar density environments. \begin{acknowledgements} It is a pleasure to thank Alessandro Navarrini for his help with the N-body simulations described in this paper. This project was partly supported by ASI grant ARS 96-66 and by CNR grant 97.00018.CT02 to the Osservatorio di Arcetri. Support from CNR-NATO Advanced Fellowship program and from NASA's {\it Origins of Solar Systems} program (through grant NAGW--4030) is gratefully acknowledged. \end{acknowledgements}
\section{Introduction} {\it To disavow an error is to invent retroactively.}\\ \hspace*{2in} ---Johann Wolfgang von Goethe In a classical information system the basic error is represented by a $0$ becoming a $1$ or vice versa. The characterization of such errors is in terms of an error rate, $\epsilon$, associated with such flips. The correction of such errors is achieved by appending check bits to a block of information bits. The redundancy provided by the check bits can be exploited to determine the location of errors using the method of syndrome decoding. These codes are characterized by a certain capacity for error-correction per block. Errors at a rate less than the capacity of the code are {\it completely} corrected. Now let us look at a quantum system. Consider a single cell in a quantum register. The error here can be due to a random unitary transformation or by entanglement with the environment. These errors cannot be defined in a graded sense because of the group property of unitary matrices and the many different ways in which the entanglements can be expressed. Let us consider just the first type of error, namely that of random unitary transformation. If the qubit is the state $| 0\rangle$, it can become $a |0\rangle + b | 1 \rangle$. Likewise, the state $| 00\rangle$ can become $a | 00\rangle + b | 01 \rangle + c | 10 \rangle + d | 11\rangle $. In the initialization of the qubit a similar error can occur\cite{Ka98b}. If the initialization process consists of collapsing a random qubit to the basis state $| 0\rangle$, the definition of the basis direction can itself have a small error associated with it. This error is analog and so, unlike error in classical digital systems, it cannot be controlled. In almost all cases, therefore, the qubits will have superposition states, although the degree of superposition may be very low. From another perspective, classical error-correction codes map the information bits into codewords in a higher dimensional space so that if just a few errors occur in the codeword, their location can, upon decoding, be identified. This identification is possible because the errors perturb the codewords, {\it locally}, within small spheres. Quantum errors, on the other hand, perturb the information bits, in a {\it nonlocal} sense, to a superposition of many states, so the concept of controlling all errors by using a higher dimensional codeword space cannot be directly applied. According to the positivist understanding of quantum mechanics, it is essential to speak from the point of view of the observer and not ask about any intrinsic information in a quantum state\cite{Ka98a}. Let's consider, therefore, the representation of errors by means of particles in a register of $N$ states. We could consider errors to be equivalent to either $n$ bosons or fermions. Bosons, in a superpositional state follow the Bose-Einstein statistics. The probability of each pattern will be given by \begin{equation} \frac{1}{\left( \begin{array}{c} N + n - 1 \\ n \end{array} \right) }. \end{equation} So if there are 3 states and 1 error particle, we can only distinguish between 3 states: $00,~01~or~10,~11$. Each of these will have a probability of $\frac{1}{3}$. To the extent this distribution departs from that of classical mechanics, it represents nonlocality at work. 
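The counting behind the Bose-Einstein probability above is the number of multisets of size $n$ drawn from $N$ cells; the short check below (the function name is ours) evaluates it and reproduces the example of three distinguishable patterns, each with probability 1/3.
\begin{verbatim}
from math import comb

def bose_patterns(N, n):
    """Number of distinguishable configurations of n indistinguishable
    bosons in N cells, i.e. multisets of size n drawn from N cells."""
    return comb(N + n - 1, n)

count = bose_patterns(3, 1)
print(count, 1.0 / count)   # -> 3 patterns, each of probability 1/3
\end{verbatim}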
If the particles are fermions, then they are indistinguishable, and with $n$ error objects in $N$ cells, we have each with the probability \begin{equation} \frac{1}{\left( \begin{array}{c} N \\ n \end{array} \right) }. \end{equation} If states and particles have been identified, these statistics will be manifested by a group of particles. If the cells are isolated then their histories cannot be described by a single unitary transformation. Like the particles, the errors will also be subject to the same statistics. These statistics imply that the errors will not be independent, an assumption that is basic to the error-correction schemes examined in the literature. To summarize, important characteristics of quantum errors that must be considered are {\it component proliferation}, {\it nonlocal effects} and {\it amplitude error}. All of these have no parallel in the classical case. Furthermore, quantum errors are analog and so the system cannot be shielded below a given error rate. Such shielding is possible for classical digital systems. We know that a computation need not require any expenditure of energy if it is cast in the form of a reversible process. A computation which is not reversible must involve energy dissipation. Considering conservation of information+energy to be a fundamental principle, a correction of random errors in the qubits by unitary transformations, without any expenditure of energy, violates this principle. Can we devise error-correction coding for quantum systems? To examine this, consider the problem of protein-folding, believed to be NP-complete, which is, nevertheless, solved efficiently by Nature. If a quantum process is at the basis of this amazing result, then it is almost certain that reliable or fault-tolerant quantum computing must exist but, paying heed to the above-mentioned conservation law, it appears such computing will require some lossy operations. In this note we examine the currently investigated models of quantum error-correction from the point of view of their limitations. We also consider how quantum errors affect a computation in comparison with classical errors. \section{Representing quantum errors} {\it Sed fugit interea, fugit inreparabile tempus.\\ But meanwhile it is flying, irretrievable time is flying.}\\ \hspace*{2in} ---Virgil Every unitary matrix can be transformed by a suitable unitary matrix into a diagonal matrix with all its elements of unit modulus. The reverse also being true, quantum errors can play havoc. The general unitary transformation representing errors for a qubit is: \begin{equation} \frac{1}{\sqrt {||e_1||^2 + ||e_2||^2}} \left[ \begin{array}{cc} e_1^* & e_2^* \\ e_2 & -e_1 \\ \end{array} \right] . \end{equation} These errors ultimately change the probabilities of the qubit being decoded as a $0$ and as a $1$. From the point of view of the user, when the quantum state has collapsed to one of its basis states, it is correct to speak of an error rate. But such an error rate cannot be directly applied to the quantum state itself. Unlike the classical digital case, quantum errors cannot be completely eliminated because they are essentially analog in nature. The unitary matrix (1) represents an infinite number of cases of error. The error process is an analog process, and so, in general, such errors cannot be corrected. From the point of view of the qubits, it is a nonlocal process. 
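To illustrate the analog character of such errors, the sketch below applies a small random unitary of the above form to the basis state $|0\rangle$; the perturbation scale of 0.05 is an arbitrary choice made only for this example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# A small random error of the form given above: e1 close to 1, e2 small.
e1 = 1.0 + 0.05j * rng.standard_normal()
e2 = 0.05 * (rng.standard_normal() + 1j * rng.standard_normal())
norm = np.sqrt(abs(e1) ** 2 + abs(e2) ** 2)
U = np.array([[np.conj(e1), np.conj(e2)],
              [e2, -e1]]) / norm

qubit = np.array([1.0, 0.0])        # the state |0>
perturbed = U @ qubit               # a superposition a|0> + b|1>
print(perturbed)                    # small but nonzero |1> amplitude
print(abs(np.vdot(perturbed, perturbed)))   # norm preserved (= 1)
\end{verbatim}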
If it is assumed that the error process can be represented by a small rotation and the initial state is either a $0$ or a $1$, then this rotation will generate a superposition of the two states but the relative amplitudes will be different and these could be exploited in some specific situations to determine the starting state. But, obviously, such examples represent trivial cases. The error process may be usefully represented by a process of quantum diffusions and phase rotations. Shor\cite{Sh95} showed how the decoherence in a qubit could be corrected by a system of triple redundancy coding where each qubit is encoded into nine qubits as follows: \[|0\rangle \rightarrow \frac{1}{2\sqrt 2} ( |000\rangle + | 111\rangle ) ( |000\rangle + | 111\rangle ) ( |000\rangle + | 111\rangle )\], \begin{equation} |1\rangle \rightarrow \frac{1}{2\sqrt 2} ( |000\rangle - | 111\rangle ) ( |000\rangle - | 111\rangle ) ( |000\rangle - | 111\rangle ). \end{equation} Shor considers the decoherence process to be one where a qubit decays into a weighted amplitude superposition of its basis states. In parallel to the assumption of independence of noise in classical information theory, Shor assumes that only one qubit out of the total of nine decoheres. Using a Bell basis, Shor then shows that one can determine the error and correct it. But this system does not work if more than one qubit is in error. Since quantum error is analog, each qubit will be in some error and so this scheme will, in practice, not be useful in {\it completely} eliminating errors. The question of decoherence, or error, must be considered as a function of time. One may use the exponential function $\lambda e^{-\lambda t}$ as a measure of the decoherence probability of the amplitude of the qubit. The measure of decoherence that has taken place by time $t$ will then be given by the probability, $p_t$: \begin{equation} p_t = 1 - \lambda e^{-\lambda t}. \end{equation} In other words, by time $t$, the amplitude of the qubit would have decayed to a fraction $(1 - \lambda e^{-\lambda t})$ of its original value. At any time $t$, there is a $100 \%$ chance that the probability amplitude of the initial state will be a fraction $\alpha_k < 1$ of the initial amplitude. If we consider a rotation error in each qubit through angle $\theta$, there exists some $\theta_k$ so that the probability \begin{equation} Prob ( \theta > \theta_k) \rightarrow 1. \end{equation} This means that we cannot represent the qubit error probability by an assumed value $p$ as was done by Shor in analogy with the classical case. In other words, there can be no guarantee of eliminating decoherence. \section{Recently proposed error-correction codes} {\it The fox knows many things---the hedgehog one {\em big} one.}\\ \hspace*{2in} ---Archilochus The recently proposed models of quantum error-correction codes assume that the error in the qubit state $a |0\rangle + b | 1 \rangle$ can be either a bit flip $ |0 \rangle \leftrightarrow | 1 \rangle$, a phase flip between the relative phases of $| 0\rangle$ and $| 1 \rangle $, or both \cite{St96,Sh96,Pr97}. In other words, the errors are supposed to take the pair of amplitudes $(a,b)$ to either $(b,a)$, $(a, -b)$, or $(-b,a)$. But these three cases represent a vanishingly small subset of all the random unitary transformations associated with arbitrary error. These are just three of the infinity of rotations and diffusions that the qubit can be subject to. 
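Written as matrices, the three assumed errors are single fixed operators; the brief check below makes this explicit, in contrast with the continuous family of unitaries discussed in Section~2.
\begin{verbatim}
import numpy as np

X  = np.array([[0, 1], [1, 0]])    # bit flip:   (a, b) -> (b, a)
Z  = np.array([[1, 0], [0, -1]])   # phase flip: (a, b) -> (a, -b)
XZ = X @ Z                         # both:       (a, b) -> (-b, a)

for M in (X, Z, XZ):
    assert np.allclose(M @ M.T, np.eye(2))   # each is a fixed unitary

a, b = 0.6, 0.8
print(XZ @ np.array([a, b]))   # -> [-0.8  0.6]
\end{verbatim}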
The assumed errors, which are all local, do not, therefore, constitute a distinguished set on any physical basis. In one proposed error-correction code, each of the states $ |0\rangle $ or $ | 1 \rangle$ is represented by a 7-qubit code, where the strings of the codewords are the codewords of the single-error-correcting Hamming code; the details need not concern us here. The code for $| 0 \rangle$ has an even number of $1$s and the code for $|1 \rangle$ has an odd number of $1$s. \begin{eqnarray} | 0\rangle_{code} & = & \frac{1}{\sqrt8} (|0000000\rangle + |0001111\rangle+ |0110011\rangle + |0111100\rangle \nonumber \\ & & + |1010101\rangle +|1011010\rangle +| 1100110\rangle + |1101001\rangle), \end{eqnarray} \begin{eqnarray} |1\rangle_{code} & = & \frac{1}{\sqrt8} (|1111111\rangle + |1110000\rangle+ |1001100\rangle + |1000011\rangle \nonumber \\ & & + |0101010\rangle +|0100101\rangle +| 0011001\rangle + |0010110\rangle). \end{eqnarray} As mentioned before, the errors are assumed to be either phase-flips or bit-flips. Three further ancilla bits are now appended in order to compute the syndrome values. The bit-flips, so long as limited to one in each group, can be computed directly from the syndrome. The phase-flips are likewise computed, but only after a change of basis has been performed. Without going into the details of these steps, which are a straightforward generalization of classical error correction theory, it is clear that the assumption of single phase- and bit-flips is very restrictive. In reality, errors in the 7-qubit words will generate a superposition state of 128 sequences, rather than the 16 sequences of equations (5) and (6), together with 16 other sequences of one-bit errors, where the errors in the amplitudes are limited to the phase-flips mentioned above. {\it All kinds of bit-flips}, as well as modifications of the amplitudes, will be part of the quantum state. We can represent the state, with the appropriate phase shifts associated with each of the 128 component states, as follows: \begin{equation} |\phi\rangle = e^{i \theta_{1} } a_1 |0000000\rangle + e^{i \theta_{2} } a_2 |0000001\rangle + \ldots + e^{i \theta_{N} } a_N |1111111\rangle \end{equation} While the amplitudes of the newly generated components will be small, they would, nevertheless, have a non-zero error probability. These components cannot be corrected by the code and will, therefore, contribute to a residual error probability. For the 16 sequences of the original codewords, the amplitudes implied by (7) will, after the error has enlarged the set, be somewhat different from the original values. So if we speak just of the 16 sequences, the amplitudes cannot be preserved without error. Furthermore, the phase errors in (7) cannot be corrected. These phases are of crucial importance in many recent quantum algorithms. It is normally understood that in classical systems, if the error rate is smaller than a certain value, the error-correction system will correct the errors. In the quantum error-correction systems, this important criterion is violated. Only certain specific errors are corrected; others, even if smaller, are not. In summary, the proposed models are based on a local error model, while real errors are nonlocal and involve the issues of component proliferation and amplitude errors. These codes are not capable of completely correcting small errors that cause new entangled component states to be created. 
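For comparison, the classical part of the construction can be checked directly: the sketch below writes down the parity-check matrix of the $[7,4]$ Hamming code (with column $i$ equal to the binary expansion of $i$), verifies that the strings appearing in the codewords above lie in its null space, and shows that a single bit-flip yields a syndrome identifying its position. This illustrates only the classical bit-flip syndrome, not the quantum syndrome extraction or the phase errors discussed in the text.
\begin{verbatim}
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column i (1-indexed) is
# the binary expansion of i, so a single flip at position i has syndrome i.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(word):
    """word: a string of 7 bits, e.g. '0001111'."""
    return H.dot([int(b) for b in word]) % 2

# all strings in |0>_code (and, similarly, |1>_code) are Hamming codewords
for w in ["0000000", "0001111", "0110011", "0111100",
          "1010101", "1011010", "1100110", "1101001"]:
    assert not syndrome(w).any()

print(syndrome("0010000"))   # flip at position 3 -> syndrome [0 1 1] = 3
\end{verbatim}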
\section{The sensitivity to errors} The nonlocal nature of the quantum errors is seen clearly in the sensitivity characteristics of these errors. Consider that some data sets related to a problem are being simultaneously processed by a quantum machine. Assume that by some process of phase switching and diffusion the amplitude of the desired solution out of the entire set is slowly increased at the expense of the others. Nearing the end of the computation, the sensitivity of the computation to errors will increase dramatically, because the errors will, proportionately, increase for the smaller amplitudes. To see it differently, it will be much harder to reverse the computation if the change in the amplitude or phase is proportionally greater. This means that the ``cost'' of quantum error-correction will depend on the state of the computing system. Even in the absence of errors, the sensitivity will change as the state evolves, a result, no doubt, due to the nonlocal nature of quantum errors. These errors can be considered to be present at the stage of state preparation, through the continuing interaction with the environment, and also due to errors in the transformations applied to the data. In addition, there may exist nonlocal correlations of qubits with those in the environment. The effect of such correlations will be unpredictable. Quantum errors cannot be localized. For example, when speaking of rotation errors, there always exists some $\theta_k > 0$ so that $Prob (\theta > \theta_k) \rightarrow 1$. When doing numerical calculations on a computer, it is essential to have an operating regime that provides reliable, fault-tolerant processing. Such regimes exist in classical computing. But the models currently under examination for quantum computing cannot eliminate errors completely. The method of syndrome decoding, adapted from the theory of classical error-correcting codes, appears not to be the answer to the problem of fault-tolerant quantum computing. New approaches to error-correction need to be investigated. \section{Conclusions} Nonlocality, related both to the evolution of the quantum information system and to its errors, defines a context in which error-correction based on syndrome decoding will not work. How should error-correction be defined then? Perhaps through a system akin to associative learning in spin glasses. \section*{References} \begin{enumerate} \bibitem{Ka98a} S. Kak, ``Quantum information in a distributed apparatus,'' {\it Foundations of Physics} 28, 1005 (1998). \bibitem{Ka98b} S. Kak, ``On initializing quantum registers and quantum gates,'' LANL e-print quant-ph/9805002. \bibitem{Pr97} J. Preskill, ``Fault-tolerant quantum computation,'' LANL e-print quant-ph/9712048. \bibitem{Sh95} P.W. Shor, ``Scheme for reducing decoherence in quantum computer memory,'' {\it Phys. Rev. A} 52, 2493 (1995). \bibitem{Sh96} P.W. Shor, ``Fault-tolerant quantum computation,'' LANL e-print quant-ph/9605011. \bibitem{St96} A.M. Steane, ``Error correcting codes in quantum theory,'' {\it Phys. Rev. Lett.} 77, 793 (1996). \end{enumerate} \end{document}
\section*{Acknowledgments} \noindent This research was supported by the National Science Council of the Republic of China under Contract Nos. NSC 87-2112-M-006-002 and NSC 88-2112-M-006-003.
\section{Introduction} \label{introduction} The purpose of this paper is to describe two algorithms for computing with word-hyperbolic groups. Both of them have been implemented in the second author's package {\sf KBMAG} \cite{KBMAG}. The first is a method of verifying that a group defined by a given finite presentation is word-hyperbolic, using a criterion proved by Papasoglu in \cite{PAP}, which states that all geodesic triangles in the Cayley graph of a group are thin if and only if all geodesic bigons are thin. This is very similar to an algorithm described in \cite{WAK}, but it contains a simplification which appears to improve the performance substantially. It also improves on a less developed approach to the problem described in \cite{HOLT}. The second algorithm provides a method of estimating the constant of hyperbolicity of the group with respect to the given generating set. We do not know of any previously proposed general method for solving this problem that has any prospect of being practical. Our current implementation is experimental, and is very heavy in its use of memory for all but the most straightforward examples, but it does at least succeed on examples like the von Dyck triangle groups and the two-dimensional surface groups. Both of them follow a general philosophy of group-theoretical algorithms that construct finite state automata. This approach was originally proposed in the algorithms described in Chapter 5 of \cite{ECHLPT} for computing automatic structures, and employed in their implementations for short-lex structures described in \cite{HOLT}. The basic idea is first to find a method of constructing likely candidates for the required automata, which we shall call the {\it working} automata. The second step is to construct other (usually larger and more complicated) {\it test} automata whose sole purpose is to verify the correctness of the working automata. In the case when this verification fails, it should be possible to use words in the language of the test automata to construct improved versions of the working automata. One practical difficulty with this approach is that experience shows that incorrect working automata and the resulting test automata are much larger than the correct ones, so it can be extremely important to find good candidates on the first pass. The two algorithms dealt with in this paper are described in Sections~\ref{verifying} and \ref{finding}. Some details of their performance on a number of examples are presented in Section~\ref{examples}. Unfortunately, we need to recall quite a lot of notation from related earlier works, and we do this in Section~\ref{notation}. There are a number of computational problems in which it is useful to know hyperbolic constants which are either the same as those in this paper, or are closely related to them. Some examples of such problems follow. In another paper we will describe a linear-time algorithm to put a word in a word-hyperbolic group into short-lex normal form. The linear estimate is due to Mike Shapiro (unpublished). Our method, which is a bit different, may be usable in practice, though the ideas have not yet been implemented. The standard algorithm for converting a word into normal form in an automatic group is quadratic, as shown in \cite{ECHLPT}. We also plan to show how to construct the automata which accept 1)~all bi-infinite geodesics, 2)~all pairs of asymptotic geodesics and 3)~all pairs of bi-infinite geodesics which are within a finite Hausdorff distance of each other. 
Such algorithms are necessary if one is to have any hope of a constructive description of the limit space of a word-hyperbolic group, starting with generators and relations of the group. Some of these automata are also needed in Epstein's $n\log(n)$ solution of the conjugacy problem (not yet published). \section{Notation} \label{notation} Throughout the paper, $G$ will denote a group with a given finite generating set $X$. The identity element of $G$ will be denoted by $1_G$. Let $A = X \cup X^{-1}$, and let $A\uast$ be the set of all words in $A$. For $u,v \in A\uast$, we denote the image of $u$ in $G$ by $\overline {u}$, and $u =_G v$ will mean the same as $\overline {u} = \overline {v}$. For a word $u \in A\uast$, $l(u)$ will denote the length of $u$ and $u(i)$ will denote the prefix of $u$ of length $i$, with $u(i) = u$ for $i \geq l(u)$. Let $\Gamma = \Gamma_X(G)$ be the Cayley graph of $G$ with respect to $X$. We make $\Gamma$ into a metric space in the standard manner, by letting all edges have unit length, and defining the distance $\partial(x,y)$ between any two points of $\Gamma$ to be the minimum length of paths connecting them. (The points of $\Gamma$ include both the vertices, and points on the edges of $\Gamma$.) This makes $\Gamma$ into a {\it geodesic} space, which means that for any $x,y \in \Gamma$ there exist geodesics (i.e. shortest paths) between $x$ and $y$. For $g \in G$, $l(g)$ will denote the length of a geodesic path from the base vertex $1_G$ of $\Gamma$ to $g$. A {\it geodesic triangle} in $\Gamma$ consists of three not necessarily distinct points $a,b,c$ together with three directed geodesic paths $u,v,w$ joining $bc$, $ca$ and $ab$, respectively. The vertices $a,b,c$ of the triangle are not necessarily vertices of $\Gamma$; they might lie in the interior of an edge of $\Gamma$. There are several equivalent definitions of word-hyperbolicity. The most convenient for us is the following. Let $\Delta$ be a geodesic triangle in $\Gamma$ with vertices $a,b,c$ and sides $u$, $v$, $w$ as above. (Hence $l(u) = \partial(b,c)$, etc.) Let $\rho(a) = (l(v)+l(w)-l(u))/2$ and define $\rho(b), \rho(c)$ correspondingly. Then $\rho(b)+\rho(c)=l(u)$, so any point $d$ on $u$ satisfies either $\partial(d,b) \leq \rho(b)$ or $\partial(d,c) \leq \rho(c)$, and similarly for $v$ and $w$. The points $d,e,f$ on $u,v,w$ with $\partial(d,b) = \rho(b)$ and $\partial(d,c) = \rho(c)$, etc., are known as the {\it {meeting point\xspace}s} of the triangle. In a constant curvature geometry (the euclidean plane, the hyperbolic plane or the sphere), the {meeting point\xspace}s of a triangle are the points where the inscribed circle meets the edges. In more general spaces, such as Cayley graphs, the term {\it inscribed circle} has no meaning, but the {meeting point\xspace}s can still be defined. \myfig{inscribed}{This picture shows the {meeting point\xspace}s as the intersection of the inscribed circle with the edges of the triangle in the case of constant curvature geometry.} Suppose $a$, $b$ and $c$ are vertices in the Cayley graph. Then the {meeting point\xspace}s are also vertices if and only if the perimeter $l(u) + l(v) + l(w)$ is even. Let $\delta \in \R_+$. Then we say that $\Delta$ is {\it $\delta$-thin} if, for any $r \in \R$ with $0 \leq r \leq \rho(x)$, the points $p$ and $q$ on $v$ and $w$ with $\partial(p,a)=\partial(q,a)=r$ satisfy $\partial(p,q) \leq \delta$, and similarly for the points within distance $\rho(b)$ of $b$ and $\rho(c)$ of $c$. 
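Since the {meeting point\xspace}s depend only on the three side lengths, the following small function (the names are ours) computes $\rho(a)$, $\rho(b)$ and $\rho(c)$ and illustrates the identity $\rho(b)+\rho(c)=l(u)$.
\begin{verbatim}
def meeting_point_distances(lu, lv, lw):
    """Distances rho(a), rho(b), rho(c) of the meeting points from the
    vertices a, b, c of a geodesic triangle whose opposite sides have
    lengths lu = d(b,c), lv = d(c,a), lw = d(a,b)."""
    rho_a = (lv + lw - lu) / 2.0
    rho_b = (lw + lu - lv) / 2.0
    rho_c = (lu + lv - lw) / 2.0
    return rho_a, rho_b, rho_c

# a triangle with side lengths 5, 6, 7: note rho(b) + rho(c) = lu, etc.
print(meeting_point_distances(5, 6, 7))   # -> (4.0, 3.0, 2.0)
\end{verbatim}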
We call such points $p$ and $q$ {\it $\Delta$-companions}, or, if $\Delta$ is understood, just {\it companions}. Note that the definition makes sense even when the triangle $\Delta$ is not geodesic---we measure distances along the edges of the triangle. Normally companions are distinct, but there can be many situations where they coincide---for example two geodesics sides of a triangle could have an intersection consisting of a disjoint union of three intervals. Mostly points on the triangle have exactly one companion, but the {meeting point\xspace}s normally have two companions---once again, in degenerate situations two or all three of the {meeting point\xspace}s may coincide. The group $G$ is called word-hyperbolic if there exists a $\delta$ such that all geodesic triangles in $\Gamma$ are $\delta$-thin. (It turns out that this definition is independent of the generating set $X$ of $G$, although the minimal value of $\delta$ does depend on $X$.) The multi-author article \cite{ALO} is a good reference for the basic properties of word-hyperbolic groups. We also need to recall some terminology concerning finite state automata. This has been chosen to comply with that used in \cite{HOLT} as far as possible. The reader should consult \cite{ECHLPT} for more details on the definitions and basic results relevant to the use of finite state automata in combinatorial group theory. Let $W$ be a finite state automaton with input alphabet $A$. We denote the set of states of $W$ by $\mathcal{S}(W)$ and the set of initial and accepting states by $\mathcal{I}(W)$ and $\mathcal{A}(W)$ respectively. In a non-deterministic automaton there may be more than one transition with a given source and label, and some transitions, known as $\varepsilon$-transitions, may have no label. In a deterministic automaton, there are no $\varepsilon$-transitions, at most one transition with given source and label, and $W$ has only one initial state. (This type of automaton is named {\it partially deterministic} in \cite{ECHLPT}.) In this case, we denote the unique initial state by $\sigma _0(X)$ and, for each $x \in A$ and $\sigma \in \mathcal{S}(X)$, we denote the target of a transition from $\sigma$ with label $x$ by $\sigma ^x$ if it exists. We can then define $\sigma ^u$, for $\sigma \in \mathcal{S}(X)$ and $u \in A\uast$, in the obvious way whenever all of the required transitions exist. The automata in this paper can be assumed to be deterministic unless otherwise stated. The automata that we consider may be one-variable or two-variable. In the latter case, two words $u$ and $v$ in $A\uast$ are read simultaneously, and at the same speed. This creates a technical problem if $u$ and $v$ do not have the same length. To get round this, we introduce an extra symbol $\$$, which maps onto the identity element of $G$, and let $A^\dagger = A \cup \{\$\}$. Then if $(u,v)$ is an ordered pair of words in $A$, we adjoin a sequence of $\$$'s to the end of the shorter of $u$ and $v$ if necessary, to make them both have the same length. The resulting pair will be denoted by $(u,v)^\dagger$, and can be regarded as an element of $(A^\dagger \times A^\dagger)\uast$. Such a pair has the property that the symbol $\$$ occurs in at most one of $u$ and $v$, and only at the end of that word, and it is known as a {\it padded} pair. We shall assume from now on, without further comment, that all of the two-variable automata that arise have input language $A^\dagger \times A^\dagger$ and accept only padded pairs. 
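The padding construction is easy to make explicit; in the sketch below words are ordinary strings over a one-character-per-generator alphabet and the dollar sign stands for the padding symbol $\$$.
\begin{verbatim}
PAD = "$"   # the padding symbol, mapped onto the identity element of G

def padded_pair(u, v):
    """Return (u, v)^dagger: pad the shorter word with '$' so that both
    components have the same length; the padding symbol then occurs in at
    most one of the two words, and only at its end."""
    n = max(len(u), len(v))
    return u + PAD * (n - len(u)), v + PAD * (n - len(v))

print(padded_pair("abba", "ab"))   # -> ('abba', 'ab$$')
\end{verbatim}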
Note that if $M$ is a two-variable automaton, then we can form a non-deterministic automaton with language equal to $\{u \myvert \exists v \mycolon (u,v)^\dagger \in L(M)\}$ simply by replacing the label $(x,y)$ of each transition in $M$ by $x$. (This results in an $\varepsilon$-transition in the case $x = \$$.) We call this technique {\it quantifying over the second variable} of $M$. Following \cite{HOLT}, we call a two-variable automaton $M$ a {\it word-difference automaton} for the group $G$, if there is a function $\alpha:\mathcal{S}(M) \rightarrow G$ such that \begin{mylist} \item[(i)] $\alpha(\sigma_0(M)) = 1_G$, and \item[(ii)] for all $x,y \in A^\dagger$ and $\tau \in \mathcal{S}(M)$ such that $\tau ^{(x,y)}$ is defined, we have $\alpha(\tau ^{(x,y)}) = x^{-1}\alpha(\tau)y$. \end{mylist} We shall assume that all states $\tau$ in a word-difference automaton $M$ are accessible; that is, there exist words $u,v$ in $A\uast$ such that $\sigma _0(M)^{(u,v)^\dagger} = \tau$. It follows from properties (i) and (ii) that $\alpha(\tau) = \overline {u^{-1}v}$, and so the map $\alpha$ is determined by the transitions of $M$. Conversely, given a subset $\mathcal{D}$ of $G$ containing $1_G$, we can construct a word-difference automaton $D$ with $\mathcal{S}(D) = \mathcal{D}$, $\sigma_0(D)=1_G$ and, for $d,e \in \mathcal{D}$ and $x,y \in A^\dagger$ a transition $d \rightarrow e$ with label $(x,y)$ whenever $x^{-1}dy =_G e$. The map $\alpha$ is the identity map. We call this the word-difference automaton associated with $\mathcal{D}$. (We have chosen not to specify the set $\mathcal{A}(D)$ of accepting states of $D$, because this may depend on the context.) Having constructed the automaton, we throw away the elements of $\mathcal{D}$ which are not accessible from the initial state $\sigma_0(D)$. If $u,v \in A\uast$, then we call the set $\mathcal{D} = \{\overline {u(i)}^{-1}\overline {v(i)} \myvert i \in \Z, i \geq 0 \}$ the set of word-differences arising from $(u,v)$. Then $(u,v)$ is in the language of the associated word-difference machine $D$ provided that $\overline {u}^{-1}\overline {v} \in \mathcal{A}(D)$. The group $G$ is said to be automatic (with respect to $X$), if it has an automatic structure. This consists of a collection of finite state automata. The first of these, denoted by $W$, is called the {\it word-acceptor}. It has input alphabet $A$, and accepts at least one word in $A$ mapping onto each $g \in G$. The remaining automata $M_x$, are called the {\it multipliers}. There is one of these for each generator $x \in A$, and also one for $x = 1_G$. These are two-variable, and accept ($w_1,w_2)^\dagger$ for $w_1, w_2 \in A\uast$, if and only if $w_1, w_2 \in L(W)$ and $w_1x =_G w_2$. See \cite{ECHLPT} for an exposition of the basic properties of automatic groups. It is proved in Theorem 2.3.4 of that book that there is a natural construction of the multipliers $M_x$ of an automatic structure as word-difference machines. Now we fix a total order on the alphabet $A$. The automatic structure is called {\it short-lex} if the language $L(W)$ of the word-acceptor consists of the short-lex least representatives of each element $g \in G$; that is the lexicographically least among the shortest words in $A\uast$ that map onto $g$. The existence of such a structure for a given group $G$ depends in general on the generating set $X$ of $G$, but word-hyperbolic groups are known to be short-lex automatic for any choice of generators. (This is Theorem 3.4.5 of \cite{ECHLPT}.) 
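Since much of what follows depends on the short-lex ordering, it may help to make it completely explicit. The following Python fragment is again only an illustrative sketch (the representation of the ordered alphabet as a list is a choice made for the example):

\begin{verbatim}
def shortlex_less(u, v, alphabet):
    """True if u precedes v in the short-lex order determined by the
    given total order on the alphabet (a list of letters)."""
    if len(u) != len(v):
        return len(u) < len(v)          # shorter words come first
    rank = {x: i for i, x in enumerate(alphabet)}
    return [rank[x] for x in u] < [rank[x] for x in v]   # then lexicographic
\end{verbatim}

The short-lex least representative of $g \in G$ is then the unique word mapping onto $g$ that is minimal with respect to this order.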
The group $G$ is called {\it strongly geodesically} automatic with respect to $X$ if there is an automatic structure in which $L(W)$ is the set of all geodesic words from $1_G$ to $g$ for $g \in G$. It is proved in Corollary 2.3 of \cite{PAP} that this is the case if and only if $G$ is word-hyperbolic with respect to $X$ (from which it follows that this property is independent of $X$). This result will be the basis of our test for verification of word-hyperbolicity in Section~\ref{verifying}; the procedure that we describe will verify strong geodesic automaticity. We shall assume throughout the paper that our group $G = \langle X \rangle$ is short-lex automatic with respect to $X$, and that we have already computed the corresponding short-lex automatic structure $\{W, M_x \myvert x \in A \cup \{1_G\} \}$. We assume also that the set $\mathcal{D}_M$ of all word-differences that arise in the multipliers $M_x$ together with the associated word-difference machine $D_M$ in which $1_G$ is the unique accepting state has been computed. These automata can be used to reduce (in quadratic time) words $u \in A\uast$ to their short-lex equivalent word in $G$ and so, in particular, we can solve the word problem efficiently in $G$. The above computations can all be carried out using the {\sc KBMAG} package \cite{KBMAG}. \section{Verifying hyperbolicity} \label{verifying} Papasoglu (\cite{PAP}) has shown that a necessary and sufficient condition for a group to be word-hyperbolic is as follows. Let $\Gamma$ be the Cayley graph with respect to some set of generators. The condition is that there is a number $c_P$, such that, for any two geodesic paths $u,v:[0,\ell]\rightarrow \Gamma$ parametrised by arclength, if $u(0)=v(0)$ and $u(\ell) = v(\ell)$, then, for all $t$ satisfying $0\le t \le \ell$, $d_\Gamma(u(t),v(t)) \le c_P$. The least possible value of $c_P$ is called {\it Papasoglu's constant}. Such a configuration of $u$ and $v$ is called a {\it geodesic bigon}. In order to know that all geodesic bigons have uniformly bounded width, there is no loss of generality in restricting to the case where $u(0)=v(0) $ is a vertex of the Cayley graph. We can also restrict to the case where $u(\ell)=v(\ell)$ is either a vertex of the Cayley graph or the midpoint of an edge. It is unknown whether the uniform thinness of such geodesic bigons follows from the uniform thinness of the more special geodesic bigons with both ends vertices. Our algorithm not only verifies word-hyperbolicity for a given group, but also gives a precise computation of the smallest possible value of Papasoglu's constant. In fact, it gives even more precise information, namely the set of word-differences $\overline{u(i)}^{-1}\overline{v(i)}$, where $u$ and $v$ vary over all geodesic bigons with $u(0) = v(0)$ a vertex and $i$ varies over all positive integers. In all the examples we have looked at, we have observed that the number of such group elements is very much smaller than the number of group elements of length at most $c_P$. This is important in practical computations. The algorithm for verifying word-hyperbolicity proceeds by constructing sequences $WD_n$, $ GE_n$, $GW_n$, $T_n$, for $(n > 0)$, of finite state automata. (These letters stand respectively for `word-difference', `geodesic-equality', `geodesic word acceptor' and `test'.) In general, for $n>0$, $\mathcal{WD}_n$ will be a set of elements of $G$ containing $\{1_G\}$, and $WD_n$ will be the associated word-difference machine in which $1_G$ is the only accepting state. 
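To illustrate how the word-difference machine associated with a set of word-differences might be built in practice, here is a rough Python sketch. It simply follows the definition given in Section~\ref{notation}, represents group elements by canonical (reduced) words, and assumes procedures \verb|reduce_word| and \verb|inverse| for word reduction and formal inversion; these names are placeholders rather than the interface of any existing package:

\begin{verbatim}
def associated_difference_machine(D, A, reduce_word, inverse):
    """Accessible part of the word-difference automaton associated with D.

    D is a set of reduced words (containing the empty word, i.e. 1_G);
    A is the generating alphabet.  Returns the transition table
    {(d, (x, y)): e} with x, y ranging over A together with the padding
    symbol '$', which is treated as the empty word."""
    letters = list(A) + ['$']
    identity = reduce_word('')
    transitions = {}
    reached, queue = {identity}, [identity]
    while queue:
        d = queue.pop()
        for x in letters:
            for y in letters:
                xs = '' if x == '$' else inverse(x)
                ys = '' if y == '$' else y
                e = reduce_word(xs + d + ys)
                if e in D:
                    transitions[(d, (x, y))] = e
                    if e not in reached:
                        reached.add(e)
                        queue.append(e)
    return transitions
\end{verbatim}

Which states are taken to be accepting depends on the context; for the machines $WD_n$ below, only the identity is accepting.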
We define $\mathcal{WD}_1$ to be the set $\mathcal{D}_M$ defined at the end of Section~\ref{notation}, and let $W$ be the short-lex word-acceptor. Then, when $n \geq 1$, we define $GE_n$, $GW_n$ and $T_n$ as follows. We define the language $$L(GE_n) = \{(u,v) \in A\uast \times A\uast \myvert (u,v) \in L(WD_n), v \in L(W), l(u)=l(v)\}.$$ Recall that all elements of $L(W)$ are geodesics and that the only accept state of $WD_n$ is $1_G$. It follows that in the previous definition, $\overline u = \overline v$ and that $u$ and $v$ are both geodesics. Now we define the language $$L(GW_n) = \{u \in A\uast \myvert \exists v \in A\uast \mycolon (u,v) \in L(GE_n) \}.$$ Again, $u$ must be a geodesic. Then we define the language \begin{eqnarray*} \lefteqn{L(T_n) = \{w \in A\uast \setminus L(GW_n) \myvert}\\ &&\exists u\mycolon (w,u) \in L(WD_n), u \in L(GW_n), l(u)=l(w)\}. \end{eqnarray*} Again, $u$ and $w$ are both geodesics in the previous definition. \myfig{verify}{This illustrates the geodesic paths $u$, $v$ and $w$ described in Section~\ref{verifying}.} If $L(T_n)$ is empty for some $n$, then the procedure halts. Otherwise, we find a geodesic word $w \in L(T_n)$, reduce it to its short-lex least representative $v$, and define $\mathcal{WD}_{n+1}$ to be the union of $\mathcal{WD}_n$ and the set of word-differences arising from $(w,v)$. Then we can define the automaton $WD_{n+1}$ and construct the other automata for the next value of $n$. \begin{theorem} The above procedure halts if and only if $G$ is strongly geodesically automatic with respect to $X$. \end{theorem} \begin{proof} First note that, if $L(T_n)$ is non-empty for some $n$, and contains the word $w$ reducing to $v \in L(W)$, then the word-differences arising from $(w,v)$ cannot all lie in $\mathcal{WD}_n$. For otherwise we would have $(w,v) \in L(WD_n)$, and hence $(w,v) \in L(GE_n)$ and $w \in L(GW_n)$. But $w$ has been chosen so that it is not in $L(GW_n)$. Hence $\mathcal{WD}_{n+1}$ strictly contains $\mathcal{WD}_n$. Thus, if the procedure does not halt, then the number of word-differences arising from pairs of geodesics $(u,v)$ with $u =_G v$ cannot be finite, and so by Theorem 2.3.5 of \cite{ECHLPT} $G$ cannot be strongly geodesically automatic. Conversely, suppose that the procedure does halt, and that $L(T_n)$ is empty for some $n \geq 1$. We claim that $L(GW_n)$ is equal to the set of all geodesic words. If we can show this, then it would follow from the definition that $L(GE_n)$ is equal to the set of pairs of geodesic words $(u,v)$ such that $v \in L(W)$ and $u =_G v$. But then, if $u_1,u_2 \in A\uast$ are geodesics with $l(\overline{u_1}^{-1}\overline{u_2}) \leq 1$ and $u_1,u_2$ reduce to $v_1,v_2 \in L(W)$, then $(u_1,v_1), (u_2,v_2) \in L(WD_n)$ whereas $(v_1,v_2) \in L(D_M)$. This would show that the word-differences arising from $(u_1,u_2)$ have bounded length, and so $G$ is strongly geodesically automatic by Theorem 2.3.5 of \cite{ECHLPT}. Thus it suffices to establish the claim that $L(GW_n)$ is equal to the set of all geodesic words. Suppose that the claim is false, and let $u$ be a minimal length geodesic word not contained in $L(GW_n)$. Since $L(GW_n)$ contains the empty word, $u$ is non-empty, so $u=wx$ for some word $w$ and some $x \in A$. Then $w$ is a geodesic word and $l(w) = l(u)-1$. Suppose that $w$ reduces to $v \in L(W)$. Then $w \in L(GW_n)$ and $(w,v) \in L(GE_n)$. So $(w,v) \in L(WD_n)$. Therefore $(u,vx) = (wx,vx) \in L(WD_n)$.
If $vx \in L(W)$, then by definition of $GE_n$ we get $(u,vx) \in L(GE_n)$, and so $u \in L(GW_n)$, a contradiction. On the other hand, if $vx \notin L(W)$ and $vx$ reduces to $v' \in L(W)$, then $(v,v') \in L(M_x)$. From the definition of $WD_1$ we have $(vx,v') \in L(GE_1)$, which implies $vx \in L(GW_r)$ for all $r \geq 1$. But then $u \in L(T_n)$, contrary to assumption. \end{proof} The above procedure is very similar to that described in \cite{WAK}. The principal difference is that our definition of the test-machines $T_n$ is rather simpler. Furthermore, in our implementation, we do not construct the non-deterministic automaton resulting from the quantification over the second variable in the definition of $L(T_n)$. Instead, we construct the two-variable automaton with language $$ \{(w,u) \myvert \ (w,u) \in L(WD_n),\ u \in L(GW_n), \ l(u)=l(w)\},$$ and check during the course of the construction whether there are any words $w \notin L(GW_n)$ that arise. If the construction completes and we find no such $w$, then $L(T_n)$ is empty. Otherwise we abort after finding some fixed number of words (such as 500) $w \notin L(GW_n)$ and use all of these words with their short-lex reductions to generate new word-differences. So the procedure stops if and only if $G$ is word-hyperbolic. If it stops at the $n$-th stage, then $\mathcal{WD}_n$ is a finite set of word-differences which gives the best possible value of Papasoglu's constant for the particular choice of generators. More precisely, the procedures described above find the set of all word-differences for all pairs of geodesics $(w,u)$ which start at the same vertex of the Cayley graph and end at vertices at distance at most one apart, and where the word $u$ is short-lex minimal. But by using a standard composite operation on two-variable automata as defined in~\cite{HOLT} for example, we can easily compute from this the word-differences for general geodesic bigons in which at least one of the vertices is a vertex of the Cayley graph. Moreover, we have constructed an automaton whose language is the set of all geodesics in the Cayley graph that start and end at vertices of the Cayley graph. \section{Finding the constant of hyperbolicity} \label{finding} Throughout this section, we assume that $G = \langle X \rangle$ is a word-hyperbolic group and that $\delta>0$ is a constant such that all geodesic triangles in $\Gamma_X(G)$ are $\delta$-thin. The aim is to devise a practical algorithm to find such a $\delta$, which should of course be as small as possible. As before, we assume that we have already calculated the short-lex automatic structure for $G$ with respect to $X$. \subsection{The reverse of a finite state automaton} \label {reverse} Our procedure makes use of reversed automata, and so we start with a brief discussion of this topic. If $w \in A\uast$ is a word, then we denote the reversed word by $w^R$. Let $M$ be a finite state automaton with alphabet $A$. We want to form the reversed automaton $M^R$ with language $\{w^R \myvert w \in L(M)\}$. We can define a non-deterministic version $NM^R$ of $M^R$, simply by reversing the arrows of all transitions, and interchanging the sets of initial and accepting states. 
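A rough Python sketch of this reversal, combined with the determinisation by the accessible-subset construction that is described in the next paragraph, might look as follows (the representation of an automaton by a transition dictionary is a choice made for the example and is not meant to reflect any existing implementation):

\begin{verbatim}
from collections import deque

def reversed_automaton(transitions, initial, accepts, alphabet):
    """Reverse a deterministic automaton and determinise the result.

    transitions maps (state, letter) to a state.  Each state of the
    reversed machine is a frozenset of states of the original; its
    initial state is the set of accepting states of the original, and
    a state is accepting when it contains the original initial state."""
    # preimages[x][t] = set of states with an x-transition into t
    preimages = {x: {} for x in alphabet}
    for (s, x), t in transitions.items():
        preimages[x].setdefault(t, set()).add(s)
    start = frozenset(accepts)
    delta, reached, queue = {}, {start}, deque([start])
    while queue:
        T = queue.popleft()
        for x in alphabet:
            U = frozenset(s for t in T for s in preimages[x].get(t, ()))
            delta[(T, x)] = U
            if U not in reached:
                reached.add(U)
                queue.append(U)
    return start, delta, {T for T in reached if initial in T}
\end{verbatim}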
Then we can build a deterministic machine $M^R$ with the same language in the standard way, by replacing the set $\Sigma = \mathcal{S}(NM^R)$ of states of $NM^R$ with its power set $\mathcal{P}(\Sigma)$, and, for ${\mathrm T},\Upsilon \in \mathcal{P}(\Sigma)$, defining a transition in $M^R$ with a label $x$ from ${\mathrm T}$ to $\Upsilon$, if $\Upsilon$ is the set of all states $\upsilon$ of $M$ from which there exists an arrow labelled $x$ in $M$ with target some $\tau\in {\mathrm T}$. If we think of $x$ as a partial map $p_x$ from $\mathcal S (M)$ to itself, then the existence of an arrow in $M^R$ from ${\mathrm T}$ to $\Upsilon$ is equivalent to saying that $\Upsilon$ is the full inverse image of ${\mathrm T}$ in $\mathcal S (M)$ under $p_x$. The initial state of $M^R$ is the set of all accepting states of $M$ and a state of $M^R$ is accepting whenever it contains an initial state of $M$. In practice, we do not need to use the complete power set $\mathcal{P}(\Sigma)$. We start with the set of accepting states of $M$ as initial state of $M^R$, and then construct the accessible states and transitions of $M^R$ as the orbit of the initial state under the action of $A\uast$. Let $\sigma_0(M)$ and $\sigma_0(M^R)$ be the initial states of $M$ and $M^R$ respectively. Suppose that $v,w \in A\uast$, $\sigma_0(M)^v = \tau$, and $\sigma_0(M^R)^w = {\mathrm T} \in \mathcal{P}(\Sigma)$. Then, from the construction above, we see that $\tau \in {\mathrm T}$ if and only if $vw^R \in L(M)$. We shall need this property below, and when we compute the reverse of an automaton we need to remember the subsets of $\Sigma$ that define the states of $M^R$. (This means that we cannot minimise $M^R$, but in practice this does not appear to be a problem because, at least for word-acceptors of automatic groups, $M^R$ does not seem to have many more states than $M$.) \subsection{Reduction to short-lex geodesic triangles} The reader should now recall the notation for $\delta$-thin hyperbolic triangles with vertices $a,b,c$ and edges $u,v,w$ in the Cayley graph $\Gamma = \Gamma_X(G)$ defined in Section~\ref{notation}. \myfig{triangle}{A triangle with short-lex sides, annotated as in the text.} We shall call a geodesic triangle in $\Gamma$ {\it short-lex geodesic}, if its vertices are vertices of $\Gamma$ and if the words in $A\uast$ corresponding to the edges of the triangle (which we shall also denote by $u,v,w$) all lie in $L(W)$; that is, they are all short-lex minimal words. It is important to work with short-lex triangles, because in general there are far more geodesic triangles and consideration of all of these is likely to make an already difficult computational problem impossible. Our algorithms are designed to compute the minimal $\delta \in \mathbb N$ such that all short-lex geodesic triangles are $\delta$-thin. In fact they do considerably more than this, because they compute the set of word-differences that arise from the word-pairs $(u,v)$, where $u$ and $v$ are the words from the two sides of such a triangle that go from a vertex of the triangle as far as the {meeting point\xspace}s on the two sides. From the following proposition, we derive a bound on the value of the thinness constant of hyperbolicity for arbitrary geodesic triangles. As before, let $\mathcal{D}_M$ be the set of word-differences associated with the multiplier automata $M_x$ in the short-lex automatic structure of $G$, and let $\gamma$ be the length of the longest element in $\mathcal{D}_M$.
Let $\gamma'$ be the length of the longest element in the final stable set $\mathcal{WD}_n$ of word-differences defined in Section~\ref{verifying}. Although the bound in the next proposition will in general be too large, the fact that $\gamma$ and $\gamma'$ are usually smaller than $\delta$ means that it is likely only to be wrong by a small constant factor. \begin{proposition}\label{short-lex thin implies thin} Suppose that all short-lex geodesic triangles in $\Gamma$ are $\delta$-thin. Then all geodesic triangles in $\Gamma$ are $(\delta+2(\gamma+\gamma')+3)$-thin. \end{proposition} \begin{proof} Let $\Delta$ be any geodesic triangle in $\Gamma$. We fix a direction around the triangle which we call {\it clockwise}. We define a new triangle $\Delta'$, which is equal to $\Delta$ except that its vertices are (if necessary) moved clockwise around $\Delta$ to the nearest vertex of $\Gamma$. Thus each vertex is moved through a distance less than 1, and it follows easily that the {meeting point\xspace}s of the triangle move through a distance less than 2. It is not difficult to see that if $p$ and $q$ are $\Delta$-companions (see Section~\ref{notation} for definition), then there exist $\Delta'$-companions $p'$ and $q'$ with $\partial(p,p') + \partial(q,q') < 2$. (A careful argument is needed if $p$ and $q$ are near {meeting point\xspace}s.) It follows that, if $\Delta'$ is $\delta'$-thin for some $\delta'$, then $\Delta$ is ($\delta'+2$)-thin. Let $a,b,c$ be the vertices of $\Delta'$ in clockwise order. Let $a'$ be the vertex on the union of the three sides of $\Delta'$, adjacent to $a$ in an anticlockwise direction, and similarly for $b'$ and $c'$. (So the vertices of $\Delta$ lie on the edges $(a',a), (b',b)$ and $(c',c)$.) We will assume that the six vertices $\{a,a',b,b',c,c'\}$ are distinct, and leave to the reader the minor modifications needed for the cases where two or more of them coincide. Let $u$, $v$ and $w$ be the paths from $b$ to $c$, $c$ to $a$ and $a$ to $b$, respectively, defining short-lex reduced words $u$, $v$ and $w$ in $L(W)$, and let $\Delta''$ be the resulting short-lex geodesic triangle with vertices $a$, $b$ and $c$. The triangle $\Delta'$ is not necessarily geodesic, but the paths $u',v',w'$ on $\Delta'$ from $b$ to $c'$, $c$ to $a'$ and $a$ to $b'$, respectively, are geodesic, because they are subpaths of the sides of $\Delta$. Let $u''$, $v''$ and $w''$ be the paths from $b$ to $c'$, $c$ to $a'$ and $a$ to $b'$, respectively, defining short-lex reduced words in $L(W)$. The situation is illustrated in Figure~\ref{sltriangle}. \myfig{sltriangle}{This diagram illustrates the situation described in the proof of Proposition~\ref{short-lex thin implies thin}. } Then $u$ and $u''$ are words in $L(W)$ that have the same starting point in $\Gamma$ and end a distance one apart. So the word-differences arising from $(u,u'')$ lie in $\mathcal{D}_M$. Since the paths $u'$ and $u''$ have the same starting and end points, $u'$ is a geodesic and $u'' \in L(W)$, the word-differences arising from $(u',u'')$ lie in $\mathcal{WD}_n$. It follows that the word-differences arising from $(u,u')$ have length at most $\gamma + \gamma'$, and similarly for $(v,v')$ and $(w,w')$. \myfig{corner}{These diagrams illustrate the origin of $\delta + 2(\gamma+\gamma')+1$ in the expression bounding $\partial(p',q')$. In the diagram on the left, $l(ca) = l(ca') + 1$, and in the diagram on the right, $l(ca) = l(ca')$.
The shapes of the curves are due to the fact that we are sometimes looking at points which are equidistant from $a$ and sometimes at points which are equidistant from $c$. Interested readers are left to work out the details for themselves. } The triangles $\Delta'$ and $\Delta''$ have the same vertices $a$, $b$ and $c$. Each side of $\Delta'$ is at least as long as the corresponding side of $\Delta''$, but no more than one unit longer---this can be deduced from the fact that the sides of the original triangle $\Delta$ are geodesic. It follows that the distance from a vertex of $\Delta'$ to its two adjacent $\Delta'$-{meeting point\xspace}s is within one unit of its distance to its two adjacent $\Delta''$-{meeting point\xspace}s. We then see that, if all pairs $p,q$ of $\Delta''$-companions satisfy $\partial(p,q) \leq \delta$, then all pairs $p',q'$ of $\Delta'$-companions satisfy $\partial(p',q') \leq \delta + 2(\gamma+\gamma')+1$. The result now follows. Part of the argument is illustrated in Figure~\ref{corner} \end{proof} \subsection{The automata $FRD$ and $FRD^3$ } In this section, we describe a finite state automaton which we shall call $FRD$, which stands for `forward, reverse, difference'. Roughly speaking, it is a two-variable machine which reads the two sides emerging from a vertex of a short-lex geodesic triangle as far as the {meeting point\xspace}s on those two sides. We also describe an associated automaton $FRD^3$ which consists of three copies of $FRD$. The three pairs of words read by $FRD^3$ will be accepted when they are the three pairs of edges emerging from the three vertices of some short-lex geodesic triangle, ending and meeting at the {meeting point\xspace}s of the triangle. \myfig{intv}{This diagram shows the meeting vertices\xspace, denoted by $d$, $e$ and $f$. Companions have been joined by a line. The diagram on the left shows the situation where the perimeter of the triangle is even and the diagram on the right where the perimeter is odd.} When the perimeter $l(u)+l(v)+l(w)$ of a short-lex geodesic triangle is even, the {meeting point\xspace}s $d,e,f$ are vertices of $\Gamma$ which lie on $u=bc$, $v=ca$ and $w=ab$ respectively. When the perimeter is odd, however, each {meeting point\xspace} lies in the middle of an edge of $\Gamma$, which is not convenient for us. We therefore move them to a neighbouring vertex and re-define $d,e,f$ to be the points on $u,v,w$ at distance $\rho(b)+1/2$, $\rho(c)+1/2$ and $\rho(a)+1/2$ from $b,c$ and $a$, respectively, and call $d,e,f$ the {\it meeting vertices\xspace}. Each of the three vertices of the triangle has two incident sides, one in the clockwise direction and the other in the anti-clockwise direction, starting from the vertex. In these terms, if we start at a vertex of the triangle and move away from this vertex along the two sides of the triangle emerging from it, one edge at a time, then we have to move one extra edge along the clockwise side than the anti-clockwise side to reach the meeting vertices\xspace on the two sides. Let $W$ be the word acceptor for the short-lex automatic structure of $G$. We assume that the reverse $W^R$ of $W$ has been computed, as described in~\ref{reverse}. For a short-lex geodesic triangle with meeting vertices\xspace $d$, $e$ and $f$ defined as above, we denote the elements of $G$ corresponding to paths from $d$ to $f$, $e$ to $d$, and $f$ to $e$, by $r$, $s$ and $t$, respectively. This is illustrated in Figure~\ref{intv}. 
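As a small worked example of these definitions (with side lengths chosen purely for illustration), suppose that $l(u)=3$ and $l(v)=l(w)=4$, so that the perimeter is $11$ and hence odd. Then $\rho(a)=5/2$ and $\rho(b)=\rho(c)=3/2$, so the {meeting point\xspace}s lie in the middles of edges, and the meeting vertices\xspace $d$, $e$ and $f$ are the points at distance $\rho(b)+1/2=2$ from $b$ along $u$, at distance $\rho(c)+1/2=2$ from $c$ along $v$, and at distance $\rho(a)+1/2=3$ from $a$ along $w$; each of these is a vertex of $\Gamma$.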
Let $\mathcal{D}_1$ be the set of elements $r$ of $G$ that arise by considering all such triangles. (By symmetry, this set includes all of the elements $s$ and $t$ as well.) Let $\mathcal{D}_2$ denote the set of all elements of $G$ of the form $$\{\overline {w(i)}^{-1}\overline {v^R(i)} \myvert i \in \Z, 0 \leq i \leq \rho(a)\},$$ for all triangles under consideration; that is, the set of word-differences arising from reading the two edges of the triangle from $a$ up to the {meeting point\xspace}s. (For triangles with even perimeter, the elements $r,s,t$ lie in both $\mathcal{D}_1$ and $\mathcal{D}_2$.) Let $\mathcal{D}_T = \mathcal{D}_1 \cup \mathcal{D}_2$. Then our assumption that geodesic triangles are $\delta$-thin implies that $\mathcal{D}_T$ is finite, and that its elements have length at most $\delta + 1$. The automaton $FRD$ is defined as follows. Its states are triples $(\sigma,\Sigma,g)$ with $\sigma \in \mathcal{S}(W)$, $\Sigma \in \mathcal{S}(W^R)$ and $g \in \mathcal{D}_T$. (This explains the name $FRD$.) For $x \in A$ and $y \in A^\dagger$, a transition is defined with label $(x,y)$ from $(\sigma,\Sigma,g)$ to $(\sigma',\Sigma',g')$ if and only if $\sigma^x = \sigma'$, $\Sigma^y = \Sigma'$ and $x^{-1}gy =_G g'$. (We allow $y$ but not $x$ to be the padding symbol because, in the case of triangles with odd perimeter, we need to read one more generator from the left-hand edge than from the right-hand edge to get to the meeting vertex\xspace. We define $\Sigma^\$ = \Sigma$ for all $\Sigma$.) The initial state is $(\sigma_0(W),\sigma_0(W^R),1_G)$. The accepting states of $FRD$ are those with the third component $g \in \mathcal{D}_1$. The above description is not quite correct. In fact, each state has a fourth component which is either 0 or 1, and is 1 only when a pair $(x,\$)$ has been read. There are no transitions from such states. We also need to consider an automaton $FRD^3$, which consists of the product of three independent copies $FRD_a$, $FRD_b$ and $FRD_c$ of $FRD$. Its input consists of sextuples of words $(u_a,v_a,u_b,v_b,u_c,v_c) \in (A\uast)^6$, where $(u_a,v_a)$, $(u_b,v_b)$ and $(u_c,v_c)$ are input to the three copies of $FRD$. A state of $FRD^3$ consists of a triple $(\tau_a,\tau_b,\tau_c)$, where $\tau_a = (\sigma_a,\Sigma_a,g_a)$ is a state of $FRD_a$, etc. The initial state of $FRD^3$ is the triple consisting of the initial states of the three copies. To specify the set $\mathcal{A}(FRD^3)$ of accepting states, we recall that a state $\Sigma$ of $W^R$ is a subset of the set $\mathcal{S}(W)$ of states of $W$. We have $(\tau_a,\tau_b,\tau_c) \in \mathcal{A}(FRD^3)$ if and only if $\sigma_a \in \Sigma_b$, $\sigma_b \in \Sigma_c$, $\sigma_c \in \Sigma_a$, and $g_cg_bg_a = 1_G$. The proof of the following lemma should now be clear. \begin{lemma} The sextuple $(u_a,v_a,u_b,v_b,u_c,v_c)$ is accepted by $FRD^3$ if and only if $u_bv_c^R$, $u_cv_a^R$ and $u_av_b^R$ all lie in $L(W)$ and form the three sides of a short-lex geodesic triangle in $\Gamma$. \end{lemma} \subsection{Proving correctness of $FRD$} If we can construct $FRD$, then we can compute the value of the hyperbolic thinness constant $\delta$ for short-lex geodesic triangles as the maximum of the lengths of the elements of $\mathcal{D}_T$. (Or more exactly, this number plus 1, because of our re-definition of the {meeting point\xspace}s of the triangles.) Conversely, we have already computed $W$ and $W^R$, so the construction of $FRD$ only requires us to find the set $\mathcal{D}_T$.
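Before turning to the correctness test, here is a rough Python sketch of a single transition of $FRD$ as just defined (the transition tables of $W$ and $W^R$, the set $\mathcal{D}_T$ represented by reduced words, and the reduction and inversion procedures are all assumed to be available; the names are placeholders of our own choosing):

\begin{verbatim}
def frd_step(state, x, y, W_delta, WR_delta, D_T, reduce_word, inverse):
    """One transition of FRD on the label (x, y), or None if undefined.

    state = (sigma, Sigma, g, padded) with sigma a state of W, Sigma a
    state of W^R, g a reduced word-difference in D_T, and padded a flag
    recording whether a pair (x, '$') has already been read."""
    sigma, Sigma, g, padded = state
    if padded or x == '$':
        return None                       # no transitions after (x, $)
    new_sigma = W_delta.get((sigma, x))                            # sigma^x
    new_Sigma = Sigma if y == '$' else WR_delta.get((Sigma, y))    # Sigma^y
    new_g = reduce_word(inverse(x) + g + ('' if y == '$' else y))  # x^-1 g y
    if new_sigma is None or new_Sigma is None or new_g not in D_T:
        return None
    return (new_sigma, new_Sigma, new_g, y == '$')
\end{verbatim}

A state is accepting when its word-difference component lies in $\mathcal{D}_1$.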
The idea is to construct a candidate for $FRD$, and then to verify that it is correct. In one of our short-lex geodesic triangles, two of the edge-words $u,v$ can be chosen as arbitrary elements of $L(W)$, and the third is then determined as the representative in $L(W)$ of $(\overline {uv})^{-1}$. So we can proceed by choosing a large number of random pairs of words in $u,v \in L(W)$ (for example we might choose 10000 such pairs of length up to 50), computing $w$ as described, and then computing the set $\mathcal{D}_T$ of word-differences arising from all of these triangles. We do this several times until the set $\mathcal{D}_T$ appears to have stopped growing in size. (We are assuming that $G$ is word-hyperbolic, so we know that the correct set $\mathcal{D}_T$ is finite.) We then proceed to the verification stage, which is computationally the most intensive. The idea is to compute a two-variable finite state automaton $GP$ (geodesic pairs) of which the language is the subset of $A\uast \times A\uast$ defined by the expression \begin{eqnarray*} \lefteqn{\{(w_1,w_2)^\dagger \in A\uast \times A\uast \myvert}\\ &&\exists (u_a,v_a,u_b,v_b,u_c,v_c) \in L(FRD^3) \mycolon w_1 = u_av_b^R, w_2 = v_au_c^R \}. \end{eqnarray*} Then $GP$ accepts the set of pairs of sides $(w,v^R)$ emerging from the vertex $a$ in the triangles that are accepted by the current version of $FRD^3$. (During the course of the program, $FRD$ changes as more information is incorporated into it.) Thus $FRD$ is correct if and only if $L(GP) = L(W) \times L(W^R)$. Since checking for equality of the languages of minimised deterministic automata is easy, we can perform this check provided that we can construct $GP$. Furthermore, if the check fails then our definition of $GP$ ensures that $L(GP) \subset L(W) \times L(W^R)$. So we can find one or more specific words $ (w_1,w_2) \in L(W) \times L(W^R) \setminus L(GP)$ and then compute the word-differences arising from the short-lex geodesic triangle having $w_1$ and $w_2^R$ as two of its sides. We can then adjoin these to $\mathcal{D}_T$ and return to the construction of $FRD$. The construction of $GP$ can be carried out in principle, but because of the large number of quantified variables involved in the above expression, a naive implementation would be hopelessly expensive in memory usage. We now give a second and more detailed version of our implementation of this construction, in a way that makes the computation easier to carry out. It remains heavy in its memory usage, but it does at least work for easy examples. The basic objects collected during the course of the computation are the word-differences $\mathcal{D}_T$ referred to above, which are used in constructing $FRD$, and the triples $(r,s,t)$ of small triangles in the Cayley graph whose vertices are the meeting vertices\xspace of some short-lex geodesic triangles. The situation is shown in Figure~\ref{intv}. The general idea is to map out a short-lex geodesic triangle by advancing from one vertex, say $a$, using $FRD$. Then, at a certain moment, the machine `does the splits', with a non-deterministic jump corresponding to one of the small $(r,s,t)$-triangles. To complete the triangle, the two legs have to be followed to the other vertices $b$ and $c$, respectively, of the large triangle. Each of the legs follows the reverse of $FRD$. Here is a third, even more explicit, version of the construction, which can be skipped by readers who are only interested in the conceptual description of the program. 
We assume that we have constructed a candidate for $FRD$ explicitly. We do not construct $FRD^3$ explicitly, but we do make a list of all triples $(r,s,t)$ such that $r,s,t \in \mathcal{S}(FRD)$ and $(r,s,t) \in \mathcal{A}(FRD^3)$. Having done that, we can forget the structure of the states of $FRD$ as triples, and simply manipulate them as integers. We also need a version of an automaton $FRD^R$ that accepts the reverse language of $FRD$. In this case it is convenient to work with a partially non-deterministic version---that is, it is deterministic except that there are many initial states, not just one. The states are subsets of $\mathcal{S}(FRD)$ as described in \ref{reverse} and the transitions and accepting states are also as described there. But instead of having a unique initial state, for each accepting state $\tau$ of $FRD$ we make the singleton subset $\{\tau\}$ into an initial state of $FRD^R$. Note also that if $(u,v)^\dagger \in L(FRD)$ with $l(u)=l(v)+1$, then the reversed pair accepted by $FRD^R$ is $(u^R,\$v^R)$; that is, the padding symbol comes at the beginning rather than the end. In general, let us denote the word-pair formed by inserting the padding symbol at the beginning by $^\dagger(u,v)$. We shall now describe a non-deterministic version $NGP$ of $GP$ (geodesic pairs). Subsequent to its construction, it can be determinised using the usual subset construction, minimised, and its language compared with $L(W) \times L(W^R)$. The precise description of $NGP$ is rather technical, so we shall first attempt to explain its operation. A (padded) pair of words $(w_1,w_2)^\dagger$ is to be accepted if and only if it satisfies the expression above; that is, $(w_1,w_2) = (u_av_b^R,v_au_c^R)$, where $(u_a,v_a)^\dagger \in L(FRD)$, and there exist words $u_b,v_c \in A\uast$ with $(u_b,v_b)^\dagger, (u_c,v_c)^\dagger \in L(FRD)$ and $(u_a,v_a,u_b,v_b,u_c,v_c) \in L(FRD^3)$. Equivalently, (writing the reverses of $u_b$, $u_c$, $v_b$ and $v_c$ by capitalising $u$ and $v$), $(w_1,w_2)^\dagger$ is accepted if and only if $(w_1,w_2) = (u_aV_b,v_aU_c)$ where $(u_a,v_a)^\dagger \in L(FRD)$, and there exist words $U_b$, $V_c \in A\uast$ with $^\dagger(U_b,V_b)$, $^\dagger(U_c,V_c) \in L(FRD^R)$ and $$(u_a,v_a,U_b^R,V_b^R,U_c^R,V_c^R) \in L(FRD^3).$$ \myfig{complex}{This shows the path in the automaton, broken into a first part which is in the automaton $FRD$ and a second part which is in two copies of the automaton $FRD^R$.} So the accepting path of $(w_1,w_2)$ through $NGP$ will be in two parts, the first $(u_a,v_a)$ and the second $(V_b,U_c)$. A picture of this is shown in Figure~\ref{complex}. Furthermore, we have either $l(u_a) = l(v_a)$ or $l(u_a) = l(v_a)+1$, depending on whether the perimeter of the geodesic triangle which has $w_1$ and $w_2$ as two of its sides is even or odd. In the first case, it is possible for $(V_b,U_c)$ to be empty, which occurs when the vertices $b$ and $c$ of the geodesic triangle coincide. The first part of the path through $NGP$ is simply a path through $FRD$, ending at a state $\sigma \in \mathcal{A}(FRD)$. In Figure~\ref{complex}, an intermediate state is also denoted by $\sigma$. The second part corresponds to the two paths $(U_b,V_b)$ and $(U_c,V_c)$ through $FRD^R$. These paths must end at an accepting state of $FRD^R$. This part of $NGP$ is non-deterministic, because we need to quantify over their second variables as described in Section~\ref{notation}.
The initial states $\pi_1 = \{\sigma_1\}$ and $\pi_2 = \{\sigma_2\}$ of these two paths through $FRD^R$ must be such that $(\sigma,\sigma_1,\sigma_2) \in \mathcal{A}(FRD^3)$. This is equivalent to $(u_a,v_a,U_b^R,V_b^R,U_c^R,V_c^R) \in L(FRD^3)$. In Figure~\ref{complex} intermediate states are denoted by $(\rho_1,\rho_2)$. In our implementation of $NGP$, we prefer to avoid $\varepsilon$-transitions, and so the non-deterministic jump from the first to the second part of the path is combined with the first transitions in the second part of the path. In the case where there is a padding symbol, the last transition in the first part of the path is combined with the first transitions in the second part. An advantage of this is that we can eliminate the use of the padding symbol in the middle of a word, which can otherwise be quite troublesome to deal with (in terms of writing special code to take the unnecessary padding symbols into account). The jump also introduces a large amount of non-determinism into $NGP$. The states of $NGP$ are triples $(\sigma,\rho_1,\rho_2)$, where $\sigma \in \mathcal{S}(FRD) \cup \{\infty\}$ and $\rho_1, \rho_2 \in \mathcal{S}(FRD^R) \cup \{0,\infty\}$. For each such state either $\rho_1=\rho_2=0$ and $\sigma \neq \infty$. or $\sigma=\infty$ and $\rho_1 \neq 0 \neq \rho_2$. Informally, $0$ and $\infty$ as just introduced have the following significance. In the course of accepting a string, the three components $\sigma$, $\rho_1$ and $\rho_2$ each have to pass through $FRD$ exactly once. More precisely, the component $\sigma$ passes once through $FRD$ during the first part of the path and the components $\rho_1$ and $\rho_2$ pass through $FRD^R$ once during the second part of the path, each time moving from an initial state to an accept state of $FRD$ or $FRD^R$ respectively. It is convenient to assign the name $\infty$ to the state $\sigma$ after it has completed its passage through $FRD$. We need to attach names to the states $\rho_i$ ($i=1,2$) during the first part of the path, and we attach the name $0$ to remind us that $\rho_i$ has not yet started its passage through $FRD^R$. When a padding symbol is read in $w_i$, the state $\rho_i$ is set to $\infty$ to remind us that the passage of $\rho_i$ through $FRD^R$ is now complete. In other words, the state is set to $\infty$ the next move after arriving at the vertex of the triangle. We do not allow $\rho_1=\rho_2=\infty$, because we will stop if we reach both vertices $b$ and $c$ simultaneously. We can save space by storing the triple $(\sigma,0,0)$ as a pair $(\sigma,0)$ and the triple $(\infty,\rho_1,\rho_2)$ as a pair $(\rho_1,\rho_2)$. It is easy to see that this captures all the information. In this discussion, we continue to use more revealing triples, rather than more concise pairs. The unique initial state is $(\sigma_0(FRD),0,0)$. The accepting states are $(\infty,\rho_1,\rho_2)$ where either $\rho_1=\infty$ or $\rho_1$ lies in $\mathcal{A}(FRD^R)$ and the same is true for $\rho_2$. The reader may like to be reminded that a state of $\mathcal{A}(FRD^R)$ is an accept state of $FRD^R$, and that this is a subset of the set of states of $FRD$ which contains the initial state $\sigma_0(FRD)$. There is also another kind of accept state, corresponding to the situation $b=c$, that is, that there are two different geodesics from $a$ to $b=c$. This means that $w_1=_G w_2$. 
Since we are dealing with short-lex geodesics, $w_1$ will be short-lex from $a$ to $b$ and $w_2$ will be the reverse of a short-lex geodesic from $c$ to $a$. Such an accept state has the form $(\sigma,0)$ where $(\sigma,\sigma_0(FRD),\sigma_0(FRD)) \in \mathcal{A}(FRD^3)$. There are other degenerate situations, but the others are all covered by the main description, as the reader can easily verify. There are three types of transitions of $NGP$, which we shall now describe. In general, we denote the label of such a transition by $(x_{ab},x_{ac})$, where $x_{ab},x_{ac} \in A^\dagger$. (The idea is that $x_{ab}$ represents a variable generator in the path from $a$ to $b$.) The first two types of transition correspond to the transitions of the first and second parts of the accepting path of $(w_1,w_2)$ through $NGP$, and the third type of transition corresponds to a jump from the first to the second part. Transitions of the first type have the form $(\sigma,0,0) \rightarrow (\tau,0,0)$ where there is a transition $\sigma \rightarrow \tau$ of $FRD$ with the same label. However, we must have $x_{ab},x_{ac} \in A$; that is, $x_{ac}$ is not allowed to be the padding symbol $\$$. (Any such transition $(x,\$)$ of $FRD$ will be absorbed into the jump, and combined with the first transitions of the two copies of $FRD^R$ after the jump, which are bound to have labels of the form $(y,\$)$ as we see from Figure~\ref{intv}. Recall that we do not want the padding symbol to occur in the middle of either of the words $w_1$, $w_2$. The strategy explained here avoids that danger.) Transitions of the second type have the form $(\infty,\pi_1,\pi_2) \rightarrow (\infty,\rho_1,\rho_2)$. They occur whenever there exist $x_{db},x_{dc} \in A^\dagger$ and transitions $\pi_1 \rightarrow \rho_1$ and $\pi_2 \rightarrow \rho_2$ of $FRD^R$ with labels $(x_{db},x_{ab})$ and $(x_{ac},x_{dc})$, respectively. There is the further restriction that $x_{ab}=\$$ if and only if $x_{db}=\$$. This means that our path has previously arrived at the vertex $b$---see Figure~\ref{complex}. In this case we have $\rho_1 = \infty$ and either $\pi_1 = \infty$ or $\pi_1 \in \mathcal{A}(FRD^R)$. Similarly, $x_{ac}=\$$ if and only if $x_{dc}=\$$. In this case $\rho_2 = \infty$ and either $\pi_2 = \infty$ or $\pi_2 \in \mathcal{A}(FRD^R)$. As explained above, we cannot have $x_{ab}=x_{ac} = \$$. In other words, padding symbols occur only at the end of at most one of the words $w_1,w_2$. The transitions of the third type are jumps from $(\sigma,0,0)$ to $(\infty,\rho_1,\rho_2)$. These are of two subtypes, depending on whether the geodesic triangle defined by the accepting paths that pass through them has perimeter of even or odd length. Those of even perimeter subtype occur when there exist $x_{db},x_{dc} \in A$ and initial states $\pi_1 = \{\sigma_1\}$, $\pi_2 = \{\sigma_2\}$ of $FRD^R$ with the property that $(\sigma,\sigma_1,\sigma_2) \in \mathcal{A}(FRD^3)$. Furthermore there are transitions $\pi_1 \rightarrow \rho_1$ and $\pi_2 \rightarrow \rho_2$ with labels $(x_{db},x_{ab})$ and $(x_{ac},x_{dc})$ respectively. There is the further restriction that $x_{ab}=\$$ if and only if $x_{db}=\$$, and in this case we have $\rho_1 = \infty$ and $\pi_1 \in \mathcal{A}(FRD^R)$. Similarly, $x_{ac}=\$$ if and only if $x_{dc}=\$$, and in this case $\rho_2 = \infty$ and $\pi_2 \in \mathcal{A}(FRD^R)$. 
Those of the odd perimeter subtype arise only for $x_{ab},x_{ac} \in A$, and they occur when there is a transition $\sigma \rightarrow \sigma'$ with label $(x_{ab},\$)$ of $FRD$. Furthermore, there exists $x_{dc} \in A$ and initial states $\pi_1 = \{\sigma_1\}$, $\pi_2 = \{\sigma_2\}$ of $FRD^R$ with the property that $(\sigma',\sigma_1,\sigma_2) \in \mathcal{A}(FRD^3)$. Also, there are transitions $\pi_1 \rightarrow \rho_1$ and $\pi_2 \rightarrow \rho_2$ with labels $(x_{db},\$)$ and $(x_{ac},\$)$ respectively. \section{Examples} \label{examples} In this final section, we describe the performance of these algorithms on the following four examples. \begin{eqnarray*} G_1 &=&\langle a,b,c,d \myvert a^{-1}b^{-1}abc^{-1}d^{-1}cd = 1 \rangle,\\ G_2 &=& \langle a,b \myvert a^2 = b^3 = (ab)^7 = 1 \rangle,\\ G_3 &=& \langle a,b \myvert (b^{-1}a^2ba^{-3})^2 = 1 \rangle \mbox{ and } \end{eqnarray*} \begin{eqnarray*} \lefteqn{G_4 = \langle a,b,c,d,e,f \myvert a^4=b^4=c^4=d^4=e^4=f^4=}\\ &&aba^{-1}e = bcb^{-1}f = cdc^{-1}a = ded^{-1}b = efe^{-1}c = faf^{-1}d=1 \rangle. \end{eqnarray*} Of these, $G_1$ is a surface group of a genus two torus, $G_2$ is the $(2,3,7)$-von Dyck group, $G_3$ is obtained from one of the well-known family of non-Hopfian Baumslag-Solitar groups by squaring the single relator, and $G_4$ is the symmetry group of a certain tessellation by dodecahedra of hyperbolic 3-space as featured in the video `Not Knot' (\cite{NotKnot}). They are all word-hyperbolic groups. For all four the verification of hyperbolicity, as described in Section~\ref{verifying}, was relatively easy, with the first three examples completing in a few seconds, and $G_4$ requiring about 20 seconds cpu-time. We present details of these calculations in Table 1. The automata $WD_n$, $GE_n$, $GW_n$ and $T_n$ are as described in Section~\ref{verifying}, and the constants $\gamma$ and $\gamma^\prime$ are as defined before Proposition~\ref{short-lex thin implies thin}. The notation $m \rightarrow n$ in the table means that an automaton had $m$ states when it was first constructed, and it was then minimised to an equivalent automaton with $n$ states. The last example demonstrates the phenomenon that the automata involved are smaller when the data is correct. \begin{table}[htbp] \caption{Verifying Hyperbolicity} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Grp & $n$ & $\mathcal{S}(WD_n)$ & $\mathcal{S}(GE_n)$ & $\mathcal{S}(GW_n)$ & $\mathcal{S}(T_n)$ & $\gamma$ & $\gamma^\prime$\\ \hline $G_1$ & 1 & 33 & $121 \rightarrow 49$ & $49 \rightarrow 49$ & 265 &4&4\\ \hline $G_2$ & 1 & 30 & $627 \rightarrow 92$ & $80 \rightarrow 52$ & 936 &&\\ & 2 & 32 & $664 \rightarrow 94$ & $78 \rightarrow 54$ & 769 &7&7\\ \hline $G_3$ & 1 & 55 & $689 \rightarrow 136$ & $152 \rightarrow 96$ & 1270 &6&6\\ \hline $G_4$ & 1 & 75 & $896 \rightarrow 284$ & $454 \rightarrow 409$ & 10635 &&\\ & 2 & 97 & $1135 \rightarrow 309$ & $443 \rightarrow 378$ & 12407 &&\\ & 3 & 103 & $1211 \rightarrow 318$ & $424 \rightarrow 63$ & 1713 &4&4\\ \hline \end{tabular} \end{center} \end{table} In Table 2, we present details of the calculation of the thinness constant for short-lex geodesic hyperbolic triangles in the first three of the examples. The set $\mathcal{D}$ and the automata $FRD$, $FRD^3$, $NGP$ and $GP$ are as defined in Section~\ref{finding} (where $NGP$ is the non-deterministic version of $GP$). The separate lines of data for each group represent successive attempts at the computation, with the last line representing the correct data. 
After each attempt, the automaton with language $L = L(W) \times L(W^R) \setminus L(GP)$ was constructed and, when it was nonempty, words $ (w_1,w_2) \in L$ were found and used to find an improved set $\mathcal{D}$. The language $L$ was found to be empty after the final computation for each group, thereby proving correctness of the data. The behaviour of $G_3$, which is the most difficult example for which we have successfully completed the calculations, is probably the best indicator of the way in which more difficult examples are likely to behave. For example, the largest and most memory intensive part of the computation is the determinisation of $NGP$, and many parts of the calculations are significantly more expensive on the earlier passes, when the data is incorrect, than in the final correct stage. These computations were carried out using a maximum of 256 megabytes of core memory and about the same amount of swap space. We have not yet been able to complete the calculations for $G_4$; indeed we have not progressed further than the first construction of $NGP$, which has several million states. We will need more memory (probably more than a gigabyte) if we are to proceed to construct the determinised version $GP$. \begin{table}[htbp] \caption{Finding the Constant of Hyperbolicity} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Grp & $ \mathcal{D} $ & $\mathcal{S}(FRD)$ & $\mathcal{A}(FRD^3)$ & $\mathcal{S}(NGP)$ & $\mathcal{S}(GP)$ & $\delta$\\ \hline $G_1$ & 25 & 137 & 65785 & 12249 & $8049 \rightarrow$ 2185 & \\ & 49 & 169 & 65857 & 12281 & $5457 \rightarrow 625$ & 4 \\ \hline $G_2$ & 104 & 1174 & 73822 & 89802 & $35824 \rightarrow 4904$ & \\ & 111 & 1199 & 74047 & 90450 & $31374 \rightarrow 1508$ & 7 \\ \hline $G_3$ & 71 & 755 & 795436 & 274186 & $1872679\rightarrow 531434 $ & \\ & 195 & 1430 & 801745 & 280240 & $1695944\rightarrow 443570 $ & \\ & 241 & 1741 & 806923 & 284328 & $1237158\rightarrow 85044 $ & \\ & 257 & 1845 & 807136 & 284478 & $676645 \rightarrow 3803 $ & 8 \\ \hline \end{tabular} \end{center} \end{table} \FloatBarrier
The Kondo problem, describing the effect due to the exchange interaction between a magnetic impurity and the conduction electrons, plays a very important role in condensed matter physics \cite {K64}. Wilson \cite {W75} developed a very powerful numerical renormalization group approach, and the model was also solved by the coordinate Bethe ansatz method \cite {W83,A83}, which gives the specific heat and magnetization. More recently, a conformal field theory approach was developed by Affleck and Ludwig \cite {AL}, based on a previous work by Nozi{\`e}res \cite {N74}. In the conventional Kondo problem, the interaction between the conduction electrons is discarded, due to the fact that the interacting electron system can be described as a Fermi liquid. Recently, much attention has been paid to the study of the theory of magnetic impurities in Luttinger liquids (see, e.g., \cite {LT92,FN94,FJ95}). Although some powerful methods, such as the bosonization method, boundary conformal field theory, and the density matrix renormalization group method, are available to help us gain an understanding of the critical behaviour of Kondo impurities coupled to a Fermi or Luttinger liquid, some simple integrable models which allow exact solutions are still desirable. Several integrable magnetic or nonmagnetic impurity problems describing a few impurities embedded in some correlated electron systems have so far appeared in the literature. Among them are several versions of the supersymmetric $t$-$J$ model with impurities \cite {BAR94,BEF97,SZ97,LF98}. The idea of incorporating an impurity into a closed chain may date back to Andrei and Johannesson \cite {AJ84} (see also \cite {LS88,ZJ89}). However, the model thus constructed suffers from the lack of backward scattering and results in a very complicated Hamiltonian which is difficult to justify on physical grounds. Therefore, as observed by Kane and Fisher \cite {KF92}, it seems to be advantageous to adopt open boundary conditions with the impurities situated at the ends of the chain when studying Kondo impurities coupled to integrable strongly correlated electron systems \cite {Z97,PW97,ZM98}. In this Letter, an integrable Kondo problem in the 1D supersymmetric extended Hubbard model is studied. It should be emphasized that the new non-c-number boundary K matrices arising from our approach are highly nontrivial, in the sense that they can not be factorized into the product of a c-number boundary K matrix and the corresponding local monodromy matrices. The model is solved by means of the algebraic Bethe ansatz method and the Bethe ansatz equations are derived. Let $c_{j,\sigma}$ and $c_{j,\sigma}^{\dagger}$ denote electronic creation and annihilation operators for spin $\sigma$ at site $j$, which satisfy the anti-commutation relations $\{c_{i,\sigma}^\dagger, c_{j,\tau}\}=\delta_{ij}\delta_{\sigma\tau}$, where $i,j=1,2,\cdots,L$ and $\sigma,\tau=\uparrow,\;\downarrow$.
We consider the following Hamiltonian which describes two impurities coupled to the supersymmetric extended Hubbard open chain , \begin{eqnarray} H&=&-\sum _{j=1,\sigma}^{L-1} (c_{j,\sigma}^\dagger c_{j+1,\sigma}+{\rm H.c.}) (1-n_{j,-\sigma}-n_{j+1,-\sigma}) \nonumber\\ & &-\sum ^{L-1}_{j=1}(c_{j,\uparrow}^\dagger c_{j,\downarrow}^\dagger c_{j+1,\downarrow}c_{j+1,\uparrow} +{\rm H.c}) +2\sum ^{L-1}_{j=1}({\bf S}_j\cdot {\bf S}_{j+1} -\frac{1}{4}n_jn_{j+1})\nonumber\\ & & +J_a {\bf S}_1 \cdot {\bf S}_a +V_a n_1 +U_a n_{1\uparrow} n_{1\downarrow} +J_b {\bf S}_L \cdot {\bf S}_b +V_b n_L +U_b n_{L\uparrow} n_{L\downarrow}, ,\label{ham} \end{eqnarray} where $J_g,V_g $ and $U_g (g=a,b)$ are the Kondo coupling constants ,the impurity scalar potentials and the boundary Hubbard-like interaction constants,respectively; ${\bf S}_j=\frac {1}{2}\sum _{\sigma,\sigma'}c^\dagger_{j\sigma}{\bf \sigma}_{\sigma\s'}c_{i\sigma'}$ is the spin operator of the conduction electrons; ${\bf S}_{g} (g = a,b)$ are the local moments with spin-$\frac {1}{2}$ located at the left and right ends of the system respectively; $n_{j\sigma}$ is the number density operator $n_{j\sigma}=c_{j\sigma}^{\dagger}c_{j\sigma}$, $n_j=n_{j\uparrow}+n_{j\downarrow}$. The supersymmetry algebra underlying the bulk Hamiltonian of this model is $gl(2|2)$,and the integrability of the model on a closed chain has been extensively studied by Essler , Korepin and Schoutens \cite {EKS92} . It is quite interesting to note that although the introduction of the impurities spoils the supersymmetry,there is still a remaining $u (2)\otimes u(2)$ symmetry in the Hamiltonian (\ref {ham}). As a result,one may add some terms like the Hubbard interaction $U \sum _{j=1}^L n_{j\uparrow}n_{j\downarrow}$,the chemical potential term $\mu \sum ^L_{j=1}n_j$ and the external magnetic field $h \sum _{j=1}^L (n_{j\uparrow}-n_{j\downarrow})$ to the Hamiltonian (\ref {ham}),without spoiling the integrability. This explains why the model is so named (also called the EKS model). Below we will establish the quantum integrability of the Hamiltonian (\ref{ham}) for a special choice of the model parameters $J_g$, $V_g$,and $U_g$ \begin{equation} J_g = -\frac {2}{c_g(c_g+2)}, V_g = -\frac {2c_g^2+2c_g-1}{2c_g(c_g+2)}, U_g = -\frac {1-c^2_g}{c_g(c_g+2)}, \end{equation} This is achieved by showing that it can be derived from the (graded) boundary quantum inverse scattering method \cite {Zhou97,BRA98}. Let us recall that the Hamiltonian of the 1D supersymmetric extended Hubbard model with the periodic boundary conditions commutes with the transfer matrix, which is the supertrace of the monodromy matrix $ T(u) = R_{0L}(u)\cdots R_{01}(u)$. Here the quantum R-matrix $ R_{0j}(u)$ takes the form, \begin{equation} R=\frac {u-2P}{u-2}, \label {r} \end{equation} where $u$ is the spectral parameter,and $P$ denotes the graded permutation operator, and the subscript $0$ denotes the 4-D auxiliary superspace $V=C^{2,2}$ with the grading $[i]=0~ {\rm if} ~i=1,2$ and $1~ {\rm if}~ i=3,4$. It should be noted that the supertrace is carried out for the auxiliary superspace $V$. The elements of the supermatrix $T(u)$ are the generators of an associative superalgebra ${\cal A}$ defined by the relations \begin{equation} R_{12}(u_1-u_2) \stackrel {1}{T}(u_1) \stackrel {2}{T}(u_2) = \stackrel {2}{T}(u_2) \stackrel {1}{T}(u_1)R_{12}(u_1-u_2),\label{rtt-ttr} \end{equation} where $\stackrel {1}{X} \equiv X \otimes 1,~ \stackrel {2}{X} \equiv 1 \otimes X$ for any supermatrix $ X \in End(V) $. 
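For orientation we note the explicit form of the graded permutation operator appearing in (\ref{r}). On homogeneous vectors it acts as $P(v\otimes w)=(-1)^{[v][w]}\,w\otimes v$, and, assuming the usual sign convention for the graded tensor product of operators, it may be written as \begin{equation} P=\sum_{\alpha,\beta=1}^{4}(-1)^{[\beta]}\,e_{\alpha\beta}\otimes e_{\beta\alpha}, \end{equation} where the $e_{\alpha\beta}$ denote the elementary $4\times 4$ matrices and $[\alpha]$ is the grading introduced above.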
For later use, we list some useful properties enjoyed by the R-matrix: (i) Unitarity: $ R_{12}(u)R_{21}(-u) = \rho (u)$ and (ii) Crossing-unitarity: $ R^{st_2}_{12}(-u)R^{st_2}_{21}(u) = \tilde {\rho }(u)$ with $\rho (u),\tilde \rho (u)$ being some scalar functions. In order to describe integrable electronic models on open chains, we introduce two associative superalgebras ${\cal T}_-$ and ${\cal T}_+$ defined by the R-matrix $R(u_1-u_2)$ and the relations \cite {Zhou97,BRA98} \begin{equation} R_{12}(u_1-u_2)\stackrel {1}{\cal T}_-(u_1) R_{21}(u_1+u_2) \stackrel {2}{\cal T}_-(u_2) = \stackrel {2}{\cal T}_-(u_2) R_{12}(u_1+u_2) \stackrel {1}{\cal T}_-(u_1) R_{21}(u_1-u_2), \label{reflection1} \end{equation} \begin{eqnarray} &&R_{21}^{st_1 ist_2}(-u_1+u_2)\stackrel {1}{\cal T}_+^{st_1} (u_1) R_{12}(-u_1-u_2) \stackrel {2}{\cal T}_+^{ist_2}(u_2)\nonumber\\ &&~~~~~~~~~~~~~~~=\stackrel {2}{\cal T}_+^{ist_2}(u_2) R_{21}(-u_1-u_2) \stackrel {1}{\cal T}_+^{st_1}(u_1) R_{12}^{st_1 ist_2}(-u_1+u_2), \label{reflection2} \end{eqnarray} respectively. Here the supertransposition $st_{\alpha}~(\alpha =1,2)$ is only carried out in the $\alpha$-th factor superspace of $V \otimes V$, whereas $ist_{\alpha}$ denotes the inverse operation of $st_{\alpha}$. By modifying Sklyanin's arguments \cite{Skl88}, one may show that the quantities $\tau(u)$ given by $\tau (u) = str ({\cal T}_+(u){\cal T}_-(u))$ constitute a commutative family, i.e., $[\tau (u_1),\tau (u_2)] = 0$. One can obtain a class of realizations of the superalgebras ${\cal T}_+$ and ${\cal T}_-$ by choosing ${\cal T}_{\pm}(u)$ to be the form \begin{equation} {\cal T}_-(u) = T_-(u) \tilde {\cal T}_-(u) T^{-1}_-(-u),~~~~~~ {\cal T}^{st}_+(u) = T^{st}_+(u) \tilde {\cal T}^{st}_+(u) \left(T^{-1}_+(-u)\right)^{st}\label{t-,t+} \end{equation} with \begin{equation} T_-(u) = R_{0M}(u) \cdots R_{01}(u),~~~~ T_+(u) = R_{0L}(u) \cdots R_{0,M+1}(u),~~~~ \tilde {\cal T}_{\pm}(u) = K_{\pm}(u), \end{equation} where $K_{\pm}(u)$, called boundary K-matrices, are representations of ${\cal T}_{\pm} $ in some representation superspace. Although many attempts have been made to find c-number boundary K matrices,which may be referred to as the fundamental representation,it is no doubt very interesting to search for non-c-number K matrices,arising as representations in some Hilbert spaces, which may be interpreted as impurity Hilbert spaces. We now solve (\ref{reflection1}) and (\ref{reflection2}) for $K_+(u)$ and $K_-(u)$. For the quantum R matrix (\ref {r}), One may check that the matrix $K_-(u)$ given by \begin{equation} K_-(u)= \left ( \begin {array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&A_-(u)&B_-(u)\\ 0&0&C_-(u)&D_-(u) \end {array} \right ),\label{k-} \end{equation} where \begin{eqnarray} A_-(u)&=&-(u^2+2u-4c^2_a-8c_a+4u{\bf S}^z_a)/Z_-(u),\nonumber\\ B_-(u)&=&-4u{\bf S}^-_a/Z_-(u),~~~~~~~~ C_-(u)=-4u{\bf S}^+_a/Z_-(u),\nonumber\\ D_-(u)&=&-(u^2+2u-4c^2_a-8c_a-4u{\bf S}^z_a)/Z_-(u),~~~~~~Z_-(u)=(u-2c_a)(u-2c_a-4). \end{eqnarray} satisfies (\ref{reflection1}). Here ${\bf S}^{\pm}={\bf S}^x \pm i{\bf S}^y$. The matrix $K_+(u)$ can be obtained from the isomorphism of the superalgebras ${\cal T}_- $ and ${\cal T}_+ $. Indeed, given a solution ${\cal T}_- $ of (\ref{reflection1}), then ${\cal T}_+(u)$ defined by \begin{equation} {\cal T}_+^{st}(u) = {\cal T}_-(-u)\label{t+t-} \end{equation} is a solution of (\ref{reflection2}). 
The proof follows from some algebraic computations upon substituting (\ref{t+t-}) into (\ref{reflection2}) and making use of the properties of the R-matrix. Therefore, one may choose the boundary matrix $K_+(u)$ as
\begin{equation}
K_+(u)=\left ( \begin {array} {cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0& A_+(u)&B_+(u)\\ 0&0&C_+(u)& D_+(u) \end {array} \right )\label{k^I+}
\end{equation}
with
\begin{eqnarray}
A_+(u)&=&-(u^2-2u-4c^2_b+4+4u{\bf S}^z_b)/Z_+(u),\nonumber\\
B_+(u)&=&-4u{\bf S}^-_b/Z_+(u),~~~~~~~ C_+(u)=-4u{\bf S}^+_b/Z_+(u),\nonumber\\
D_+(u)&=&-(u^2-2u-4c^2_b+4-4u{\bf S}^z_b)/Z_+(u),~~~~~~~Z_+(u)=(u-2c_b+2)(u-2c_b-2).
\end{eqnarray}
Now it can be shown that the Hamiltonian (\ref{ham}) is related to the second derivative of the boundary transfer matrix $\tau (u)$ with respect to the spectral parameter $u$ at $u=0$ (up to an unimportant additive constant):
\begin{eqnarray}
H&=&\frac {\tau'' (0)}{4(V+2W)}= \sum _{j=1}^{L-1} H_{j,j+1} + \frac {1}{2} \stackrel {1}{K'}_-(0) +\frac {1}{2(V+2W)}\left[str_0\left(\stackrel {0}{K}_+(0)G_{L0}\right)\right.\nonumber\\
& &\left.+2\,str_0\left(\stackrel {0}{K'}_+(0)H_{L0}\right)+ str_0\left(\stackrel {0}{K}_+(0)\left(H_{L0}\right)^2\right)\right],\label{derived-h}
\end{eqnarray}
where
\begin{eqnarray}
V=str_0 K'_+(0), ~~~~~W=str_0 \left(\stackrel {0}{K}_+(0) H_{L0}^R\right),~~~~~~~ H_{i,j}=P_{i,j}R'_{i,j}(0), ~~~~~G_{i,j}=P_{i,j}R''_{i,j}(0).
\end{eqnarray}
This implies that the model under study admits an infinite number of conserved currents which are in involution with each other, thus assuring its integrability. The Bethe ansatz equations may be derived using the algebraic Bethe ansatz method \cite{Skl88,Gon94,ZM98},
\begin{eqnarray}
(\frac {u_j- 1}{u_j+1})^{2L}= \prod ^N_{\stackrel {i=1}{i \neq j}} \frac {(u_j-u_i -2)(u_j+u_i-2)} {(u_j-u_i +2)(u_j+u_i+2)}&&\prod ^{M_1}_{\alpha =1} \frac {(u_j-v_\alpha+1) (u_j+v_\alpha+1)} {(u_j-v_\alpha-1) (u_j+v_\alpha-1)},\nonumber\\
\prod_{g =a,b} \frac{c_g +\frac {v_\alpha}{2}+1}{c_g-\frac{v_\alpha}{2}+1} \prod ^{N}_{j=1} \frac {(v_\alpha -u_j+1)(v_\alpha+u_j+1)} {(v_\alpha -u_j-1)(v_\alpha+u_j-1)} & = & \prod_{\gamma=1}^{M_2} \frac{(v_\alpha -w_\gamma+1)(v_\alpha +w_\gamma +1)} {(v_\alpha -w_\gamma-1)(v_\alpha +w_\gamma -1)},\nonumber\\
\prod_{g =a,b}\frac{c_g-\frac {w_\gamma}{2}+\frac {1}{2}} {c_g-\frac {w_\gamma}{2}-\frac {1}{2}} \frac{c_g+\frac {w_\gamma}{2}-\frac {1}{2}} {c_g+\frac {w_\gamma}{2}+\frac {1}{2}} \prod_{\alpha=1}^{M_1} \frac{(w_\gamma -v_\alpha -1)} {(w_\gamma -v_\alpha +1)} \frac{(w_\gamma +v_\alpha -1)} {(w_\gamma +v_\alpha +1)} &=&\prod _{\stackrel {\delta=1}{\delta \neq \gamma}}^{M_2} \frac {(w_\gamma-w_\delta-2)}{(w_\gamma -w_\delta +2)} \frac {(w_\gamma+w_\delta-2)}{(w_\gamma +w_\delta +2)} ,\label{Bethe-ansatz}
\end{eqnarray}
with the corresponding energy eigenvalue $E$ of the model
\begin{equation}
E=-\sum ^N_{j=1} \frac {4}{u_j^2-1}.
\end{equation}
In conclusion, we have studied an integrable Kondo problem describing two impurities coupled to the 1D supersymmetric extended Hubbard open chain. The quantum integrability of the system follows from the fact that the Hamiltonian may be embedded into a one-parameter family of commuting transfer matrices. Moreover, the Bethe ansatz equations are derived by means of the algebraic Bethe ansatz approach. It should be emphasized that the boundary K-matrices found here are highly nontrivial, since they cannot be factorized into the product of a c-number K-matrix and the local monodromy matrices.
However, it is still possible to introduce a singular local monodromy matrix $\tilde L(u)$ and express the boundary K-matrix $K_-(u)$ as
\begin{equation}
K_-(u)=\tilde {L}(u){\tilde {L}}^{-1}(-u),
\end{equation}
where
\begin{equation}
\tilde L (u) = \left ( \begin {array} {cccc} \epsilon & 0&0 &0\\ 0& \epsilon &0 &0\\ 0& 0& u+2c_a+2 +2{\bf S}^z&2{\bf S}^-\\ 0& 0&2 {\bf S}^+&u+2c_a+2-2{\bf S}^z\\ \end {array} \right ),\label{tl}
\end{equation}
which constitutes a realization of the Yang-Baxter algebra (\ref{rtt-ttr}) when $\epsilon$ tends to $0$. The implication of such a singular factorization deserves further investigation. Indeed, it suggests that the integrable Kondo impurities discussed here are, in some sense, related to a singular realization of the Yang-Baxter algebra, which in turn reflects a hidden six-vertex XXX symmetry in the original quantum R-matrix. A similar situation also occurs in the supersymmetric t-J model \cite{ZM98}. Also, the extension of the above construction to the case of arbitrary impurity spin is straightforward. It will be interesting to carry out the calculation of thermodynamic equilibrium properties of the model under consideration. In particular, it is desirable to study the finite-size spectrum, which, together with the predictions of boundary conformal field theory, will allow us to extract various critical properties. The details are deferred to a future publication. \vskip.3in This work is supported by OPRS and UQPRS.
\section{Introduction} Theorists have made considerable progress in explaining the stationary properties of OB star winds, and there is now a consensus that the winds are accelerated through resonance line scattering of the stellar continuum (Lucy \&\ Solomon 1970; Castor, Abbott, \&\ Klein 1975). The spectral lines predicted by detailed models match observations for a wide variety of ionization species (Pauldrach 1987), and for stars across a range of metallicity and evolutionary status (Kudritzki, Pauldrach, \&\ Puls, 1987; Pauldrach et al. 1988). However, it has become clear that real stellar winds are time-variable and contain inhomogeneities which, although expected on theoretical grounds, are difficult for the theory to treat properly. There is independent evidence for these inhomogeneities from the black absorption troughs (Lucy 1982a) and variable ``discrete absorption features'' seen in wind-formed P~Cygni lines (Massa et al. 1995), in the X-ray emission from OB stars (Harnden et al. 1979; Seward et al. 1979; Corcoran et al. 1994), and from polarimetric observations (Brown et al. 1995), IR observations (Abbott, Telesco, \&\ Wolff 1984) and radio observations (Bieging, Abbott, \&\ Churchwell 1989). Theoretically, the scattering mechanism thought to be responsible for the acceleration of the wind should be unstable, leading to shocks (MacGregor, Hartmann, \&\ Raymond 1979; Abbott 1980; Lucy \&\ White 1980; Owocki \& Rybicki 1984). Although these instabilities have been modelled numerically, the models encounter nonlinearities (Owocki, Castor, \&\ Rybicki 1988). The unknown nature of the wind inhomogeneities has hindered attempts to infer from observations such properties of stellar winds as the mass-loss rates, which are critical for understanding massive star evolution. The observations are, in essence, averages over the entire nonuniform and time-variable wind, and the unknown weighting of these averages prevents us from knowing the magnitude of the total mass outflow. Overcoming these difficulties is a major goal of current studies of hot stellar atmospheres. When the OB star has an accreting compact companion, X-rays from the compact object can be used as a probe to derive parameters of the undisturbed wind, such as its mass-loss rate, expansion velocity, and radial velocity law. In the ``Hatchett-McCray effect'' (Hatchett \&\ McCray 1977; McCray et al. 1984), the X-rays remove the ion responsible for a P~Cygni line from a portion of the wind, resulting in orbital variations of P~Cygni profiles. This effect has been observed in several High Mass X-ray Binaries (HMXB; Dupree et al. 1980; Hammerschlag-Hensberge et al. 1980; van der Klis, et al. 1982; Kallman \&\ White 1982; Haberl, White \&\ Kallman 1989). Analysis of the velocities over which the P~Cygni lines vary has suggested that in some cases, the stellar wind radial velocity dependence can be nonmonotonic (Kaper, Hammerschlag-Hensberge, \&\ van Loon 1993). Whether this is intrinsic to the OB star wind or is a result of the interaction with the compact object is not known. LMC~X-4 shows a more pronounced Hatchett-McCray effect than any other X-ray binary. The UV P~Cygni lines of \ion{N}{5} at 1240\AA\ appear strong in low-resolution IUE spectra at $\phi=0$, but nearly disappear at $\phi=0.5$ (van der Klis et al. 1982). 
The equivalent width of the \ion{N}{5} absorption varies by more than 100\% throughout the orbit, suggesting that the X-ray source may remove \ion{N}{5} from the entire region of the wind outside of the X-ray shadow of the normal star. If the X-ray ionization of the wind is indeed this thorough, then the change in the P~Cygni profile between two closely spaced orbital phases is largely the result of the X-ray shadow of the primary advancing through a thin ``slice'' of the stellar wind. Thus models of the changing P~Cygni profiles should be sensitive to a region of the wind that is small in spatial extent. Vrtilek et al. (1997, Paper~I) obtained high-resolution spectra of the \ion{N}{5} lines (and the \CIV lines, which show weaker variability) with the Goddard High Resolution Spectrograph (GHRS) aboard the Hubble Space Telescope. These spectra, unlike the earlier IUE spectra, provide a high enough resolution and signal to noise ratio to allow us to examine the detailed velocity structure of the P~Cygni lines over closely spaced intervals of the binary orbit. LMC X-4 consists of a main-sequence O star (variously classified as O7--9 III--V) of about $15\mbox{ M}_{\odot}$ and a $\approx1.4\mbox{ M}_{\odot}$ neutron star with a spin period of 13.5 seconds (Kelley et al. 1983). Pulse delays from the neutron star have established the projected semimajor axis of the orbit precisely as 26~light seconds. The 1.4~day orbital period is also seen in X-ray eclipses (Li, Rappaport, \& Epstein 1978) and modulations of the optical brightness (Heemskerk \&\ van Paradijs 1989). The X-ray spectrum in the normal state is hard (photon power-index $\alpha\approx0.8$) with little absorption of the soft X-rays (Woo et al. 1996 give $N_{\rm H}=1.14\pm0.005 \times 10^{21}$ cm$^{-2}$). The total quiescent X-ray luminosity is $\sim10^{38}$~erg~s$^{-1}$, near the Eddington limit for a $1.4\mbox{ M}_{\odot}$ neutron star. X-ray flares, in which the pulse-averaged X-ray luminosity can exceed 10$^{39}$~erg~s$^{-1}$, occur with a frequency of about once a day (Kelley et al. 1983; Pietsch et al. 1985; Levine et al. 1991). These flares have been interpreted as dense blobs of wind material crashing down on the surface of the neutron star (White, Kallman, \&\ Swank 1983; Haberl, White, \&\ Kallman 1989). It is an open question whether these blobs represent the natural state of the stellar wind, or are produced by interactions between the X-ray source and an unperturbed stellar wind. An accreting compact object embedded in a stellar wind can affect the wind through the heating, ionization, and radiation pressure of its X-ray emission, and through its gravity. Numerical simulations (Blondin et al. 1990, 1991) of the modifications of the stellar wind by the compact object show that the wind can form accretion wakes and disk-like structures (even in systems which are not thought to have substantial mass-transfer through Roche lobe overflow). Thus an investigation of instabilities in the wind of LMC~X-4 also bears on the cause of the X-ray flares. We present a series of increasingly more sophisticated models of the P~Cygni lines in LMC~X-4, as observed with the GHRS and reported in Paper~I. In \S\ref{sec:compare}, we compare the \ion{N}{5} and \ion{C}{4} P~Cygni lines to those of isolated OB stars and to those of other massive X-ray binaries. We make inferences from analytic and approximate models to the P~Cygni line variations in \S\ref{sec:simple}. 
Then we attempt more detailed models in \S\ref{sec:numeric}, and derive best-fit parameters for the wind and X-ray luminosity. We note that this is the first detailed fit of P~Cygni lines that has been attempted for a HMXB. The sophistication of the models that we present is justified by the high quality of the data on this system that should be obtained in the near future. \section{Comparison spectra\label{sec:compare}} In this section, we compare the UV line profiles of LMC~X-4 with the line profiles of similar stars. \subsection{Other X-ray binaries} The strength of the orbital variation in P~Cygni lines (the Hatchett-McCray effect) differs among the HMXB. Figure~1 compares the \ion{N}{5} lines at two orbital phases with the corresponding features in 4U1700-37 and Vela~X-1 (the latter two objects were observed with IUE at high spectral resolution in the echelle mode). The optical companion of 4U1700-37 is very massive ($\sim50\mbox{ M}_{\odot}$) and hot (spectral type O6.5 with effective temperature 42,000K; Kaper et al. 1993). In this system the \ion{N}{5} resonance line is saturated and shows little or no variation over orbital phase; this saturation is caused by the very strong stellar wind produced by the extremely massive and hot companion. The companion of Vela~X-1 has a lower mass (23$\mbox{ M}_{\odot}$) and is considerably cooler (spectral type B0.5 with an effective temperature of 25,000K; Kaper et al. 1993). The wind in LMC~X-4 is expected to be weaker than in Vela X-1 and 4U1700-37, as the optical companion is not a supergiant. As the abundances of metals are lower in the LMC than in the Galaxy, lower mass-loss rates for a given spectral type are expected. The low abundances will also make the \ion{N}{5} lines less saturated and the effects of X-ray ionization more visible. The X-ray luminosity of LMC~X-4 is the highest of the three systems. As a result of these considerations, it is not surprising that the effects of X-ray ionization on the \ion{N}{5} lines are strongest in LMC~X-4. For all three systems the \ion{C}{4} resonance lines show little orbital variation during the phases observed. \subsection{Photospheric lines at $\phi\sim0.5$ \label{sec:photo}} The effects of the X-ray photoionization on the blueshifted P~Cygni absorption are greatest for LMC~X-4 near $\phi=0.5$. The observed \ion{N}{5} line profiles still show residual absorption at $\phi=0.42$. Paper~I interpreted this absorption to be the photospheric \ion{N}{5} doublet, which is masked by the wind absorption during X-ray eclipse. If this interpretation is correct, the width of the absorption is determined by the rotation of the normal star, not the wind expansion velocity. Alternately, at the base of the wind where the expansion velocity is low, the wind may be dense enough that \ion{N}{5} can still exist even in the presence of X-ray illumination. Nearly all early-type stars whose \ion{N}{5} photospheric absorption lines could be usefully compared with those of LMC~X-4 also have strong wind absorption. The O9~IV star $\mu$Col (HD~38666) seems to be a promising candidate for comparison, as its \ion{N}{5} line has negligible red-shifted emission, and the blue-shifted absorption is confined to features near the rest wavelength. Although this is the best candidate for the intrinsic \ion{N}{5} photospheric absorption spectrum that we were able to obtain, Wollaert, Lamers, \&\ de Jager (1988) suggest that the \ion{N}{5} and \ion{C}{4} lines in this star are wind-formed because they are asymmetric. 
We have compared IUE and HST spectra of $\mu$Col with our spectra of LMC~X-4, and although the detailed fit is poor (Figure~2a), the strength of the absorption in $\mu$Col is comparable to the absorption in LMC~X-4 at $\phi=0.4$. This provides qualified support that the photospheric \ion{N}{5} lines of subgiant O stars can be comparable to the lines we see in LMC~X-4. We also compare the LMC~X-4 spectra with the optical absorption lines which Hutchings et al. (1978) concluded were photospheric. Hutchings et al. found that the optical lines were consistent with an orbital velocity of $50-60$~km~s$^{-1}$ and a systemic velocity of $\approx 280$~km~s$^{-1}$. They measured the width of the optical lines to be $\sim170$~km~s$^{-1}$, which would be expected if the primary rotates with a period twice as long as the binary orbit. We find a good fit to the \ion{N}{5} absorption profile using Gaussians with width (approximately equal to $v \sin i$ width) 170~km~s$^{-1}$ and optical depths $\tau_{\rm b}=0.6$ and $\tau_{\rm r}=0.3$ in the blue and red components (Figure~2b). While the \ion{N}{5} absorption velocity agrees with both the optical absorption line velocity found by Hutchings et al. and with the systemic velocity of the LMC (Bomans et al. 1996), the cross-over between absorption and emission occurs at relative red-shifts of $100-200$~km~s$^{-1}$. This may suggest that LMC~X-4 has a red-shift relative to the LMC and that the ``photospheric'' absorption is actually blue-shifted P~Cygni absorption. In the absence of a clear means of separating the wind and photospheric absorption, we will allow, in the more detailed analysis that follows, the strength of a photospheric absorption line to be an adjustable free parameter. We note that Groenewegen \&\ Lamers (1989) fit P~Cygni lines observed with IUE, and that for the stars HD~24912, 36861, 47839, and 101413 (with spectral types O7.5\,III,O8\,III, O7\,V, and O8\,V respectively), the best-fit values of $\tau_{\rm r}$ for \ion{N}{5} are 0.0, 0.4, 0.3, and 0.3 respectively. \section{Simple Models\label{sec:simple}} In the discussion that follows, we assume that the orbital variation of the \ion{N}{5} P~Cygni lines is caused principally by the Hatchett-McCray effect, that is, through the photoionization of the stellar wind by the X-ray source. There do not appear to be any variable Raman-scattered emission lines which Kaper et al. (1993) suggested cause an orbital variability in the X-ray binary 4U~1700-37. \subsection{Equivalent width variations (IUE and HST observations)\label{sec:IUE}} Following van der Klis et al. (1982), we examine the orbital variation of the equivalent width of \ion{N}{5} in LMC~X-4 using a simple numeric model. Hatchett \&\ McCray (1977) showed that the X-ray ionized zones could be approximated as surfaces of constant $q$, where each point in the wind has a value of $q$ given by \begin{equation} \frac{L_{\rm x}}{n_{\rm x} D^2} q = \xi. \end{equation} Here, $L_{\rm x}$ is the X-ray luminosity, $n_{\rm x}$ is the electron density at the radius of the X-ray source ($D=1.77$ stellar radii in the case of LMC~X-4), and $\xi$ is the ionization parameter $\xi\equiv L_{\rm x}/n_{\rm e} r_{\rm x}^2$, where $n_{\rm e}$ is the electron density at the given point, and $r_{\rm x}$ is the distance from that point to the X-ray source. 
Given a constant mass-loss rate $\dot{M}$ for the wind, $n_{\rm e}$ can be found from mass conservation: \begin{equation} n_{\rm e} = 1.2 \frac{\dot{M}}{4 \pi r^2 v} m_{\rm H}^{-1}.\label{eq:conserve} \end{equation} We illustrate these parameters in Figure~3. For large values of $q$, the surfaces of constant ionization are spheres surrounding the X-ray source, while for $q<1$, the surfaces are open. Where the surfaces extend into the primary's X-ray shadow, a conical shape must be used instead of the constant $q$ surface. We model the absorption line variation using the velocity law $v\propto (1-1/x)$, where $x$ is the wind radial distance in units of the stellar radius. According to van der Klis et al. the results are not sensitive to the velocity law assumed. We account for the orbital inclination $i$ ($i=66.2^{+2.5}_{-2.8}\arcdeg$, Pietsch et al. 1985) by using a corrected orbital phase $\phi^\prime$ such that $\cos 2\pi \phi^\prime=\cos 2 \pi \phi \sin i$. To model the orbital variation of equivalent width, we used the best-fit escape probability model we describe in \S\ref{sec:basic}, except that the only X-ray ionization occurs within the zone bounded by $q$, which is completely ionized. We measured the equivalent width of \ion{N}{5}, \ion{C}{4}, and \ion{Si}{4} for all the IUE observations of LMC~X-4 (using the Final Archive version of the data, which has been processed by NEWSIPS, the NEW Spectral Image Processing System, de la Pena et al. 1994). We used the ephemeris of Woo et al. (1996) to find the corresponding orbital phases for each spectrum. In Figure~4, we compare theoretical variations in equivalent width with those observed with IUE and the GHRS. That is, we plot \begin{equation} \Delta \equiv(W_\lambda-\bar{W_\lambda})/\bar{W_\lambda} \end{equation} where $W_\lambda$ is the equivalent width, and $\bar{W_\lambda}$ is the average value of $W_\lambda$. Here, $W_\lambda$ measures only the equivalent width of the absorption, and not the P~Cygni emission. As the GHRS spectra were obtained during less than a complete binary orbit, we use the average equivalent width from the IUE observations to find the per cent variation in equivalent width for the GHRS spectra as well. The results shown in Figure~4 are consistent with $q\lesssim1$, in which case the X-ray source photoionizes at least the entire hemisphere opposite the primary, and possibly as much as the entire wind outside of the primary's shadow. The fit to the IUE data is better for $q=1$ ($\chi^2=5.5$) than for full X-ray photoionization ($\chi^2=11$), although the GHRS equivalent widths agree with the prediction of full ionization. We note that interstellar and photospheric absorption lines, if present, provide additional absorption components that are not expected to show strong orbital variation. If these components are present, this would naturally explain why the fractional variation in equivalent width that we see is less than that predicted by $q\ll1$. For example, for \ion{N}{5}, $\bar{W_\lambda}=2.4$\AA, while $W_\lambda$ due to photospheric absorption was found with the GHRS to be $\approx0.8$\AA. Without consideration of the photospheric absorption, the observed variation of 1.6\AA\ corresponds to $\Delta=67$\%. However, this corresponds to a 100\%\ variation in the equivalent width of absorption due to the wind alone, excluding the photospheric absorption. 
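To make the ionization geometry concrete, the short sketch below (in Python) evaluates the electron density of Equation~\ref{eq:conserve}, the ionization parameter $\xi$, and the corresponding value of $q$ at an arbitrary point in the wind, using the $v\propto(1-1/x)$ velocity law adopted above. The stellar radius, terminal velocity, mass-loss rate, and X-ray luminosity in it are placeholders for illustration only, not fitted values.
\begin{verbatim}
import numpy as np

MSUN_PER_YR = 1.989e33 / 3.156e7       # g / s
M_H = 1.673e-24                        # g

def v_wind(x, v_inf=1.4e8):
    """v = v_inf (1 - 1/x) in cm/s, with x = r / R_star (v_inf assumed)."""
    return v_inf * (1.0 - 1.0 / x)

def n_e(x, mdot=1.0e-6, r_star=9.0 * 6.96e10):
    """Electron density of Eq. (eq:conserve); mdot in Msun/yr, r_star assumed."""
    r = x * r_star
    return 1.2 * mdot * MSUN_PER_YR / (4.0 * np.pi * r**2 * v_wind(x) * M_H)

def xi_and_q(x, rx, L_x=1.0e38, mdot=1.0e-6, r_star=9.0 * 6.96e10, d=1.77):
    """xi = L_x / (n_e r_x^2) and q = xi n_x D^2 / L_x; x, rx, d in stellar radii."""
    xi = L_x / (n_e(x, mdot, r_star) * (rx * r_star) ** 2)
    n_x = n_e(d, mdot, r_star)         # density at the neutron-star orbit, r = D
    q = xi * n_x * (d * r_star) ** 2 / L_x
    return xi, q

# Example: a point two stellar radii from the primary, one radius from the X-ray source.
xi, q = xi_and_q(x=2.0, rx=1.0)
print(f"xi = {xi:.3g} erg cm / s, q = {q:.3g}")
\end{verbatim}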
It is likely that Figure~4 underestimates the per cent variation of the P~Cygni absorption for the \ion{C}{4} and \ion{Si}{4} lines, as X-rays should be {\it more} effective in removing these ions from the wind than in removing \ion{N}{5}. A systematic uncertainty in choosing the continuum level will have a greater effect on the fractional variation in the equivalent width of the weaker lines, such as \ion{Si}{4}. In the next section, we will make the simplifying assumption that the entire region outside the conical shadow of the normal star is ionized by the X-ray source, and from this derive the radial velocity law.

\subsection{The radial velocity law from the maximum velocity of absorption\label{sec:vlaw}}

The spectra obtained with the GHRS were read out at 0.5 second intervals, using its RAPID mode (Paper~I). The P~Cygni absorption profile was variable on time scales shorter than the HST orbit. We interpret this as the result of the X-ray shadow of the primary crossing the cylinder subtending the face of the primary, where the blue-shifted absorption is produced. We attempt to infer the radial velocity law in the wind from this rapid P~Cygni variation and a few simple assumptions. We assume here that X-ray photoionization removes \ion{N}{5} and \ion{C}{4} from the entire wind outside of the X-ray shadow of the primary. This assumption is easily shown to be reasonable, based not only on the IUE and HST equivalent width variations (\S\ref{sec:IUE}) but also on the expected value of $q$. Given a mass-loss rate of 10$^{-6} \mbox{ M}_{\odot}$~yr$^{-1}$, a velocity at the X-ray source of 600~km~s$^{-1}$, and an ionization parameter $\log \xi=2.5$ at which there should be little \ion{N}{5} present (Kallman \&\ McCray 1982), we find $q=0.16$. As Buff \&\ McCray (1974) pointed out, as an X-ray source orbits a star with a strong stellar wind, the absorbing column to the X-ray source should vary in a predictable way that depends on the wind structure and the orbital inclination. If the wind is not photoionized, then the absorbing column increases near eclipse. In a companion paper (Boroson et al. 1999) we will examine the ionization in the wind from the variation in the absorbing column as seen with ASCA. Preliminary analysis supports $q\ll1$. We now use the geometry shown in Figure~5. The primary star is assumed to be spherical with a radius $R$. The neutron star orbits at a radius $D$. The angle between the ray connecting the neutron star to the center of the primary and a tangent to the primary's surface is given by $\cos \theta=R/D$. Then we define three relevant distances:
\begin{eqnarray}
D_2 & \equiv & \frac{R}{\cos(\pi-\theta-2\pi\phi)},\\
D_3 & \equiv & D_2+\frac{R D_2}{D \sin 2 \pi \phi}+\frac{R}{\tan 2 \pi \phi},\\
D_4 & \equiv & (D_3^2+R^2)^{1/2}.
\end{eqnarray}
We wish to compute, given $\phi$, the radius $r(\phi)$ in the wind responsible for the maximum observed velocity of absorption. Given $(r(\phi),v(\phi))$ derived from closely spaced $\phi$ when the shadow of the primary is moving across the line of sight, we can compare with the theoretical expectation of the wind acceleration velocity law. (Note that this assumes a spherically symmetric wind.) Given a monotonic velocity law, in most situations the maximum absorption velocity will be observed to be $V(\phi)$ corresponding to a wind radial velocity $V_{\rm max}$ (Figure~5), at a distance of $D_4$ from the center of the primary.
This is the furthest radial distance at which the wind can absorb stellar continuum and at which the X-rays from the neutron star are shadowed by the limb of the primary. Note that the wind at this point is moving obliquely so that $V(\phi)=V_{\rm max} D_3/D_4$. As $\phi$ increases from 0.1 to 0.5, the wind at $V_{\rm max}$ will move with increasing obliqueness to the line of sight. As a result, even though this point has the greatest radial distance of any point in the wind that can absorb \ion{N}{5}, it still may not cause the absorption with the highest possible {\em line of sight} velocity, even in a monotonic wind. As a first check to see if the obliquity of the velocity vector at $V_{\rm max}$ is important, we can compare $V(\phi)$ with $V_{\rm line-of-sight}$, which can be calculated from $D_2$, assuming a radial velocity law. We assume that $D/R=1.77$, which can be calculated from the mass ratio and the assumption that the primary fills its Roche lobe. (Note that we use the separation between the centers of mass of the two stars, and not the distance of the NS from the CM of the system, which is about 10\%\ smaller.) We determine the maximum velocity of absorption by fitting the high-velocity edge of the absorption profile with a broken line (Figure~6). The line is horizontal for the continuum blue-ward of the maximum velocity of absorption and has a nonzero slope, allowed to vary as a free parameter, at velocities less than the maximum. When the doublets do not overlap ($\phi>0.19$), we are able to obtain two maximum velocities at each phase, one for each doublet component. For the red doublet component, we do not fit a broken line in order to determine the maximum absorption velocity, as the profile is never horizontal. Instead, we find the maximum absorption velocity in the red doublet component by determining where the flux rises to the continuum level (which is determined by the broken-line fit to the blue doublet component.) The results from the red and blue doublet components do not show a systematic difference. Thus averaging maximum absorption velocities determined in each line allows us to reduce the errors in our velocity measurements. We can apply a further refinement to the method. The observed P~Cygni trough is actually a sum of the true absorption profile and blue-shifted P~Cygni emission. The observed maximum absorption velocity may be shifted by the addition of the emission. Although we can not observe the blue-shifted emission profile directly (it is masked by the absorption whose profile we are trying to uncover) we can approximate it using the reflected red-shifted profile at a complementary orbital phase, assuming spherical symmetry and the Hatchett-McCray effect. The emission flux $F$ at velocity $v$ and at phase $\phi$ is given by \begin{equation} \label{eq:emitcorrect} F(v,\phi)\approx F(-v,0.5-\phi). \end{equation} For example, at $\phi=0.1$, the red-shifted emission at high velocities will be diminished as a result of photoionization; the blue-shifted emission at $\phi=0.4$ will be similarly diminished. (This symmetry is broken because the primary shadows a portion of the red-shifted emission but none of the blue-shifted emission. However, we can safely neglect this effect because the blue-shifted emission in front of the stellar disk can not have a higher velocity than the blue-shifted absorption.) The zero-velocity for the reflection of the red-shifted emission is not completely constrained. 
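A minimal numerical sketch of this shadow geometry (in Python) is given below; it evaluates $D_2$, $D_3$, and $D_4$ from the expressions above, applies the inclination correction of \S\ref{sec:IUE}, and converts a wind radial velocity into the observed $V(\phi)=V_{\rm max}D_3/D_4$. The velocity law and its parameters used in the last step are placeholders for illustration, not the fitted law.
\begin{verbatim}
import numpy as np

D_OVER_R = 1.77                       # orbital separation in stellar radii (text)
INCL = np.radians(66.2)               # orbital inclination (Pietsch et al. 1985)

def corrected_phase(phi, incl=INCL):
    """phi' with cos(2 pi phi') = cos(2 pi phi) sin(i)."""
    return np.arccos(np.cos(2.0 * np.pi * phi) * np.sin(incl)) / (2.0 * np.pi)

def shadow_distances(phi, d=D_OVER_R):
    """D2, D3, D4 of the shadow geometry (Fig. 5), in stellar radii."""
    theta = np.arccos(1.0 / d)                    # cos(theta) = R / D
    a = 2.0 * np.pi * phi
    d2 = 1.0 / np.cos(np.pi - theta - a)
    d3 = d2 + d2 / (d * np.sin(a)) + 1.0 / np.tan(a)
    d4 = np.hypot(d3, 1.0)
    return d2, d3, d4

for phi in (0.15, 0.20, 0.25, 0.30):
    d2, d3, d4 = shadow_distances(corrected_phase(phi))
    v_max = 1400.0 * (1.0 - 1.0 / d4) ** 1.4      # km/s; placeholder beta-law
    print(f"phi={phi:.2f}  D2={d2:.2f}  D3={d3:.2f}  D4={d4:.2f}  "
          f"V(phi)={v_max * d3 / d4:.0f} km/s")
\end{verbatim}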
We allow two choices: 1) that the systemic velocity of LMC~X-4 is identical to that of the LMC, and 2) that the systemic velocity of LMC~X-4 is 150~km~s$^{-1}$ so that the turnover between red-shifted emission and blue-shifted absorption occurs near a velocity of~0. We give the measured values of $(r(\phi),v(\phi))$ in Table~1. We consider that a reasonable $1\sigma$ error for the velocity measurements is 50~km~s$^{-1}$. As an independent check on our measurements of wind velocity, we examine the maximum velocities at which the absorption profiles vary between two orbital phases. This is the only method we can apply to the \ion{C}{4}$\lambda1548$ lines, in which the P~Cygni absorption can not be separated from the strong interstellar and photospheric lines by examination of the spectrum alone. The best-fit maximum velocity of the \ion{C}{4} line is at $910\pm50$ km~s$^{-1}$. We note that we did not observe any red-shifted P~Cygni emission in \ion{C}{4}, so were unable to compensate for the variability in the blue-shifted emission. The maximum velocity of the change in \ion{N}{5} absorption between $\phi=0.111$ and $\phi=0.209$ is $1130\pm50$ km~s$^{-1}$, consistent with the measurements in Table~1. Between $\phi=0.209$ and $\phi=0.312$, the peak velocity of absorption variations is $560\pm50$ km~s$^{-1}$, consistent with the previous measurements (depending on the model of the blue-shifted emission that is subtracted). However, between $\phi=0.312$ and $\phi=0.410$, the maximum velocity of the absorption variations is $160$~km~s$^{-1}$. This is lower than the velocity inferred from the line profiles alone ($\approx320$ km~s$^{-1}$ at $\phi\approx0.312$). As discussed in \S\ref{sec:photo}, the blueshifted absorption may have a component which is produced near the stellar photosphere, rather than in the wind. In this case, the velocity width of the line may be produced by the rotation of the star, rather than the wind expansion. We suggest that the absorption line for $\phi>0.25$ is likely to contain a large photospheric component. Thus we do not fit to the maximum absorption velocity at these phases. We fit our measurements of $(r(\phi),v(\phi))$ to the analytic expression \begin{equation} \label{eq:windvlaw} v=v_{\infty} (1-1/x)^\beta+v_0 \end{equation} where $v_0$ is the systemic velocity of LMC~X-4, and $x=r/R$. The best-fit values and formal errors are presented in Table~2, for 3~possibilities of organizing the data, depending on whether we subtract our model for the blue-shifted emission from the spectra and whether the velocity of the reflection point is 0 or 150~km~s$^{-1}$. We find the best fit using a downhill simplex method (Nelder \&\ Mead, 1965) and estimate errors using a Monte-Carlo bootstrap method. We plot $(r(\phi),v(\phi))$ (measured from both doublet components) in Figure~7, along with the analytic fits. There is no evidence that the wind is nonmonotonic. All the results indicate that the wind expands more slowly than expected from the radiatively accelerated solution (which can be well-approximated with $\beta=0.8$; Pauldrach, Puls, \&\ Kudritzki 1986). While there is a large range of $\beta$ that can fit the 3 arrangements of observing $(r(\phi),v(\phi))$ ($1.4<\beta<2.4$), the choice of $\beta=1.39\pm0.14$ provides the lowest $\chi^2$ and the most reasonable systemic velocity. (However, note that turbulence in the wind could add a constant velocity to the absorption edge and could cause the data to be well-fit by a nonphysical value of the systemic velocity.) 
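Table~1 is not reproduced here, so the short sketch below (in Python) fits Equation~\ref{eq:windvlaw} to synthetic $(r,v)$ points generated from an assumed $\beta=1.4$ law plus noise, purely to illustrate the fitting step; it also uses a standard least-squares routine rather than the downhill simplex plus bootstrap procedure described above.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def beta_law(x, v_inf, beta, v0):
    """Wind velocity law of Eq. (eq:windvlaw): v = v_inf (1 - 1/x)^beta + v0."""
    return v_inf * (1.0 - 1.0 / x) ** beta + v0

# Placeholder (r/R, v) points standing in for Table 1 -- NOT the measured values.
rng = np.random.default_rng(1)
x_obs = np.array([1.6, 2.2, 3.0, 4.0, 5.0, 6.0])
v_obs = beta_law(x_obs, 1350.0, 1.4, 60.0) + rng.normal(0.0, 50.0, x_obs.size)
v_err = np.full_like(v_obs, 50.0)              # adopted 1-sigma error (km/s)

popt, pcov = curve_fit(beta_law, x_obs, v_obs, p0=[1400.0, 1.0, 0.0],
                       sigma=v_err, absolute_sigma=True)
for name, val, err in zip(("v_inf", "beta", "v0"), popt, np.sqrt(np.diag(pcov))):
    print(f"{name} = {val:.1f} +/- {err:.1f}")
\end{verbatim}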
The models are also consistent with the measurements of the wind velocity from differences of successive spectra of \ion{C}{4} at $\phi=0.15$ and \ion{N}{5} at $\phi=0.3$. While a coincident stellar absorption feature at $\lambda\approx1249$\AA\ decreases the red-shifted emission for velocities $v\gtrsim1000$~km~s$^{-1}$, compensating for this feature in our subtraction of the blue-shifted emission would only {\it increase} slightly the value of $\beta$ that we measure. Our fits for the stellar wind acceleration parameter $\beta$ are not sensitive to our choice of $D=1.77$; using $D=1.70$ and $D=1.84$ we find $\beta$ within $1\sigma$ of values in the first two rows of Table~2, but $\beta=2.00$ instead of $2.37\pm0.11$ for the third row. The effects of microturbulence in the wind on the measured maximum wind velocity should not alter the values we have found for $\beta$. For example, if the microturbulence velocity is constant, then our fits would give a different value of $v_0$ but the same $\beta$. If the microturbulence velocity is a constant fraction of the wind velocity, then as a rough approximation, all measured values of $v(\phi)$ will be multiplied by a constant, leading us to fit $(r(\phi),v(\phi))$ with a different value of $v_\infty$ but the same value of $\beta$. In this section we have demonstrated a new method of probing the velocity field of a stellar wind. We have assumed only standard parameters of the stellar radius and separation, that the wind is spherically symmetric, and that the X-ray source completely ionizes the wind. One drawback of the method is that only a very small portion of the available data is used, that is, only the velocity of the edge of the absorption. In the next section, we adapt the method to fit the entire absorption profile.

\subsection{The Radial Velocity Law from the Absorption Line Profile \label{sec:wholeprof}}

We now attempt to fit the entire absorption profile to determine the wind radial velocity law. The geometry of this method is shown in Figure~8. The X-ray shadow of the primary is conical. At distances $r\gg R$ from the center of the primary, the wind within the cylinder subtending the stellar surface has only a small oblique velocity (assuming a radial velocity field). Thus the surfaces of constant projected velocity, far from the primary, are planes perpendicular to the line of sight. The intersection between the cone of shadow and the planar isovelocity surface at a given observed velocity will most often be a parabola. This parabola bounds those portions of the stellar disk that lie behind unilluminated gas at the observed velocity, and thus are absorbed by the \ion{N}{5} in the wind, and those portions that do not. If we could obtain an image of the stellar disk of the primary in LMC~X-4 with a narrow-band filter centered at a wavelength corresponding to a Doppler shift at the given velocity, we would see a crescent shape bounded by the star's disk and the parabola. As the wavelength of the filter approached the terminal velocity of the wind, the intersection of the plane and cone would bound a smaller and smaller region of the stellar surface, and would thus determine the shape of the edge of the P~Cygni profile. In general, the isovelocity surfaces will not be planes, and we will need to solve for them numerically. We can no longer simply read off the velocity corresponding to each radius from the absorption profile, as we did in \S\ref{sec:vlaw}.
Instead, we need to assume a velocity law, find the isovelocity surfaces, find their intersection with the conical shadow, and then compute the absorption profile and compare with the observed profile. To use the observed absorption profile, we must correct for the blue-shifted emission by subtracting the red-shifted emission at a complementary phase. Now that we consider the line profile and not merely the edge of the absorption, we must recognize the asymmetry between the red and blue-shifted emission profiles at $\phi$ and $0.5-\phi$ which is introduced by the primary's shadow of the red-shifted emission. We correct for this by assuming that the shadowed red-shifted emission at the complementary phase is given by \begin{equation} F_{\rm shadow} (v,\phi) = -\frac{1}{2}\left(1-\frac{\sqrt{x^2-1}}{x}\right)F_{\rm abs} (-v,0.5-\phi) \end{equation} where $x$ is the radial distance in the wind, normalized to the stellar radius, $F_{\rm shadow}$ is the line profile of the shadowed part of the red-shifted emission, and $F_{\rm abs}$ is the intrinsic absorption line profile. For this approximation, we have noted that the shadowed red-shifted emission at $\phi$ corresponds to the blue-shifted absorption at $0.5-\phi$, with the dilution factor needed to compare the absorption with the emission. We also need to assume an optical depth in the wind. As a first attempt, we assume that the wind is entirely black to the continuum in the region of \ion{N}{5}. We neglect limb and gravity darkening of the primary, and its nonspherical shape. We correct for the photospheric absorption by subtracting the profile at $\phi=0.4$, when we assume the absorption is entirely photospheric. A fit to individual short subexposures, as in \S\ref{sec:vlaw} would not be time-efficient. Instead, we fit to the blue doublet component absorption profiles at $\phi=0.1,0.2,0.3$. The fit is poor, with $\chi^2_\nu=11$ with 253 degrees of freedom, $v_\infty=1780\pm10$ km~s$^{-1}$, $v_0=110\pm10$ km~s$^{-1}$, and $\beta=4.25\pm0.08$, which we do not consider a reasonable value. Allowing a uniform optical depth $\tau$ in the wind gives a better fit, with $\chi^2=2.95$, $v_\infty=1330\pm30$ km~s$^{-1}$, $v_0=72\pm24$ km~s$^{-1}$, $\beta=1.57\pm0.04$, and $\tau=0.97\pm0.06$. We show the results of this fit in Figure~9. This high value of $\beta$ is similar to those found in \S\ref{sec:vlaw}. In the following section, we will investigate detailed numeric models of the wind, relaxing our assumption that the X-rays ionize the entire wind outside the X-ray shadow of the normal star. We do not remove the restriction that the wind velocities are spherically symmetric, as this would allow more than one point along the line of sight to be at the same projected velocity, which would require us to use more sophisticated but time-consuming techniques, such as Monte-Carlo simulation of P~Cygni profiles. \section{Escape Probability Models\label{sec:numeric}} \subsection{Methods and basic models\label{sec:basic}} To model the P~Cygni line variations, we apply and extend the method of McCray et al. (1984). We compute the line profiles using the Sobolev approximation and the escape probability method (Castor 1970; Castor \&\ Lamers 1979), using a wind velocity law given by Equation~\ref{eq:windvlaw} with $\beta=1$. The optical depth in the undisturbed wind obeys \begin{equation} \label{eq:tau} \tau=T(1+\gamma)r^{-\gamma} \end{equation} where $T$ and $\gamma$ are empirical parameters (Castor \&\ Lamers 1979). 
The optical depth of gas illuminated by the X-ray source is given by the Sobolev approximation, \begin{equation} \label{eq:sobolev} \tau=(\pi e^2/m c)f \lambda_0 n_{\rm i,j} (dv/dr)^{-1}, \end{equation} where $e$ is the electron charge, $m$ the electron mass, $\lambda_0$ the rest wavelength of the transition, $f$ the oscillator strength, and $n_{i,j}$ the particle density of element $i$ in ionization state $j$. From mass conservation, $n_{i,j}$ is given by \begin{equation} \label{eq:density} n_{\rm i,j} = \frac{\dot{M}}{4 \pi r^2 v} m_{\rm H}^{-1} a_{\rm i} g_{\rm i,j}, \end{equation} where $g_{i,j}$ is the fraction of element $i$ in ionization state $j$, and $a_{\rm i}$ is the fractional abundance of element $i$. Given a value of $\xi$ and the shape of the X-ray spectrum, the ion fraction $g_{i,j}$ (and thus $n_{\rm i,j}$) can be computed. For this we used the XSTAR code (Kallman \&\ Krolik, based on Kallman \&\ McCray 1982), which requires that the X-ray luminosity $L_{\rm x}$ be integrated over the energy range 1--1000~Ry (13.6 eV--13.6 keV). The effectiveness of X-ray photoionization depends on the X-ray spectrum; we have used a broken power-law spectrum, \begin{eqnarray} F(E) & = & K E^{0.37}\qquad\mbox{for }E<25\mbox{ keV}\\ & = & K E^{-2}\qquad\mbox{for }E>25\mbox{ keV}, \end{eqnarray} where $F(E)$ is the energy spectrum. The power-law index for $E<25$~keV was determined from simultaneous ASCA observations of the 0.4--10 keV spectrum (Paper~I), while the high-energy cutoff has been observed at other times using GRANAT (Sunyaev et al. 1991). Adding a soft blackbody that contributes $\sim10$\%\ to the total flux (as seen in the X-ray spectrum with ASCA) has little effect on the ion fractions. We can treat the combined effect of the background ionization (due to the photospheric EUV flux or X-ray emission from shocks in the wind for example) and the X-ray ionization by \begin{equation} \label{eq:background1} g_{i,j,\rm tot}=(1/g_{i,j,\rm X} +1/g_{i,j,\rm back} - 1)^{-1}, \end{equation} in the case in which the wind in the absence of X-ray illumination is at least as highly ionized as the ion which causes the P~Cygni line. If this is not the case, then we have \begin{equation} \label{eq:background2} 1/g_{i,j,\rm tot}-1=(1/g_{i,j-1,\rm X}+1/g_{i,j-1,\rm back} - 2)^{-1} \end{equation} In Equations~\ref{eq:background1} and~\ref{eq:background2}, $g_{i,j,\rm X}$ and $g_{i,j,\rm back}$ are, respectively, the ion fractions with X-ray illumination (computed using XSTAR) and without (computed from equations~\ref{eq:tau}, \ref{eq:sobolev}, and \ref{eq:density}). We assume that all N in the unilluminated wind is ionized to at least \ion{N}{5} and apply Equation~\ref{eq:background1}. The models of $\tau$Sco ($T\approx$30,000) by MacFarlane, Cohen, \&\ Wang (1994) predict that the dominant ionization stage of N in the wind is \ion{N}{6} for wind velocities $v>0.2v_\infty$. In these models, the wind is ionized by X-rays, presumably caused by shocks in the wind, that have been observed by ROSAT. However, MacFarlane et al. found that their model of $\tau$Sco did not match UV observations, and that X-rays were less effective in ionizing the denser winds of hotter stars (consistent with the results of Pauldrach et al. 1994). We also tested models in which \ion{N}{4} was the dominant ionization stage of the unilluminated wind, and found that the best-fit parameter values were within the errors of the model we present here. 
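The sketch below (in Python) illustrates how Equations~\ref{eq:sobolev}, \ref{eq:density}, and \ref{eq:background1} combine into a Sobolev optical depth for a $\beta=1$ wind. The ion fraction is simply passed in (in the actual models it comes from XSTAR), and the stellar radius, terminal velocity, and the approximate \ion{N}{5}~$\lambda1238.8$ oscillator strength are assumptions adopted only for this example.
\begin{verbatim}
import numpy as np

SIGMA_CL = 0.02654                    # pi e^2 / (m_e c) in cm^2 Hz
M_H = 1.673e-24                       # g
MSUN_PER_YR = 1.989e33 / 3.156e7      # g / s

def g_total(g_x, g_back):
    """Combined ion fraction of Eq. (background1)."""
    return 1.0 / (1.0 / g_x + 1.0 / g_back - 1.0)

def tau_sobolev(x, g_ion, mdot=1.0e-6, a_i=1.8e-5, f_osc=0.156,
                lam0=1238.8e-8, v_inf=1.35e8, r_star=9.0 * 6.96e10):
    """Sobolev optical depth (Eq. sobolev) for v = v_inf (1 - 1/x).

    g_ion is the ion fraction g_{i,j}; mdot is in Msun/yr; r_star, v_inf,
    and f_osc are illustrative assumptions, not fitted quantities.
    """
    r = x * r_star
    v = v_inf * (1.0 - 1.0 / x)
    dvdr = v_inf * r_star / r**2
    n_ij = mdot * MSUN_PER_YR / (4.0 * np.pi * r**2 * v * M_H) * a_i * g_ion
    return SIGMA_CL * f_osc * lam0 * n_ij / dvdr

# Example: ion fractions of 0.1 (X-ray illuminated) and 0.5 (background) at x = 2.
print(tau_sobolev(x=2.0, g_ion=g_total(0.1, 0.5)))
\end{verbatim}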
The choice of Equation~\ref{eq:background1} or Equation~\ref{eq:background2} has a small effect on our results, as it affects only that portion of the wind that is illuminated by direct X-rays from the neutron star, yet is ionized so slightly that the background ionization rate is important. We treat the overlap of doublet components using the approximation of Castor \&\ Lamers (1979). The optical depths in the photospheric absorption lines are assumed to have Gaussian profiles. The red and blue doublet components of the photospheric lines have optical depths at line center of $\tau_{\rm b}$ and $\tau_{\rm r}=0.5 \tau_{\rm b}$ (in the lines we consider, the oscillator strengths are in a 2:1 ratio). We do not account for limb and gravity darkening of the companion or its nonspherical shape. However, we normalize the absorbed continuum (which arises on the visible face of the star) to the continuum level observed by the GHRS at each particular orbital phase, while we normalize the scattered emission (which can originate anywhere on the companion) to the average continuum level observed with the GHRS. The free parameters of the fit are $T$, $\gamma$, $v_{\infty}$, $v_0$ (the systemic Doppler shift of LMC~X-4), $L_{38}/\dot{M}_{-6}$ (the ratio of X-ray luminosity in 10$^{38}$ erg~s$^{-1}$ to mass-loss rate in 10$^{-6} \mbox{ M}_{\odot}$~yr$^{-1}$), $\dot{M}_{-6}$, and $\tau_{\rm b}$. We employ a downhill simplex method (Nelder \&\ Mead, 1965) to find the minimum $\chi^2$ and then refine the solution and find errors in the parameters using the Levenberg-Marquardt method (Marquardt, 1963). For all velocities, we set the error to at least 35~km~s$^{-1}$, which is the resolution limit of the GHRS G160M grating. We exclude from the fit the central 1\AA\ region around the rest wavelength of each transition, which is poorly modeled by the escape probability method. In Table~3 we show the best-fit parameters and we display the fits to the line profiles in Figure~10. The fits determine $L_{38}/\dot{M}_{-6}=0.26\pm0.01$ because the ion fractions in the wind (and thus the optical depths) depend sensitively on this ratio. The fits also determine $\dot{M}_{-6} a_{\rm N}/a_{\rm N, LMC}=3.2\pm2.3$, where $a_{\rm N, LMC}=1.8\times10^{-5}$ is the nitrogen abundance in the LMC, assumed to be 20\%\ of the cosmic value. The error determined for this parameter shows that it is not effectively constrained by the fit; this is because it affects the optical depth only linearly in the portion of the wind outside of the X-ray shadow. In general, the model successfully matches the change in the P~Cygni profiles between successive orbital phases (Figure~10, right-hand panel). However, the predicted change in the \ion{N}{5} profiles between $\phi=0.31$ and $\phi=0.41$ is blue-shifted from the observed change. This is further evidence that the wind expands more slowly at small radii than expected from a $\beta=1$ velocity law.

\subsection{Wind Inhomogeneities\label{sec:inhom}}

We now allow the wind density to be inhomogeneous by adding two new free parameters, another mass-loss rate $Z \dot{M}$, where $Z\gg1$, and a covering fraction $f$. In the absence of X-ray photoionization, the optical depths in the two wind components are $\tau_1=\tau$ (where $\tau$ is given by Equation~\ref{eq:tau}) and $\tau_2=Z \tau$. The ionization fractions are computed for a given point in the wind that has a density determined from $\dot{M}$ and $Z \dot{M}$.
For the blue-shifted absorption, a fraction $f$ of sightlines to the stellar surface encounters material with the higher density. This material has a much higher optical depth, as not only is there more of this material, but it also contains a higher fraction of \ion{N}{5}. Applying a downhill simplex $\chi^2$ minimization algorithm gives an improved $\chi^2_\nu=4.01$. The fit is improved for two reasons. First, the fit of the homogeneous wind model over-predicted the effects of X-ray ionization on the red-shifted emission (Figure~10). The inhomogeneous wind model allows more \ion{N}{5} to survive in the X-ray illuminated wind, so that more of the red-shifted emission persists. Second, the homogeneous wind model predicts too much absorption in the 1238\AA\ line and not enough absorption in the 1242\AA\ line at $\phi=0.12$. When an inhomogeneous wind is allowed, the strength of the absorption in the two components can be more nearly equal, as absorption from the dense wind is saturated. We compare the inhomogeneous wind models with the GHRS observations in Figure~11. We find a ``covering fraction'' $f=0.33\pm0.01$; this is the probability that a ray from the primary encounters denser material with the mass-loss rate $Z\dot{M}$. Our best-fit values are $Z=700\pm200$ and $L_{38}/\dot{M}_{-6}=160\pm40$. If we assume that the covering fraction is approximately the volume filling factor, then the average mass-loss rate of the wind, $\bar{\dot{M}}=(1-f)\dot{M}+f Z \dot{M}$, leads to $L_{38}/\bar{\dot{M}}_{-6}=0.7\pm0.4$. We note that the best-fit value for the photospheric optical depth in the blue doublet component, $\tau_{\rm b}=0.62\pm0.02$, is higher than the values obtained for similar stars by fits to IUE data (Groenewegen \&\ Lamers 1989).

\subsection{Alternate Velocity Laws\label{sec:beta}}

As discussed in \S\ref{sec:vlaw}, the change in the maximum velocity of the P~Cygni line absorption over orbital phase allows us to infer the radial velocity law in the wind, assuming only that the X-ray source ionizes the entire wind outside of the shadow of the normal star. In \S\ref{sec:basic}, we modelled the P~Cygni line variations allowing the X-ray photoionization of the wind to vary but assuming a standard $\beta=1$ radial velocity law. If we allow a slowly accelerating wind with $\beta=1.6$ we can still obtain a good match with the observed line profiles, with $\chi^2=4.91$. In this case we find $v_\infty=1560\pm35$~km~s$^{-1}$ and $L_{38}/\dot{M}_{-6}=0.23$. Allowing an inhomogeneous wind improves the fit to $\chi^2=4.59$. We conclude that while the line profiles can be adequately fit with a $\beta=1$ velocity law, a more slowly accelerating wind, inferred from simpler models, is still consistent with the data. Velocity laws with $\beta>2$ (as in some of the fits in \S\ref{sec:vlaw}) generally predict steeply peaked red-shifted emission (Castor \&\ Lamers 1979); however this is difficult to rule out from the data, as photospheric absorption is probably present at low velocities.

\subsection{X-ray Shadow of the Accretion Disk and Gas Stream on the Wind\label{sec:diskjet}}

The accretion disk or gas stream from the L1 point could presumably shadow regions of the stellar wind from X-ray illumination. We have made models to find signatures of these effects in the P~Cygni line profiles. LMC~X-4 shows a $\approx30$~day variation in its X-ray flux (Lang et al. 1981), usually attributed to shadowing by an accretion disk that we observe at varying orientations.
To allow the disk to shadow regions of the wind from the X-rays, we make the simple assumption that it is flat but inclined to the orbital plane. Howarth \&\ Wilson (1983) used a similar geometry to model the optical variability of the Hercules~X-1 system, which shows a similar 35~day X-ray period. We allow as free parameters the angular half-thickness of the disk, its tilt from the orbital plane, and its ``precessional'' phase within the 30-day cycle. Fixing the 30-day phase to be at the X-ray maximum (as suggested by observations with the All Sky Monitor on the Rossi X-ray Timing Explorer, Paper~I), we perform a $\chi^2$ fit and find a disk half-thickness of $6.9\pm0.3\arcdeg$ and a disk tilt of $31\pm2\arcdeg$ from the orbital plane, for a reduced $\chi^2=4.62$. To simulate the shadow caused by the gas stream from the L1 point, we have assumed that the stream emerges at an angle of $22.5\arcdeg$ from the line to the neutron star (Lubow \&\ Shu 1975). The stream blocks the X-rays from a uniform half-thickness $\rho$ in elevation from the orbital plane (as seen from the X-ray source) and at angles $<\eta$ from the line to the L1 point along the orbital plane; $\rho$ and $\eta$ are free parameters of the fit. The fit gives $\chi^2=4.04$, with $\rho=33\pm2\arcdeg$ and $\eta=90\pm3\arcdeg$. The values for the other free parameters are similar to those found in \S\ref{sec:basic}. The reason that the gas stream shadow improves the P~Cygni profile fits is that at $\phi=0.1-0.4$, the stream shadows the receding wind, increasing the red-shifted emission, which our basic model of \S\ref{sec:basic} underpredicts. Allowing an inhomogeneous wind as in \S\ref{sec:inhom} improves the fit for the same reason. It is possible that both effects are combined, and future observations will be needed to disentangle them.

\subsection{\ion{C}{4} Lines\label{sec:c4}}

The \CIV lines did not show as much variability with the GHRS as the \ion{N}{5} lines, although we did not observe the lines before $\phi=0.16$. Most of the absorption in these lines is probably either interstellar or photospheric (Vrtilek et al. 1997). Nevertheless, as a check on our models of the \ion{N}{5} lines, we also performed fits to the \CIV lines. We fix the parameters found in \S\ref{sec:basic}, except for $T$, $\gamma$, and $\tau_{\rm b}$, which are free parameters defining the optical depth of \ion{C}{4} in the wind and photosphere. We added fixed Gaussian interstellar \CIV absorption lines to the profiles. The result of our model, shown in Figure~12, does not match the line profiles in detail ($\chi^2=11.3$). It is possible that the fit is poor because there are photospheric lines in the vicinity of 1550\AA\ due to ions other than \ion{C}{4}. A systemic velocity closer to 0 might bring the photospheric lines into closer agreement with the data. In both the observed profile and in the model, red-shifted emission is not prominent. The fitted value of $\gamma=7.9\pm0.8$ implies that \ion{C}{4} in the undisturbed wind is confined to $x<2$, so that much of the red-shifted \ion{C}{4} emission could be occulted by the primary. If we fit the line profiles with models with higher values of $\beta$, then $\gamma$ is reduced, but our fits never gave $\gamma<4.5$ for $\beta<2.4$ (the upper limit found in \S\ref{sec:vlaw}). We compare our values of $\gamma$ with the optical depth law found for \CIV by Groenewegen \&\ Lamers (1989) by fitting the \CIV P~Cygni lines of similar stars observed with IUE.
They parameterize the optical depth in the stellar wind by \begin{equation} \tau\sim (v/v_\infty)^{\alpha_1} [1-(v/v_\infty)^{1/\beta}]^{\alpha_2} \end{equation} where $\alpha_2=\gamma$ for $\alpha_1=0$ and $\beta=1$. For HD~36861 (spectral type O8\,III) Groenewegen \&\ Lamers find $\alpha_1=1.5\pm0.2$ and $\alpha_2=3.1\pm0.5$, which is inconsistent with $\gamma>3.1$. For HD~101413, they find $\alpha_1=-0.8\pm0.3$ and $\alpha_2=1.6\pm0.4$. These values imply a sharply peaked concentration of C\,IV towards the stellar surface, but do not fit our data as well as $\gamma\gae5$. We have no compelling alternate explanation for our high value of $\gamma$. However, the \CIV lines in LMC~X-4 that we observed were weak and were dominated by photospheric absorption. Further observations at phases when \CIV is more prominent are needed to test our conclusion that this ion is concentrated near the stellar surface. \subsection{Fits using the SEI method} It has been shown (Lamers et al. 1987) that inaccuracies resulting from the escape probability method can be reduced by computing the radiative transfer integral exactly along the line of sight while continuing to use the source function given by the Sobolev approximation. This is called the SEI (Sobolev with Exact Integration) method. The method can be used to simulate the effects of local turbulence within the wind, and using the method of Olson (1982), the effects of overlapping doublets can be computed precisely. We have implemented a program that uses the SEI method to predict the P~Cygni line profiles from a wind ionized by an embedded X-ray source. The details of the wind ionization are identical to those given by Equations~\ref{eq:sobolev} through \ref{eq:background2}. We find that in our analysis of LMC~X-4, the approximation we have made for the doublet overlap gives results very close to those given by Olson's method and the SEI method. This may result from the small amount of the doublet overlap in the \ion{N}{5}\,$\lambda\lambda\,1238.8,\,1242.8$\ lines, given a separation of 960~km~s$^{-1}$ between the doublet components and a wind terminal velocity of 1350~km~s$^{-1}$. Using a turbulent velocity $v_{\rm t}$ that is constant throughout the wind, we find that the line profiles are best fit with $v_{\rm t}<200$~km~s$^{-1}$. The SEI method also confirms the results of the escape probability model for the \CIV lines, as reported in \S\ref{sec:c4}. Although the separation of the \CIV lines is only 500~km~s$^{-1}$, we suggest that the doublet overlap was not severe during our observations because much of the wind at high velocity had already been ionized, and \ion{C}{4} is concentrated near the primary's surface, at low wind velocities. \section{Discussion} We have demonstrated a new method for inferring the radial velocity profile in a stellar wind. The wind radial velocity law for LMC~X-4 can be fit with $\beta\approx1.4-1.6$, and $\beta<2.5$. This differs from the expected value of $\beta=0.8$, but the wind in the LMC~X-4 system is likely to deviate from that of an isolated O star. Assumptions of the method, that the wind is spherically symmetric, for example, will need to be revised in light of further observations that cover the entire orbital period. Unfortunately, we did not observe the UV spectrum during the X-ray eclipse when the X-ray ionization of the blueshifted absorption should be minimal, and so our inference of the wind terminal velocity is model-dependent. 
Observations with STIS and the Far Ultraviolet Spectroscopic Explorer (FUSE) during eclipse may distinguish between our alternate models. The wind terminal velocity we have inferred from our $\beta=1$ escape probability method, $v_\infty=1350\pm35$~km~s$^{-1}$, is lower than the terminal velocities measured for similar stars by Lamers, Snow, \&\ Lindholm (1995). Their stars \# 33,37,38,39, and 44 are all near the 35,000\,K temperature of LMC~X-4, but have $v_\infty=1500-2200$~km~s$^{-1}$. However, stars in the LMC are known to have terminal velocities $\approx20$\%\ lower than their galactic counterparts (Garmany \&\ Conti 1985). We note from the IUE data shown in Figure~4 that the P~Cygni line absorption at orbital phases $\phi>0.5$ is greater than that at $\phi<0.5$. X-ray absorption dips are frequently seen at $\phi\sim0.8$, possibly indicative of dense gas in a trailing gas stream or accretion wake. However, it may be difficult for models to reproduce an accretion wake that occults a large fraction of the primary at orbital phases as late as 0.9. Another possible explanation is that a photoionization wake is present (Fransson \&\ Fabian 1980, Blondin et al. 1991). Such a wake results when the expanding wind that has not been exposed to X-rays encounters slower photoionized gas. While a strong photoionization wake is expected for $q\ll1$, the presence of Roche lobe overflow and a photoionization wake have been found to be mutually exclusive in some simulations (Blondin et al. 1991). A further possibility is that at $\phi\sim0.6-0.9$ the wind in the cylinder subtending the primary's disk is shielded from ionization by the trailing gas stream. A simple prediction of this scenario is that there should be more flux in the red-shifted emission at $\phi=0.25$ than at $\phi=0.75$. If the primary star does indeed rotate at half the corotation velocity (Hutchings et al. 1978) then the gas stream may trail the neutron star by more than the 22.5$\arcdeg$ predicted by Lubow \&\ Shu (1975). We have fit the changing P~Cygni line profiles using an escape probability method, assuming a radial, spherically symmetric stellar wind. However, we note that the Hatchett-McCray effect is most sensitive to the region between the $q=1$ surface and the conical shadow of the primary. Outside of this region, the wind should be entirely ionized for likely values of $\dot{M}$ and $L_{\rm x}$. The region of the wind that our method is sensitive to is illuminated by X-rays, and its dynamics may be affected as a result (Blondin et al. 1990). Structure associated with the distortion of the primary, such as the gas stream and a wind-compressed disk (Bjorkman 1994), may also be important in this region. We have inferred $L_{38}/\dot{M}_{-6}=0.26\pm0.01$ from our escape probability P~Cygni line models. Our simultaneous ASCA observations found a 2--10~keV flux of $2.9\times10^{-10}$~erg~s$^{-1}$~cm$^{-2}$, which given a distance to the LMC of 50~kpc implies $L_{38}=0.9$ and $\dot{M}_{-6}\approx3$ (our inhomogeneous wind model of \S\ref{sec:inhom} would imply $\dot{M}_{-6}\approx1$). There is some uncertainty in applying $L_{38}=0.9$, as the isotropic X-ray spectrum and luminosity may differ from that observed by ASCA. Nevertheless, the derived $\dot{M}$ is an order of magnitude larger than that reported by Woo et al. 
(1995), who used the strength of scattered X-rays to find $\dot{M}/v_\infty=10^{-10}\mbox{ M}_{\odot}$~yr$^{-1}$~(km~s$^{-1}$)$^{-1}$, implying for our best-fit value of $v_\infty=1350\pm35$ km~s$^{-1}$ that $\dot{M}_{-6}=0.14$. Our result for $\dot{M}$ is on the order of the single-scattering limit given by the momentum transfer from the radiation pressure of the primary; $\dot{M}<L_{\rm opt}/v_\infty c$, where the optical luminosity of the primary $L_{\rm opt}\sim5\times10^{38}$~erg~s$^{-1}$ (Heemskerk \&\ van Paradijs 1989) implies that $\dot{M}<2\times10^{-6}\mbox{ M}_{\odot}$~yr$^{-1}$. However, if the wind is accelerated by scattering of overlapping lines, one can have $\dot{M}= 2-5 L_{\rm opt}/v_\infty c$ (Friend \&\ Castor 1982). Combining our value of $\dot{M}_{-6}\approx3$ with our measured wind velocity near the orbit of the neutron star (400~km~s$^{-1}$) and the orbital velocity of the neutron star (440~km~s$^{-1}$), we find that gravitational capture of the stellar wind (Bondi \&\ Hoyle 1944) could power a significant portion of the observed X-rays ($L_{38}\approx0.3$). However, we cannot rule out that the low velocity and high wind density we have measured are attributable not to an isotropic wind, but to other gas in the system, such as the gas stream or a wind-compressed disk about the primary. \acknowledgements We would like to thank M. Preciado for his assistance. This work was based on observations with the NASA/ESA {\it Hubble Space Telescope}, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract GO-05874.01-94A. BB and SDV were supported in part by NASA (NAG5-2532, NAGW-2685), and NSF (DGE-9350074). BB acknowledges an NRC postdoctoral associateship. \clearpage
\section*{Introduction} Investigations of the structure and physical properties of liquid crystals have greatly increased in the last decades and are now an integral part of solid-state physics. Certain organic materials do not show a single transition from solid to liquid, but rather a cascade of transitions involving new phases. The mechanical properties and the symmetry properties of these phases are intermediate between those of a liquid and those of a crystal \cite{degennes}. Depending upon the nature of the building blocks and upon external parameters (temperature, solvents, etc.) a wide variety of phenomena and transitions amongst liquid crystals are observed. To generate a liquid crystal one must use anisotropic objects, like elongated molecules. Several ways are known to achieve this: with small molecules; with long helical rods that either occur in nature or can be made artificially; with polymers; with more complex structure units like the capsid of a virus \cite{alberts} that are associated structures of molecules and ions; likewise, with even more complex structure units, like amoeboid cells which are complex structures far from thermodynamic equilibrium and act as a machine \cite{PMNcluster,dewald}. Here, the orientational elastic properties of liquid crystals formed by migrating and living cells are investigated. Thermotropic nematic liquid crystals can be formed by small elongated molecules like p-azoxyanisole (PAA) or N-(p-methoxy\-benzylidene)-p-butylaniline (MBBA) which can be regarded as rigid rods ($ \approx 2$ nm length and $\approx 0.5$ nm width). At low temperature one observes a crystalline phase. Heating up the sample, a transition to a nematic phase is observed. With further heating the transition to an isotropic liquid is observed. Lyotropic nematic liquid crystals can be formed by long elongated molecules in suitable solvents. The anisotropic building blocks can have a rod-like conformation like (i) synthetic polypeptides ($\approx 30$ nm length and $\approx 2$ nm width) and (ii) tobacco mosaic virus ($\approx 300 - 1000$ nm length and $\approx 20$ nm width). At low concentration one observes an isotropic liquid but a transition to a nematic phase is observed by increasing the concentration of the elongated building blocks. The fundamental property of a nematic liquid crystal, which makes it different from an isotropic liquid and similar to a solid, is the presence of an orientational degree of freedom which is characterized by a macroscopic spatial ordering of the long axis of molecules. The statistical mechanics for predicting the macroscopic spatial ordering is difficult but can be approximated by applying mean-field calculations. (i) The Onsager approach starts with anisotropic steric repulsion. The free energy is minimum for oriented rods (width $<<$ length): they show an orientational order of a nematic phase at high rod density. But, at low rod density, the suspension is an isotropic liquid, as expected. (ii) The Maier-Saupe theory approximates the intermolecular interactions by a mean field which is proportional to the order parameter of the nematic phase. The minimum of the free enthalpy yields the equilibrium state. At high temperature an isotropic liquid without orientational order is predicted, but a nematic liquid crystal with orientational order can occur at low temperatures. The above mentioned nematic liquid crystals are described by equilibrium thermodynamics. 
This holds true even for tobacco mosaic virus since only the anisotropic shape of the viruses is considered. The intracellular metabolism can be neglected since the virus does not have the ability of self-motion. Thus, the thermal motion is the stochastic source for the molecular dynamics. However, liquid crystal phases can also be formed if the stochastic source is created by energy consuming processes. Typical examples are migrating biological cells like white blood cells. In the absence of a directing extracellular signal, the single cells perform a random walk due to the existence of stochastic processes in the intracellular signal transduction chain. The migrating cells can form liquid crystal phases if each cell transmits extracellular signals that are received by other cells. The symmetry of the cell-cell interaction is important for the type of liquid crystal phase that is formed: (i) A polar nematic liquid crystal is expected if the cell-cell interaction is polar. This means that one cell transmits a signal which attracts another cell. The cellular response is directed migration. For example: (a) Single slime mould cells migrate as amoeboid cells on a surface and search for bacteria as food. In the case of low bacterial concentration, the intracellular metabolism of the slime mould cells alters in such a way that cyclic AMP is emitted as an extracellular signal. Other slime mould cells can receive this chemical signal and react to the received signal with directed migration. At high enough cell density, the cells form a cluster where each cell tries to move towards the center of the cluster \cite{slime}. (b) A similar process is observed with migrating white blood cells \cite{PMNcluster}. If human leukocytes (granulocytes) are exposed to blood plasma, they migrate as single amoeboid cells on a surface as shown in Fig. \ref{Cluster1}a. But if the calcium concentration is lowered, e.g. by adding the chelator EDTA (ethylene diamine tetraacetic acid, $pK_{Ca} = 10.7$) or EGTA (ethyleneglycol-(aminoethyl ether) tetraacetic acid, $pK_{Ca} = 11$) to the plasma, then the migrating granulocytes attract each other and form a cluster, as shown in Fig. \ref{Cluster1}b. The emitted molecules (the chemical signal) are still unknown. The process is reversible since the clusters disappear if the calcium concentration is increased. In both examples, the cells in a cluster form a polar nematic liquid crystal because (i) there is order in the cell orientation, $\langle cos \varphi \rangle$, induced by the directed migration and (ii) no order in the center of mass of the cells. The angle, $\varphi$, is the difference between the actual migration direction and the direction towards the center of the cluster. The cells in the cluster of Fig. \ref{Cluster1}b have a polar order of 0.82. A singularity in the cell orientation is expected in the center of the cluster. A common observation is that the cell in the center has a spherical shape and the surrounding cells have a polar shape. Human monocytes (another type of leukocytes) are immobile on a glass surface. If a mixture of granulocytes and monocytes is investigated, the immobile monocytes act as nucleation centers as shown in Fig. \ref{Cluster1}b. The polar nematic liquid crystal formed by living and migrating cells is a system far from thermodynamic equilibrium. One can ask which temperature corresponds to the random movement of the cells. 
This question can be answered by using the Einstein relation which connects the random movement of an inert particle described by the diffusion coefficient D, the mobility R and the thermal energy $k_B$T: $D R = k_B T$. The mobility of a spherical particle with radius r in a viscous medium with viscosity $\eta$ is given by Stokes: $R = 6 \pi \eta r$. The random movement of a migrating granulocyte can be quantified by measuring the mean-squared displacement as a function of time. The diffusion coefficient, $D$, is one fitting parameter ($D = 200 - 1000 \mu m^2/min$). The calculated temperature for an inert particle which has the value of the diffusion coefficient of the migrating cells is very high, $\approx 10^5 $ K ($r = 10 \mu m, \eta = 0.01 P$) \cite{franschi}. (ii) An apolar nematic liquid crystal is expected if the cell-cell interaction is apolar. This means that one cell can influence the orientation of another cell. The cellular response is such that the cells like to be parallel. For example, human melanocytes are elongated cells which distribute with their dendrites the pigment melanin in the skin. Although the involved chemical signal is unknown there exists an attractive cell-cell interaction because the cell body of one cell attracts another cell body. There also exists a repulsive cell-cell interaction since the dendrites avoid contact with each other. The elongated cells try to be oriented parallel to each other and form at high cell density a nematic liquid crystal. At low cell density the cells form clusters of oriented cells as shown in Fig. \ref{Melano21}a; however, with increasing cell concentration the cells form a nematic liquid crystal. Fig. \ref{Melano21}b shows a nematic liquid crystal close to the nematic-isotropic transition. This observation is not restricted to melanocytes. Other cell types like fibroblasts, osteoblasts, adipocytes etc. orient in the same way so that, in the mean, the cells are parallel to each other. The cells form an apolar nematic liquid crystal because there is no order in the center of mass of the cells but order in the cell orientation. The apolar symmetry of this phase will be demonstrated by investigating orientational defects as shown below. A few words on the dynamical behavior: Time lapse movies of melanocytes show that the cell body performs rhythmic movements and, in addition, each cell periodically extends and retracts its opposing dendrites. The dendrites find their orientation with minimum interaction with other cells \cite{teichgraeber}. A uniformly oriented nematic liquid crystal is formed if a small extracellular guiding signal is applied, like parallel scratches in the surface (Fig. \ref{Melano21}). With this method, oriented single nematic liquid crystals with a size of several cm$^2$ were produced. An isolated single cell is not oriented by such a weak extracellular guiding signal \cite{dewald}. The uniformly oriented nematic liquid crystals used in liquid crystal display technology are made in a very similar way \cite{degennes}. In the absence of an orienting extracellular guiding signal the nematic phase contains numerous orientational point defects (disclinations). An example is given in Fig. \ref{Melano41}. Summarizing this part, nematic liquid crystal phases can be formed either by anisotropic interacting building blocks at thermal equilibrium or by anisotropic interacting building blocks far from thermodynamic equilibrium. 
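As a small numerical aside before turning to the elastic energy, the quasi-temperature estimate quoted above can be reproduced directly from the Einstein relation (a minimal sketch, using only the values given in the text; it is not part of the original analysis):
\begin{verbatim}
import numpy as np

# Quasi-temperature of migrating granulocytes from D*R = k_B*T with the
# Stokes drag R = 6*pi*eta*r; all numbers are the values quoted in the text.
k_B = 1.381e-23                                  # J/K
eta = 0.001                                      # Pa s  (0.01 P)
r   = 10e-6                                      # m, cell radius
D   = np.array([200.0, 1000.0]) * 1e-12 / 60.0   # 200-1000 um^2/min in m^2/s

T_quasi = D * 6.0 * np.pi * eta * r / k_B
print(T_quasi)   # roughly 5e4 to 2e5 K, i.e. of order 1e5 K
\end{verbatim}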
\section*{Orientational Elastic Energy} An apolar nematic liquid crystal is a uniaxial system ($\pm \vec{n}$) where the molecules are on average aligned along the director $\pm \vec{n}$. In the case of an ideal nematic liquid crystal, the direction of the director $\vec{n}$ is the same all over the sample. However, this ideal conformation will not be compatible with the constraints that are imposed by limiting boundaries. There will be some deformation of the alignment. The distance $\ell$, over which significant variations of orientation occur, is of most interest, and is much larger than the dimensions, $a$, of the building blocks. Thus, the deformations may be described by a continuum theory disregarding the details of the structure on the scale of the building blocks ($a/\ell<<1$). To construct such a theory one possible starting point would be the elastic distortion energy density $F_d$ (\cite{oseen,frank}). The spatial derivatives of $\vec{n}(\vec{r})$ are used to construct $F_d$. They form a tensor of rank two $\partial_\alpha n_\beta$. At any point we introduce a local system of Cartesian co-ordinates, x, y, z with y parallel to $\vec{n}$ at the origin, x chosen perpendicular to y (the x-y plane is the plane where the cells migrate) and z parallel to the normal of the plane. x, y, z form a right-handed system. Referred to these axes, the two components of curvature at this point are the splay and bend deformation $s = \partial n_x/\partial x$ and $b = \partial n_x/\partial y$, respectively (see Fig. \ref{SplayBend}). The distortion energy of a liquid crystal specimen in a particular configuration, relative to its energy in the state of uniform orientation, is expressible as the area integral of the distortion energy density $g$ which is a quadratic function of the two differential coefficients which measure the curvature in two dimensions. \begin{equation} g = k_1 s + k_3 b + \frac{1}{2} k_{11} s^2 + \frac{1}{2} k_{13}s b + \frac{1}{2} k_{33} b^2 + \cdots \end{equation} The spontaneous splay coefficient, $k_1$, is zero if the axis perpendicular to $\vec{n}$ is a symmetry axis ($\leftarrow = \rightarrow$). The spontaneous bend coefficient, $k_3$, is zero if the axis parallel to $\vec{n}$ is a symmetry axis ($\uparrow = \downarrow$). The splay-bend coefficient, $k_{13}$, is only unequal to zero for $\rightarrow \neq \leftarrow$ and $\uparrow \neq \downarrow$. $k_{11}$ and $k_{33}$ are the orientational elastic coefficients for splay and bend deformation. These coefficients are also known as Frank elastic constants. The distortion energy can be written in the coordinate-free notation \begin{equation} g = k_1 \nabla \cdot \vec{n} + k_3 (curl \vec{n})_z + \frac{1}{2} k_{11} (\nabla \cdot \vec{n})^2 + \frac{1}{2} k_{13}(\nabla \cdot \vec{n})(curl \vec{n})_z + \frac{1}{2} k_{33} (curl \vec{n})_z^2 \end{equation} \section*{Orientational Elastic Constants} \subsection*{Splay and Bend Constants} The orientational elastic constant of a nematic liquid crystal (bulk) can be estimated by purely dimensional arguments: one expects that the elastic constants are of the order of $U/a$, where $U$ is a typical interaction energy between the molecules while $a$ is a molecular dimension. A typical value of the molecular interaction energy is $\approx$ 8 kJ/mol or $1.3 \times 10^{-20}$ J/molecule which corresponds to a temperature of $\approx 2000$ K. A typical liquid crystal molecule has a length of about 1.5 nm. This leads to a value of $\approx 8 \times 10^{-12} $ N which is in accordance with the experiments \cite{degennes}. 
The orientational elastic constant of a membrane can be estimated by taking this value times the thickness, d, of the membrane. It yields $3 \times 10^{-20}$ J which is again verified by experiments \cite{helfrich}. The orientational elastic constant of a nematic liquid crystal formed by living cells can be estimated in a similar way: In the case of migrating granulocytes the quasi temperature is $\approx 10^5$ K. A condensed phase can only be formed if the interaction potential, U, is of this order or even larger. This leads to an interaction energy of $\approx 5 \times 10^{-19}$ J/cell. The bulk elastic constant is obtained by dividing this interaction energy by the length of a cell ($\approx 20 \mu$m). It yields $2.5 \times 10^{-14}$ N. The two dimensional elastic constant is obtained by multiplying this value by the cell thickness ($\approx 20 \mu$m). The estimated orientational elastic constant of a quasi two dimensional nematic liquid crystal is then $\approx 5\times 10^{-19}$ J. The dependence of the orientational elastic coefficients on the apolar order parameter, $S$, and the cell density, $\rho$, can be estimated in the following way: The orientation angle of amoeboid cells is controlled by an automatic controller which compares the actual angle with the desired one \cite{peliti}. The cellular response is such as to minimize the deviation between the actual angle and the desired one. The desired orientation of one cell is to be parallel to another cell. In the case of a nematic liquid crystal the information on the desired angle is transferred to one cell by an extracellular guiding field which can be chemical, electrical or steric. The extracellular guiding field, $E_2$, of a nematic liquid crystal can be approximated by the nematic mean field where one cell is considered in the field produced by all the other cells. The strength of the mean field is assumed to be proportional to (i) the cell density, $\rho$, and (ii) the mean apolar order parameter, $S(=\langle cos 2 \Theta \rangle)$, of the surrounding cells. \begin{equation} E_2 = a_2 \, \rho \, S \end{equation} These assumptions are verified by experiments \cite{dewald}. The extracellular guiding field is calibrated in cellular units by $a_2$. The cellular automatic controller acts in such a way as to minimize the deviation between the actual angle and the desired one. In the absence of an extracellular guiding field, the cells alter their angle of orientation in a random fashion. The response of the cellular automatic controller is hence not simply described by the extracellular guiding field but also by stochastic processes in the cellular signal transduction chain \cite{peliti}. The rate equation for the orientation angle is a stochastic differential equation (Langevin equation): \begin{equation} \label{langevin1} \frac{d \Theta}{d t} = - k_2 \, E_2 \, sin 2 \Theta + \Gamma_2(t) \end{equation} The orientation angle, $\Theta$, is measured with respect to the director $\vec{n}$ of the nematic liquid crystal. The cellular signal transformer is described by $E_2 sin 2 \Theta$ and the cellular reaction unit by the coefficient $k_2$. The stochastic part of the machine equation, $\Gamma_2(t)$, is approximated by white noise of strength $q_2$ ($\langle \Gamma_2 \rangle =0, \, \langle \Gamma_2(t) \, \Gamma_2(t') \rangle = q_2 \, \delta(t-t')$). The Langevin equation can be transformed into a Fokker-Planck equation which describes the temporal variations of the angle distribution function \cite{risken,franschi}. 
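Before writing down the stationary solution, it may be helpful to see eq.~\ref{langevin1} integrated directly. The following sketch is our own illustration with a generic drift coefficient $c$ and noise amplitude $\sigma$ (rather than the calibrated cellular parameters); it anticipates the Boltzmann-like angle distribution derived in the next paragraph:
\begin{verbatim}
import numpy as np
from scipy.special import i0, i1

# Euler-Maruyama integration of  dTheta = -c*sin(2*Theta)*dt + sigma*dW
# for an ensemble of independent "cells".  For this equation the stationary
# density is proportional to exp(V*cos 2*Theta) with V = c/sigma**2, so the
# apolar order parameter should approach <cos 2*Theta> = I1(V)/I0(V).
rng = np.random.default_rng(0)
c, sigma, dt, n_steps = 1.0, 1.0, 1e-3, 50_000
theta = np.pi * (rng.random(1000) - 0.5)
for _ in range(n_steps):
    theta += (-c * np.sin(2 * theta) * dt
              + sigma * np.sqrt(dt) * rng.standard_normal(theta.size))

S = np.mean(np.cos(2 * theta))
print(S, i1(c / sigma**2) / i0(c / sigma**2))   # both ~0.45
\end{verbatim}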
The predicted steady state angle distribution function is \begin{equation} \label{boltzmann1} f_2(\Theta) = f_{20} \, e^{V \, cos 2 \, \Theta} \end{equation} with the generating function V \begin{equation} V = \frac{2 \, k_2 \, E_2}{q_2} = \frac{2 \, k_2 \, a_2 \, \rho \ S}{q_2} = A_2 \, \rho \, S \end{equation} $f_{20}$ is determined by the normalization ($\int f(\Theta) \, d \Theta = 1$). It is important to realize that the steady state angle distribution function of living cells (a system far from thermodynamic equilibrium) has the same mathematical structure as the Boltzmann distribution which describes the fluctuations of a system at thermal equilibrium. The predicted angle distribution function, $f_2(\Theta)$, of a nematic liquid crystal formed by migrating and orienting cells as well as the density- and order parameter dependence of the extracellular guiding field are verified by experiments \cite{dewald}. The unknown coefficient, $A_2 \, (=2 \, k_2 \, a_2 \, / \, q_2)$, can be determined by measuring the angle distribution function at different cell densities. For melanocytes one gets $1/A_2$ = 55 cells/mm$^2$. From the self-consistency for the order parameter one gets the transition from the nematic to the isotropic phase \cite{dewald}: $A_2 \, \rho_0 = 2$ with the cell density, $\rho_0$, at the transition. This threshold cell density, $\rho_0$, can be estimated by considering randomly oriented cells without steric contact. One gets a threshold cell density of $\approx 100$ cells/mm$^2$ for $\approx 100 \mu$m long cells. This value is in accordance with $\rho_0$ (= 110 cells/mm$^2$) determined directly from $A_2$. The interaction potential, $U$, of a single cell with its environment is the generating function, $V$, times $\cos 2 \, \Theta$ \begin{equation} \label{IntPot} U = A_2 \, \rho \, S \, \cos 2 \, \Theta = 2 \frac{\rho}{\rho_0} \, S \, \cos 2 \Theta \end{equation} This interaction potential of a single cell in the environment of a nematic liquid crystal will be used to predict the order parameter dependence as well as the density dependence of the orientational elastic coefficients. The calculation is made in analogy to the elastic coefficients of nematic liquid crystals formed by molecules \cite{gruler74}. The angle $\Theta$ alters by the distortion angle $\alpha$ if one moves in space (y-direction). One obtains \begin{equation} \label{alpdaDis1} \Theta \rightarrow \Theta + \alpha \end{equation} For small distortion angles one gets \begin{equation} \label{alphaDis2} \cos2 \,(\Theta + \alpha) = (1 - 2 \, \alpha^2) \, \cos 2 \, \Theta - 2 \, \alpha \, \sin 2 \, \Theta \end{equation} The mean interaction potential per area is obtained from Eqs. \ref{IntPot} - \ref{alphaDis2} as \begin{equation} u = 2 \, \frac{\rho^2}{\rho_0} \, S^2 \, (1 - 2 \, \alpha^2) \end{equation} In the case of a bend deformation, the distortion angle is on a molecular scale \begin{equation} \alpha = \frac{\partial \Theta}{\partial y} \xi_y \end{equation} $\xi_y$ represents the characteristic length of a bend deformation. One expects that $\xi_y$ is the length of one elongated cell. 
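As a short numerical aside (our own numerics, using only the distribution $f_2(\Theta)\propto e^{V\cos 2\Theta}$ with $V = A_2\,\rho\,S$ and the transition condition $A_2\rho_0=2$ quoted above), the self-consistent order parameter can be computed explicitly; the estimate of the distortion energy then resumes below.
\begin{verbatim}
import numpy as np
from scipy.special import i0, i1

# Self-consistent apolar order parameter: with f(Theta) ~ exp(V*cos 2*Theta)
# and V = A2*rho*S one has S = <cos 2*Theta> = I1(V)/I0(V).  Fixed-point
# iteration shows the nematic-isotropic transition at A2*rho0 = 2
# (i.e. rho0 = 110 cells/mm^2 for the measured 1/A2 = 55 cells/mm^2).
def order_parameter(A2_rho, n_iter=400):
    S = 0.9
    for _ in range(n_iter):
        V = A2_rho * S
        S = i1(V) / i0(V)
    return S

for rho_over_rho0 in (0.8, 1.0, 1.5, 3.0, 10.0):
    print(rho_over_rho0, order_parameter(2.0 * rho_over_rho0))
# S -> 0 below the transition and grows towards 1 for densely packed cells.
\end{verbatim}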
The elastic distortion energy per area is then \begin{equation} u_{33} = 4 \, S^2 \, \frac{ \rho^2}{\rho_0} \, \xi_y^2 \, \left( \frac{\partial \Theta}{\partial y} \right)^2 \end{equation} which has to be compared with the phenomenologically derived bend distortion energy \begin{equation} g_{33} = \frac{1}{2} \, k_{33} \, \left( \frac{\partial \Theta}{\partial y} \right)^2 \end{equation} The bend elastic constant is then \begin{equation} \label{k_mean} k_{33} = 8 \, b \, \frac{\rho^2}{\rho_0}(S \, \xi_y)^2 \end{equation} A similar expression can be derived for the splay elastic constant, $k_{11}$. One expects that the elastic coefficients increase with (i) increasing order parameter, (ii) increasing cell density and (iii) increasing cell dimension. A factor $\approx 500$ is expected between the value at the nematic-isotropic transition ($\rho_0 = 110$ cells/mm$^2$ and $S \approx 0.5$) and that of a nematic liquid crystal film of densely packed cells ($\rho \approx 10 \, \rho_0$ and $S \approx 1$). The ratio of the elastic constants is determined by the ratio of the characteristic lengths of the splay and bend deformations. \begin{equation} \frac{k_{11}}{k_{33}} = \left( \frac{\xi_x}{\xi_y} \right)^2 \end{equation} One expects that these characteristic lengths, $\xi_x$ and $\xi_y$, are approximately $ a$. Only small differences between $\xi_x$ and $\xi_y$ are expected since a cluster of cells will change its orientation if one tries to change the orientation of a single cell. The coefficient $b$ transforms the cell specific physical units into man-made units. Up to now no orientational elastic measurements have been performed and consequently the coefficient, $b$, is unknown. It can be estimated in the following way: The orientational elastic coefficient close to the nematic-isotropic transition is expected to be the interaction potential, $U$, divided by the cellular length, $a$, and multiplied by the thickness, $d$, of the nematic film ($U \cdot d/a$) as shown above. The orientational elastic coefficient close to the transition is predicted by Eq. \ref{k_mean} to be $2 \, b \, \rho_0 \, \xi_y^2$ (with $S \approx 0.5$ at the transition). This leads to \begin{equation} b \approx \frac{1}{2} \, \frac{d }{\rho_0 \, a^3} \, U \end{equation} One gets $b \approx 5 \times 10^{-20}$ J if one uses $U \approx 5 \times 10^{-19}$ J/cell as estimated above and $d \approx 20 \mu$m, $a \approx 100 \mu$m, $\rho_0 = 110$ cells/mm$^2$. \subsection*{Spontaneous Splay} The spontaneous splay coefficient, $k_1$, is zero if the axis perpendicular to $\vec{n}$ is a symmetry axis. If the building blocks carry a preferred direction along the long axis (e.g. wedge-shaped building blocks) but there are as many building blocks pointing up ($\uparrow$) as there are pointing down ($\downarrow$), then the spontaneous splay coefficient is zero. A (parallel) polar nematic liquid crystal with its (parallel) polar distribution function ($\uparrow \neq \downarrow$) is formed in the case of a (parallel) polar cell-cell interaction. The (parallel) polar distribution and the (parallel) polar shape asymmetry of the building blocks lead to a natural splay of the structure and the spontaneous splay coefficient, $k_1$, is expected to be unequal to zero. If the (parallel) polar anisotropy of the building-blocks (e.g. wedge-shaped cell, (parallel) polar organized cell structure) is connected with a (parallel) polar distribution of the muscle proteins then the center of mass of an area element moves in the direction of the director field. 
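For orientation, the numbers entering the elastic-constant estimates above can be collected in a short script (a sketch only; every value is the illustrative one quoted in the text). The discussion of the spontaneous splay continues below.
\begin{verbatim}
# Order-of-magnitude estimates for the orientational elastic constant of the
# cell nematic (sketch; all input values are the illustrative ones in the text).
U, d, a = 5e-19, 20e-6, 100e-6        # J/cell, film thickness (m), cell length (m)
rho0 = 110e6                          # cells/m^2  (110 cells/mm^2)
b = 0.5 * d * U / (rho0 * a**3)       # calibration coefficient

def k33(rho, S, xi=a):
    """Bend elastic constant of the quasi two-dimensional film, Eq. (k_mean)."""
    return 8.0 * b * rho**2 / rho0 * (S * xi)**2

print(b)                                       # ~5e-20 J
print(k33(rho0, 0.5))                          # ~1e-19 J at the transition
print(k33(10 * rho0, 1.0) / k33(rho0, 0.5))    # ~400, the factor of ~500 above
\end{verbatim}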
In the case of a point defect $div \vec{v} \ne 0$. To proceed further one has to introduce the motor characteristics of the migrating cells. The excess force, $F(c, v)$, of a migrating cell depends on the concentration, $c$, of the migration stimulating molecules in the extracellular space and on the speed, $v$, of the cell \cite{lelievre}. \begin{equation} F(c, v) = F_0 \, \frac{c}{c + K_c} (1 - \frac{v}{v_{max}} ) \end{equation} The first term describes the maximum force, $F_0$, which is 40 nN for a granulocyte. The second term describes the fraction of membrane-bound receptors loaded with the migration stimulating molecule. The maximum speed, $v_{max}$, is 24 $\mu$m/min for granulocytes. One expects movements as demonstrated in Fig. \ref{Cluster1}b. The hydrodynamic equations for nematic liquid crystals formed by migrating cells are not yet developed. \subsection*{Spontaneous Bend} The spontaneous bend coefficient, $k_3$, is zero if the axis parallel to $\vec{n}$ is a symmetry axis. If the building blocks carry a preferred direction perpendicular to the direction of the long axis (e.g. banana-shaped building blocks or elongated cells with the leading front on one side (e.g. keratinocytes)), but there are as many building blocks pointing right ($\rightarrow$) as there are pointing left ($\leftarrow$), then the spontaneous bend coefficient is zero. A (transverse) polar nematic liquid crystal with its (transverse) polar distribution function ($\rightarrow \neq \leftarrow$) is formed in the case of a (transverse) polar cell-cell interaction. The (transverse) polar distribution and the (transverse) polar shape asymmetry of the building blocks lead to a natural bend of the structure and the spontaneous bend coefficient, $k_3$, is expected to be unequal to zero. Human keratinocytes (skin cells) plated on a surface migrate and cluster in large islands. The elongated keratinocytes in the cluster show the typical paving stone appearance \cite{hegemann}. If the (transverse) polar anisotropy of the building-blocks (e.g. banana-shaped cell, (transverse) polar organized cell structure) is connected with a (transverse) polar distribution of the muscle proteins then the center of mass of an area element moves perpendicular to the director field. In the case of a point defect one expects $curl_z \vec{v} \ne 0$. \section*{Disclinations} The basic assumption of the continuum theory used concerns the smoothness of the director field, $\vec{n}(\vec{r})$. In practice, however, textures are observed originating from singularities in the orientational field. These discontinuities in the orientation are called disclinations. The energy of the disclinations themselves (the core energy) is unknown, but the energy of the surrounding distorted director field can be calculated by making use of the elastic continuum theory. The director is given by \begin{equation} \vec{n}(\vec{r}) = [\cos \Phi(x,y), \sin \Phi(x,y)] \end{equation} where $\Phi (x,y)$ is the angle between the director and the x-axis of a fixed coordinate system. In order to avoid complicated mathematics the one-constant approximation, $k = k_{11} = k_{33}$, is used. \begin{equation} g = \frac{1}{2} \, k \, \left[ (div \, \vec{n})^2 + (curl \, \vec{n})_z^2 \right] \end{equation} The cylindrical coordinates ($r, \psi $) are used to describe the singularity. \begin{equation} g(r, \psi) = \frac{1}{2} \, k \,\left[ \left( \frac{\partial \Phi}{\partial r}\right)^2 + \frac{1}{r^2} \, \left( \frac{\partial \Phi}{\partial \psi}\right)^2 \right] \end{equation} It is evident that this expression diverges at $r = 0$. 
The cause is a discontinuity in orientation at $r = 0$. The functional dependence of $\Phi$ on $r$ and $\psi$ is determined by requiring that the distortion energy must be minimum with respect to variations of $\Phi$. This means that $\Phi$ must satisfy the Euler-Lagrange equation. \begin{equation} \frac{\partial^2 \Phi}{\partial r^2} + \frac{1}{r} \, \frac{\partial \Phi}{\partial r} + \frac{1}{r^2} \, \frac{\partial^2 \Phi}{\partial \psi^2} = 0 \end{equation} The trivial solution is $\Phi = const$, where the cells form a uniformly oriented nematic liquid crystal. The general solution for the disclinations is \begin{equation} \Phi = m \, \psi + \Phi_0 \end{equation} For an apolar nematic liquid crystal 2m is an integer and for a polar nematic liquid crystal m is an integer. Some examples are shown in Fig. \ref{DisHalbGanz}. The question whether the nematic liquid crystal formed by living and migrating cells has a polar or an apolar structure can be decided by investigating the types of defects. (i) Densely packed elongated melanocytes form an apolar nematic liquid crystal since the half-number disclination ($m = - 1/2$) is found very often in the orientation pattern (Fig. \ref{Melano41}). However, this result is not specific for melanocytes. Other cell types like fibroblasts \cite{goldman}, osteoblasts (Fig. \ref{Osteoblast}), etc. show half-number disclinations and, thus, form an apolar nematic liquid crystal. (ii) No half-numbered defects are found in the densely packed clusters of migrating granulocytes and keratinocytes. Granulocytes form a (parallel) polar nematic phase since this cell type likes to create a defect with $m = 1$ and $\Phi_0 = 0$ (Fig. \ref{Cluster1}). Keratinocytes form a (transverse) polar nematic phase since this cell type likes to create a defect with $m = 1$ and $\Phi_0 = \pi /2$. \section*{Interacting Disclinations} The deformation energy, W, of an isolated disclination can be obtained by integrating the elastic energy density in a circular area around the disclination. \begin{equation} W = W_c + \pi \, k \, m^2 \, \ln \left( \frac{R}{R_c} \right) \end{equation} R, $R_c$ and $W_c$ are the radius of the circular integration area, the core radius of the disclination and the core energy of the disclination, respectively. The elastic energy, $W - W_c$, depends on the type of disclination. The elastic energy of a $\pm 0.5$-disclination is a fourth of that of a $\pm 1$-disclination. Therefore, the $\pm 0.5$-disclination should occur more frequently than the $\pm 1$-disclination. This prediction is in accordance with the observation. The deformation field of one disclination is altered if another disclination is introduced. The elastic interaction energy, $W_{12}$, is then \cite{degennes} \begin{equation} W_{12} = 2 k \pi m_1 m_2 \ln \left(\frac{R_{12}}{R_c} \right) \end{equation} where $R_{12}$ is the distance between the two disclinations. As expected, the interaction between disclinations of opposite strength, like $m_1 = +0.5$ and $m_2 = - 0.5$, is predicted to be attractive, and repulsive for equal signs. The force between the two disclinations is \cite{degennes} \begin{equation} F_{12} = - \frac{2 \, k \, \pi \, m_1 \, m_2}{R_{12}} \end{equation} As expected, the force is predicted to be large for small distances and small for large distances. The elastic interaction between two disclinations is orientation dependent: Two possible orientations in the case of a $-0.5$ disclination and a $+0.5$ disclination, for which the interaction energy is minimum, are shown in Fig. \ref{interact}. 
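The logarithmic growth of the defect energy and the factor of four between half- and whole-number defects can be checked by direct integration (our own sketch, in the one-constant approximation with $k$ set to unity); the dependence of the pair interaction on the relative orientation of the two defects is taken up next.
\begin{verbatim}
import numpy as np

# Energy of an isolated m-disclination in the one-constant approximation
# (sketch, k = 1): integrate g = (1/2)*k*|grad Phi|^2 with Phi = m*psi,
# |grad Phi| = m/r, over the annulus Rc < r < R and compare with
# W - Wc = pi*k*m^2*ln(R/Rc).
k, Rc, R = 1.0, 1.0, 100.0
r = np.linspace(Rc, R, 200_001)
dr = r[1] - r[0]
for m in (0.5, 1.0):
    W = np.sum(0.5 * k * (m / r) ** 2 * 2.0 * np.pi * r * dr)
    print(m, W, np.pi * k * m**2 * np.log(R / Rc))
# The +-1/2 defect costs one quarter of the energy of a +-1 defect, which is
# why half-number disclinations dominate in the apolar (melanocyte) phase.
\end{verbatim}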
The elastic interaction energy of the two orientations is the same if equal elastic constants ($k = k_{11} = k_{33}$) are used. But, in the general case ($k_{11} \neq k_{33}$) the elastic interaction energy is minimum for one of the orientations \cite{russe}. The orientation shown in Fig. \ref{interact}a is energetically preferred in the case of $k_{11} > k_{33}$ since a large amount of bend deformation lies between the disclinations. The orientation shown in Fig. \ref{interact}b is energetically preferred in the case of $k_{11} < k_{33}$ since a large amount of splay deformation lies between the disclinations. The splay elastic constant, $k_{11}$, is smaller than the bend elastic constant, $k_{33}$, for an apolar nematic liquid crystal formed by melanocytes since in the case of a pair of $\pm 0.5$ disclinations the orientation shown in Fig. \ref{interact}b is preferentially formed. But up to now, we are not able to determine the relative value of the anisotropy, $2 (k_{11} - k_{33})/(k_{11} + k_{33})$, of the elastic constants. \section*{Core of a Disclination} The nature of the core of disclinations in nematic liquid crystals formed by anisotropic molecules is not known. However, the core of a disclination in a nematic liquid crystal formed by migrating cells can be observed directly in the light microscope. Different types of cores are observed as shown in Fig. \ref{DisHalbEx}: (i) The core of a disclination is an area free of cells. This type of core is frequently observed for very elongated cells like melanocytes (Fig. \ref{DisHalbEx}a). (ii) The core of a disclination is an area with isotropically distributed cells as shown in Fig. \ref{DisHalbEx}b. This type of core is frequently observed for smaller cells like fibroblasts and osteoblasts. (iii) The core of a disclination is occupied by a star-shaped cell as shown in Fig. \ref{DisHalbEx}c. The cells which form the nematic liquid crystal are in an elongated bipolar state. The defect in the orientation field attracts disturbances of the orientation field. One observes that other cell types or the same cell type with different shapes are trapped in the core of a disclination. For example, melanocytes usually have two opposing dendrites. But some cells have three dendrites and these cells disturb the nematic phase and are, therefore, attracted by the orientational disclination (Fig. \ref{DisHalbEx}c). In nematic liquid crystals formed by elongated molecules it is assumed that the molecules in the core of a disclination are in the isotropic state. This assumption can be verified for a nematic liquid crystal formed by elongated migrating cells. There exists a sharp boundary between the nematic and the isotropic phase. The nematic phase is destroyed if the elastic energy density is larger than the mean field. \begin{equation} \frac{1}{2} \, k_{33} \left( \frac{\partial \Theta}{\partial y} \right) ^2 \ge 2 \, b \, \frac{(\rho \, S)^2}{\rho_0} \, \end{equation} The bend elastic constant divided by the calibration coefficient b can be expressed by Eq. \ref{k_mean}. One gets \begin{equation} \label{disclinCore} \frac{\partial \Theta}{\partial y} \ge \frac{1}{\sqrt{2} \, \xi_y} \end{equation} The maximal bending $\partial \Theta / \partial y$ of a +1/2 disclination can be determined from photographic pictures. The characteristic length, $\xi_y$, of a bend deformation is expected to be of the order of the length of an elongated cell ($\xi_y \approx a$). The observed angular change by going from one cell to the next is approximately 15 to 20 degrees. 
The angular change per cell as predicted by Eq. \ref{disclinCore} is 29 degrees. The assumption that a strong bend deformation can destroy the nematic phase is verified since the predicted angular change is in accordance with the experimental observation. \section*{Outlook} The practical importance of this study lies in its objective description of the liquid crystal phases formed by migrating and interacting cells. Two types of nematic liquid crystals, a polar and an apolar phase, are found to be formed by certain cell types in culture and such phases may be expected to be present in many other cell types. In addition, the objective description may be useful to define and elucidate cases of cellular dysfunction, and of altered cellular function induced by specific molecules like pharmacologically active molecules. The major objective of current and future research is to go beyond this phenomenological description of the thermodynamic phases formed by migrating and interacting cells. How the cellular machinery works on a molecular scale can be investigated by considering the technical data as a frame filled with the biochemical events. The cell, regarded as a fluid self-organized molecular machine, represents a new and important field in the triangle between physics, chemistry and life sciences. The knowledge that one essential process in the cellular signal transduction chain is the supply of fresh receptors to the plasma membrane could lead to the application of new techniques. For instance the liquid crystal phase can be changed if specially designed molecules are added which interact with the intracellular signal transduction chain and alter the cell-cell interaction either by the induced change in cellular shape or by the transmitted chemical signal. The process by which migrating cells exchange information constitutes one of the most intriguing areas of life sciences and physics. An individual cell transmits signals and guides its migration and orientation according to the signals it receives, these messages being chemical, steric or electrical in nature. The physical process of reducing a gas containing freely moving molecules to a liquid form is understood to a large extent and the mean behavior of a given molecule in the interaction field of the other moving molecules may be determined by Boltzmann statistics. Identical principles are used to reduce freely migrating cells to a condensed state. The mean behavior of a given cell in the interacting field of other cells can be described in terms of its automatic controller. The self-organization involved in morphogenesis, organogenesis and wound healing thus appears in a new light. The nematic liquid crystal formed by migrating cells opens new perspectives in the field of cellular tissue engineering. The two dimensional nematic liquid crystal can be used as a template. For example: (i) Cells of another type can migrate and orient on top of the liquid crystal to form more complex structures. (ii) Cells of another type can be attracted and localized in the orientational defects of the nematic liquid crystal. New types of liquid crystals are expected in analogy to blue phases where orientation defects are ordered in space. (iii) Complex structures formed by different types of cells can be altered if the phase of the template (cellular layer of the nematic liquid crystal) is altered by a chemical signal from a nematic liquid crystal phase to an isotropic phase. 
\subsubsection*{Acknowledgments} We would particularly like to thank Volker Teichgr\"aber for fruitful discussions. This work was supported by Deutsche Forschungsgemeinschaft and Fonds der Chemischen Industrie. \section*{Appendix} \subsection*{Cell Preparation} Melanocytes are one type of cell in the skin. They produce and distribute the color pigment melanin via dendrites. Cultured human melanocytes form the desired bipolar mode with two opposing dendrites when they are exposed to melanocyte medium. Details are given in ref. \cite{kaufmann}. Briefly, skin biopsies were obtained from healthy donors. The cells were cultured in TIC medium (Ham F10, 16 \% serum, 85nM PMA, 0.1 mM IBMX, 2.5 nM cholera toxin). Cells designated for the experiment were seeded in $25-cm^2$ dishes (Nunclon, Nunc, Germany) and cultured until confluence (approx. 25-30 days). The cells, being of spherical shape in the suspension, come down onto the substrate and start to develop their dendrites. They stay in this shape and crawl around on the surface. The cell bodies of different melanocytes attract each other but the dendrites repel each other. The orientation of the dendrites occurs during a periodic retraction and elongation process in such a way that each dendrite has a minimum of contact with the surrounding cells. We used a precision motorized x-y stage on a microscope (Zeiss Axiovert) equipped with a CCD camera (Hamamatsu) and scanned an area of several mm$^2$ using a 10x magnification lens. The pictures were digitized with a Scion frame grabber card on a Macintosh PPC and stored on the hard disk. The public domain software NIH-IMAGE (developed at the US National Institute of Health) was used for this purpose, as well as for the further image processing. \subsection*{Picture Evaluation} Our goal is to find an algorithm which determines automatically the mean orientation of the elongated cells in a small part of the picture. This task was successfully solved by going into Fourier space. First, a square section ($64\times64$ pixels) of a picture is chosen, as shown in Fig. \ref{kling}a. A two-dimensional Fast Fourier Transformation implemented in NIH-IMAGE is applied to this square section of the picture. The result is a gray scale picture as shown in Fig. \ref{kling}b. The Fourier transform appears as a diffuse cloud with a handle-shaped contour. The maximum moment of this elongated cloud is determined and found to be parallel to the direction of the mean cell alignment. In order to evaluate the exact orientation we apply a threshold procedure to remove all pixels with a gray value below a certain threshold value. The remaining pixels are weighted with their gray value, g(x,y), and the angle, $\Theta$, of the orientation axis is found by maximizing the "moment of mass". More details on the procedure are given in \cite{russ}. \begin{equation} tan \Theta = \frac{M_{xx} - M_{yy} + \sqrt{(M_{xx}-M_{yy})^2 +4 M_{xy}^2}}{2M_{xy}} \end{equation} with \begin{equation} M_{xx} = \sum x^2 g(x,y) - \left(\sum x g(x,y) \right)^2 \end{equation} and M$_{xy}$ and M$_{yy}$ defined analogously. This procedure is repeated for the next square section of the picture. The scan was performed with overlapping square sections. The shift length was half of the square size.
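A minimal re-implementation of this orientation analysis (our own sketch in Python rather than the NIH-IMAGE macros actually used; the synthetic stripe pattern and the final $90^\circ$ rotation are illustrative assumptions appropriate for a plane-wave test image) may make the procedure concrete:
\begin{verbatim}
import numpy as np

# Sketch of the Fourier-space orientation analysis described above.
def patch_orientation(patch, keep_fraction=0.01):
    """Orientation axis of a square patch from the second moments of its
    thresholded 2D Fourier amplitude (principal-axis formula as in the text)."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean())))
    g = np.where(f >= np.quantile(f, 1.0 - keep_fraction), f, 0.0)
    y, x = np.indices(g.shape) - np.array(g.shape)[:, None, None] // 2
    w = g.sum()
    mx, my = (x * g).sum() / w, (y * g).sum() / w
    Mxx = (x**2 * g).sum() / w - mx**2
    Myy = (y**2 * g).sum() / w - my**2
    Mxy = (x * y * g).sum() / w - mx * my
    return 0.5 * np.arctan2(2.0 * Mxy, Mxx - Myy)   # axis of the Fourier cloud

# Synthetic test: stripes mimicking cells aligned along atan2(3,4) ~ 36.9 deg.
yy, xx = np.mgrid[0:64, 0:64]
test = np.cos(2.0 * np.pi * (-3 * xx + 4 * yy) / 64.0)
phi = patch_orientation(test)
# For this plane-wave pattern the Fourier power lies along the wave vector,
# i.e. perpendicular to the stripes, so the stripe axis is phi + 90 degrees.
print(np.rad2deg(phi + np.pi / 2) % 180.0)          # ~36.9
\end{verbatim}
Whether the recovered Fourier axis is parallel or perpendicular to the cell alignment depends on the structure of the real micrographs and should be checked against the data, as was done in the original analysis.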
\section{Introduction} In recent years there have been some remarkable and surprising advances in non-perturbative gauge theory arising from the study of branes in string theory and M-theory. While there have been many interesting developments, here we will only review the precise connection between the classical dynamics of the M-fivebrane and four-dimensional $N=2$ quantum Yang-Mills theories. Let us begin by reviewing some of Witten's analysis of type IIA brane configurations~\cite{W}. A similar role for the M-fivebrane also appeared in~\cite{KLMVW}. We start by considering type IIA string theory. We place two parallel NS-fivebranes in the $x^0,x^1,\ldots,x^5$ plane separated along the $x^6$ direction by a distance $\Delta x^6$. These two NS-fivebranes will preserve sixteen of the thirty-two spacetime supersymmetries. Next we introduce $N_c$ parallel D-fourbranes in the $x^0,x^1,x^2,x^3,x^6$ plane. These D-fourbranes stretch between the two NS-fivebranes and reduce the number of preserved supersymmetries to eight. At weak string coupling the NS-fivebranes are heavy and their motion can be ignored. The low energy fluctuations of this system are then described by the D-fourbranes. As is well known the low energy dynamics of $N_c$ parallel D-fourbranes is given by a five-dimensional $U(N_c)$ gauge theory with sixteen supersymmetries. The presence of the NS-fivebranes has two effects. Firstly they reduce the number of preserved supersymmetries to eight. Secondly, since the $x^6$ direction of the D-fourbranes is finite in extent, at low energy their worldvolume is four-dimensional. An overall $U(1)$ factor of the gauge group $U(N_c)$ is trivial and simply describes the centre of mass motion of the D-fourbranes so we may ignore it. Thus the low energy, weak coupling description of this configuration is given by four-dimensional $N=2$ $SU(N_c)$ gauge theory. One can also consider adding $N_f$ semi-infinite D-fourbranes to this configuration. These intersect the left or right NS-fivebrane at one end but extend to infinity at the other. Since they are infinitely heavy as compared to the finite D-fourbranes their motion is suppressed. However in the D-brane picture there are stretched open strings with one end on a semi-infinite D-fourbrane and the other on a finite D-fourbrane. These strings give rise to $N_f$ massive hyper-multiplets in the (anti-) fundamental representation of $SU(N_c)$ in the four-dimensional gauge theory. Their bare mass is given by the length of the open strings, which is the distance between the finite and semi-infinite D-fourbranes. What Witten noticed was that there is an elegant strong coupling description of this configuration in M-theory. Increasing the string coupling lifts us up to eleven dimensions and introduces another coordinate $x^{10}$ which is periodic with period $2\pi R$. Furthermore one can go to strong coupling keeping the curvatures small, so that supergravity is a good approximation, yet also keeping the Yang-Mills coupling constant fixed~\cite{W}. The NS-fivebranes simply lift to M-fivebranes. The D-fourbranes also lift to M-fivebranes, only wrapped on the $x^{10}$ dimension. Thus in eleven dimensions the entire configuration appears as intersecting M-fivebranes. An important realisation is that this configuration can be viewed as a single M-fivebrane wrapped on a two-dimensional manifold, embedded in the four-dimensional space with coordinates $x^4,x^5,x^6,x^{10}$. 
The condition that an M-fivebrane wrapped around a manifold breaks only half of the supersymmetry, leaving eight unbroken supersymmetries, is that the manifold is complex~\cite{BBS}, i.e. it must be a Riemann surface. It is helpful then to introduce the complex notation \begin{equation} s = (x^6 + ix^{10})/R\ ,\quad t = e^{-s}\ ,\quad z = x^4+ix^5\ . \end{equation} Thus the supersymmetric intersecting M-fivebrane configuration can be realised by any holomorphic embedding $F(t,z)=0$ of its worldvolume into the $x^4,x^5,x^6,x^{10}$ dimensions of eleven-dimensional spacetime. Let us return to the configuration in question. At a given point $z$ there should be two solutions for $s$, corresponding to the two NS-fivebranes. Therefore we take $F$ to be second order in $t$ \begin{equation} A(z)t^2 -2B(z)t + C(z) =0 \ . \end{equation} We expect that as $z\rightarrow\infty$, we are on either the left or right NS-fivebrane so that $t\rightarrow\infty,0$ respectively. If $A(z)$ or $C(z)$ has a zero at any finite value of $z$, so that $t \rightarrow\infty,0$ there, this can be interpreted as a semi-infinite D-fourbrane ending on the left or right NS-fivebrane respectively. Let us just consider semi-infinite D-fourbranes ending on the right NS-fivebrane. In this case we may set $A=1$ by rescaling $t$. For $N_f$ semi-infinite D-fourbranes we need $N_f$ zeros of $C(z)$. Thus $C(z)$ must take the form \begin{equation} C(z) = \Lambda\prod_{a=1}^{N_f}(z - m_a)\ , \end{equation} where $\Lambda$ is a constant and $m_a$ are the positions of the semi-infinite D-fourbranes, i.e. their bare masses. For $N_f=0$ we simply take $C=\Lambda$. Finally we need to determine $B(z)$. For a fixed $s$ we need there to be $N_c$ solutions for $z$, corresponding to the $N_c$ finite D-fourbranes. Thus $B(z)$ must take the form \begin{equation} B(z) = \prod_{i=1}^{N_c}(z - e_i)\ , \end{equation} note that we can set the coefficient of $z^{N_c}$ to one by rescaling $z$. For large $z$ the $e_i$ then appear as the positions of the $N_c$ finite D-fourbranes. Since we have frozen out the centre of mass motion we set $\sum_{i=1}^{N_c}e_i=0$, which can also be achieved by redefining $z$. With these conditions imposed one sees that $s(z)$ defines precisely the Seiberg-Witten curve \begin{equation} y^2 = \left(\prod_{i=1}^{N_c}(z - e_i)\right)^2 - \Lambda\prod_{a=1}^{N_f}(z - m_a)\ , \label{curve} \end{equation} where $y = t-B$. In summary then this brane configuration has a weak coupling description as four-dimensional $N=2$ $SU(N_c)$ Yang-Mills theory with $N_f$ hyper-multiplets in the fundamental representation and a strong coupling description as an M-fivebrane wrapped around the correct Seiberg-Witten curve. Thus from the brane configuration we can identify the Riemann surfaces that are known to be associated with the exact quantum low energy effective action of four-dimensional $N=2$ Yang-Mills theory \cite{SW}. This analysis also suggests why the scalar modes in the Seiberg-Witten solution correspond to moduli of a Riemann surface, since the zero modes of the M-fivebrane are just the Riemann surface moduli. In addition Witten was able to derive the appropriate curves for many new classes of Yang-Mills theories which were previously unknown. This remarkable correspondence left open the question as to {\it whether or not the classical M-fivebrane could predict the precise perturbative and instanton corrections of the Yang-Mills theory and not just the Seiberg-Witten curve}. 
In other words, a knowledge of the elliptic curve alone is not enough to compare with the four-dimensional Yang-Mills quantum field theory. One must also know how to calculate the low energy effective action from the M-fivebrane dynamics, including all the instanton corrections. \section{Brane Dead} Since the paper~\cite{W} first appeared there have been several discussions of how to construct the low energy effective action from M-theory but many papers present a seriously flawed argument, apparently based on misinterpretations of comments in~\cite{W}. It would be invidious to reference all of these articles here, however the following argument, which has appeared in a substantial review article, illustrates many of these issues. The M-fivebrane worldvolume theory has a self-dual three-form $H$. The argument states that the action therefore contains the standard kinetic term for $p$-form fields \begin{equation} S_{SYM}=\int d^6 x H^2\ . \label{action} \end{equation} To obtain the effective action one must decompose $H$ in a basis of non-trivial one-forms $\Lambda_i$ of the Riemann surface $\Sigma$ (the Seiberg-Witten curve), with genus $N_c-1$, $I=1,...,N_c-1$, \begin{equation} H=\sum_{I=1}^{N_c-1}F_I\wedge \Lambda_I+ *F_I\wedge *\Lambda_I\ . \label{Hdecomp} \end{equation} Substituting eq. \ref{Hdecomp} into eq. \ref{action} leads to \begin{equation} S_{SYM}=\int d^4x ({\rm Im}\tau_{IJ})F_I\wedge*F_J+ ({\rm Re}\tau_{IJ}) F_I\wedge F_J\ , \end{equation} with $\tau$ the period matrix of $\Sigma$ \begin{eqnarray} {\rm Im}\tau_{IJ}&=&\int_\Sigma\Lambda_I\wedge *\Lambda_J+c.c. \ ,\label{Imtau} \\ {\rm Re}\tau_{IJ}&=&\int_\Sigma\Lambda_I\wedge \Lambda_J+c.c.\ .\label{Retau} \\ \nonumber \end{eqnarray} It should not take the alert reader much time to realise the following errors in the above argument. Firstly, and most seriously, since the ansatz in eq. \ref{Hdecomp} ensures $H$ is self-dual the action eq. \ref{action} {\it vanishes identically}. This can be easily seen since $H^2 = H\wedge *H = H\wedge H$. Now $H$ is an odd form so $H\wedge H=0$. Therefore this argument predicts that the Seiberg-Witten effective action vanishes identically. To avoid this problem some articles reference the paper~\cite{V} where a first order six-dimensional action is given for a self-dual three form which is non-zero. However, this action is not coupled to scalars so that the resulting four-dimensional action contains a constant period matrix $\tau_{ij}$. Therefore this method leads only to the classical low energy effective action with no quantum corrections. Even if one ignored this problem and let the Riemann surface moduli $e_i$ become spacetime dependent, one could not arrive at the correct low energy effective action because the relations between the moduli $e_i$ and the Yang-Mills scalar fields $a_I$ are still unknown. Given that this argument starts with zero one might wonder how its advocates obtain a non-trivial answer. In fact the expressions eq. \ref{Imtau} and eq. \ref{Retau} are incorrect. To see this one only needs to consider the case of genus one ($N_c$=2) and let us choose our basis of one forms so that $\Lambda_1 = \Lambda$ is a holomorphic one form, $*\Lambda = i\Lambda$ and $\Lambda_2$ is an anti-holomorphic one-form, ${\bar \Lambda}$, $*{\bar \Lambda} = -i{\bar \Lambda}$. Then one finds that the two equations \ref{Imtau} and \ref{Retau} are the same (up to a constant). 
Furthermore no redefinition will alter this since, in complex notation, it is clear that the only independent integrand one could write down is $\Lambda\wedge{\bar \Lambda}$ and this is purely imaginary, i.e. only ${\rm Im}\tau$ has a simple integral formula. \section{The Fivebrane Equations of Motion} Let us now discuss in detail the worldvolume theory of the M-fivebrane. It has a six-dimensional $(2,0)$ tensor multiplet of massless fields on its worldvolume. The component fields of this supermultiplet are five real scalars $X^{a'}$, a gauge field $B_{m n}$ whose field strength satisfies a modified self-duality condition and sixteen spinors $\Theta ^i_\beta$. The scalars are the coordinates transverse to the fivebrane and correspond to the breaking of eleven-dimensional translation invariance by the presence of the fivebrane. The sixteen spinors correspond to the breaking of half of the thirty-two component supersymmetry of M-theory. The classical equations of motion of the fivebrane in the absence of fermions and background fields are~\cite{HSW} \begin{equation} G^{mn} \nabla_{m} \nabla_{n} X^{a'}= 0\ , \label{eqomone} \end{equation} and \begin{equation} G^{m n} \nabla_{m}H_{npq} = 0\ , \label{eqomtwo} \end{equation} where the worldvolume indices are $m,n,p=0,1,...,5$ and the world tangent indices $a,b,c=0,1,...,5$. The transverse indices are $a',b'=6,7,8,9,10$. The usual induced metric and vielbein for a $p$-brane are given, in static gauge, by \begin{eqnarray} g_{mn} &=& \eta_{m n}+\partial_{m}X^{a'} \partial_{n}X^{b'}\delta_{a' b'}\nonumber\\ &=& e^{\ a}_m \eta_{ab} e_{n}^{\ b}\ .\nonumber\\ \label{gdef} \end{eqnarray} The covariant derivative $\nabla$ is defined as the Levi-Civita connection with respect to the metric $g_{m n} $. The inverse metric $G^{mn}$ which also occurs is related to $g^{mn}$ by the equation \begin{equation} G^{mn} = {(e^{-1})}^{m}_{\ c} \eta ^{c a} m_{a}^{\ d} m_{d} ^{\ b} {(e^{-1})}^{n}_{\ b}\ , \label{Gdef} \end{equation} where the matrix $m$ is given by \begin{equation} m_{a}^{\ b} = \delta_{a}^{\ b} -2h_{acd}h^{bcd}\ . \label{mdef} \end{equation} The field $H_{abc}$ is an anti-symmetric three-form and is the curl of $B_{ab}$. However it satisfies a non-linear self-duality constraint. To construct $H_{abc}$ we start from the three-form $h_{abc}$ which is self-dual; \begin{equation} h_{abc}= {1\over3!}\varepsilon_{abcdef}h^{def}\ , \label{hsd} \end{equation} but it is not the curl of a two-form gauge field. The field $H_{mnp}$ is then obtained as \begin{equation} H_{m n p}= e_{m}^{\ a} e_{n}^{\ b} e_{p}^{\ c} {({m }^{-1})}_{c}^{\ d} h_{abd}\ . \label{Hdef} \end{equation} Clearly, the self-duality condition on $h_{abd}$ transforms into a condition on $H_{m n p}$ and vice-versa for the Bianchi identity $dH=0$. \section{Soliton Dynamics} In this section we wish to provide a complete derivation of the low energy effective action for the wrapped M-fivebrane. We will review the discussion given in~\cite{LWone} which treats the vector as well as scalar modes and we refer the interested reader there for more details. It is possible to derive the Seiberg-Witten action relatively simply by considering the scalar modes alone and using $N=2$ supersymmetry to complete the action from only its scalar part~\cite{HLW}. 
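As a small consistency check on the induced metric (\ref{gdef}), and as a preview of the structures that appear below, one can compute $g_{mn}$ for a static configuration in which the two scalars $X^6$ and $X^{10}$ depend holomorphically on $z=x^4+ix^5$. The sketch below uses an arbitrary toy profile $s=z^3$ and sets $R=1$; these choices are purely illustrative. Any holomorphic profile gives the same structure: the metric on the $(x^4,x^5)$ plane is conformally flat with conformal factor $1+|\partial s|^2$, which is precisely the combination that appears in the equations that follow.
\begin{verbatim}
import sympy as sp

x4, x5 = sp.symbols('x4 x5', real=True)
z = x4 + sp.I*x5

# toy holomorphic profile for s = X^6 + i X^10 (R set to 1)
s = sp.expand(z**3)
X6, X10 = sp.re(s), sp.im(s)

coords = (x4, x5)
# induced metric on the (x^4, x^5) plane, eq. (gdef): g_mn = delta_mn + dX.dX
g = sp.Matrix(2, 2, lambda m, n:
        sp.KroneckerDelta(m, n)
        + sp.diff(X6, coords[m])*sp.diff(X6, coords[n])
        + sp.diff(X10, coords[m])*sp.diff(X10, coords[n]))

dsdz = sp.expand(sp.diff(s, x4))              # ds/dz for a holomorphic s
conf = 1 + sp.re(dsdz)**2 + sp.im(dsdz)**2    # 1 + |ds/dz|^2

print(sp.simplify(g - conf*sp.eye(2)))        # the zero matrix
\end{verbatim}
With this picture in mind we now turn to the full analysis.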
We have chosen to discuss the complete analysis here because the purely scalar argument is blind to many subtle and interesting features of the M-fivebrane, notably how the Abelian three-form can reproduce the low energy effective action for non-Abelian vector fields. In addition the argument presented below can be generalised to cases with less supersymmetry. In these cases there is no a priori relation between the vector and scalar dynamics. We also note that the construction can be performed in a manifestly $N=2$ supersymmetric form~\cite{LWtwo} which perhaps best highlights the underlying geometry. Our approach is to view the intersecting M-fivebrane configuration as a threebrane soliton on a single M-fivebrane worldvolume~\cite{three}. Viewed in this way the soliton is a purely scalar field configuration of the worldvolume theory and the Bogomol'nyi condition is just the Cauchy-Riemann equation for $s(z)$. From this point of view we can obtain the low energy effective equations by expanding the equations of motion to second order in derivatives $\partial_{\mu}$, $\mu=0,1,2,3$ and field strengths $H_{mnp}$ around the threebrane background. To this order $h_{mnp}=H_{mnp}$ so that the field strength $H_{mnp}$ is self-dual, but now with respect to the induced metric $g_{mn}$. This implies that the ansatz in eq. \ref{Hdecomp} is incorrect as there are additional terms in $H_{mnp}$. In particular there is a non-zero contribution to $H_{\mu\nu\lambda}$. Taking this into account and expanding the equations of motion for two scalars (i.e. $X^6$ and $X^{10}$) leads to the expressions \begin{equation} E\equiv \eta^{\mu\nu}\partial_\mu\partial_\nu s -\partial_z\left[{(\partial_\varrho s\partial^\varrho s) \bar \partial\bar s\over(1+|\partial s|^2)}\right] -{16\over(1+|\partial s|^2)^2} H_{\mu\nu\bar z}H^{\mu\nu}{}_{\bar z}\partial \partial s =0\ , \label{Escalar} \end{equation} and \begin{equation} E_{\nu}\equiv \partial^\mu H_{\mu\nu z}-\partial_z\left[ {\bar \partial \bar s\partial^\mu s\over(1+|\partial s|^2)}H_{\mu\nu z} -{\partial s\partial^\mu\bar s\over(1+|\partial s|^2)}H_{\mu\nu\bar z} \right]=0\ . \label{Evector} \end{equation} In these expressions we have assumed that the threebrane soliton defined by the Seiberg-Witten curve $s(z)$ is dynamical due to its moduli $e_i$ becoming $x^{\mu}$-dependent. Before reducing these equations to four dimensions we need an ansatz for the three-form components $H_{\mu\nu z}$. Let us restrict our attention here to the simplest case of $N_c=2, N_f=0$, although the analysis can be generalised by considering the appropriate curve. Explicitly, eq.~\ref{curve} leads to the curve \begin{equation} s = -{\rm ln}\left[ z^2 - u \pm \sqrt{(z^2-u)^2 - \Lambda}\right]\ , \end{equation} where $u = e_1e_2$ is the only modulus since $e_1+e_2=0$. Solving the self-duality condition requires $H_{mnp}$ to take the form \begin{equation} H_{\mu\nu z} = \kappa {\cal F}_{\mu\nu}\ , \end{equation} where ${\cal F}_{\mu\nu} = F_{\mu\nu} + i*F_{\mu\nu}$ and $\kappa$ is undetermined. The closure of $H_{mnp}$ requires that $\kappa$ is holomorphic. We therefore write $\kappa = \kappa_0(x) \lambda_z$ where $\lambda_zdz = ds/du\ dz$ is the holomorphic one-form of the Riemann surface $\Sigma$. We are now free to choose $\kappa_0$ and we do this to ensure that $F_{mn}$ is a closed two-form. We will see below that this in turn fixes \begin{equation} \kappa = \left({da\over du}\right)^{-1}\lambda_z\ . 
\end{equation} We have also introduced the periods \begin{equation} a = \int_A s dz\ ,\qquad a_D = \int_B s dz\ , \end{equation} where $A$ and $B$ are a basis of one-cycles of $\Sigma$, i.e. $s dz$ is identified with the Seiberg-Witten differential. Note that the factor $(da/du)^{-1}$ normalises the period of the form $\kappa dz$ to be one around the $A$-cycle. This reveals another error in the argument in section two as the choice of $\Lambda$ in eq. \ref{Hdecomp} will not lead to the Seiberg-Witten solution. Finally to reduce these equations to four dimensions we project them over a complete set of one-forms of $\Sigma$ \begin{eqnarray} 0 &=& \int_{\Sigma} Edz\wedge\bar\lambda \nonumber\\ &=& \partial^{\mu}\partial_{\mu}u I + \partial_{\mu}u\partial^{\mu}u {dI\over du} - \partial_{\mu}u\partial^{\mu} u J - 16{\bar {\cal F}}_{\mu\nu}{\bar {\cal F}}^{\mu\nu} \left({d\bar a\over d\bar u}\right)^{-2}K \ ,\nonumber\\ 0 & = & \int_\Sigma E_\nu dz\wedge \bar\lambda \nonumber\\ &=& \partial^{\nu}{\cal F}_{\mu\nu}\left({da\over du}\right)^{-1} I -{\cal F}_{\mu\nu}\partial^{\nu}u{d^2 a\over du^2} \left({da\over du}\right)^{-2}I + {\cal F}_{\mu\nu}\partial^{\nu}u\left({da\over du}\right)^{-1}{dI\over du} \nonumber\\ &&-{\cal F}_{\mu\nu}\partial^{\nu}u \left({da\over du}\right)^{-1}J +{\bar{\cal F}}_{\mu\nu}\partial^{\nu}\bar u \left({d\bar a\over d\bar u}\right)^{-1}K \ .\nonumber\\ \label{surfacetwo} \end{eqnarray} Here we encounter integrals over $\Sigma$ labelled by $I,J$ and $K$ and given below. While it is straightforward to evaluate $I$ using the Riemann bilinear relation the $J$ and $K$ integrals require a more sophisticated analysis. This was done indirectly in~\cite{LWone} and directly in~\cite{LWtwo} using properties of modular forms resulting in \begin{eqnarray} I &\equiv& \int_{\Sigma}\lambda\wedge\bar\lambda = {da_D\over du}{d\bar a\over d\bar u} - {da\over du}{d\bar a_D\over d\bar u} \ ,\nonumber\\ J &\equiv& R^2\Lambda\int_{\Sigma}\partial_z\left( {\lambda_z^2\partial_{\bar z}\bar s\over 1+R^2\Lambda\partial_z s\partial_{\bar z}\bar s} \right)dz\wedge\bar\lambda\ = 0\ , \nonumber\\ K &\equiv& R^2\Lambda \int_{\Sigma}\partial_{z}\left( {\bar \lambda_{\bar z}^2\partial_{z}s\over 1 +R^2\Lambda\partial_z s\partial_{\bar z} \bar s}\right)dz\wedge\bar\lambda = -\left({d\bar a\over d\bar u}\right)^{3}{d{\bar \tau}\over d\bar a} \ ,\nonumber\\ \label{IJK} \end{eqnarray} where $\tau = da_D/da$. With these integrals we can now evaluate the four-dimensional equations of motion \begin{eqnarray} 0&=&\partial^{\mu}\partial_{\mu}a(\tau-{\bar \tau}) + \partial^{\mu}a\partial_{\mu}a{d\tau\over da} + 16 {\bar {\cal F}}_{\mu\nu}{\bar {\cal F}}^{\mu\nu} {d {\bar \tau}\over d\bar a} \ ,\nonumber\\ 0&=& \partial^{\nu}{\cal F}_{\mu\nu}(\tau-{\bar \tau}) + {\cal F}_{\mu\nu}\partial^{\nu}u{d\tau\over du} - {\bar{\cal F}}_{\mu\nu}\partial^{\nu}\bar u{d{\bar \tau}\over d\bar u} \ .\nonumber\\ \end{eqnarray} Note that real part of the second equation is just the Bianchi identity for $F_{mn}$. Finally we see that these equations may be derived from the Seiberg-Witten effective action \begin{equation} S_{SW} = \int d^4 x\ {\rm Im} \left( \tau\partial_{\mu}a\partial^{\mu}{\bar a} +16 \tau{\cal F}_{\mu\nu}{\cal F}^{\mu\nu}\right)\ . 
\label{SWaction} \end{equation} \section{Discussion} In this review we have shown that by using brane dynamics one can obtain not only qualitative features of quantum $N=2$ gauge theories such as the Seiberg-Witten curve but also the precise details of the low energy effective action, including instanton corrections. The example that we presented above also sets out a general method which can be applied to configurations with less supersymmetry. One first identifies a solitonic solution of the M-fivebrane and then the low energy dynamics can be obtained from the M-fivebrane equations of motion. The low energy effective action will contain scalar zero modes from the moduli of the soliton and vector zero modes arising from the three-form $H_{mnp}$ and the brane topology. Note that this construction is more subtle than the direct relation between the geometry of the brane and the $\beta$-function that has been suggested. For example the one-loop $\beta$-function coefficient can be recovered from the bending of the branes. However this interpretation has difficulties since if one identifies the coupling constant with the distance between the two NS-fivebranes this gives a function $\tau(z)$ rather than $\tau(u)$. In addition for $SU(N_c)$ gauge theories there are in fact ${1\over 2}N_c(N_c-1)$ coupling constants and these cannot all be identified with the distance between the two NS-fivebranes.\footnote{We are grateful to V. Khoze for this point.} It is of course natural to see if M-theory can produce other details of the Yang-Mills quantum field theory. For example it was pointed out in reference~\cite{HLW} that the M-fivebrane also predicts an infinite number of higher derivative terms which are not holomorphic. Although these terms are complicated one can easily see that they depend upon the radius of the eleventh dimension and do not seem to agree with the terms obtained from quantum field theory\cite{O,LWone}. One can also consider the M-fivebrane prediction for low energy monopole dynamics to see if this leads to the correct form for the monopole moduli space metric~\cite{mono}. In closing let us mention some unsolved issues which warrant further study. There are other formulations of the M-fivebrane dynamics, which in addition admit an action, and therefore it is natural to see if they can also reconstruct the Seiberg-Witten effective action. However there is one immediate difficulty with using an action. Namely, since there is no a priori distinction between $A$- and $B$-cycles, one expects that the construction will be modular invariant. On the other hand, the Seiberg-Witten action is not invariant under the $SL(2,{\bf Z})$ modular group, even though its equations of motion are. Lastly there are in fact discrepancies between the instanton coefficients predicted by the Seiberg-Witten curves and those obtained by explicit calculations using instanton calculus for the cases $N_f=2N_c$~\cite{inst}. Perhaps a more detailed consideration using M-theory will lead to alternative forms for these curves. \section*{References}
\section*{Figures} \begin{figure}[h] \psfig{figure=combfig1.eps,height=2.in} \caption{a) Normalized specific heat for ${\bf H}\| c$ (diamonds), ${\bf H}\| antinode$ (circles), and ${\bf H}\| node$ (triangles), for $\varepsilon=7$ ($\delta\simeq 0.05$) and $E_H=0.1\Delta_0$; Inset: low-$T$ anisotropy; b) Scaling function for $C({\bf H}\| c)$ (diamonds), $C({\bf H}\| antinode)$ (full circles), $C({\bf H}\| node)$ (full triangles), $\delta C({\bf H}\| antinode)$ (open circles), and $\delta C({\bf H}\| node)$ (open triangles). } \end{figure} \section*{Acknowledgments} Valuable communications with K.A. Moler, N. E. Phillips and G.E. Volovik are acknowledged. This research has been supported in part by NSERC of Canada (EJN and JPC), CIAR (JPC) and NSF/AvH Foundation (PJH). EJN is a Cottrell Scholar of Research Corporation. \section*{References}
\section{INTRODUCTION} There are many calculations of bound state energies of the hydrogen molecular ion $\mbox{H}_2{}^+$ using the Born-Oppenheimer approximation or various adiabatic approximations and there are a number of studies that investigate deviations of energies from the Born-Oppenheimer values. The present work is a systematic high precision nonadiabatic\footnote{We would prefer to use the term `batic, which we coined to avoid the double negative implied in nonadiabatic, but clarity must yield to convention.} study of $\mbox{H}_2{}^+$ and $\mbox{D}_2{}^+$ in each of the lowest electronic states of $\Sigma_g$, $\Sigma_u$, and $\Pi_u$ symmetry carried out using variational basis sets. It is motivated by recent precise experimental spectroscopy of Rydberg states of the hydrogen and deuterium molecules that has led to accurate experimental values of the electric dipole polarizability of the corresponding molecular ions in their ground states~\cite{JacFisFeh97}. These experiments were followed by several papers detailing various nonadiabatic calculations of the electric dipole polarizability~\cite{SheGre98,BhaDra98,Mos98,Cla98}. The present paper is the first in a series. We are using the eigenstates studied in the present work in a study of the electric dipole sum rules for $\mbox{H}_2{}^+$ and $\mbox{D}_2{}^+$, including the polarizability. Several investigators have performed nonadiabatic calculations on the ground electronic state of $\mbox{H}_2{}^+$ since Hunter and Pritchard~\cite{HunPri67a} and Ko\l{}os~\cite{Kol69} reported the first precision calculations. The most accurate calculations used variational basis set methods~\cite{Bis89,Mos90}, variation-perturbation methods~\cite{WolPol86,WolOrl91}, and artificial channel scattering methods~\cite{Mos93a,Mos93b}. Variational basis set calculations can in principle be quite accurate but appear to have been applied only to the lowest-lying eigenvalues of the $\Sigma_g$ symmetry. The variation-perturbation and the artificial channel methods yield energies for all of the vibration-rotational levels and have been applied to the states of $\Sigma_g$ and $\Sigma_u$ symmetry. There are other approaches applied to the $\Sigma_g$ symmetry that have not yet reported precision as great as those mentioned above, such as the adaptive finite element method~\cite{AckShe96}, the generator coordinate method~\cite{RibTolPiz83}, quantum Monte Carlo~\cite{BreMelMor97} and perturbative approaches~\cite{BabDal91}. Energy calculations up to 1980 were reviewed by Bishop and Cheung~\cite{BisChe80} and a useful, more general review covering up to 1995 can be found in~\cite{LeaMos95}. \section{THEORY} In this section we derive the Hamiltonian and introduce the basis sets we used. Other derivations can be found in Refs.~\cite{JepHir60,KolWol63,HunGraPri66,CarKen84,WolPol86,MosSad89}. Some of the operators we use were introduced in those references and Ref.~\cite{Joh41}. Our intention is to avoid writing explicit matrix elements until the last steps and the spirit of the present derivation is closest to the derivations in Refs.~\cite{JepHir60,HunGraPri66,PacHir68}. 
\subsection{Hamiltonian} In a space-fixed frame and with the center of mass motion removed the Hamiltonian for the homonuclear one-electron diatomic molecule is \begin{equation} \label{ham} H = -\case{1}{2}M^{-1}\nabla_R^2 -[\case{1}{2}+\case{1}{8}M^{-1}]\nabla^2 + V({\bf r},{\bf R}) , \end{equation} where \begin{equation} \label{potential} V ({\bf r},{\bf R} ) = -\frac{1}{| {\bf r} - {\case 1 2}{\bf R} |} -\frac{1}{| {\bf r} + {\case 1 2}{\bf R} |} + \frac{1}{R} \end{equation} and $M=\case{1}{2}M_n$, with $M_n$ the nuclear mass, ${\bf r}$ the position vector of the electron from the midpoint of the vector $\bf R$ joining the nuclei, and $R=|{\bf R}|$. We use atomic units throughout. The electronic (cartesian) coordinates are to be held fixed in the space-fixed frame in carrying out the derivatives in the gradient operator $\nabla_R$ appearing in Eq.~(\ref{ham})~\cite{VanVle29,Bun68}. Following Ref.~\cite{LefFie86} we introduce the rotational angular momentum ${\cal R}$ implicitly expressing the Hamiltonian in a rotating molecular fixed frame. The nuclear kinetic energy is written as \begin{equation} \label{nke-pre} -\frac{\nabla_R^2}{2 M} = \frac{1}{2M R^2} \left( -\frac{\partial}{\partial R} R^2\frac{\partial}{\partial R} + {\cal R}^2 \right) . \end{equation} Defining a rotational Hamiltonian \begin{equation} \label{hrot} H_{\rm rot} = \frac{{\cal R}^2}{2MR^2} \end{equation} we write \begin{equation} -\frac{\nabla_R^2}{2 M} = -\frac{1}{2M R^2} \frac{\partial}{\partial R} R^2\frac{\partial}{\partial R} + H_{\rm rot} , \end{equation} where the three spherical polar coordinates comprised of $R$ and the two angles (contained in the ${\cal R}^2$ operator of $H_{\rm rot}$) contain the information on the orientation of the molecular fixed frame with respect to the space fixed frame. Since here we are ignoring electron and nuclear spins, the total angular momentum is ${\bf N}={\bf {\cal R}}+{\bf L}$, where $\bf L$ is the electronic angular momentum. Using ${\bf {\cal R}}={\bf N}-{\bf L}$, we replace ${\bf \cal R}^2$ in Eq.~(\ref{hrot}) giving \begin{equation} \label{hrot-expand} H_{\rm rot} = \frac{1}{2MR^2} {({\bf N}-{\bf L})^2} = \frac{1}{2MR^2} (N^2 + L^2 - N^- L^+ - N^+ L^- - 2 N_z L_z) , \end{equation} where the superscripts on $L^+$ and $L^-$ and subscript $z$ on $L_z$ refer to the components in the molecule-fixed frame~\cite{LefFie86}. Changing the electron coordinates from cartesian to prolate spheroidal coordinates ($\lambda,\mu,\chi$), we have $r=|{\bf r}|=\case{R}{2}(\lambda^2+\mu^2-1)^{1/2}$. The operator $\frac{\partial}{\partial R}$ in (\ref{nke-pre}) is taken with the electronic (prolate spheroidal) coordinates held fixed in the molecular fixed frame and can be expressed as \begin{equation} \label{partial} \frac{\partial}{\partial R} = \frac{\partial}{\partial R}\left.\right)_{\lambda,\mu} - \frac{\partial r}{\partial R}\frac{\partial}{\partial r} , \end{equation} where the term $ \frac{\partial}{\partial R}$ on the LHS of Eq.~(\ref{partial}) refers to the derivative with the electronic (cartesian) coordinates held fixed as in Eq.~(\ref{nke-pre}). 
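The relation $r=\case{R}{2}(\lambda^2+\mu^2-1)^{1/2}$ quoted above follows directly from the definitions $\lambda=(r_1+r_2)/R$ and $\mu=(r_1-r_2)/R$, where $r_1$ and $r_2$ are the distances from the electron to the two nuclei. It can be verified in one line with a computer algebra system; the sketch below is purely illustrative and its symbols are local to the snippet.
\begin{verbatim}
import sympy as sp

x, y, z, R = sp.symbols('x y z R', real=True)

# distances from the electron to the two nuclei placed at (0, 0, +-R/2)
r1 = sp.sqrt(x**2 + y**2 + (z - R/2)**2)
r2 = sp.sqrt(x**2 + y**2 + (z + R/2)**2)

lam = (r1 + r2)/R      # prolate spheroidal lambda
mu  = (r1 - r2)/R      # prolate spheroidal mu

# r is measured from the midpoint of the internuclear axis
rsq = x**2 + y**2 + z**2

assert sp.simplify(sp.expand((R/2)**2*(lam**2 + mu**2 - 1) - rsq)) == 0
print("r = (R/2)*sqrt(lambda^2 + mu^2 - 1) verified")
\end{verbatim}
With the coordinates in place, the reduction of the kinetic energy operator proceeds as follows.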
Using the RHS of Eq.~(\ref{partial}) in Eq.~(\ref{nke-pre}) we can write the kinetic energy operator as \begin{equation} \label{nke} -\frac{\nabla_R^2}{2M} = \frac{1}{2M} \left[-\frac{\partial^2}{\partial R^2} - \frac{2}{R}\frac{\partial}{\partial R} + \frac{2Y}{R^2} \frac{\partial}{\partial R}R - \frac{r^2}{R^2} p_r^2 + H_{\rm rot} \right], \end{equation} where \begin{equation} p_r^2 = -\frac{1}{r^2}\frac{\partial}{\partial r} r^2 \frac{\partial}{\partial r} \end{equation} and \begin{equation} \label{Y-spheroidal} Y= r \frac{\partial}{\partial r} \end{equation} and it is now understood that the electronic (prolate spheroidal) coordinates are held fixed where appropriate. We use the expression \begin{equation} -p_r^2 - \frac{L^2}{r^2} = \nabla^2 \end{equation} to combine Eq.~(\ref{nke}) and (\ref{hrot-expand}), yielding \begin{equation} \label{KE-final} -\frac{\nabla_R^2}{2M} = \frac{1}{2M} \left[-\frac{\partial^2}{\partial R^2} - \frac{2}{R}\frac{\partial}{\partial R} + \frac{2Y}{R^2} \frac{\partial}{\partial R}R + \frac{r^2}{R^2} \nabla^2 + \frac{1}{R^2} (N^2 - N^- L^+ - N^+ L^- - 2 N_z L_z) \right] . \end{equation} We define for later use the coupling term \begin{equation} \label{sigpi} \frac{1}{2 M R^2} (-N^- L^+ - N^+ L^- ) \end{equation} that enters from Eq.~(\ref{KE-final}) into the Hamiltonian. The potential energy is given in terms of the prolate spheroidal coordinate system $(\lambda,\mu,\chi)$ by \begin{equation} V (\lambda,\mu,R) =\frac{1}{R} -\frac{4\lambda}{R(\lambda^2-\mu^2)}, \end{equation} and the electronic kinetic energy operator by \begin{equation} \nabla^2 = (4/R^2)[X + (\lambda^2-1)^{-1} (1-\mu^2)^{-1}\partial^2/\partial \chi^2] , \end{equation} where \begin{equation} X=( \lambda^2 - \mu^2)^{-1} [(\partial /\partial\lambda )(\lambda^2-1)\partial /\partial\lambda + (\partial /\partial\mu )(1-\mu^2)\partial /\partial\mu] . \end{equation} and the operator $Y$, Eq.~(\ref{Y-spheroidal}), becomes \begin{equation} Y=( \lambda^2 - \mu^2)^{-1} [\lambda(\lambda^2-1)\partial /\partial\lambda + \mu(1-\mu^2)\partial /\partial\mu] . \end{equation} The terms in $\bf L$ can be reexpressed in the $(\lambda,\mu,\chi)$ coordinates, see for example Ref.~\cite{DalMcC57}. The remainder of the Hamiltonian derivation follows that of, for example~\cite{WolPol86}, and in this way the Hamiltonian reduces to effective matrix elements that may be evaluated as integrals over $\lambda$, $\mu$, and $\chi$. \subsection{Basis sets and trial functions} For the electronic states of $\Sigma_g$, $\Sigma_u$ and $\Pi_u$ symmetry investigated here we used a basis set composed of functions of the form~\cite{MosSad89} \begin{equation} \label{elec-basis} \Phi_{bc}^{\Lambda p} (\lambda,\mu,\chi) = (\lambda^2-1)^{|\Lambda|/2} L_b^{|\Lambda|} [\alpha (\lambda-1)] \exp [-\case{1}{2}\alpha (\lambda-1)] P_c^{|\Lambda|} (\mu) \exp (i\Lambda\chi) , \end{equation} with $b=0,...,B$ and $\alpha$ a nonlinear parameter. We used values of $\Lambda=-1,0$, and 1. The values $|\Lambda|=0$ and 1 correspond, respectively, to $\Sigma$ and $\Pi$ states. For the $\Sigma_g$ symmetry $c=0,2,..,2C$ and $p=g$, for the $\Sigma_u$ and $\Pi_u$ symmetries $c=1,3,...,2C+1$ and $p=u$, and for the $\Pi_g$ symmetry $c=2,4,...,2C+2$ with $p=g$. 
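The electronic functions~(\ref{elec-basis}) are straightforward to evaluate with standard special function libraries, which is convenient for checking matrix elements. The snippet below is an illustrative sketch only (the parameter values are arbitrary and are not the optimized values of Table~\ref{opt-table}); it uses the generalized Laguerre polynomials and associated Legendre functions provided by {\tt scipy}.
\begin{verbatim}
import numpy as np
from scipy.special import genlaguerre, lpmv

def phi(lam, mu, chi, b, c, Lam, alpha):
    """Electronic basis function Phi_{bc}^{Lambda p} of eq. (elec-basis)."""
    aL = abs(Lam)
    radial = ((lam**2 - 1.0)**(aL/2.0)
              * genlaguerre(b, aL)(alpha*(lam - 1.0))
              * np.exp(-0.5*alpha*(lam - 1.0)))
    angular = lpmv(aL, c, mu) * np.exp(1j*Lam*chi)
    return radial * angular

# example: a Sigma_g-type function (Lambda = 0, c even) at a few values of lambda
lam = np.linspace(1.0, 6.0, 5)
print(phi(lam, mu=0.3, chi=0.0, b=2, c=2, Lam=0, alpha=3.0))
\end{verbatim}
The full trial functions combine these electronic factors with vibrational factors, as described next.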
The trial function for a particular set of states specified by $\Lambda$, $p$, and $N$ has the form \begin{equation} \label{tf} \Psi_{\Lambda p N} (\lambda,\mu,\chi,R) = \sum_{s[bcd]=1}^S k_{s[bcd]} \Phi_{bc}^{\Lambda p}(\lambda,\mu,\chi) \chi_d (R) \end{equation} where $\Phi_{bc}^{\Lambda p}$ is given in Eq.~(\ref{elec-basis}) and where $S= (B+1)(C+1)(D+1)$. The index $s\equiv [bcd]$ was filled in the order $[\{b,\{c,\{d\}\}\}]$, where $\{b\}$, for example, indicates a loop over all possible values of the index $b=0,...,B$. The vibrational basis functions were of the form \begin{equation} \label{vib-basis} \chi_d (R) = (1/R) (\gamma R)^{(\beta+1)/2} L_d^\beta (\gamma R) \exp (-\case{1}{2}\gamma R) , \end{equation} with $d=0,...,D$. The vibrational state quantum numbers were identified with levels in the spectrum resulting from the diagonalization. The eigenvalues approach the exact eigenenergies behaving as expected by the Hylleraas-Undheim theorem~\cite{New66}. Laguerre polynomials were used in the electronic basis because the integrals involved could be solved in closed form. Other possibilities explored such as Hermite polynomials did not offer this advantage. The electronic basis (\ref{elec-basis}) is independent of $R$ and is identical to that used by Moss and Sadler~\cite{MosSad89}. The vibrational basis is similar to theirs in functional form, but we used a different nonlinear parameter $\gamma$ that allowed us to avoid certain expressions involving hypergeometric series and thereby offered an apparent improvement in speed. We expect that the accuracy of our vibrational basis is at least equal to that of Moss and Sadler. \section{CALCULATION} Matrix elements of the Hamiltonian over the basis set functions and the overlap between basis set functions were set up as four-dimensional integrals over $\lambda,\mu,\chi$, and $R$. The evaluations reduce to integrals over $\lambda$, $\mu$, and $R$. The eigenvalues were obtained using the Rayleigh-Ritz method by solution of the generalized eigenvalue problem for the Hamiltonian and overlap matrices and iteratively varying the nonlinear parameters. Some details on the integrals and procedures are presented in this section. \subsection{Evaluation of the integrals} Consider the integrals over $\lambda$ and over $R$ required for evaluation of the Hamiltonian and overlap matrix elements. Any integrals containing derivatives were manipulated to eliminate the derivatives by utilizing \begin{equation} \frac{\partial}{\partial x}L_n^a (x) = -L_{n-1}^{a+1} (x) \end{equation} and \begin{equation} \label{Lag-lower} L_{n}^{a} (x) = \sum_{k=0}^{n} L_{k}^{a-1} (x) \end{equation} to rewrite each integrand as a linear combination of integrals of the form \begin{equation} \label{primary} \int_0^\infty dx\,x^{a+r} L_m^a (x)L_n^a (x) e^{-x}, \end{equation} where $r$ is an integer, $r\geq 0$. The resulting sets of integrals of form~(\ref{primary}), and any other integrals of that form, were then manipulated to eliminate the powers of $\lambda$. This was done by writing the product $x^{r} L_m^a (x)$ as a linear combination of Laguerre polynomials with the same superscript. 
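In the same illustrative spirit, the vibrational factors~(\ref{vib-basis}) can be tabulated and their overlaps computed numerically. The sketch below (again with small arbitrary parameter values rather than the optimized ones) shows that the overlap matrix $\int_0^\infty \chi_d\chi_{d'}R^2\,dR$ is banded rather than diagonal, which is why the secular problem described below takes the generalized form.
\begin{verbatim}
import numpy as np
from scipy.special import genlaguerre
from scipy.integrate import quad

def chi(R, d, beta, gamma):
    """Vibrational basis function chi_d(R) of eq. (vib-basis)."""
    x = gamma*R
    return (1.0/R) * x**((beta + 1)/2.0) * genlaguerre(d, beta)(x) * np.exp(-0.5*x)

beta, gamma, nmax = 4, 2.0, 4      # illustrative values only

S = np.zeros((nmax, nmax))
for d1 in range(nmax):
    for d2 in range(nmax):
        S[d1, d2] = quad(lambda R: chi(R, d1, beta, gamma)
                                   * chi(R, d2, beta, gamma) * R**2,
                         0.0, np.inf)[0]

np.set_printoptions(precision=3, suppress=True)
print(S)    # banded, not diagonal: the secular equation is H k = E S k
\end{verbatim}
Returning to the analytic evaluation of the integrals, the products $x^{r}L_m^{a}(x)$ must be rewritten as combinations of Laguerre polynomials carrying the same superscript.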
To this end, the expression \begin{equation} x L_n^a (x) = (n+a) L_n^{a-1} (x) - (n+1) L_{n+1}^{a-1} (x), \end{equation} derived using the summation definition for associated Laguerre polynomials, can be reduced using \begin{equation} \label{Lag-raise} L_n^a (x) = L_n^{a+1} (x) - L_{n-1}^{a+1} (x) \end{equation} to the desired expression, \begin{equation} \label{simple-xlag} x L_n^a (x) = (2n+a+1) L_n^{a} (x) - (n+1) L_{n+1}^{a} (x) - (n+a)L_{n-1}^a (x). \end{equation} Substituting Eq.~(\ref{simple-xlag}) into Eq.~(\ref{primary}), each integral over $\lambda$ can now be expressed as a sum of integrals of the form \begin{equation} \label{orthog} \int_0^\infty dx\,x^{a} L_m^a (x)L_n^a (x) e^{-x} = \delta_{mn} (m+a)!/m!. \end{equation} The integrals involving $\mu$ could be performed through simple manipulations of associated Legendre polynomials. Coupling between states of different $\Lambda$ introduced two problems. The first was that in order to carry out manipulations such as those used above leading to~(\ref{orthog}), we required expressions for raising or lowering superscripts by more than unity. Using Eq.~(\ref{Lag-lower}) we derived the relation \begin{equation} L_{n}^{a} (x) = \sum_{k=0}^{n} {{l +k-1}\choose {k} } L_{n-k}^{a-l} (x) \end{equation} and similarly from repeated application of Eq.~(\ref{Lag-raise}) we derived the relation \begin{equation} L_{n}^{a} (x) = \sum_{k=0}^{l} (-1)^{k} {{l}\choose{k}} L_{n-k}^{a+l} (x). \end{equation} The second problem was the coupling of different $\gamma$ parameters. By using the same manipulations as for the $\lambda$ integral, we reduce the vibrational integral to a linear combination of functions $I$, where \begin{equation} I (a,m,n,\gamma_i,\gamma_j)\equiv \int_0^\infty dx\, x^a L_m^a (\gamma_i x) L_n^a (\gamma_j x) \exp (-\case{1}{2} (\gamma_i+\gamma_j) x) , \end{equation} which can be reexpressed in terms of the hypergeometric function ${}_2 F_1$ using Eq.~(7.414.4) of Ref.~\cite{GraRyz94} as \begin{equation} \label{secondary} I (a,m,n,\gamma_i,\gamma_j) = F (-m,-n;-m-n-a;\gamma_{\rm rat}^2) \frac{(m+n+a)!}{m!n!} 2^{a+1} (-1)^m \gamma_{\rm rat}^{-n-m} (\gamma_i+\gamma_j)^{-a-1} , \end{equation} where \begin{equation} \gamma_{\rm rat}\equiv(\gamma_i+\gamma_j)/(\gamma_i-\gamma_j). \end{equation} The hypergeometric series terminates since $m\geq 0$ and $n\geq 0$. Some additional notes on evaluating integrals of Laguerre and Legendre polynomials are given in~\cite{MosSad89}. Maple V was used to check the evaluation of the matrix elements and it was used to output them into Fortran code. \subsection{Numerical procedures}\label{subsec:numerical} The trial functions~(\ref{tf}) have three sectors. They are comprised of two electronic sectors, labeled by the indices $b$ and $c$ and governed by the nonlinear parameter $\alpha$, and one vibrational sector, labeled by the index $d$ and governed by the nonlinear parameters $\beta$ and $\gamma$. In our calculations each sector was treated separately in optimizing the nonlinear parameters and in studying convergence as the basis size was increased. The eigenvalues and wave functions were determined by solution of the secular equation using the {\sc lapack} routines DSYGV and DSPGV, part of the math subroutine library {\sc dxml}. The energy was further minimized by iteratively varying various nonlinear parameters (using a procedure discussed below) and rediagonalizing. 
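The structure of this step, namely assembling $H$ and $S$ in a nonorthogonal basis, solving the generalized eigenvalue problem, and then re-optimizing a nonlinear parameter, is easy to illustrate on a toy problem. The sketch below does this for a one-dimensional harmonic oscillator in a small even-tempered Gaussian basis (a stand-in only; it is not the molecular Hamiltonian of this paper), with {\tt scipy.linalg.eigh} playing the role of the {\sc lapack} driver DSYGV.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh
from scipy.integrate import quad

def ground_state_energy(g0, nbasis=4, ratio=2.5):
    """Lowest generalized eigenvalue of H k = E S k for H = -(1/2)d^2/dx^2 + x^2/2
    in the nonorthogonal basis phi_k(x) = exp(-g_k x^2), g_k = g0*ratio**k."""
    g = g0 * ratio**np.arange(nbasis)
    phi  = lambda k, x: np.exp(-g[k]*x**2)
    dphi = lambda k, x: -2.0*g[k]*x*np.exp(-g[k]*x**2)
    S = np.zeros((nbasis, nbasis))
    H = np.zeros((nbasis, nbasis))
    for i in range(nbasis):
        for j in range(nbasis):
            S[i, j] = quad(lambda x: phi(i, x)*phi(j, x), -np.inf, np.inf)[0]
            H[i, j] = quad(lambda x: 0.5*dphi(i, x)*dphi(j, x)
                                     + 0.5*x**2*phi(i, x)*phi(j, x),
                           -np.inf, np.inf)[0]
    return eigh(H, S, eigvals_only=True)[0]

# crude scan over the nonlinear parameter, then rediagonalize at the best value
grid = np.linspace(0.05, 1.0, 20)
energies = [ground_state_energy(g0) for g0 in grid]
best = grid[int(np.argmin(energies))]
print(best, ground_state_energy(best))    # upper bound approaching the exact 0.5
\end{verbatim}
The same logic, applied with the much larger basis sets of Table~\ref{opt-table} and with the three nonlinear parameters $\alpha$, $\beta$, and $\gamma$, underlies the optimization procedure described next.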
For small basis set sizes we used a conjugate gradient method and then minimized by hand and for the larger basis set sizes we used an algorithm similar to Brent's~\cite{NumRec}. Minimization of $\alpha$ was accomplished with standard algorithms. The optimum values for the parameters $\beta$ and $\gamma$ were more difficult to determine for two reasons. First, $\beta$ is integer and the necessarily discrete choices impeded the optimization; furthermore, a change in $\beta$ does not correspond to a parabolic change in the value of the energy. Second, the nonlinear parameters $\beta$ and $\gamma$ are intrinsically linked requiring simultaneous minimization. A general procedure was developed which allowed us to optimize $\alpha$, $\beta$, and $\gamma$ efficiently. Four steps can be identified. 1)~We fixed $\beta$ and $\gamma$ and then $\alpha$ was optimized for a minimum energy. 2)~To minimize on $\beta$ and $\gamma$ we fixed $\beta$ and then minimized on $\gamma$. The parameter $\beta$ was then varied by a large interval (about 6) and then we minimized again on $\gamma$. Some care was required in selecting what would be the optimum values of $\gamma$ as false local minima occasionally appeared. 3)~Values of $\beta$ within the final interval were searched for the optimum value with minimization on $\gamma$. 4)~After all of the above $\alpha$ was reoptimized with the selected $\beta$ and $\gamma$. In all cases it was found in step~4) that the value of $\alpha$ was the same as that found in step~1), an important verification of our choice of final optimized nonlinear parameters. Having fixed the nonlinear parameters the basis set size was systematically increased to obtain precise eigenvalues by expanding each sector separately. Convergence to the final value was logarithmic. For $\mbox{H}_2{}^+$ in Figs.~\ref{sigma-g-fig}, \ref{sigma-u-fig}, and \ref{pi-u-fig} the convergence is demonstrated by plotting the difference between the energy for a particular basis set dimension and the energy for a basis set of dimension one unit larger. Results for $\mbox{D}_2{}^+$ are similar. For each figure, we begin with the final optimized wave function. The nonlinear parameters are not changed but the basis set dimension is set to $B=2$, then index $B$ is increased with the others held fixed at their optimized values and the difference between successive energies is plotted yielding the curves labeled ``$B$ (Electronic)'' and similarly for $C$ and $D$. For the $\Sigma_u$ states of $\mbox{H}_2{}^+$ and $\mbox{D}_2{}^+$ convergence in the vibrational sector is slower than for the $\Sigma_g$ and $\Pi_u$ states so we extrapolated to the desired numerical accuracy using linear regression on the log of the energy differences. Figure~\ref{sigma-u-fig} illustrates the slow convergence but also the validity of the extrapolation. The basis set dimensions and nonlinear parameters for states with $N=0$ are given in Table~\ref{opt-table} for $\Sigma_g$ symmetry in the first row under ``Type I'' and for $\Sigma_u$ symmetry in the first row under ``Type II''. For the states with $N>0$, the off-diagonal term Eq.~(\ref{sigpi}) in the Hamiltonian requires the inclusion of coupling between basis sets of $\Sigma$ and $\Pi$ symmetry. Denoting the electronic basis sets by their value of $\Lambda$ as $|\Lambda\rangle$ we set up matrix elements of the Hamiltonian using the rotated basis $\case{1}{\sqrt{2}}(|\mbox{+}1\rangle +|\mbox{$-$}1\rangle)$ and $\case{1}{\sqrt{2}}(|\mbox{+}1\rangle -|\mbox{$-$}1\rangle)$. 
With it there is only coupling between $|0\rangle$ and $\case{1}{\sqrt{2}}(|\mbox{+}1\rangle - |\mbox{$-$}1\rangle)$. A two by two matrix of matrices was created with the uncoupled Hamiltonian matrix elements for each basis set as the diagonal elements and the matrix elements of the coupling term Eq.~(\ref{sigpi}) between the two basis sets as the off-diagonal elements. The energies of the states were determined by diagonalization of this matrix, while the energies corresponding to the uncoupled basis $\case{1}{\sqrt{2}}(|\mbox{+}1\rangle +|\mbox{$-$}1\rangle)$ were determined by diagonalization of the uncoupled Hamiltonian. For each state, the non-linear parameters and basis size were fixed at the values already determined for the minimum energies. Then the same technique used for the uncoupled energies was applied to the coupled basis sets to determine non-linear parameters and basis sizes that minimized the energy of the state under consideration. For example, when trying to determine the $\Sigma_u,v=0,N=1$ energy, the $\Sigma$ basis set parameters were held fixed at their uncoupled values, and the $\Pi$ basis set parameters were changed. The parameters for the coupling basis set were significantly different from those which minimized the energy in the uncoupled calculations, requiring six specialized parameters for each state when coupling was considered. The rate of convergence of the coupling terms is illustrated in Figs.~\ref{coupled-sigma-u-fig} and~\ref{coupled-pi-u-fig} for $\mbox{H}_2{}^+$. The energies converge logarithmically as each sector dimension is increased in turn. To evaluate the contribution of this small off-diagonal term to the energy many fewer basis set elements are needed than for the diagonal terms. The basis set dimensions and nonlinear parameters for states with $N>0$ are given in Table~\ref{opt-table}. For each symmetry there are two rows. The first row lists the dimensions and parameters for the primary symmetry used for all calculations and the second row lists the quantities for the additional symmetry required for $N>0$ entering through the coupling of Eq.~(\ref{sigpi}). The total number of basis functions used can be calculated from the data listed in Table~\ref{opt-table} and is the sum of the values of $S$ defined in Eq.~(\ref{tf}) entering for each symmetry. For example, for $\mbox{H}_2{}^+$ $\Sigma_g$, $v=0$, $N=0$, we used $(13+1)\times(5+1)\times (13+1)=1176$ functions and for $N=1$ we used $1176 + (5+1)\times (4+1)\times (6+1)=1386$ functions. For $\mbox{H}_2{}^+$ $\Pi_u$, $v=0$, $N=1$ we used two runs, each corresponding to one of the rotated basis sets. For the uncoupled set, we had $910$ functions, while for the coupled set, we used $910 + 270 = 1180$ functions. \section{DISCUSSION} Tables~\ref{e-h-sig-table} and~\ref{e-d-sig-table} compare the present calculations of nonadiabatic energies for $\mbox{H}_2{}^+$ and $\mbox{D}_2{}^+$ respectively with available precision calculations. In each table the vibration-rotation eigenvalues for the $\Sigma_g$ symmetry are given first, followed by those for the $\Sigma_u$ symmetry. For the $\Sigma_g$ state the most precise variational basis set calculations are given for $\mbox{H}_2{}^+$ in Refs.~\cite{BisChe77b,BisSol85,GreDelBil98,Mos93b} and for $\mbox{D}_2{}^+$ in Refs.~\cite{BisChe77b,BisSol85,Mos93b}. 
Variation-perturbation calculations have been performed by Wolniewicz and Orlikowski~\cite{WolOrl91} for $\mbox{H}_2{}^+$ and $\mbox{D}_2{}^+$ for all the $\Sigma_g$ vibration-rotation states but the tabulated results include radiative and relativistic corrections and can not be compared directly with the present work. Using the artificial channel approach Moss carried out extensive nonadiabatic calculations of all the vibrational-rotational states of $\mbox{H}_2{}^+$~\cite{Mos93b} and $\mbox{D}_2{}^+$~\cite{Mos93a} for the $\Sigma_g$ states. His results with radiative and relativistic corrections are in good agreement with Wolniewicz and Orlikowski and he also presented energies without these corrections. In Tables~\ref{e-h-sig-table} and~\ref{e-d-sig-table} the various calculations for the $v=0,N=0$, $v=0,N=1$, and $v=1,N=0$ states are compared to our calculations. Results listed in Refs.~\cite{Mos93a,Mos93b} are converted from dissociation energies in wavenumbers to atomic units and combined with the asymptotic energy $-M_n/[2 (1+M_n)]$. Our results are consistent with and slightly improve upon the precision of previous calculations. Only a few high-precision calculations are available for the lowest states of $\Sigma_u$ symmetry for $\mbox{H}_2{}^+$ and $\mbox{D}_2{}^+$. Wolniewicz and Orlikowski used the variation-perturbation method and found 3 bound levels for $\mbox{H}_2{}^+$ and 7 bound levels for $\mbox{D}_2{}^+$ and gave energies of the levels with $\Sigma$-$\Pi$ coupling included. Subsequently, Moss using the artificial channel method including $\Sigma$-$\Pi$ coupling found results in agreement with those of Wolniewicz and Orlikowski for both $\mbox{H}_2{}^+$~\cite{Mos93b} and $\mbox{D}_2{}^+$~\cite{Mos93a}. Our $\Sigma_u$ results are compared with these prior calculations in Tables~\ref{e-h-sig-table} and~\ref{e-d-sig-table}. For the $v=0,N=0$ and $v=0,N=1$ states our energies are consistent with the others and of higher precision. However, for the $\mbox{D}_2{}^+$ $v=1,N=0$ state we found that a quite large basis set ($B=20, C=11, D=36$ with $\alpha=15.8$, $\beta=37$ and $\gamma=2.6$) was required to approach the energies given in Refs.~\cite{WolOrl91,Mos93a}. Peek~\cite{Pee69} showed that in the Born-Oppenheimer approximation the $v=1,N=0$ vibrational wave function can have significant amplitude at values of $R$ as large as several hundred $a_0$. Our electronic basis set is not explicitly dependent on $R$ and this may account for the large basis size needed. Other methods~\cite{WolPol86,WolOrl91,Mos93a,Mos93b} are based on coupled channel approaches that may be better at describing such diffuse vibrational states. There do not appear to be any published nonadiabatic energies for the lowest electronic state of $\Pi_u$ symmetry of either $\mbox{H}_2{}^+$ or $\mbox{D}_2{}^+$. Probably the most accurate study published is that of Bishop {\em et al.\/}~\cite{BisShiBec75}, who investigated the $\Pi_u$ energies of $\mbox{H}_2{}^+$ within the standard adiabatic approximation~\cite{Kol69,BisWet73}. In Table~\ref{pi-table} the present nonadiabatic energies are compared to Born-Oppenheimer and standard adiabatic energies. The energy calculated in the Born-Oppenheimer approximation is a lower bound to the true energy while the standard adiabatic and nonadiabatic energies are upper bounds~\cite{Eps66,HunGraPri66}. 
The standard adiabatic energies were calculated with the diagonal coupling of Ref.~\cite{BisShiBec75} rescaled to a proton mass of $1\,836.152\,701$ and the results differ in the seventh decimal place from the values reported in~\cite{BisShiBec75}. The present nonadiabatic results lie above the Born-Oppenheimer energy but below the standard adiabatic energy as expected~\cite{HunGraPri66}. The energies in Table~\ref{pi-table} were calculated without the consideration of Eq.~(\ref{sigpi}) leading to one level for each value of $N$. With the inclusion of the coupling term (\ref{sigpi}) as described above in Sec.~\ref{subsec:numerical} our calculations exhibit lambda-doubling in the eigenvalues of $\Pi$ symmetry. In Table~\ref{e-h-pi-table} calculated eigenvalues for the $v=0$ and 1 states with $N=1$ are presented for $\mbox{H}_2{}^+$ and $\mbox{D}_2{}^+$. For each value of $v$ the first row gives the energy of the shifted level resulting from the diagonalization of the matrix coupling $|0\rangle$ and $\case{1}{\sqrt{2}}(|\mbox{+}1\rangle - |\mbox{$-$}1\rangle)$ and the second row gives the energy of the other, unshifted, level. The energy difference between the two levels is the lambda-doubling. \acknowledgements We are grateful to Prof. P. Froelich, Dr. S. Jonsell, and Prof. J. Shertzer for helpful comments. This work was supported in part by the U.S. Department of Energy, Division of Chemical Sciences, Office of Basic Energy Sciences, Office of Energy Research. ZCY was also supported by the Natural Sciences and Engineering Research Council of Canada. The Institute for Theoretical Atomic and Molecular Physics is supported by a grant from the National Science Foundation to the Smithsonian Institution and Harvard University. \begin{table} \begin{center} \caption{For $\mbox{H}_2{}^+$ values of the dimensions $B$, $C$, and $D$ and the optimized nonlinear parameters $\alpha$, $\beta$, and $\gamma$. The values used for $\mbox{D}_2{}^+$ are identical except for the three values listed in parentheses.} \label{opt-table} \begin{tabular}{cccccccc} \multicolumn{2}{c}{} & \multicolumn{3}{c}{Dimension}& \multicolumn{3}{c}{Nonlinear parameter} \\ \multicolumn{1}{c}{Type} & \multicolumn{1}{c}{Symmetry} & \multicolumn{1}{c}{$B$} & \multicolumn{1}{c}{$C$} & \multicolumn{1}{c}{$D$} & \multicolumn{1}{c}{$\alpha$} & \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{$\gamma$} \\ \hline I & $\Sigma_g$ & 13 & 5 & 13(17) & 3.1561 & 67 & 37.0 \\ & $\Pi_g$ & 5 & 4 & 6 & 3.0 & 79 & 42.0 \\ \multicolumn{8}{c}{} \\ II & $\Sigma_u$&14(10) & 11(9) & 30 & 15.8 & 43 & 3.1 \\ & $\Pi_u$ & 5 & 5 & 11 & 13.0 & 97 & 7.4 \\ \multicolumn{8}{c}{} \\ III& $\Pi_u$ & 9 & 6 & 12(19) & 6.0 & 125 & 16.5 \\ & $\Sigma_u$& 8 & 5 & 4 & 5.0 & 47 & 3.86 \\ \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{ Comparison of nonadiabatic vibration-rotation energies for $\mbox{H}_2{}^+$ for each of the lowest electronic states of $\Sigma_g$ or $\Sigma_u$ symmetry. Calculations with $N>0$ include the coupling term of Eq.~(\protect\ref{sigpi}). 
Unless indicated otherwise all calculations correspond to a proton mass of $1\,836.152\,701$ in units of the electron mass.} \label{e-h-sig-table} \begin{tabular}{llll} \multicolumn{1}{c}{State} &\multicolumn{1}{c}{Author (Year)} & \multicolumn{1}{c}{Ref.} & \multicolumn{1}{c}{Energy} \\ \hline $\Sigma_g, v=0,N=0$ & Bishop and Cheung (1977)\tablenote{Proton mass 1836.15} & \cite{BisChe77b} & $-$0.597\,139\,062\,5 \\ & Bishop and Solunac (1985)\tablenotemark[1]{} & \cite{BisSol85} & $-$0.597\,139\,063\,18 \\ & Moss (1993) & \cite{Mos93b} & $-$0.597\,139\,063\,1 \\ & Gr{\'e}maud et al. (1998) & \cite{GreDelBil98} & $-$0.597\,139\,063\,123(1) \\ & This work & & $-$0.597\,139\,063\,123\,9(5) \\ $\Sigma_g, v=0,N=1$ & Moss (1993) & \cite{Mos93b} & $-$0.596\,873\,738\,9 \\ & This work & & $-$0.596\,873\,738\,832\,8(5) \\ $\Sigma_g, v=1,N=0$ & Bishop and Cheung (1977)\tablenotemark[1]{} & \cite{BisChe77b} & $-$0.587\,155\,675\,8 \\ & Moss (1993) & \cite{Mos93b} & $-$0.587\,155\,679\,2 \\ & Gr{\'e}maud et al. (1998) & \cite{GreDelBil98} & $-$0.587\,155\,679\,212(1) \\ & This work & & $-$0.587\,155\,679\,213\,6(5) \\ \multicolumn{4}{c}{ }\\ $\Sigma_u, v=0,N=0$ & Wolniewicz and Orlikowski (1991)&\cite{WolOrl91} & $-$0.499\,743\,49\\ & Moss (1993) & \cite{Mos93b} & $-$0.499\,743\,502\,2 \\ & This work & & $-$0.499\,743\,502\,21(1) \\ $\Sigma_u, v=0,N=1$ & Wolniewicz and Orlikowski (1991)&\cite{WolOrl91} &$-$0.499\,739\,25\\ & Moss (1993) & \cite{Mos93b} & $-$0.499\,739\,268\,0 \\ & This work & & $-$0.499\,739\,267\,93(2)\tablenote{For this energy, the basis set had dimension $B=16$.} \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{ Comparison of nonadiabatic vibration-rotation energies for $\mbox{D}_2{}^+$ for each of the lowest electronic states of $\Sigma_g$ or $\Sigma_u$ symmetry. Calculations with $N>0$ include the coupling term of Eq.~(\protect\ref{sigpi}). 
Unless indicated otherwise all calculations correspond to a deuteron mass of $3\,670.483\,014$ in units of the electron mass.} \label{e-d-sig-table} \begin{tabular}{llll} \multicolumn{1}{c}{State} &\multicolumn{1}{c}{Author (Year)} & \multicolumn{1}{c}{Ref.} & \multicolumn{1}{c}{Energy} \\ \hline $\Sigma_g, v=0,N=0$ & Bishop and Cheung (1977)\tablenote{Deuteron mass 3670.48} & \cite{BisChe77b} & $-$0.598\,788\,782\,0 \\ & Bishop and Solunac (1985)\tablenotemark[1]{} & \cite{BisSol85} & $-$0.598\,788\,782\,22 \\ & Moss (1993) & \cite{Mos93a} & $-$0.598\,788\,784 \\ & This work & & $-$0.598\,788\,784\,330\,8(1) \\ $\Sigma_g, v=0,N=1$ & Moss (1993) & \cite{Mos93b} & $-$0.598\,654\,873\,1 \\ & This work & & $-$0.598\,654\,873\,220\,5(5) \\ $\Sigma_g, v=1,N=0$ & Bishop and Cheung (1977)\tablenotemark[1]{} & \cite{BisChe77b} & $-$0.591\,603\,115\,4 \\ & Moss (1993) & \cite{Mos93a} & $-$0.591\,603\,122 \\ & This work & & $-$0.591\,603\,121\,903\,2(1) \\ \multicolumn{4}{c}{ }\\ $\Sigma_u, v=0,N=0$ & Wolniewicz and Orlikowski (1991)&\cite{WolOrl91} & $-$0.499\,888\,93\\ & Moss (1993) & \cite{Mos93a} & $-$0.499\,888\,937\,5 \\ & This work & & $-$0.499\,888\,937\,71(1) \\ $\Sigma_u, v=0,N=1$ & Wolniewicz and Orlikowski (1991)&\cite{WolOrl91} & $-$0.499\,886\,38 \\ & Moss (1993) & \cite{Mos93a} & $-$0.499\,886\,382\,5 \\ & This work & & $-$0.499\,886\,382\,63(1) \\ $\Sigma_u, v=1,N=0$ & Wolniewicz and Orlikowski (1991)&\cite{WolOrl91} & $-$0.499\,865\,21 \\ & Moss (1993) & \cite{Mos93a} & $-$0.499\,865\,221\,0 \\ & This work & & $-$0.499\,865\,217\,(5)\tablenote{For this energy, the basis set had dimensions $B=20$, $C=11$, $D=36$ with nonlinear parameters $\alpha=15.8$, $\beta=37$, and $\gamma=2.6$ as discussed in the text.} \end{tabular} \end{center} \end{table} \clearpage \begin{table} \begin{center} \caption{For $\mbox{H}_2{}^+$ the first several eigenvalues of the $\Pi_u$ symmetry with $N=1$ calculated nonadiabatically compared with Born-Oppenheimer and standard adiabatic calculations, respectively. For the present calculations, col. 4, the coupling term~(\ref{sigpi}) has not been included.} \label{pi-table} \begin{tabular}{cccc} \multicolumn{1}{c}{Vibrational state} & \multicolumn{1}{c}{Born Oppenheimer} & \multicolumn{1}{c}{Standard Adiabatic} & \multicolumn{1}{c}{Present\protect\tablenote{Nonlinear parameters $\alpha=6.0,\beta =125,\gamma=16.5$ with $B=9, C=6, D=24$. }} \\ \hline 0 & $-$0.133\,905\,216\,5 & $-$0.133\,841\,244\,8 & $-$0.133\,841\,939\,2 \\ 1 & $-$0.132\,752\,851\,6 & $-$0.132\,689\,153\,4 & $-$0.132\,689\,769\,1 \\ 2 & $-$0.131\,660\,981\,7 & $-$0.131\,597\,475\,8 & $-$0.131\,598\,133\,6 \\ 3 & $-$0.130\,631\,351\,9 & $-$0.130\,567\,953\,2 & $-$0.130\,568\,676\,9 \\ 4 & $-$0.129\,666\,127\,2 & $-$0.129\,602\,748\,3 & $-$0.129\,603\,541\,6 \\ \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Lambda-doubling in nonadiabatic vibration-rotation energies of $\mbox{H}_2{}^+$ and $\mbox{D}_2{}^+$ for the lowest electronic state of $\Pi_u$ symmetry for $v=0$ and 1, with $N=1$. 
For each value of $v$ the first row gives the energy of the shifted level arising from the coupling term in Eq.~(\protect\ref{sigpi}) and the second row gives the energy of the other, unshifted, level.} \label{e-h-pi-table} \begin{tabular}{lll} \multicolumn{1}{c}{Ion} & \multicolumn{1}{c}{State} & \multicolumn{1}{c}{Energy} \\ \hline $\mbox{H}_2{}^+$ &$\Pi_u,v=0,N=1$ & $-$0.133\,841\,940\,395(5) \\ & & $-$0.133\,841\,939\,176\,3(1) \\ &$\Pi_u,v=1,N=1$ & $-$0.132\,689\,769\,820(5) \\ & & $-$0.132\,689\,769\,121\,8(1) \\ $\mbox{D}_2{}^+$ &$\Pi_u,v=0,N=1$ & $-$0.134\,052\,118\,044(5) \\ & & $-$0.134\,052\,117\,739\,8(1) \\ &$\Pi_u,v=1,N=1$ & $-$0.133\,224\,515\,520(5) \\ & & $-$0.133\,224\,515\,448\,7(1) \\ \end{tabular} \end{center} \end{table} \clearpage \begin{figure}[p] \epsfxsize=1.\textwidth \epsfbox{sggnd.pre.eps} \caption{Convergence study for the ground state $\Sigma_g$ energy of $\mbox{H}_2{}^+$ with $v=0,N=0$. The three basis sectors are fixed at their optimized dimensions for $B$, $C$, and $D$. Then for each sector, in turn, the index of the basis set $B$, $C$, or $D$, is set back to 2 and the value is increased until the optimized value of $B$, $C$, or $D$ is reached again. Each line represents the $\log_{10}$ of the energy for the index value $n$ subtracted from the energy for the previous index value. (For sector $B$ we have omitted the energy $E_{12}$.) \label{sigma-g-fig}} \end{figure} \clearpage \begin{figure}[p] \epsfxsize=1.\textwidth \epsfbox{sugnd.pre.eps} \caption{Convergence study for the $\Sigma_u$ energy of $\mbox{H}_2{}^+$ with $v=0,N=0$. \label{sigma-u-fig}} \end{figure} \begin{figure}[p] \epsfxsize=1.\textwidth \epsfbox{pignd.pre.eps} \caption{Convergence study for the $\Pi_u$ energy of $\mbox{H}_2{}^+$ for the $v=0, N=1$ state with no coupling to the $\Sigma_u$ symmetry included. \label{pi-u-fig}} \end{figure} \clearpage \begin{figure}[p] \epsfxsize=1.\textwidth \epsfbox{couplesigma.pre.eps} \caption{Convergence study for the energy of $\mbox{H}_2{}^+$ in the $\Sigma_u$, $v=0, N=1$ state for the basis set of $\Pi_u$ symmetry entering in the calculation. The $\Sigma_u$ symmetry basis set is fixed with the optimized size and nonlinear parameters listed in Table~\protect\ref{opt-table} for the calculations of this plot. \label{coupled-sigma-u-fig}} \end{figure} \clearpage \begin{figure}[p] \epsfxsize=1.\textwidth \epsfbox{couplepi.pre.eps} \caption{Convergence study for the energy of $\mbox{H}_2{}^+$ in the $\Pi_u$, $v=0, N=1$ state for the basis set of $\Sigma_u$ symmetry entering in the calculation. The $\Pi_u$ symmetry basis set is fixed with the optimized size and nonlinear parameters listed in Table~\protect\ref{opt-table} for the calculations of this plot. \label{coupled-pi-u-fig}} \end{figure}
\section{Introduction} Transport in granular metals is mediated by activated transport among the metallic grains. In granular superconductors composed of spatially separated metallic grains, such single particle charging events ultimately lead to a destruction of phase locking between the grains. This state of affairs obtains because the particle number, $n$, and phase, $\theta$, associated with each grain are conjugate variables. If two grains phase lock, the resultant infinite uncertainty in the particle number leads necessarily to single particle charging. Should the single particle charging energy sufficiently exceed the Josephson coupling energy between grains, superconductivity is quenched\cite{anderson}. The simplicity of the physics underlying the quantum phase transition in an array of superconducting islands implies that the resultant Hamiltonian, \begin{eqnarray}\label{HJ} H=-E_C\sum_i\left(\frac{\partial}{\partial\theta_i}\right)^2- \sum_{\langle i,j\rangle} J_{i,j}\cos(\theta_i-\theta_j), \end{eqnarray} is characterized by only two parameters: 1) a charging energy, $E_C$, and 2) a Josephson coupling energy, $J_{i,j}$. In writing Eq. (\ref{HJ}), we assumed that the islands occupy regular sites on a 2D lattice and only nearest neighbor, $\langle i,j\rangle$, Josephson coupling is relevant. For the ordered case in which $J_{\langle i,j\rangle}=J_0$, the superconductor-insulator transition is well-studied\cite{doniach} as the parameter $E_C/J_0$ increases. In sufficiently disordered systems, however, the Josephson energies are not all equal and in fact can be taken to be random. We are concerned in this paper with the case in which the Josephson energies are random and characterized by a Gaussian distribution \begin{eqnarray} P(J_{ij})=\frac{1}{\sqrt{2\pi J^2}}\exp{\left[-\frac{(J_{ij}-J_0)^2}{2J^2}\right]} \end{eqnarray} with non-zero mean, $J_0$. While random Josephson systems have been treated previously\cite{sachdev,huse,bm}, such studies have focused predominantly on the zero mean case in which $J_0=0$. This limit differs fundamentally from the non-zero mean case, because the superconducting phase exists only when $J_0\ne 0$. Hence, for $J_0=0$, there is an absence of an ordering transition. Nonetheless, the zero-mean case is still of physical interest because as Read, Sachdev, and Ye (RSY)\cite{sachdev} as well as Miller and Huse\cite{huse} have shown, a zero temperature transition from a quantum spin-glass to a paramagnet occurs as the strength of the quantum fluctuations increases. To put the random Josephson model in the context of the work of RSY, it is expedient to introduce the change of variables \begin{eqnarray} {\bf S}_i=(\cos\theta_i, \sin\theta_i). \end{eqnarray} The resultant Josephson Hamiltonian \begin{eqnarray}\label{hspin} H=-E_C\sum_i\left(\frac{\partial}{\partial\theta_i}\right)^2+ \sum_{\langle i,j\rangle}J_{i,j}{\bf S}_i\cdot{\bf S}_j \end{eqnarray} is recast as a 2-component (M=2) interacting spin problem with random magnetic interactions. This model is easily generalizable to describe interactions among any M-component spin operator (or quantum rotor) in the group $O(M)$. M=1 corresponds to Ising spins and is relevant to random magnetic\cite{rosenbaum} systems such as LiHo$_{0.167}$Y$_{0.833}$F$_4$ whereas the M=3 limit is applicable to spin fluctuations in the cuprates\cite{haldane}. In this paper we focus primarily on the M=1 and M=2 cases in which the ordered phases correspond to a ferromagnet and a superconductor, respectively. 
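To make the competition encoded in Eq. (\ref{HJ}) concrete it is helpful to keep in mind how the model is diagonalized for a small cluster. The sketch below is a toy illustration only (it is not a calculation used in this paper): it builds the M=2 Hamiltonian for a three-site chain in a truncated charge basis, draws the bond couplings $J_{ij}$ from the Gaussian distribution above, and diagonalizes the result exactly, so that the interplay of $E_C$ and the sampled $J_{ij}$ can be read off the low-lying spectrum.
\begin{verbatim}
import numpy as np

# single-site operators in a truncated charge basis n = -nmax, ..., nmax
nmax = 2
dim = 2*nmax + 1
n_op = np.diag(np.arange(-nmax, nmax + 1).astype(float))
raise_op = np.diag(np.ones(dim - 1), -1)        # e^{i theta}: |n> -> |n+1>

def site_op(op, site, nsites):
    """Embed a single-site operator at a given position of the chain."""
    mats = [np.eye(dim)]*nsites
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

nsites, E_C, J0, J = 3, 1.0, 0.5, 0.3
rng = np.random.default_rng(0)
J_bond = rng.normal(J0, J, size=nsites - 1)     # Gaussian P(J_ij) with mean J0

H = np.zeros((dim**nsites, dim**nsites))
for i in range(nsites):                          # charging term E_C n_i^2
    ni = site_op(n_op, i, nsites)
    H += E_C * ni @ ni
for i in range(nsites - 1):                      # -J_ij cos(theta_i - theta_j)
    Ui = site_op(raise_op, i, nsites)
    Uj = site_op(raise_op, i + 1, nsites)
    H += -0.5 * J_bond[i] * (Ui @ Uj.T + Uj @ Ui.T)

print(J_bond)
print(np.linalg.eigvalsh(H)[:5])                 # low-lying spectrum
\end{verbatim}
With this picture in mind we can turn to the fluctuations that drive the transitions.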
Two types of fluctuations control the transitions between the phases: 1) dynamic quantum fluctuations arising from the charging energy and 2) static fluctuations induced by the disorder. For non-zero mean, three phases (spin-glass, paramagnet, ferromagnet (M=1) or superconductor (M=2)) are expected to meet at a bi-critical point. Experimentally, a bi-critical point is anticipated whenever $J_0$, $J$ and $E_C$ are on the same order of magnitude. It is the physics at this bi-critical point that we focus on in this paper. To discriminate between these phases, we distinguish between thermal averages, $\langle ...\rangle$ and averages over disorder, $[...]$. In the superconductor (M=2) or ferromagnetic phases (M=1), the disorder and thermal average of the local spin operator, $[\langle S_{i\nu}\rangle]\ne 0$, is non-zero. In the spin glass phase, the thermal average $\langle S_{i\nu}\rangle\ne 0$, while $[\langle S_{i\nu}\rangle]=0$. At zero temperature, quantum fluctuations and static disorder conspire to lead to a vanishing of the static moment in the paramagnetic phase, that is, $\langle S_{i\nu}\rangle=0$. RSY have performed an extensive study of the spin-glass/paramagnetic boundary using the replica formalism\cite{sk} in the case of zero mean in which the purely ordered phase is absent. We adopt this formalism here in our analysis of the bi-critical region. The only prior study on the non-zero mean case is that of Hartman and Weichman\cite{hw} who studied numerically the spherical limit, $M\rightarrow\infty$, of the quantum rotor Hamiltonian. In their study, they found that the spin-glass phase is absent in the $M\rightarrow\infty$ limit for $d=2$. In the present work, we will not address the issue of dimensionality because we limit ourselves to a mean-field description in which all fields are homogeneous in space. Our results then are valid above some upper critical dimension that can only be determined by including fluctuations around the mean-field solution. This paper is organized as follows. In section II, we use the replica formalism to average over the disorder explicitly and obtain the effective Landau free-energy functional. We show that the leading terms in the Landau action resulting from the non-zero mean are analogous to those arising from an external magnetic field in the zero-mean problem. As a result, the appropriate saddle-point equations can be solved using a direct analogy to the zero-mean problem in the presence of a magnetic field. Explicit criteria are presented in Sec. III for the stability of each of the phases in the vicinity of bi-criticality. We construct the phase diagram at finite temperature and find excellent agreement with the experimental results of Reich\cite{rosenbaum}, et. al. on the magnetic system $LiHo_xY_{1-x}F_4$. The explicit solution for $M\ge 2$ is presented in the last section with a special emphasis on the Gabay-Toulouse\cite{gt} line. In section IV we analyze the possibility of replica symmetry breaking along the de Almeida-Thouless\cite{at} line in the M=1 case. \section{Landau Action} Central to the construction of a Landau theory of the bi-critical region is the free-energy functional. For a quantum mechanical system, this is obtained by explicitly including in the partition function, \begin{eqnarray}\label{par} Z=Tr\left(e^{-\beta H}\right)=Tr\left[e^{-\beta H_0}\widehat{T} \exp\left[-\int_0^\beta H_1(\tau)d\tau\right]\right] \end{eqnarray} the time evolution according to a reference system. 
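The distinction between the three phases in terms of the two kinds of averages introduced above can be made concrete with a purely schematic illustration (no dynamics is simulated). Below, the thermally averaged local moments $\langle S_i\rangle$ are simply postulated for each scenario, and the disorder averages $[\langle S_i\rangle]$ and $[\langle S_i\rangle^2]$ are then computed; only the spin glass combines a vanishing first moment with a non-vanishing Edwards-Anderson-like second moment.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 10000      # number of sites, i.e. samples for the disorder average
m0 = 0.8       # magnitude of the frozen thermal moment <S_i>

scenarios = {
    "ferromagnet": np.full(N, m0),                        # <S_i> = +m0 everywhere
    "spin glass":  m0 * rng.choice([-1.0, 1.0], size=N),  # frozen, random sign
    "paramagnet":  np.zeros(N),                           # <S_i> = 0
}

for name, m in scenarios.items():
    print(f"{name:12s}  [<S>] = {m.mean():+.3f}   [<S>^2] = {(m**2).mean():.3f}")
\end{verbatim}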
For this problem, the Hamiltonian for free quantum rotors \begin{eqnarray} H_0= -E_C\sum_i\left(\frac{\partial}{\partial\theta_i}\right)^2 \end{eqnarray} describes our reference system. The perturbation \begin{eqnarray} H_1(\tau)=e^{H_0\tau}\sum_{\langle i,j\rangle}J_{i,j}{\bf S}_i\cdot{\bf S}_je^{-H_0\tau} \end{eqnarray} corresponds to the random magnetic interactions. The trace in the partition function is evaluated over the complete set of quantum rotor states. A primary hurdle in evaluating the partition function is the average over the random spin interactions. It is now standard to perform this average\cite{bm} by replicating the spin system $n$ times and using the identity, \begin{eqnarray} [\ln Z]=\lim_{n\rightarrow 0}\frac{[ Z^n]-1}{n} \end{eqnarray} to obtain the disorder-averaged free energy. We must first evaluate $[Z^n]$. Formally, the replicated partition function is defined through Eq. (\ref{par}) by replacing $H_0$ and $H_1$ by their replicated equivalents \begin{eqnarray} H_i^{\rm eff}=\sum_{a=1}^n H_i^a \end{eqnarray} where the superscript indexes the individual replicas and $i=0,1$. Within the replica formalism, the average over the random interactions with the Gaussian distribution gives rise to a quartic spin interaction. This quartic spin interaction is easily reduced to a quadratic interaction by use of the Hubbard-Stratonovich transformation. Let us define the Fourier transforms \begin{eqnarray} {\bf S}^a(k,\tau)=\frac{1}{\sqrt{N}}\sum_i {\bf S}_i^a(\tau) e^{i{\bf k\cdot R_i}} \end{eqnarray} of the local spin operator for each site and the corresponding transform of the nearest-neighbour interaction \begin{eqnarray} J(k)=J\sum_{\langle ij\rangle}e^{i{\bf k\cdot r}_{ij}}=2J\sum_{i=1}^d \cos k_ia. \end{eqnarray} The replicated partition function now takes on the form, \begin{eqnarray} [ Z^n]=Z_0^n\int D\Psi D Qe^{-F_{\rm eff}[Q,\Psi]} \end{eqnarray} where the effective free energy \begin{eqnarray}\label{free} F_{\rm eff}[\Psi, Q]&=&\sum_{a,k}\int_0^\beta d\tau\Psi^a_{\mu}(k,\tau) \cdot(\Psi^a_{\mu}(k,\tau))^\ast +\sum_{a,b,k,k'} Q_{\mu\nu}^{ab}(k,k',\tau,\tau')[Q_{\mu\nu}^{ab}(k,k',\tau,\tau')]^\ast\nonumber\\ &&-\ln\left[\langle \widehat{T}\exp\left(2\int_0^\beta d\tau\sum_{a,k}\sqrt{J_0(k)}\Psi^a_\mu(k,\tau) S^a_\mu(-k,\tau)\right.\right.\nonumber\\ &&\left.\left.+\sum_{a,b,k,k'}\int_0^\beta\int_0^\beta d\tau d\tau'\sqrt{2J(k)J(k')} Q^{ab}_{\mu\nu}(k,k',\tau,\tau')S_\mu^a(k,\tau)S_\nu^b(-k',\tau')\right)\rangle_0 \right] \end{eqnarray} is now a functional of the auxiliary fields $Q$ and $\Psi$ which appear upon use of the Hubbard-Stratonovich transformation to decouple the quartic spin term proportional to $J^2(k)$ and the quadratic term scaling as $J_0(k)$, respectively. In writing this expression, we used the Einstein convention where repeated spin (but not replica) indices are summed over, $Z_0=Tr[\exp(-\beta H_0)]$, and \begin{eqnarray} \langle A\rangle_0=\frac{1}{Z_0^n}Tr\left(e^{-\beta H_0^{\rm eff}} A\right). \end{eqnarray} The fields $Q$ and $\Psi$ play fundamentally different roles. The proportionality of the $\Psi$ field to the mean of the distribution implies that this field determines the ordering transition (superconducting for M=2 or ferromagnetic for M=1). This can be seen immediately upon differentiating the free energy with respect to $\Psi$. The self-consistent condition is that \begin{eqnarray} \Psi^a_\mu(k,\tau)=\langle S^a_\mu(k,\tau)\rangle. \end{eqnarray} Hence, a non-zero value of $\Psi^a_\mu(k,\tau)$ implies ordering. 
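The replica identity used above can likewise be illustrated numerically for a toy problem in which the quenched average is easy to evaluate directly. In the sketch below the `disorder' is a Gaussian random field $h$ acting on a single Ising spin, for which $Z(h)=2\cosh(\beta h)$; the field strength and sample size are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0
h0, dh = 0.3, 1.0                      # mean and width of the random field (illustrative)
h = rng.normal(h0, dh, size=200000)    # disorder samples

Z = 2.0 * np.cosh(beta * h)            # partition function of one spin in field h
lnZ_avg = np.log(Z).mean()             # [ln Z], the quenched average

for n in [1.0, 0.5, 0.1, 0.01, 0.001]:
    replica_estimate = ((Z**n).mean() - 1.0) / n     # ([Z^n] - 1)/n
    print(f"n = {n:6.3f}   ([Z^n]-1)/n = {replica_estimate:.4f}")
print(f"[ln Z] = {lnZ_avg:.4f}")
\end{verbatim}
As $n\rightarrow 0$ the replica expression approaches the quenched average $[\ln Z]$, which is the content of the identity above.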
Since a non-zero $\Psi^a_\mu$ signals ordering, $\Psi$ functions as the order parameter for the ferromagnetic or superconducting phase within Landau theory. Likewise, differentiation of the free energy with respect to $Q$ reveals that \begin{eqnarray} Q_{\mu\nu}^{ab}(k,k',\tau,\tau')=\langle S_\mu^a(k,\tau)S_\nu^b(k',\tau')\rangle \end{eqnarray} is the self-consistency condition for the $Q$ matrices. For quantum spin glasses, it is the diagonal elements of the Q-matrix \begin{eqnarray} D(\tau-\tau')=\lim_{n\rightarrow 0}\frac{1}{Mn}\langle Q^{aa}_{\mu\mu}(k,k',\tau,\tau')\rangle \end{eqnarray} in the limit that $|\tau-\tau'|\rightarrow\infty$ that serves as the effective Edwards-Anderson spin-glass order parameter\cite{sachdev,bm} within the Landau theory. A disclaimer is appropriate here as $D$ is non-zero in the spin-glass as well as in the paramagnetic phase. However, its behaviour is sufficiently different in the three phases: in the paramagnetic and ferromagnetic (superconducting) phases, as we will show, $D(\omega)$ still has the form $\sqrt{\omega^2+\Delta^2}$. The gap, $\Delta$, vanishes at the transition to the spin-glass phase giving rise to the long-time behavior of $D(\tau)$. Before we analyze the form of the gap, we must obtain the effective Landau action. The goal here is to obtain a polynomial functional of $Q$ and $\Psi$ from which a saddle point analysis can be performed. We proceed in the standard way and perform a cumulant expansion on the free energy. For the case of zero-mean, this procedure is documented quite closely in RSY. In addition to the terms containing the $Q$ matrices studied by RSY, the non-zero mean case will contain powers of $\Psi^a_\mu$ as well as cross terms. The resultant action must contain, of course, only even powers of $\Psi_\mu^a$. For an analysis of the bi-critical point, it is sufficient to retain quadratic and quartic terms in $\Psi_\mu^a$. Terms of this kind are completely analogous to those derived previously by Doniach\cite{doniach}. Of the cross terms, the simplest are of the form $\Psi^a_\mu\Psi_\nu^bQ_{\mu\nu}^{ab}$, $\Psi^a_\mu\Psi_\nu^aQ_{\mu\nu}^{aa}$, and $\Psi_\mu^a\Psi_\mu^aQ_{\nu\nu}^{aa}$. We have confirmed explicitly that retention of the latter two terms in which only one replica index occurs leads only to the renormalization of various coupling constants and minor modification of the phase boundaries near the bi-critical region. Hence, we do not consider such terms. Likewise, we do not retain terms of the form $Q^{aa}Q^{bb}$ because such terms vanish in the limit $n\rightarrow 0$. 
We find then that the effective action \begin{eqnarray}\label{action} \cal{A}&=&\frac{1}{t}\int d^dx\left\{\frac{1}{\kappa}\int d\tau\sum_a \left(r+\frac{\partial}{\partial\tau_1}\frac{\partial}{\partial\tau_2} \right)Q_{\mu\mu}^{aa}(x,\tau_1,\tau_2)|_{\tau_1=\tau_2=\tau}\right.\nonumber\\ &&+\frac12\int d\tau_1 d\tau_2\sum_{a,b}\left[\nabla Q^{ab}_{\mu\nu}(x,\tau_1,\tau_2)\right]^2\nonumber\\ &&-\frac{\kappa}{3}\int d\tau_1 d\tau_2 d\tau_3\sum_{a,b,c}Q^{ab}_{\mu\nu}(x,\tau_1,\tau_2) Q^{bc}_{\nu\rho}(x,\tau_2,\tau_3)Q^{ca}_{\rho\mu}(x,\tau_3,\tau_1)\nonumber\\ &&\left.+\frac12\int d\tau\sum_a\left[uQ^{aa}_{\mu\nu}(x,\tau,\tau)Q^{aa}_{\mu\nu}(x,\tau,\tau) +vQ^{aa}_{\mu\mu}(x,\tau,\tau)Q^{aa}_{\nu\nu}(x,\tau,\tau)\right]\right\}\nonumber\\ &&+\frac{1}{2g}\int d^dx\left\{\int d\tau \sum_a\Psi_\mu^a(x,\tau) \left[-\gamma-\frac{\nabla^2}{2}\right]\Psi_\mu^a(x,\tau)\right.\nonumber\\ &&\left.+\int d\tau\sum_a\frac{\partial}{\partial\tau_1}\Psi_\mu^a(x,\tau_1) \frac{\partial}{\partial\tau_2}\Psi_\mu^a(x,\tau_2)|_{\tau_1=\tau_2=\tau} +\frac{\zeta}{2}\int d\tau\sum_a\left[\Psi_\mu^a(x,\tau)\Psi_\mu^a(x,\tau)\right]^2\right\}\nonumber\\ &&-\frac{1}{\kappa t g}\int d^d x\int d\tau_1 d\tau_2\sum_{a,b}\Psi_\mu^a(x,\tau_1) \Psi^b_\nu(x,\tau_2)Q_{\mu\nu}^{ab}(x,\tau_1,\tau_2)+\cdots \end{eqnarray} contains three types of terms. 1) The terms that depend purely on the Q-matrices, as in the terms in the first curly bracket in Eq. (\ref{action}), have been studied previously by RSY in the context of the quantum spin-glass/paramagnet transition. 2) The $\Psi$-dependent terms in the second curly bracket give rise to the ferromagnet or superconducting phases. They are controlled by the parameter $\gamma$. 3) The competition between these two transitions is mediated by the last term in Eq. (\ref{action}). To reiterate, other cross-terms exist--for example the term noted previously. However, such terms have no bearing on the critical region. It suffices then to truncate the action at the level of Eq. (\ref{action}). The last term in Eq. (\ref{action}) bears a strong resemblance to the term that appears in the Landau action for the zero-mean problem in the presence of a magnetic field, $h$. In fact, we can obtain the corresponding term from Eq. (\ref{action}) by the transformation $\Psi^a_\mu\rightarrow h\delta_{\mu,1}\sqrt{\kappa g/2t}$. This mapping is not unexpected. In the non-zero mean problem, ordered spins create an effective ``magnetic field'' that acts as a source term for the $Q$-matrices. This analogy is particularly powerful and serves as a useful check as to the validity of our saddle-point equations. Further, this mapping is true even in the classical case. A word on the coupling constants is in order. The parametrization of the action in terms of the coupling constants $\kappa$, $t$, and $g$ was obtained by appropriately rescaling the fields $Q$ and $\Psi$ as well as the space and time coordinates. Fundamentally, $g$ is a function of $E_C$ and $J_0$, while $\kappa$ and $t$ are functions of $E_C$ and $J$ only. $u$, $v$, and $\zeta$ are related to a four-point spin correlation function evaluated in the long-time limit as discussed by RSY. In the zero-mean case, the phase diagram demarcating spin-glass and paramagnetic stability is determined by the parameter $r$ which determines the strength of the quantum fluctuations. When these fluctuations exceed a critical value, that is $r>r_c$, a transition to a paramagnetic phase occurs\cite{sachdev}. 
In the problem at hand, two parameters, $r$ and $\gamma$, determine the phase diagram. The coupling constant, $\gamma$, is directly related to the parameter $E_C/J_0$ and $E_C/J$ as well. The latter dependence arises as a result of the removal of the quadratic term, \begin{eqnarray} \int d^dx d\tau_1d\tau_2\sum_{a,b}[Q_{\mu\nu}^{ab}(x,\tau_1,\tau_2)]^2, \end{eqnarray} by the transformation\cite{sachdev} $Q\rightarrow Q-C\delta^{ab}\delta_{\mu\nu}\delta(\tau_1-\tau_2)$. From microscopic considerations, it follows that $\kappa^2 t/2=1$. This is an important simplification because we will show that the ferromagnet/paramagnet boundary is determined by the line $\gamma=2\Delta/\kappa^2 t=\Delta$. \section{Saddle Point Analysis: Phase Diagram} In terms of the frequency-dependent order parameters, \begin{eqnarray} \Psi^a_\mu(k,\omega)=\frac{1}{\beta}\int_0^\beta d\tau \Psi_\mu^a(k,\tau) e^{-i\omega\tau} \end{eqnarray} and \begin{eqnarray} Q_{\mu\nu}^{ab}(k_1,k_2,\omega_1,\omega_2)&=&\frac{1}{\beta^2}\int_0^\beta\int_0^\beta d\tau_1 d\tau_2 Q_{\mu\nu}^{ab}(k_1,k_2,\tau_1,\tau_2) e^{-i\omega_1\tau_1-i\omega_2\tau_2}, \end{eqnarray} the three phases in our problem form under the following conditions. For the paramagnetic phase near the bi-critical point, the saddle-point equations are satisfied when $\Psi^a_\mu=0$ but as RSY have shown \begin{eqnarray} Q_{\mu\nu}^{ab}(k,\omega_1,\omega_2)=(2\pi)^d\beta\delta_{\mu\nu}\delta^{ab}\delta^d(k) \delta_{\omega_1+\omega_2,0}D(\omega_1). \end{eqnarray} We have retained the ansatz for the $Q$-matrix used by RSY for the paramagnetic phase. Similarly, the ordering parameter also vanishes in the spin-glass phase, $\Psi_\mu=0$, and for the replica-symmetric solution, \begin{eqnarray} Q_{\mu\nu}^{ab}(k,\omega_1,\omega_2)=(2\pi)^d\delta^d(k)\delta_{\mu\nu}\left[\beta D(\omega_1) \delta_{\omega_1+\omega_2,0}\delta^{ab}+\beta^2\delta_{\omega_1,0}\delta_{\omega_2,0}q^{ab} \right]. \end{eqnarray} The possibility of a replica broken-symmetry solution will also be explored by including terms of $O(Q^4)$ in the Landau action. In the ordered phase (that is, ferromagnetic or superconducting) both $\Psi$ and $Q$ are nonzero. As ferromagnetism has generally been studied with a frequency-independent order parameter, we explore a static ansatz of the form \begin{eqnarray}\label{psi} \Psi_\mu^a(k,\omega)=(2\pi)^d\delta^d(k)\beta \delta_{\omega,0}\delta_{\mu,1}\psi \end{eqnarray} for $\Psi^a_\mu$ and \begin{eqnarray}\label{qm} Q_{\mu\nu}^{ab}(k,\omega_1,\omega_2)=(2\pi)^d\delta^d(k)\delta_{\mu\nu}\left[\beta\bar{D}(\omega_1) \delta_{\omega_1+\omega_2,0}\delta^{ab}+\beta^2\tilde{q}\delta_{\omega_1,0} \delta_{\omega_2,0}\delta^{ab} +\beta^2\delta_{\omega_1,0}\delta_{\omega_2,0}q^{ab} \right] \end{eqnarray} for the Q-matrices. Our explicit inclusion of $\tilde{q}$ as the $\omega=0$ diagonal element of the Q-matrices implies that we can redefine $D(\omega)=\bar{D}(\omega)+\beta \tilde{q}\delta_{\omega,0}$. Consequently, we can set $\bar{D}(\omega=0)=0$ and assume that $q^{ab}$ is purely off-diagonal. The replica-symmetric solution corresponds to $q^{ab}=q$ for all $a\ne b$. Initially, we will explore only this case. \subsection{M=1: Ferromagnetic order} We now specialize to the $M=1$ ferromagnetic case as crucial differences can occur with the analysis for $M>1$. If we substitute Eqs. 
(\ref{psi}) and (\ref{qm}) into the Landau action, we obtain a free-energy density of the form \begin{eqnarray}\label{fed} \frac{\cal F}{n}&=&\frac{1}{\beta\kappa t}\sum_{\omega\ne 0}(\omega^2+r)\bar{D}(\omega)\ +\frac{r\tilde{q}}{\kappa t}-\frac{\kappa}{3\beta t}\sum_{\omega\ne 0}\bar{D}^3(\omega)\nonumber\\ &&+\frac{u+v}{2t}\left(\tilde{q}+\frac{1}{\beta}\sum_{\omega\ne 0}\bar{D}^2(\omega)\right)^2 +\frac{1}{2g}\left(-\gamma\psi^2+\frac{\zeta \psi^4}{2}\right)\nonumber\\ &&-\frac{\kappa\beta^2}{3t}\left(\tilde{q}^3+3\tilde{q}\frac{Tr q^2}{n}+\frac{Tr q^3}{n}\right) -\frac{\beta\psi^2}{\kappa g t}\left(\tilde{q}+\frac{\sum_{ab} q^{ab}}{n}\right). \end{eqnarray} Formally, the free-energy density as defined here is the disorder-averaged free energy per replica per spin component. The last term in Eq. (\ref{fed}) arises from the cross term and generates off-diagonal components of the Q-matrix. For $\psi=0$, this term is absent and we generate the solution of RSY. We obtain the saddle point equations by differentiating Eq. (\ref{fed}) with respect to $q$, $\tilde q$, $\bar D$, and $\psi$ in the $n\rightarrow 0$ limit. All of these quantities can be simplified once the appropriate gap parameter, \begin{eqnarray} \Delta^2=r+\kappa(u+v)\left(\tilde{q}+ \frac{1}{\beta}\sum_{\omega\ne 0}\bar{D}(\omega)\right) \end{eqnarray} is identified. The difference with the corresponding quantity in the work of RSY is the presence of the $\tilde{q}$ term in the gap which includes the contribution arising from the cross term in the Landau action. From the derivative equation with respect to $\bar D(\omega)$, we find that \begin{eqnarray} \bar{D}(\omega)=-\frac{1}{\kappa}\sqrt{\omega^2+\Delta^2}. \end{eqnarray} The constraint, $\Delta\ge 0$ is crucial for the stability of the free energy density and the minus sign in the above expression ensures that $D(\tau)>0$. The saddle-point equations for $q$ and $\tilde{q}$ yield that \begin{eqnarray} q=\frac{\psi^2}{2\kappa g\Delta} \end{eqnarray} and \begin{eqnarray} \tilde{q}=\frac{\psi^2}{2\kappa g\Delta}-\frac{\Delta}{\kappa \beta}. \end{eqnarray} As expected, these equations form the basis for the bulk of our analysis and are identical to the RSY saddle-point equations with $h$ replaced by $\psi$. Despite this mapping, $h$ and $\psi$ do serve different roles in the zero and non-zero mean theories. In the former, $h$ is an external adjustable scalar quantity without any critical properties, whereas $\psi$ plays the role of an order parameter in the non-zero mean case and must be treated on equal footing with $Q$. The saddle-point equation for $\psi$ \begin{eqnarray} \psi\left[-\gamma+\Delta+\zeta \psi^2\right]=0 \end{eqnarray} implies that aside from the trivial solution $\psi=0$, the non-trivial solution corresponds to \begin{eqnarray}\label{order} \psi^2=\frac{1}{\zeta}\left(\gamma-\Delta\right). \end{eqnarray} In obtaining this equation, we used the fact that $\kappa^2 t/2=1$. If we specialize to low temperatures, that is low relative to the gap ($T<\Delta$), the sum in the gap equation can be evaluated \begin{eqnarray}\label{sum} \frac{1}{\beta}\sum_\omega\sqrt{\omega^2+\Delta^2}=\frac{\Lambda_\omega^2}{2\pi}+\frac{\Delta^2}{4\pi}\ln\frac{\Lambda_\omega^2}{\Delta^2} \end{eqnarray} and the resultant self-consistent equation takes the form \begin{eqnarray}\label{gap} \Delta^2=r-r_c-\frac{u+v}{4\pi}\Delta^2\ln\frac{\Lambda_\omega^2}{\Delta^2} +\frac{u+v}{2g\zeta}\left(\frac{\gamma}{\Delta}-1\right). 
\end{eqnarray} We have introduced \begin{eqnarray}\label{rc} r_c=(u+v)\frac{\Lambda_\omega^2}{2\pi} \end{eqnarray} and the cut-off $\Lambda_\omega$ is determined by the energy scale at which zero-point quantum fluctuations become important. The natural cut-off for such fluctuations in this problem is $E_C$, the charging energy. The essential physics of the bi-critical point is contained in Eqs. (\ref{order}) and (\ref{gap}). From Eq. (\ref{order}), it is clear that the ferromagnetic phase exists only when $\gamma\ge\Delta$. This ensures that $\psi^2>0$ in the ferromagnetic phase. The vanishing of the order parameter $\psi$ signifies a termination of the ferromagnetic phase. For $\gamma\ne 0$ and $\Delta\ne 0$, the only line along which $\psi^2=0$ corresponds to $\gamma=\Delta$. If we substitute this condition into the gap equation, we find that the critical line separating the ferromagnet from the paramagnet at low temperatures ($T\ll\Delta$) is given by \begin{eqnarray}\label{gamline} \gamma=\Delta=\left(\frac{4\pi(r-r_c)}{(u+v) \ln\frac{\Lambda_\omega^2}{r-r_c}}\right)^{\frac{1}{2}} \end{eqnarray} and is depicted in Fig. (\ref{fig1}). In obtaining this equation, we assumed that $|r-r_c|\ll 1$ and $\gamma\ll 1$. The bi-critical point corresponds to $r=r_c$ and $\gamma=0$. At this point, the gap vanishes as does $\psi$. The essential non-trivial nature of this result is the non-analytic dependence of $\gamma$ on the quantum fluctuation parameter, $r-r_c$. In the vicinity of the bi-critical point, the ferromagnetic phase is partitioned into distinct regimes determined by the magnitudes of $\Delta$ and $\psi$. This behavior is shown in Fig. (\ref{fig1}). The regions $O1$ and $O2$ are distinguished by their distance from the line $\gamma=\Delta$. The transition between $O1$ and $O2$ occurs when \begin{eqnarray}\label{gaml2} \gamma-\left(\frac{4\pi(r-r_c)}{(u+v) \ln\frac{\Lambda_\omega^2}{r-r_c}}\right)^{\frac{1}{2}}\approx \frac{(r-r_c)^{\frac32}}{\ln\frac{\Lambda_\omega^2}{r-r_c}} \end{eqnarray} Because $r-r_c$ is much less than unity, the algebraic dependence of the form $(r-r_c)^{3/2}$ in Eq. (\ref{gaml2}) implies that the $O2$ region is quite narrow. Within this narrow region, the gap is well approximated by Eq. (\ref{gamline}) and the value of $\psi$ is given by \begin{eqnarray} \psi^2=\frac{1}{\zeta}\left(\gamma-\left(\frac{4\pi(r-r_c)}{(u+v) \ln\frac{\Lambda_\omega^2}{r-r_c}}\right)^{\frac{1}{2}}\right)\quad\quad {\rm region}\quad O2 \end{eqnarray} In the bulk of the ferromagnetic phase, region $O1$, the $\gamma$-dependent terms must be retained in Eq. (\ref{gap}) to accurately describe the gap, $\Delta=\gamma-\zeta\psi^2$, with the ordering parameter given by \begin{eqnarray}\label{psi2} \psi^2=\frac{2g}{u+v}\left[\frac{u+v}{2\pi} \gamma^3\ln\frac{\Lambda_\omega}{\gamma}-\gamma(r-r_c)\right]\quad\quad {\rm region}\quad O1 \end{eqnarray} Hence, at the point $r=r_c$, we find a strong non-analytic dependence, $\psi\propto\gamma^{3/2}\sqrt{\ln\Lambda_\omega/\gamma}$, on the coupling constant $\gamma$. This is particularly important because it signifies that even at the mean-field level, deviations from the standard square-root dependence are present in the current theory. This result is fundamentally tied to the logarithmic dependence induced by the frequency summations and is caused by the zero-point quantum fluctuations. 
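The critical line can also be obtained numerically. On the boundary $\gamma=\Delta$ the last term of Eq. (\ref{gap}) drops out, and the remaining transcendental equation is easily solved; the sketch below compares the numerical root with the asymptotic form of Eq. (\ref{gamline}) for a few values of $r-r_c$. The values of $u+v$ and $\Lambda_\omega$ are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

uv = 1.0          # u + v (illustrative)
Lam = 10.0        # frequency cutoff Lambda_omega (illustrative)

def gap_on_boundary(dr):
    """Solve Delta^2 = dr - (uv/4pi) Delta^2 ln(Lam^2/Delta^2), dr = r - r_c > 0."""
    f = lambda D: D**2 * (1.0 + uv / (4.0 * np.pi) * np.log(Lam**2 / D**2)) - dr
    return brentq(f, 1e-12, np.sqrt(dr))

for dr in [1e-4, 1e-3, 1e-2]:
    Delta = gap_on_boundary(dr)
    approx = np.sqrt(4.0 * np.pi * dr / (uv * np.log(Lam**2 / dr)))   # asymptotic form
    print(f"r-r_c = {dr:7.1e}   gamma = Delta = {Delta:.5f}   asymptotic = {approx:.5f}")
\end{verbatim}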
However, if we extrapolate our results to the regime $r-r_c\approx O(1)$, the second term in Eq.(\ref{psi2}) dominates and we do recover that $\psi\propto\gamma^{1/2}$ in agreement with the expectation from standard mean-field Landau theories. Paramagnetic and spin-glass behaviour obtain whenever $\gamma<\Delta$. In this regime, the non-trivial solution for $\psi$ no longer holds and $\psi=0$ is the only valid solution. In this limit, our solution for $\Delta$ is identical to that of RSY and all of their results are recovered. For example, consider the free energy density (in units of $\kappa^2 t/2=1$), \begin{eqnarray}\label{F0} \frac{\cal{F}}{n}&=&\frac{\Lambda_\omega^4}{2\pi}\left(\frac16+\frac{u+v}{8\pi}\right) -{\Lambda_\omega^2(r-r_c)\over 4\pi}-{(r-r_c)^2\over 4(u+v)}+{\gamma^2(r-r_c)\over 2(u+v)} -\frac{\gamma^4}{8\pi}\ln{\Lambda_\omega\over\gamma}+\cdots\nonumber\\ &=&{\cal F}_0-\frac{\gamma^4}{8\pi}\ln{\frac{\Lambda_\omega}{\gamma}}+ {\gamma^2(r-r_c)\over 2(u+v)}+\cdots \end{eqnarray} in region $O1$. This quantity is obtainable from Eq. (\ref{fed}) once the saddle point solutions for region $O1$ are used and only the leading terms in $\gamma$ and the cross-term are retained. When $\gamma=0$, this expression is identical to that of RSY in the spin-glass phase. As anticipated, the leading $\gamma$ dependence in the free-energy density is non-analytic. This behaviour originates from the non-analytic dependence of the order parameter $\psi$ on $\gamma$. We will see that this non-analyticity does not survive for $M>1$ above and below the Gabay-Toulouse line. Extending these results to finite temperatures simply requires the evaluation of the frequency summation in the gap equation for $T\gg\Delta$. Using the result in Eq. (2.13) of RSY, we obtain a self-consistent condition for the gap \begin{eqnarray}\label{eq40} \Delta^2=r-r_c(T)-(u+v)T\Delta-\frac{u+v}{2\pi}\Delta^2\ln \frac{\Lambda_{\omega}}{T}+\frac{u+v}{2g\zeta\Delta}(\gamma-\Delta) \end{eqnarray} in the regime $T\gg\Delta$ where we have defined $r_c(T)=r_c-(u+v)\pi T^2/3$. Recall that the transition between the ferromagnetic and paramagnetic phases occurs when $\gamma=\Delta$. Hence, in the units chosen here, this condition simplifies to $\gamma=\Delta(T)$. For $T\ll\sqrt{r-r_c(T)}$, the boundary for the paramagnetic/ferromagnetic state remains unchanged from the $T=0$ results discussed above. However, for $T\gg\Delta$, two distinct regimes \begin{eqnarray} \gamma=\Delta(T)=\left\{ \begin{array}{lll} \frac{r-r_c(T)}{\sqrt{(u+v)}T} & \quad \sqrt{r-r_c(T)}\ll T & \quad {\rm region \quad O3}\\ \sqrt{\frac{2\pi^2T^2}{3\ln\frac{\Lambda_\omega}{T}}} & \quad \sqrt{r-r_c}\ll T & \quad {\rm region \quad O4} \end{array}\right. \end{eqnarray} emerge depending on the magnitude of the thermal fluctuations. These regimes are depicted in Fig.(\ref{fig2}a). The crossover between these two regions occurs when $\gamma\propto\sqrt{r_c-r}$. In Fig. (\ref{fig2}a), $r_c-r>0$. The temperature \begin{eqnarray} T_0=\sqrt{\frac{3(r_c-r)}{\pi(u+v)}} \end{eqnarray} is denoted explicitly in Fig. (\ref{fig2}a) as this is the lowest temperature at which region $O3$ obtains. 
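The finite-temperature boundary can be treated in the same way. On the boundary $\gamma=\Delta(T)$ the last term of Eq. (\ref{eq40}) vanishes and the equation becomes quadratic in $\Delta$; the sketch below solves it for illustrative coupling constants and prints the temperature $T_0$ defined above.
\begin{verbatim}
import numpy as np

uv = 1.0            # u + v   (illustrative)
Lam = 10.0          # Lambda_omega (illustrative)
r_minus_rc = -1e-3  # r - r_c < 0, the case of Fig. 2a

T0 = np.sqrt(3.0 * (-r_minus_rc) / (np.pi * uv))
print(f"T_0 = {T0:.4f}")

def Delta_of_T(T):
    """Boundary gap from  a*Delta^2 + b*Delta = c  with gamma = Delta(T)."""
    a = 1.0 + uv * np.log(Lam / T) / (2.0 * np.pi)
    b = uv * T
    c = r_minus_rc + uv * np.pi * T**2 / 3.0   # = r - r_c(T)
    if c <= 0.0:
        return 0.0                              # below T_0: no boundary solution
    return (-b + np.sqrt(b**2 + 4.0 * a * c)) / (2.0 * a)

for T in [0.5 * T0, 1.1 * T0, 3.0 * T0, 10.0 * T0]:
    print(f"T = {T:.4f}   gamma = Delta(T) = {Delta_of_T(T):.5f}")
\end{verbatim}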
Immediately below region $O3$ where $\gamma-\Delta_0(T)\propto(r-r_c(T))^2/T$ and to the right of $O4$ where $\gamma-\Delta_0(T)\propto T^3/\sqrt{\ln\Lambda_\omega/T}$, a transition to a new region occurs in which the gap takes the form, \begin{eqnarray}\label{ftpsi} \Delta=\gamma-\frac{2g\zeta}{u+v}\left[{(u+v)\gamma^3\over 2\pi}\ln\frac{\Lambda_\omega}{T} +(u+v)\gamma^2T-(r-r_c(T))\gamma\right]=\gamma-\zeta\psi^2(T) \end{eqnarray} In this region, denoted as $O5$ in Fig. (\ref{fig2}a), as well as in regions $O3$ and $O4$, classical thermal fluctuations dominate. These regions can be construed as being quantum critical. Further away in region $O1$, the ferromagnetic phase is impervious to thermal fluctuations. The crossover to this regime occurs when $\gamma\approx O(T)$. This partition is the dashed line separating region $O5$ from $O1$. In Fig. (\ref{fig2}b), the corresponding phases are shown for $r-r_c>0$. The key difference with the $r-r_c<0$ regime is the absence of the spin-glass phase. Experimentally, the phase diagram has been measured for the random Ising spin system $LiHo_xY_{1-x}F_4$ at finite temperature. This system possesses all three phases discussed here. It then serves as a bench mark test of the phenomenological theory we have developed. While the overall features of the experimentally-determined phase diagram are similar to that shown in Fig. (\ref{fig2}a), it is worth looking closely at the form of the boundaries between the three phases. Particularly striking in the experimentally-determined phase diagram\cite{rosenbaum} is the close to linear dependence of the PM/FM phase boundary away from the bi-critical region but a non-linear dependence on the doping level in the vicinity of the bi-critical region. This dependence mirrors closely the behavior of the PM/FM finite temperature phase boundary shown in Fig. (\ref{fig2}a). While a quantitative comparison cannot be made because of the phenomenological nature of the coupling constants used in this model, the agreement with experiment is sufficiently striking and serves to justify the applicability of the model used here. \subsection{$M>1$} We consider now explicitly $M>1$. For the problem at hand, the ordered phase for $M=2$ corresponds to a superconductor. Analogous isotropic solutions can be obtained for $M>1$ with the transformations $u+v\rightarrow u+Mv$. However, because non-zero mean generates spontaneously an effective magnetization, there exists a possibility that the different spin components of the replica Q-matrices might acquire fundamentally different values as first proposed by Gabay and Toulouse\cite{gt}. In the zero-mean case, this happens only when a magnetic field is present. However, in this case, the Gabay-Toulouse (GT) line exists for all $M>1$ as a result of the spontaneously-generated magnetization. To explore the possibility of a GT line, we must generalize the ansatz for the Q-matrices to explicitly break the symmetry between the spin-components of $Q$. The simplest way of doing this is to divide the spin components of the Q-matrix into longitudinal, $\mu=\nu=1$ and transverse, $\mu=\nu\ne 1$ sectors. Hence, in Eq. (\ref{qm}), we introduce the parameters, $q_L^{ab}$, $\tilde{q}_L$, and $\bar {D}_L(\omega)$ for the longitudinal $\mu=\nu=1$ component and $q_T^{ab}$, $\tilde{q}_T$, and $\bar {D}_T(\omega)$ for the transverse components $\mu>1$. At the replica-symmetric level, both $q^{ab}_L$ and $q_T^{ab}$ are constants independent of the matrix label, $ab$. 
We will call these constants, $q_L$ and $q_T$, respectively. The resultant expression for the free-energy \begin{eqnarray}\label{fm1} \frac{\cal{F}}{n}&=&\frac{1}{\beta\kappa t}\sum_{\omega\ne 0} (\omega^2+r)\bar D_L(\omega)+\frac{M-1}{\beta\kappa t}\sum_{\omega\ne 0}(\omega^2+r)\bar{D}_T(\omega)\nonumber\\ && +\frac{r}{\kappa t}\tilde{q}_L+(M-1)\frac{r\tilde{q}_T}{\kappa t} -\frac{\kappa}{3\beta t}\sum_\omega\left(\bar{D}_L^3(\omega)+(M-1)\bar{D}_T^3(\omega)\right)+\nonumber\\ &&+\frac{u}{2t}\left\{\left[\tilde{q}_L+\frac{1}{\beta}\sum_{\omega\ne 0}\bar{D}_L(\omega)\right]^2+(M-1)\left[\tilde{q}_L+\frac{1}{\beta}\sum_{\omega\ne 0}\bar{D}_T(\omega)\right]^2\right\}\nonumber\\ && +\frac{v}{2t}\left\{\tilde{q}_L+(M-1)\tilde{q}_T+\frac{1}{\beta}\sum_{\omega\ne 0}\left(\bar{D}_L(\omega)+(M-1)\bar{D}_T(\omega)\right)\right\}^2\nonumber\\ &&-\frac{\kappa\beta^2}{3t}(\tilde{q}^3_L-3\tilde{q}_Lq_L^2+2q_L^3)-(M-1)\frac{\kappa\beta^2}{3t} (q_T^3-3\tilde{q}_Tq_T^2+2q_T^3)\nonumber\\ &&-\frac{\beta\psi^2}{\kappa g t}(\tilde{q}_L-q_L)+\frac{1}{2g}\left(-\gamma\psi^2+\frac{\zeta \psi^4}{2}\right) \end{eqnarray} is a generalization of Eq. (\ref{fed}) to an anisotropic system. The explicit factor of $M-1$ arises from the separation into transverse and longitudinal components. If we approach the GT line from below, we find that the relevant saddle-point equations are \begin{eqnarray}\label{mg1spe} \Delta_L^2&=&r+(u+v)\kappa[\tilde{q}_L+\frac{1}{\beta}\sum_{\omega\ne 0}\bar{D}_L(\omega)]+(M-1)v\kappa[\tilde{q}_T+\frac{1}{\beta}\sum_{\omega\ne 0}\bar{D}_T(\omega)]\nonumber\\ \bar{D}_L(\omega)&=&-\frac{1}{\kappa}(\omega^2+\Delta_L^2)^{\frac12}\nonumber\\ \bar{D}_T(\omega)&=&-\frac{1}{\kappa}|\omega|\\ q_L&=&\frac{\psi^2}{2\kappa g\Delta_L}\nonumber\\ \tilde{q}_L&=&\frac{\psi^2}{2\kappa g\Delta_L}-\frac{\Delta_L} {\kappa\beta}\nonumber\\ 0&=&\psi\left(-\gamma+\zeta\psi^2+\Delta_L\right)\nonumber\\ q_T&=&\tilde{q}_T=\frac{1}{\kappa}\left\{\frac{1}{\beta}\sum_\omega|\omega|+ \frac{1}{u+(M-1)v}\left(\frac{v}{\beta} \sum_\omega\sqrt{\omega^2+\Delta_L^2}-r-\frac{v\psi^2}{2g\Delta_L} \right)\right\}\nonumber \end{eqnarray} which are a direct generalization of the $M=1$ equations to the anisotropic system. We are particularly interested in the solution in the $\gamma$-$r$ plane where $q_T=0$. This demarcates the GT line. Below the GT line, $\tilde{q}_T=q_T\ne 0$, while above, the transverse replica off-diagonal component of $Q$ vanishes. Note this state of affairs does not occur unless $\psi\ne 0$. If we substitute the non-trivial solution for $\psi$ into the expression for $\Delta_L^2$, we find that within logarithmic accuracy at zero temperature, we recover the result obtained previously for $M=1$ but with $u+v\rightarrow u+Mv$. The phase diagram hence is identical to that shown in Fig. (\ref{fig1}). However, a new region, $\tilde{O}1$, appears. This is illustrated in Fig. (\ref{fig3}a). To find the line demarcating this region we must solve for the transverse replica off-diagonal component of $Q$. After several manipulations of the set of equations in Eq. (\ref{mg1spe}), we find that \begin{eqnarray} q_T=\frac{1}{\kappa(u+Mv)}\left[r_c-r-\frac{v}{u}\Delta_L^2\right]. \end{eqnarray} If we use the fact that $\Delta_L\approx\gamma$, we find that the GT line occurs when \begin{eqnarray}\label{gtline} \gamma=\sqrt{\frac{u}{v}(r_c-r)}. \end{eqnarray} The phase diagram depicting this line at zero temperature is shown in Fig. (\ref{fig3}a). 
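The location of the zero-temperature GT line can be checked directly from the expression for $q_T$ above. The sketch below evaluates $q_T$ with $\Delta_L\approx\gamma$ for illustrative coupling constants and confirms that it vanishes at $\gamma=\sqrt{u(r_c-r)/v}$, Eq. (\ref{gtline}).
\begin{verbatim}
import numpy as np

u, v, M, kappa = 1.0, 0.5, 2.0, 1.0      # illustrative couplings (M = 2: superconductor)
rc_minus_r = 1e-2                        # r_c - r > 0

def q_T(gamma):
    """Transverse replica off-diagonal element below the GT line, with Delta_L ~ gamma."""
    val = (rc_minus_r - (v / u) * gamma**2) / (kappa * (u + M * v))
    return max(val, 0.0)                 # q_T = 0 above the GT line

gamma_GT = np.sqrt(u * rc_minus_r / v)   # GT line
print(f"GT line at gamma = {gamma_GT:.4f}")
for gamma in np.linspace(0.0, 1.5 * gamma_GT, 7):
    print(f"gamma = {gamma:.4f}   q_T = {q_T(gamma):.5f}")
\end{verbatim}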
In the region labeled $\tilde{O}1$, $q_T\ne 0$ and $q_L\ne 0$, whereas in $O1$ only $q_L\ne 0$. Hence, we have identified the zero-temperature GT line. At finite temperature, the generalization of Eq. (\ref{gtline}) is simply \begin{eqnarray}\label{ftgt} \gamma=\sqrt{\frac{u}{v}\left[r_c(T)-r\right]}=\sqrt{\frac{u}{v}\left[r_c-r-(u+Mv)\frac{\pi T^2}{3}\right]}. \end{eqnarray} Hence, the GT line is now a surface in the $\gamma$, $r$, and $T$ space, a slice of which is shown in Fig. (\ref{fig3}b). At the point $\gamma=0$, we recover the isotropic result that \begin{eqnarray} q_T=q_L=\frac{r_c(T)-r}{\kappa(u+Mv)} \end{eqnarray} in the spin-glass phase. We now consider the region above the GT line. This region was not analyzed by RSY. However, it is of considerable interest because, although $q_T$ vanishes there, the transverse component of the $Q$-matrix becomes gapped and is hence significantly different in character from the longitudinal one. This can be seen immediately by setting $q_T=\tilde{q}_T=0$ in Eq. (\ref{fm1}) and differentiating with respect to $\bar{D}_T(\omega)$. From this operation, we find that contrary to the ungapped transverse component of $Q$ below the GT line, \begin{eqnarray}\label{dt} \bar{D}_T(\omega)=-\frac{1}{\kappa}(\omega^2+\Delta_T^2)^\frac12 \end{eqnarray} with \begin{eqnarray} \Delta_T^2=r+\frac{u+(M-1)v}{\beta}\sum_\omega\sqrt{\omega^2+\Delta_L^2}+ v\left(\frac{\psi^2}{2g\Delta_L}-\frac{1}{\beta} \sum_\omega\sqrt{\omega^2+\Delta_L^2}\right) \end{eqnarray} above the GT line. The corresponding expression for $\Delta_L$ is easily obtained from the first equation in Eq. (\ref{mg1spe}) by setting $q_T=0$. The ferromagnetic phase above the GT line can be divided into two regions, $O1$ and $O2$, which are now distinguished by the relative magnitudes of $\Delta_L$ and $\Delta_T$. In the region $O2$, which is completely analogous to the corresponding region in the $M=1$ case, $\Delta_L$ and $\Delta_T$ are almost equal and are given by Eq. (\ref{gamline}) with $u+v \rightarrow u+Mv$. The condition for crossover to $O1$ in which $\Delta_L$ and $\Delta_T$ are somewhat different in magnitude is given by Eq. (\ref{gaml2}). This conclusion is reached by manipulating the system of equations in Eq. (\ref{mg1spe}) with $q_T=\tilde{q}_T=0$ and $\bar{D}_T(\omega)$ given by Eq. (\ref{dt}). The resultant expression, \begin{eqnarray}\label{eq53} v\Delta_L^2-(u+v)\Delta_T^2=-ur+\frac{u(u+Mv)}{\beta} \sum_\omega\sqrt{\omega^2+\Delta_T^2}, \end{eqnarray} which is valid at any temperature, contains both the transverse and longitudinal gaps. Within the approximation that $\Delta_L\approx\gamma$, we obtain that at zero temperature in $O1$, \begin{eqnarray}\label{eq54} \Delta_T=\left(\frac{4\pi(v\gamma^2+u(r-r_c))}{u(u+Mv) \ln\frac{\Lambda_\omega^2}{(v\gamma^2+u(r-r_c))}}\right)^{\frac{1}{2}} \end{eqnarray} From this equation it is immediately clear that $\Delta_T$ is logarithmically smaller than $\Delta_L$. Also, we easily recover the GT line, $\gamma=\sqrt{u(r_c-r)/v}$, simply by solving $\Delta_T=0$. At finite temperature, we can formally distinguish three limiting cases: 1) $\Delta_L\gg T$ and $\Delta_T\gg T$, 2) $\Delta_L\gg T$ and $\Delta_T\ll T$, and 3) $\Delta_L\ll T$ and $\Delta_T\ll T$. The regime $\Delta_L\ll T$ and $\Delta_T\gg T$ does not exist as $\Delta_T$ is always less than $\Delta_L$. Case (1) is identical to the $T=0$ limit whereas cases (2) and (3) correspond to the high-temperature limit with respect to $\Delta_T$. 
To describe (2) and (3), we calculate the sum in Eq. (\ref{eq53}) in the high-temperature limit with the approximation that $\Delta_L\approx\gamma$. Within logarithmic accuracy, the resultant equation \begin{eqnarray} v\gamma^2-u(r_c(T)-r)=u(u+Mv)\left(\frac{\Delta_T^2}{2\pi} \ln\frac{\Lambda_\omega}{T}+T\Delta_T\right) \end{eqnarray} is similar in structure to Eq. (\ref{eq40}). Consequently, within cases (2) and (3), two distinct regimes denoted by $O5$ and $O5'$ and $O6$ and $O6'$, respectively, arise. In the unprimed regions, two conditions hold \begin{eqnarray} \sqrt{v\gamma^2-u(r_c(T)-r)}\ll T,\quad\Delta_T=\frac{v\gamma^2-u(r_c(T)-r)}{T}\quad O5\quad {\rm and} \quad O6 \end{eqnarray} By contrast, in the primed regions, \begin{eqnarray} \sqrt{v\gamma^2-u(r_c-r)}\ll T,\quad\Delta_T=\sqrt{\frac{2\pi T^2}{3\ln\frac{\Lambda_\omega}{T}}}\quad O5'\quad {\rm and} \quad O6'. \end{eqnarray} The transition between the primed and unprimed regions occurs when $\sqrt{v\gamma^2-u(r_c(T)-r)}\approx T/\ln\Lambda_\omega/T$, whereas the transition between $O6'$ and $O1$ occurs when $\sqrt{v\gamma^2-u(r_c-r)}\approx T$. The difference between regions $O5$ and $O5'$ and regions $O6$ and $O6'$ is that $T\ll\Delta_L$ in the latter whereas the opposite is true in the former. We have taken particular care in distinguishing the primed from the unprimed regions because they imply that the GT transition is identical in structure to the ordinary paramagnet/spin-glass transition described by RSY, except that only the transverse component of $Q$ is affected at the GT transition. In fact, regions $O5'$ and $O6'$ are quantum critical with respect to the GT transition while the temperature dependence of the transverse component is `classical' in regions $O5$ and $O6$. In $O1$, thermal fluctuations are subservient to quantum fluctuations for both transverse and longitudinal components of $Q$. In each of these regions, a key question that can be addressed is how the transverse gap renormalizes $\psi$ and $\Delta_L$. Consider first the regime $\gamma\gg T$. In this regime, we find that \begin{eqnarray}\label{gtt2} \Delta_L=\gamma-\zeta\psi^2(T)=\gamma-\frac{2g\zeta}{u+Mv}\left[{\gamma^3\over 2\pi}\ln\frac{\Lambda_\omega}{\gamma} +(r_c-r)\gamma-\frac{(M-1)v\gamma}{u(u+Mv)}\Delta_T^2\right]. \end{eqnarray} It follows immediately from Eq. (\ref{eq54}) that the transverse contribution to the longitudinal gap is logarithmically small. As a result, the order parameter $\psi$ is well described by its value given by Eq. (\ref{psi2}) with $u+v\rightarrow u+Mv$. Consider now the high-temperature regime. In this case, $\Delta_L$ is exactly given by Eq. (\ref{ftpsi}) plus the term proportional to $\Delta_T^2$ in Eq. (\ref{gtt2}). However, we checked explicitly that in every sub-regime, the correction due to $\Delta_T^2$ is sub-dominant to the leading terms. Consequently, the order parameter, $\psi$, and $\Delta_L$ are both unrenormalized by the transverse gap to the leading order. Using the solutions delineated in Eq. (\ref{mg1spe}), we can calculate the free-energy density below and above the GT line. 
Above and below the GT line, we find that the free-energy density at zero-temperature up to the leading $\gamma-$ dependent terms, \begin{eqnarray} \frac{\cal{F}}{n}={\cal F}_0(u+v\rightarrow u+Mv,\zeta)-\frac{(M-1)v\Lambda_\omega^2\gamma^2}{4\pi u}+\frac{\gamma^2(r-r_c)}{2u(u+Mv)}\left(1-\frac{(M-1)v}{u}\right)+\cdots, \end{eqnarray} contains the standard analogous contribution from the $M=1$ analysis as well as $q_T$ and $\Delta_T$-dependent terms arising from Eq. (\ref{fm1}). It is the contribution from the latter terms that results in a suppression of the $\gamma^4\ln\Lambda_\omega/\gamma$ as the leading $\gamma-$dependent terms when $r=r_c$. At the bi-critical point, we find that in contrast to the M=1 case, the leading term in the free energy density is analytic in the coupling constant, $\gamma$. This term is of the form, $\Lambda_\omega^2\gamma^2$. \section{Replica-Symmetry Breaking} The requisite\cite{gt} for a replica asymmetric solution within Landau theory of spin glasses is the presence of a $Q^{4th}$ term in the Landau action. We specialize to $M=1$ for simplicity. To facilitate such an analysis, we must extend the cumulant expansion of Eq. (\ref{free}) to the next order in perturbation theory. As RSY have performed such an analysis for the spin-glass phase, we focus on the ferromagnetic case. While several types of fourth-order terms occur, the most relevant is of the form, \begin{eqnarray} -\frac{y_1}{6t}\int d^d x\int d\tau_1d\tau_2\sum_{ab}\left[Q^{ab}(x,\tau_1,\tau_2)\right]^4 \end{eqnarray} This term will give rise to a $(q^{ab})^4$ contribution to the free-energy. Our focus is the resultant change in the free energy \begin{eqnarray}\label{deltaf} \frac{\Delta\cal{F}}{n}=-\frac{\kappa\beta^2}{3t}\left(Tr q^3+3\tilde{q}Tr q^2\right)-\frac{\beta\psi^2}{\kappa g t}\sum_{a,b} q^{a,b}-\frac{y_1\beta}{6t}\sum_{ab}(q^{ab})^4. \end{eqnarray} The presence of the $\psi^2$ term suggests that within the space of ultrametric functions\cite{parisi} $q(x)$ on the interval $0\le x\le 1$, we should choose an ansatz for $q$, \begin{eqnarray}\label{qs} q(s)= \left\{ \begin{array}{ll} q_0 & \quad 0<s<s_0 \\ \frac{\kappa\beta s}{2y_1} & \quad s_0<s<s_1 \\ q_1 & \quad s_1<s<1 \end{array}\right. \end{eqnarray} that has two distinct plateaus. This insight is based on an analogy with the replica broken-symmetry solution in the presence of a magnetic field. From continuity, we must have that $q_0=\kappa\beta s_0/2y_1$ and $q_1=\kappa\beta s_1/2y_1$. The constants $s_0$ and $s_1$ can be determined from the saddle-point equations for $\Delta\cal{F}$. Upon differentiating with respect to $q_1$, we find that $\tilde{q}=q_1-y_1q_1^2/\kappa\beta$. The corresponding equation for $q_0$ provides a relationship \begin{eqnarray} q_0=\left(\frac{3\psi^2}{4y_1\kappa g}\right)^\frac13 \end{eqnarray} between $q_0$ and $\psi$. Replica symmetry breaking occurs when $q_0< q_1$. To leading order in temperature, we can approximate $q_1\approx \tilde{q}$, where $\tilde{q}$ is given by \begin{eqnarray} \tilde{q}=\frac{\psi^2}{2\kappa g\Delta}. \end{eqnarray} Hence, the boundary demarcating the replica-symmetric solution is determined by \begin{eqnarray} \psi^\frac43=\left(\frac{6\kappa^2 g^2}{y_1}\right)^\frac13\Delta \end{eqnarray} which we obtain upon equating $q_0$ and $q_1$. If we use Eq. 
(\ref{psi2}) for $\psi$ which is valid in region $O1$ and use the fact that in this region $\Delta\approx \gamma$, we obtain \begin{eqnarray} \gamma=\frac{2 y_1(r_c-r)^2}{3\kappa^2 g(u+v)^2} \quad r<r_c \end{eqnarray} as the condition for replica symmetry breaking at low temperatures, $T \ll\gamma$. This condition for replica-symmetry breaking extends continuously to the spin-glass phase, in agreement with the work of RSY. The phase diagram illustrating the replica-broken symmetry region is depicted in Fig. (\ref{fig5}). At finite temperature, we use Eq. (\ref{ftpsi}) for $\psi$ and obtain a generalization, \begin{eqnarray}\label{rsbft} \gamma=\frac{2 y_1}{3\kappa^2 g}\left[\gamma T+ \frac{r_c-r}{u+v}\right]^2 \quad r<r_c, \end{eqnarray} of the replica-breaking condition which extends smoothly over to the zero-temperature condition. To estimate the strength of the replica-symmetry breaking, we define the effective broken ergodicity order parameter, \begin{eqnarray} \Delta_q=q_1-\int_0^1 q(s)ds. \end{eqnarray} If we substitute the expression for $q(s)$ and integrate, we obtain \begin{eqnarray} \Delta_q=\frac{y_1T}{\kappa}(q_1^2(T)-q_0^2(T)). \end{eqnarray} Because both $q_0$ and $q_1$ are finite at low temperatures, $\Delta_q\rightarrow 0$ as $T\rightarrow 0$. The weakness of the replica-symmetry breaking in the ferromagnetic phase is in accord with the weak symmetry breaking found by RSY in the spin-glass phase. Implicit in the replica-broken solution in Eq. (\ref{qs}) is the presence of many degenerate energy minima in the energy landscape. The weakness of replica symmetry breaking at low temperatures in this model within the ferromagnetic phase suggests that the ferromagnetic phase is energetically homogeneous. \section{Summary} We have constructed here a Landau theory near the bi-critical point for a ferromagnet, spin-glass, and paramagnet. The analogous analysis was also performed for $M>1$, in which case the ordered phase for $M=2$ is a superconductor. All transitions were found to be second order, in contrast to the assertion by Hartman and Weichman\cite{nzm} who claimed that the FM/PM transition was first order in the spherical limit. A key point of our analysis is the formal equivalence between the role of a non-zero mean and the presence of a magnetic field in the zero mean problem. This observation is equally valid for classical systems. Resilience of the ferromagnet against thermal fluctuations occurs when $T<\gamma$, where $\gamma$ is the coupling constant that ultimately determines the rigidity of the ferromagnetic phase. Additional features of our analysis that are particularly striking are 1) the non-analytic dependence of $\psi$ on $\gamma$, namely $\psi\propto\gamma^{3/2}\sqrt{\ln\Lambda_\omega/\gamma}$, in the vicinity of the bi-critical point and 2) the subsequent leading non-analytic dependence of the free-energy density in region $O1$ on $\gamma$ for $M=1$. We showed that this behavior ceases for $M>1$ below and above the GT line. The reason underlying this difference with the $M=1$ case is the presence of $M-1$ components of $Q$ that yield leading analytic contributions of order $O(\gamma^2)$ to the free energy density. An additional feature which our analysis brings out for $M>1$ is the similarity between the Gabay-Toulouse transition and the spin-glass/paramagnet transition with zero mean. This similarity has been noticed previously in the context of classical spin-glasses\cite{by}. 
A key surprise found in the analysis of the GT transition is the sub-leading dependence of $\psi$ and $\Delta_L$ on the transverse gap and $q_T$. This suggests that the GT line should have only weak experimentally-detectable features in the superconducting phase, for example. The excellent agreement observed with the experimental results on $LiHo_xY_{1-x}F_4$ for the case of $M=1$ suggests that similar agreement will be found with experiments on the analogous superconducting systems. While we referred to the point of intersection between all the phases as being bi-critical, it is in fact multi-critical. This state of affairs obtains as a result of the presence of replica symmetry breaking and the Gabay-Toulouse instability. As in the $M=1$ spin-glass case, we also showed that the non-ergodicity parameter is linear in temperature, illustrating the weakness of replica symmetry breaking in the ferromagnetic phase. Though we did not treat replica-symmetry breaking explicitly for $M>1$, such symmetry breaking is expected in this case as well when quartic terms are included in the action. In a future study, we will extend this analysis beyond mean field and report on the renormalization group analysis of the `bi-critical' region. \acknowledgements We thank S. Sachdev for useful comments during the early stages of this project. This work was funded by the DMR of the NSF and the ACS Petroleum Research Fund.
\section{The Gamma Matrices} \setcounter{equation}{0} In this appendix we give our conventions for the gamma matrices. We follow closely the conventions of \cite{green}, however some relabeling of the coordinates will be required. The $32\times 32$ gamma matrices are in the Majorana representation and are purely imaginary. They are \begin{eqnarray} \Gamma^0&=&\tau_2\times I_{16}\nonumber\\ \Gamma^I&=&i \tau_1\times \gamma^I, \;\;\;\;\;\;\;\; I=1,...8\nonumber\\ \Gamma^9&=&i \tau_3\times I_{16}\nonumber\\ \Gamma^{10}&=&i \tau_1\times \gamma^9 \end{eqnarray} where $\tau_i$ are the Pauli matrices, $I_x$ are $x\times x$ identity matrices and the $16\times 16$ real matrices $\gamma^I$ satisfy \be \{\gamma^{{I}},\gamma^{{J}}\}=2\delta^{{I}{J}},\; \;\;\;\;\;\;\; \; {I},{J} =1,...8. \end{equation} and \be \gamma^9=\prod_{I=1}^{8}\gamma^{{I}}. \end{equation} This ensures that \be \{\Gamma^\mu,\Gamma^\nu\}=-2\eta^{\mu\nu}. \end{equation} We now construct the $spin(8)$ Clifford algebra.\footnote{ This construction is that presented in Appendix 5.B of Ref.\cite{green}} The matrices $\gamma^{{I}}$ take the form \begin{eqnarray} \gamma^{\hat{I}}&=&\pmatrix{0& \tilde{\gamma}^{{\hat{I}}}\cr -\tilde{\gamma}^{{\hat{I}}}&0\cr },\ {\hat{I}}=1,...7,\nonumber\\ \gamma^{8}&=&\pmatrix{I_{8}& 0\cr 0&-I_{8}\cr }, \end{eqnarray} where the $8\times 8$ matrices $\tilde{\gamma}^{{\hat{I}}}$ are antisymmetric and explicitly given by \begin{eqnarray} \tilde{\gamma}^1&=&-i \tau_2\times\tau_2\times\tau_2\nonumber\\ \tilde{\gamma}^2&=&i 1_2\times\tau_1\times\tau_2\nonumber\\ \tilde{\gamma}^3&=&i 1_2\times\tau_3\times\tau_2\nonumber\\ \tilde{\gamma}^4&=&i \tau_1\times\tau_2\times1_2\nonumber\\ \tilde{\gamma}^5&=&i \tau_3\times\tau_2\times1_2\nonumber\\ \tilde{\gamma}^6&=&i \tau_2\times1_2\times\tau_1\nonumber\\ \tilde{\gamma}^7&=&i \tau_2\times1_2\times\tau_3 \end{eqnarray} It follows that $\gamma^{9}$ is given by \be \gamma^{9}=\pmatrix{0&-I_{8}\cr -I_{8}&0\cr }. \end{equation} Furthermore \be \Gamma^+=\frac{1}{\sqrt{2}}\pmatrix{i & -i \cr i & -i \cr}\times I_{16},\;\;\;\;\;\; \Gamma^-=\frac{1}{\sqrt{2}}\pmatrix{-i & -i \cr i & i \cr}\times I_{16}, \end{equation} such that \be (\Gamma^+)^2=(\Gamma^-)^2=1,\;\;\;\;\;\; \{ \Gamma^+,\Gamma^-\}=2. \end{equation} Then it is straightforward to show that the condition $\Gamma^+\theta=0$ leads to \be \theta=\pmatrix{S_1\cr S_2 \cr S_1 \cr S_2 \cr}. \end{equation} Moreover, it follows that \begin{eqnarray} &\bar{\theta}\Gamma^\mu\partial\theta=0&,\;\;\;\;\;\;\;\;\;\;\;\; \mbox{unless}\;\;\mu=-\nonumber\\ &\bar{\theta}\Gamma^{\mu\nu}\partial\theta=0&,\;\;\;\;\;\;\;\;\;\;\;\; \mbox{unless}\;\;\mu\nu=-M \end{eqnarray} where $\bar{\theta}=\theta^\dagger\Gamma_0=\theta^{T}\Gamma_0\;$ ($\theta$ is real). Finally notice that \be (\Gamma^\mu)^\dagger=\Gamma^0\Gamma^\mu\Gamma^0,\;\;\;\;\;\;\; \; \Gamma^{11}=\prod_{\mu=0}^{10}\Gamma^{{\mu}}=i\Gamma^{10}. \end{equation} \newpage
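The conventions above lend themselves to a quick numerical consistency check. The Python sketch below constructs the $16\times16$ matrices $\gamma^{I}$ from the explicit $\tilde{\gamma}^{\hat{I}}$ listed above and verifies the $spin(8)$ Clifford algebra as well as the eleven-dimensional anticommutation relation; the metric is taken to be mostly plus, $\eta={\rm diag}(-1,+1,\dots,+1)$, which is the signature implied by the construction and should be checked against the intended conventions.
\begin{verbatim}
import numpy as np
from itertools import product
from functools import reduce

def kron(*ms):
    return reduce(np.kron, ms)

I2, I8, I16 = np.eye(2), np.eye(8), np.eye(16)
t1 = np.array([[0, 1], [1, 0]], dtype=complex)
t2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
t3 = np.array([[1, 0], [0, -1]], dtype=complex)

# The seven antisymmetric 8x8 matrices listed in the appendix.
tg = [-1j * kron(t2, t2, t2),
       1j * kron(I2, t1, t2),
       1j * kron(I2, t3, t2),
       1j * kron(t1, t2, I2),
       1j * kron(t3, t2, I2),
       1j * kron(t2, I2, t1),
       1j * kron(t2, I2, t3)]

# 16x16 spin(8) gamma matrices.
gam = [np.block([[np.zeros((8, 8)), g], [-g, np.zeros((8, 8))]]) for g in tg]
gam.append(np.block([[I8, np.zeros((8, 8))], [np.zeros((8, 8)), -I8]]))   # gamma^8
for a, b in product(range(8), repeat=2):
    anti = gam[a] @ gam[b] + gam[b] @ gam[a]
    assert np.allclose(anti, 2 * (a == b) * I16), "spin(8) Clifford algebra fails"

gam9 = reduce(np.matmul, gam)   # gamma^9 = gamma^1 ... gamma^8

# 32x32 Majorana-representation gamma matrices.
Gam = [kron(t2, I16)]
Gam += [1j * kron(t1, g) for g in gam]
Gam.append(1j * kron(t3, I16))        # Gamma^9
Gam.append(1j * kron(t1, gam9))       # Gamma^10
eta = np.diag([-1.0] + [1.0] * 10)    # assumed mostly-plus signature
for mu, nu in product(range(11), repeat=2):
    anti = Gam[mu] @ Gam[nu] + Gam[nu] @ Gam[mu]
    assert np.allclose(anti, -2 * eta[mu, nu] * np.eye(32)), "11d Clifford algebra fails"
print("All anticommutation relations verified; Gamma matrices purely imaginary:",
      all(np.allclose(G.real, 0) for G in Gam))
\end{verbatim}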
\section{The main constituents of the ISM } The detection of neutral interstellar clouds at large $z$-distances (M\"unch 1957) led to the hypothesis of a hot gaseous Galactic halo (Spitzer 1956). This high-temperature ($T\simeq 10^6$ K), low-volume-density gas was proposed to confine the neutral clouds high above the Galactic disk. Alternatively, a low temperature, almost neutral halo ($T\simeq 10^4$ K) was postulated by Pikelner \& Shklovsky (1958). Their model predicted emission lines of neutral species with velocity dispersions of 70\,$\rm km\,s^{-1}$ due to turbulent motions. In the early fifties, optical polarization studies revealed that the Galaxy hosts an interstellar magnetic field, with a field strength of a few $\mu$G. This is, on the average, oriented parallel to the Galactic disk. Several years later, radio continuum surveys (Beuermann et al. 1985) gave clear evidence for radio synchrotron emission originating at distances of a few kpc above the Galactic plane. This indicates that magnetic fields are also present within the Galactic halo. Already in 1966 Parker pointed out that magnetic fields must be associated with gaseous counterparts. UV absorption line measurements with the {\it Copernicus\/} satellite showed the presence of highly-ionized species within the Galactic halo. However, the detection of a ubiquitous gaseous component was not easy to establish. It took nearly 40 years since Spitzer's prediction. The {\it ROSAT\/} all-sky survey revealed the presence of a ubiquitous hot X-ray emitting plasma within the Galactic halo. Pietz et al. (1998) and Kerp et al. (1999) showed that the X-ray emitting halo gas is smoothly distributed across the whole sky. They performed a quantitative correlation of the new Leiden/Dwingeloo H\,{\sc i~} 21-cm line survey with the {\it ROSAT\/} all-sky survey. These analyses gave evidence that the Galactic halo is the brightest diffuse soft X-ray source. Also recently, evidence for an extended neutral Galactic halo was presented by Albert et al. (1994). Lockman \& Gehman (1991) found H\,{\sc i~} gas with a velocity dispersion of up to 35\,$\rm km\,s^{-1}\;$ towards the Galactic poles, which was interpreted as emission originating from a Galactic halo. Kalberla et al. (1998a) disclosed the existence of a pervasive H\,{\sc i~} component with a velocity dispersion of 60 $\rm km\,s^{-1}\;$ from the Leiden/Dwingeloo Survey (LDS, Hartmann \& Burton 1997). The X-ray emitting gas as well as the neutral gas within the Galactic halo are consistent with a hydrostatic equilibrium model of the Milky Way on large scales (Kalberla \& Kerp 1998). The X-ray data constrain the extent of the gaseous halo, while the high-velocity dispersion component determines the pressure balance of the equilibrium model. According to this equilibrium model of Kalberla \& Kerp (1998), the Galactic halo has a vertical scale height of 4.4\,kpc and a radial scale length of 15\,kpc. These values are in agreement with those derived from the distribution of the highly-ionized atoms within the Galactic halo published by Savage et al. (1997): $h_z(\rm{Si}\sc {iv}) = 5.1 (\pm0.7)$\,kpc, $h_z({\rm C\sc {iv}}) = 4.4 (\pm 0.6) $\,kpc and $h_z({\rm N \sc{v}}) = 3.9 (\pm 1.4)$\,kpc. Kalberla et al. (1998a) as well as Savage et al. (1997) find that the observational data are fitted best by assuming a co-rotation of the Galactic disk and halo in addition to a turbulent motion of the gas with an averaged velocity dispersion of 60\,$\rm km\,s^{-1}\;$. 
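To give a feeling for the geometry implied by these numbers, the sketch below integrates an exponential halo density law with the quoted vertical scale height of 4.4\,kpc and radial scale length of 15\,kpc along lines of sight from the Sun. The central density $n_0$ and the solar galactocentric distance are purely illustrative assumptions.
\begin{verbatim}
import numpy as np

# Exponential halo model with the scale parameters quoted in the text.
h_z = 4.4        # kpc, vertical scale height
h_R = 15.0       # kpc, radial scale length
n0 = 1.0e-3      # cm^-3, assumed (illustrative) midplane density at the Galactic centre
R_sun = 8.5      # kpc, assumed solar galactocentric distance
kpc_cm = 3.086e21

def halo_column(l_deg, b_deg, s_max=50.0, n_steps=5000):
    """Column density (cm^-2) of the model halo gas towards Galactic (l, b)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    s = np.linspace(0.0, s_max, n_steps)            # path length in kpc
    x = R_sun - s * np.cos(b) * np.cos(l)
    y = -s * np.cos(b) * np.sin(l)
    z = s * np.sin(b)
    R = np.hypot(x, y)
    n = n0 * np.exp(-R / h_R) * np.exp(-np.abs(z) / h_z)
    return np.sum(0.5 * (n[1:] + n[:-1]) * np.diff(s)) * kpc_cm

for l, b in [(0.0, 90.0), (0.0, 30.0), (180.0, 30.0)]:
    print(f"l = {l:5.1f}  b = {b:4.1f}   N_halo = {halo_column(l, b):.2e} cm^-2")
\end{verbatim}
The decline of the column from the Galactic centre direction towards the anti-centre reflects the radial scale length and is the kind of large-scale gradient discussed in the following section.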
Taking all of this together, it is possible to construct a consistent model of the Galactic halo, consisting of gas, magnetic fields and cosmic rays. In this paper we discuss the evidence for a gaseous halo in some detail. In Sect. 2 we describe observational evidence for the existence of soft X-ray plasma within the Galactic halo. In Sect. 3 we focus on H\,{\sc i~} observations and model calculations of the Galactic disk/halo. In Sect. 4 we discuss the stability of a multi-phase disk-halo system. \section{X-ray emission from the halo } Since the first detection of the diffuse soft X-ray background in 1968 (Bowyer, Field and Mack) it was a matter of debate whether the radiation originates within the Galactic halo or in the local vicinity of the Sun. Because of the strong energy dependence of the photo-electric cross section ($\sigma\,\propto\,E^{-\frac{8}{3}}$) the soft X-ray emission is strongly attenuated by the ISM on the line of sight. In the so-called 1/4\,keV band the cross section is about $\sigma\,=\,0.5\,10^{-20}\,{\rm cm^2}$; thus, towards most of the high latitude sky we have at least an opacity of unity. With the {\it ROSAT\/} X-ray telescope it was possible to study the diffuse soft X-ray background emission at a high signal-to-noise ratio. The angular resolution of 30" was sufficient to reveal individual shadows of clouds in front of an X-ray emitting source. Here, we would like to compile the major findings of two recent papers on the Galactic X-ray halo. Both are based on the {\it ROSAT\/} all-sky survey data, and both assume that the soft X-ray background radiation is a superposition of the local hot bubble, the Galactic halo and extragalactic background emission. Despite these similarities, they derive a totally different Galactic halo X-ray intensity distribution. What may cause these differences and what are the implications? {\it The data-set:\/} Snowden et al. (1998) analyzed {\it ROSAT\/} 1/4\,keV band data, while Pietz et al. (1998) studied the X-ray intensity distribution of the {\it ROSAT\/} 3/4\,keV and 1/4\,keV energy band. Because of the strong energy dependence of the absorption cross section, the 3/4\,keV emission is only attenuated weakly, while the 1/4\,keV photons are strongly absorbed by the same amount of interstellar matter. For example, a neutral hydrogen column density of $N_{\rm H\,{\sc i~}}\,=\,1.5\,10^{20}\,{\rm cm^{-2}}$ attenuates the 3/4\,keV radiation by only 15\%, while the same column density absorbs 75\% of the 1/4\,keV photons. This major difference indicates that the 3/4\,keV band is more appropriate to study the intensity distribution of a hot Galactic halo gas. {\it The plasma temperature:\/} Because of the moderate energy resolution of the {\it ROSAT\/} PSPC detector, the practical way to determine the plasma temperature across the Galactic sky is to evaluate energy-band ratios. Especially the 1/4\,keV to 3/4\,keV band ratio is very sensitive to variations of the plasma temperature. A ${\rm log}(T{\rm [K]})\,=\,6.0$ plasma will emit most of the photons in the 1/4\,keV range, but a minor increase in temperature, for instance to ${\rm log}(T{\rm [K]})\,=\,6.3$, produces about an order of magnitude more 3/4\,keV photons than the ${\rm log}(T{\rm [K]})\,=\,6.0$ plasma. Snowden et al. (1998) derived a LHB temperature of ${\rm log}(T{\rm [K]})\,=\,6.07$ while the Galactic halo temperature is lower and about ${\rm log}(T{\rm [K]})\,=\,6.02$. Both plasmas do {\em not} contribute significantly to the 3/4\,keV energy range. 
In contrast to this finding, Pietz et al. (1998) claimed that the temperature of the LHB is about ${\rm log}(T{\rm [K]})\,=\,5.85$ and that of the halo ${\rm log}(T{\rm [K]})\,=\,6.2$. The LHB plasma does not emit significant amounts of 3/4\,keV photons, while the halo plasma accounts for about half of the observed 3/4\,keV emission. Accordingly, the gap between the observed 3/4\,keV intensity and the known sources of diffuse 3/4\,keV radiation, already identified within the Wisconsin survey (McCammon \& Sanders 1990), is overcome by the approach of Pietz et al. {\it The method:\/} Snowden et al. fitted scatter-diagrams (H\,{\sc i~} column density $N_{\rm H\,{\sc i~}}$ versus X-ray intensity $I_{\rm X-ray}$). For this approach it is necessary to assume an {\it a priori\/} plasma temperature to determine the photo-electric cross section of the ISM. Snowden et al. analyzed the {\it ROSAT} 1/4\,keV radiation split into the so-called R1 and R2 energy bands. In the second step, they derived the temperature of the plasma components by evaluating the R1/R2 energy-band ratio. The R1/R2 band ratio is only a weak function of temperature. Using the same plasma temperatures as in the example above, we find for ${\rm log}(T{\rm [K]})\,=\,6.0$ a ratio R1/R2 = 0.9, while for ${\rm log}(T{\rm [K]})\,=\,6.3$, R1/R2 = 0.6. The uncertainties of the {\it ROSAT} R1 and R2 bands are about 10\%; accordingly, the R1/R2-band ratio has a statistical uncertainty of about 20\%. Thus, it is a difficult task to constrain the plasma temperature by analyzing the R1/R2-band ratio. In the final step, they derived the Galactic halo intensity by inverting the radiation transfer equation. Within the 1/4\,keV band the product of $N_{\rm H\,{\sc i~}}$ and $\sigma$ is close to or larger than unity, and the quotient $I_{\rm halo}\,=\,\frac{I_{\rm obs}\,-\,I_{\rm LHB}\,-\,I_{\rm ex}\, e^{-\sigma\,N_{\rm H\,{\sc i~}}}}{e^{-\sigma\,N_{\rm H\,{\sc i~}}}}$ becomes uncertain, because the divisor is a small value. The approach of Pietz et al. was totally different. They found evidence for a large scale 3/4\,keV intensity gradient between the Galactic center and the Galactic anti-center direction. Because the 3/4\,keV radiation is only weakly attenuated by the Galactic interstellar matter towards the high latitude sky, they attributed this gradient to a variation of the emissivity of the Galactic halo plasma. They modeled this 3/4\,keV gradient and optimized the shape of the Galactic halo to fit the 3/4\,keV diffuse X-ray background intensity distribution. The attenuation of the 3/4\,keV radiation is much weaker than in the 1/4\,keV energy range, but present. Pietz et al. estimated the Galactic halo plasma temperature by the spectral fit of the diffuse X-ray background data of a deep pointed PSPC observation. This is the main source of uncertainty, because it is unknown whether this pointed observation is towards a representative direction or not. To overcome this source of uncertainty, they investigated the 1/4\,keV to 3/4\,keV energy ratio, which is, as shown above, a sensitive measure of the plasma temperature. They cross-correlated the 1/4\,keV Galactic halo intensities of Kerp et al. (1999) with those of the modeled 3/4\,keV sky and deduced a plasma temperature of ${\rm log}(T{\rm [K]})\,=\,6.2\,\pm\,0.02$ (compare their Fig.\,10). In the following step, they scaled the emissivity of the 3/4\,keV model to the 1/4\,keV energy range. {\it The r\^ole of the extragalactic background radiation:\/} The radiation transfer equation of Snowden et al. 
{\it The r\^ole of the extragalactic background radiation:\/} The radiation transfer equation used by Snowden et al. and by Pietz et al. is the same. An important, but up to now not discussed, diffuse soft X-ray component is the extragalactic background. The 1/4\,keV to 3/4\,keV energy-band ratio is not only a sensitive measure of the plasma temperature, but also a measure of the brightness of the extragalactic background. As pointed out by Kerp \& Pietz (1998), if the extragalactic background spectrum is steep ($I_{\rm extra}(E)\,\propto\,E^{-2}$, Hasinger et al. 1993) and the intensity within the 1/4\,keV band is high ($I_{\rm extra}\,\simeq\,4.4\times10^{-4}\,{\rm cts\,s^{-1}\,arcmin^{-2}}$, Cui et al. 1996), the quantitative analysis of the {\it ROSAT\/} energy-band ratios allows only a low Galactic halo plasma temperature (${\rm log}(T{\rm [K]})\,<\,6.1$). If one assumes a flatter extragalactic background spectrum ($I_{\rm extra}(E)\,\propto\,E^{-1.5}$, Gendreau et al. 1995) and a fainter 1/4\,keV intensity ($I_{\rm extra}\,\simeq\,2.3\times10^{-4}\, {\rm cts\,s^{-1}\,arcmin^{-2}}$, Barber et al. 1996), a higher Galactic halo plasma temperature is consistent with the energy-band ratios. Snowden et al. adopted the first set of parameters, the steep spectrum with the high 1/4\,keV intensity, and accordingly found a low-temperature halo. Moreover, this choice of parameters for the extragalactic background implies that 66\% of the 3/4\,keV diffuse X-ray emission is of extragalactic origin, while the low plasma temperatures derived for the Galactic halo and the LHB {\em cannot} account for the rest of the 3/4\,keV emission. Thus, a significant fraction of the 3/4\,keV diffuse X-ray radiation is left unexplained. Pietz et al. used the second set of parameters, the flatter background spectrum with the lower 1/4\,keV intensity for the extragalactic background, which allows a higher temperature for the Galactic halo plasma. The difference between the 3/4\,keV extragalactic background level (also 66\% of the total observed 3/4\,keV intensity) and the observed 3/4\,keV intensity is attributed to the Galactic halo plasma. In this way, the 3/4\,keV gap is filled by the Galactic halo plasma emission. {\it The model predictions:\/} Snowden et al. (1998) confirm the LHB model of Snowden et al. (1990), only slightly modified because of the Galactic halo emission. In the Snowden et al. (1998) model the Galactic halo contains some patches of soft X-ray emitting halo gas, which are distributed across the high-latitude sky. The 3/4\,keV emission is not entirely explained by the model, because the gap between the extragalactic X-ray background intensity level and the observed intensity is still present. The Pietz et al. model entirely explains the 3/4\,keV and 1/4\,keV X-ray background emission towards high latitudes. A plasma at the derived Galactic halo temperature of ${\rm log}(T{\rm [K]})\,=\,6.2$ emits most of its soft X-ray photons in the 1/4\,keV energy band. Accordingly, they scaled the emissivity of the 3/4\,keV model to that of the 1/4\,keV X-ray emission. They attribute the difference between the large-scale averaged observed 1/4\,keV emission and the modeled 1/4\,keV Galactic halo emission to the LHB. The derived LHB emission is smoothly distributed across the sky. {\it Conclusions:\/} Snowden et al. confirmed the LHB model, in which a displacement of X-ray plasma and $N_{\rm HI}$ produces the apparent Galactic plane to Galactic pole X-ray intensity gradient. They found evidence for some low-temperature soft X-ray emitting plasma patches within the Galactic halo gas.
They conclude that the temperature and intensity of the halo emission must be highly variable, which excludes any pervasive Galactic halo plasma. Pietz et al. constructed a model of the X-ray halo which provides a consistent solution for the detected diffuse soft X-ray emission across the entire {\it ROSAT\/} X-ray energy range. They proposed that the 3/4\,keV emission is a superposition of the Galactic halo and extragalactic background emission. The Galactic halo plasma appears to be isothermal on large angular scales; accordingly, it is possible to scale the 3/4\,keV model to the 1/4\,keV energy regime. They derived a smooth intensity distribution of the LHB, which is significantly cooler than the Galactic halo plasma. The smooth LHB intensity distribution is a function of Galactic latitude. Towards the northern Galactic pole, the LHB is about 1.5 times brighter than towards the southern Galactic pole. The residuals between the observed and modeled X-ray intensity distributions show some individual excess X-ray emitting structures, which are identified as star-forming regions or supernova remnants, or are attributed to the high-velocity cloud phenomenon (Kerp et al. 1999). The Pietz et al. Galactic X-ray halo model predicts the existence of a hot gaseous phase within the Galactic halo. This hot gas phase is the environment of cosmic rays, magnetic fields and neutral clouds; accordingly, we can test the X-ray model parameters by a quantitative comparison with recent all-sky surveys of H\,{\sc i~}, radio continuum and $\gamma$-ray emission (Kalberla \& Kerp 1998). \section{H\,{\sc i~} gas in the halo } On large scales, the vertical distribution of H\,{\sc i~} gas in the Galaxy can be characterized by a layered structure (Dickey \& Lockman 1990). The layers which are associated with the disk have an exponential scale height of up to 400 pc. Until recently there was also agreement that H\,{\sc i~} gas associated with the halo must have a scale height of $ 1 < h_z < 1.5 $ kpc (e.g. Kulkarni \& Fich 1985 or Lockman \& Gehman 1991). Such a scale height corresponds to a component with a velocity dispersion of 35 $\rm km\,s^{-1}$. This was questioned by Westphalen (1997) and Kalberla et al. (1998a), who found a velocity dispersion of 60 $\rm km\,s^{-1}$, corresponding to a scale height of 4.4 kpc. Here we discuss the major differences between the two approaches. {\it The data-set:\/} A component with a velocity dispersion of $\sigma \approx $ 35 $\rm km\,s^{-1}$ was derived by different authors from H\,{\sc i~} profiles extracted from the Bell Laboratories H\,{\sc i~} survey (BLS, Stark et al. 1992). The main reason for using this horn antenna for this kind of analysis was its high main-beam efficiency of 92\%, which means that the observations are only slightly affected by stray radiation from the antenna side-lobes. A high-velocity dispersion component of $\sigma \approx $ 60 $\rm km\,s^{-1}$ was found by Westphalen (1997) using the LDS. This survey was observed with a parabolic dish. Usually, such a telescope suffers from spurious emission which precludes any analysis of H\,{\sc i~} components with high velocity dispersion. In the case of the LDS, however, the stray radiation was removed numerically by Hartmann et al. (1996). These authors claim that they reduced the spurious emission on average by two orders of magnitude. They demonstrated that even reflections from the ground can affect H\,{\sc i~} observations. Subsequently, such effects have been corrected by Kalberla et al.
(1998a), who suggested that systematic uncertainties due to residual stray radiation in the wings of the H\,{\sc i~} spectra are below 15 mK. {\it Data reduction:\/} H\,{\sc i~} emission lines with high velocity dispersion may be seriously affected by the instrumental baseline and the correction algorithm used. Individual BLS profiles were corrected by fitting a second-order polynomial to emission-free profile regions which were determined in an iterative way. In the case of the LDS, a third-order polynomial fit was applied after determining the emission-free regions with polynomials up to $5^{th}$ order.
\begin{figure}[t] \centerline{\hspace{-1.5cm}} \psfig{figure=kalberla2_fig1a.ps,width=13.2cm,angle=-90 ,bbllx=50pt,bblly=100pt,bburx=530pt,bbury=730pt} \vspace*{-10.4875cm} \centerline{\hspace*{-1.5cm}} \psfig{figure=kalberla2_fig1b.ps,width=13.2cm,angle=-90 ,bbllx=50pt,bblly=100pt,bburx=530pt,bbury=730pt} \caption[]{ H\,{\sc i~} 21-cm line spectra from the Leiden/Dwingeloo Survey (LDS) averaged over all longitudes and in latitude across $10\deg$. The bottom profile is centered at $b=85\deg$, the top one at $b=5\deg$ in steps of $10\deg$. The zero levels of subsequent profiles are separated by 50 mK. The solid lines mark the H\,{\sc i~} emission of a co-rotating halo with a vertical scale height of $h_z=4.4$ kpc and a radial scale length of $A_1=15$ kpc. \label{fig1} } \end{figure}
\begin{figure}[h] \centerline{\hspace{-1.5cm}} \psfig{figure=kalberla2_fig2a.ps,width=13.2cm,angle=-90 ,bbllx=50pt,bblly=100pt,bburx=530pt,bbury=730pt} \vspace*{-10.4875cm} \centerline{\hspace*{-1.5cm}} \psfig{figure=kalberla2_fig2b.ps,width=13.2cm,angle=-90 ,bbllx=50pt,bblly=100pt,bburx=530pt,bbury=730pt} \caption[]{ H\,{\sc i~} 21-cm line spectra averaged across all longitudes as in Fig. 1, but extracted from the Bell Labs Survey (BLS). The solid lines represent the neutral atomic hydrogen layer model of Lockman \& Gehman (1991, their model 2). \label{fig2} } \end{figure}
The resulting BLS baselines are believed to be accurate to 50 mK. Averaging observations in time should improve the baseline accuracy. However, averaging the H\,{\sc i~} spectra of the BLS does not show this expected behavior (Stark et al. 1992). In contrast, the individual spectra of the LDS are noisier and have a typical rms noise level of 70 mK. Averaging the LDS spectra, however, leads to the expected improvement of the rms noise, down to a limit of a few mK. Comparing BLS and LDS spectra after averaging across regions of $5\deg\,\times\,5\deg$, Kalberla et al. (1998b) found significant deviations, indicating incompatible baselines between the two surveys. The majority of the averaged BLS H\,{\sc i~} spectra reveal negative brightness temperatures in the baseline regions. Kalberla et al. (1998b) concluded that an improper baseline correction algorithm was probably used in the reduction of the BLS data. In turn, this led to a subtraction of the faint and broad H\,{\sc i~} line emission of the high-velocity dispersion component. Polynomial baseline corrections are designed to {\em remove} features from the H\,{\sc i~} spectra which are assumed to be artificial; accidentally, broad emission-line components may be affected as well. The question whether the baseline correction procedure applied to the LDS may have {\em added} an artificial broad emission-line component was studied in detail by Kalberla et al. (1998a). The complete LDS data analysis was repeated.
No evidence was found that the high-velocity dispersion component is due to instrumental or computational artifacts. {\it Data analysis:\/} Both surveys have been analyzed for low surface brightness components with high velocity dispersion. In the case of the BLS, Kulkarni \& Fich (1985) as well as Lockman \& Gehman (1991) averaged large regions towards the Galactic poles ($|b| > 80 \deg $ and $|b| > 45 \deg $). These averages were decomposed into Gaussian components. Westphalen (1997) considered individual $10\deg\,\times\,10\deg$ fields in all directions with latitudes $|b| > 30 \deg $. After a Gaussian decomposition of the individual averaged profiles, the average properties of the component with high velocity dispersion were determined. In each case, the rms deviations between the individual observations were determined on a channel-by-channel basis; rms peaks indicating interference, HVCs or instrumental problems were used to flag velocity intervals which need not be fitted by a Gaussian component. To verify that the derived components are not affected by residual stray radiation, the stray radiation itself was decomposed into Gaussian components. Since a Gaussian analysis may be biased by the initial constraints, each fit was repeated several times using different criteria for the derivation of components with high velocity dispersion. To compare the BLS and LDS data sets with the models based on a high-velocity dispersion component, we plot the data together with the corresponding models. Fig. 1 is based on LDS data and includes the model derived by Kalberla et al. (1998a). Fig. 2 shows the BLS data together with model 2 of Lockman \& Gehman (1991) for comparison. \section{Halo models} The basic hydrostatic halo model is due to Parker (1966). In this classical publication it was shown that a single-phase halo must be unstable. Since then, attempts have been made by various authors to derive conditions for a stable halo. We consider only the most recent papers. Bloemen (1987) concluded that a gaseous halo is needed to provide the pressure necessary to stabilize the halo. He proposed an extended high-temperature halo. At a vertical distance of $1\,<\,|z|\,<\,3$\,kpc the halo gas temperature should be $T\,\simeq \,(2-3)\times 10^5$\,K, while within the disk and above $|z|\,>\,3$\,kpc the temperature should be $T\,\simeq \,10^6$\,K. This halo plasma should emit significant amounts of X-ray photons within the 1/4\,keV and 3/4\,keV energy bands. Boulares \& Cox (1990) studied the Galactic hydrostatic equilibrium including magnetic tension and cosmic-ray diffusion. They found that the gas pressure, based on the observations available at that time, was not sufficient to stabilize the halo against Parker instabilities. For a stable Galactic halo, they proposed a halo H\,{\sc i~} phase with a velocity dispersion of 20 $\rm km\,s^{-1}\;$ in the disk ($z\,=\,0$) and of 60 $\rm km\,s^{-1}$ at $z = 4 $ kpc. This is close to the high-velocity dispersion component established by Kalberla et al. (1998a). The LDS observations indicate, however, that the H\,{\sc i~} emission in the halo is close to isothermal, with a velocity dispersion that is independent of $z$-height. We calculated the H\,{\sc i~} distribution as proposed by Boulares \& Cox (1990) and found that this distribution is similar to that derived from the Lockman \& Gehman (1991) model, which is plotted in Fig. 2.
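The correspondence between the quoted velocity dispersions and scale heights can be checked with a simple order-of-magnitude sketch (illustrative only): assuming an isothermal layer in a constant vertical gravitational acceleration $g_z$, the exponential scale height is $h_z\,=\,\sigma^2/g_z$, so $h_z$ scales as $\sigma^2$ at fixed $g_z$.
\begin{verbatim}
# Sketch (illustrative only): for an isothermal layer in a *constant* vertical
# gravitational acceleration g_z, the exponential scale height is h_z = sigma^2 / g_z,
# so h_z scales as sigma^2 at fixed g_z.
KPC_M = 3.086e19                                # metres per kiloparsec

# Effective g_z implied by the 35 km/s <-> 1.5 kpc pairing quoted above
g_eff = (35.0e3) ** 2 / (1.5 * KPC_M)           # ~2.6e-11 m s^-2

# Scale height predicted for the 60 km/s component under the same assumption
h_60 = (60.0e3) ** 2 / g_eff / KPC_M
print(round(h_60, 1))                           # ~4.4 kpc
\end{verbatim}
The actual Galactic gravitational acceleration is of course not constant with height, so this is only a consistency check of the $h_z\,\propto\,\sigma^2$ scaling, not a derivation of the scale heights.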
\subsection{A new approach} To describe the large-scale properties of the Milky Way in the sense of a steady-state solution, Kalberla \& Kerp (1998) assumed a hydrostatic halo model, as proposed by Parker (1966). Kalberla \& Kerp (1998) re-analyzed the hydrostatic equilibrium model of the Milky Way, including the most recent all-sky surveys, ranging from the $\gamma$-ray to the radio synchrotron emission regime. They distinguished three main components of the Galaxy which can be characterized by their vertical scale heights $h_z$: the cold and warm neutral interstellar medium (ISM) with $h_z$ = 150 pc and $h_z$ = 400 pc (Dickey \& Lockman, 1990), the diffuse ionised gas (DIG) with $h_z$ = 1 kpc (Reynolds, 1997), and halo gas with $h_z$ = 4.4 kpc. The major difference from previous investigations (e.g. Bloemen 1987, Boulares \& Cox 1990) is that Kalberla \& Kerp (1998) included the neutral (Kalberla et al. 1998a) and X-ray emitting (Pietz et al. 1998) gas phases located within the Galactic halo in their calculations. Such a layered disk-halo model was found to fit the large-scale distribution of the neutral and ionized hydrogen gas throughout the whole Galaxy. Gas, magnetic fields and cosmic rays were found to be in pressure equilibrium. As discussed in detail by Kalberla \& Kerp (1998), such an equilibrium situation not only explains the large-scale distributions of gas, magnetic fields and cosmic rays, but also results in a stable configuration. It is essential that the multi-phase halo is supported by the pressure of the gaseous layers with low $z$ scale heights, belonging to the Galactic disk and to the disk-halo interface. Without such layers, the halo would be unstable, as described by Parker (1966). Kalberla \& Kerp (1998) found that the halo is stable on {\em average} only. Pressure fluctuations may cause instabilities above $z$ distances of $\sim$ 4 kpc, but not below. Considering activity in the Galactic disk (e.g. Norman \& Ikeuchi 1989), one expects that such pressure fluctuations will affect the halo gas. Such instabilities will necessarily trigger the condensation of HVCs in the halo. Kalberla et al. (these proceedings) studied the large-scale distribution of HVCs and found that turbulent motions within the halo have considerable effects on the observed column density distribution. Apart from turbulence, the scenario derived from hydrostatic considerations is in close agreement with the predictions of hydrodynamical models. The fountain parameters favored by Bregman (1980) are in close agreement with the parameters derived by Pietz et al. (1998) from observations. A gaseous multi-layer structure was also found by Avillez (these proceedings) in 3D hydrodynamical simulations. In his calculations a halo, and at the same time a disk-halo interface, is built up after a short period of time, soon reaching a steady-state situation. Scale heights, densities and temperatures of these layers are found to be comparable to the values listed by Kalberla \& Kerp (1998).
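The r\^ole of the kpc-scale layers for the gas observed far from the plane can be quantified with a simple geometric sketch (illustrative only; it assumes plane-parallel exponential layers and makes no assumption about their relative midplane densities): for each of the scale heights quoted above, it gives the fraction of that layer's own column density located beyond a given height.
\begin{verbatim}
# Sketch: fraction of an exponential layer's column density located above |z|,
# exp(-z / h_z), for the scale heights quoted in the text (plane-parallel layers).
import numpy as np

layers_kpc = {"cold neutral ISM": 0.15,
              "warm neutral ISM": 0.40,
              "DIG": 1.0,
              "halo gas": 4.4}
z = 1.0   # kpc
for name, h in layers_kpc.items():
    print(f"{name:>16s}: {np.exp(-z / h):5.2f} of its column lies above {z} kpc")
\end{verbatim}
Only the layers with kpc-scale heights retain a significant fraction of their column density at such heights.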
\section{0pt}{10pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt} \titlespacing\subsection{0pt}{10pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt} \titlespacing\subsubsection{0pt}{10pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt} \begin{document} \title{Spatial gene drives and pushed genetic waves} \author{Hidenori Tanaka} \email{tanaka@g.harvard.edu} \affiliation{School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138} \affiliation{Kavli Institute for Bionano Science and Technology, Harvard University, Cambridge, MA 02138} \author{Howard A. Stone} \affiliation{Department of Mechanical and Aerospace Engineering, Princeton University, NJ 08544, USA} \author{David R. Nelson} \email{drnelson@fas.harvard.edu} \affiliation{School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138} \affiliation{Departments of Physics and Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA} \date{\today} \begin{abstract} Gene drives have the potential to rapidly replace a harmful wild-type allele with a gene drive allele engineered to have desired functionalities. However, an accidental or premature release of a gene drive construct to the natural environment could damage an ecosystem irreversibly. Thus, it is important to understand the spatiotemporal consequences of the super-Mendelian population genetics prior to potential applications. Here, we employ a reaction-diffusion model for sexually reproducing diploid organisms to study how a locally introduced gene drive allele spreads to replace the wild-type allele, even though it possesses a selective disadvantage $s>0$. Using methods developed by N. Barton and collaborators, we show that socially responsible gene drives require $0.5<s<0.697$, a rather narrow range. In this ``pushed wave'' regime, the spatial spreading of gene drives will be initiated only when the initial frequency distribution is above a threshold profile called ``critical propagule'', which acts as a safeguard against accidental release. We also study how the spatial spread of the pushed wave can be stopped by making gene drives uniquely vulnerable (``sensitizing drive'') in a way that is harmless for a wild-type allele. Finally, we show that appropriately sensitized drives in two dimensions can be stopped even by imperfect barriers perforated by a series of gaps. \end{abstract} \maketitle The development of the CRISPR/Cas9 system \cite{cong2013multiplex, jinek2013rna,mali2013rna,wright2016biology}, derived from an adaptive immune system in prokaryotes \cite{marraffini2015crispr}, has received much recent attention, in part due to its exceptional versatility as a gene editor in sexually-reproducing organisms, compared to similar exploitations of homologous recombination such as zinc-finger nucleases (ZFNs) and the TALENS system \cite{jiang2015crispr,wright2016biology}. Part of the appeal is the potential for introducing a novel gene into a population, allowing control of highly pesticide-resistant crop pests and disease vectors such as mosquitoes \cite{alphey2014genetic, burt2014heritable, esvelt2014concerning,gantz2016dawn}. Although the genetic modifications typically introduce a fitness cost or a ``selective disadvantage'', the enhanced inheritance rate embodied in CRISPR/Cas9 gene drives nevertheless allows edited genes to spread, even when the fitness cost of the inserted gene is large. 
The idea of using constructs that bias gene transmission rates to rapidly introduce novel genes into ecosystems has been discussed for many decades \cite{curtis1968possible, foster1972chromosome, burt2003site, sinkins2006gene, gould2008broadening, deredec2008population}. Similar ``homing endonuclease genes'' (in the case of CRISPR/Cas9, the homing ability is provided by a guide RNA) were considered earlier by ecologists in the context of control of malaria in Africa \cite{north2013modelling,Eckhoff:2017:10.1073/pnas.1611064114}. \begin{figure} \centering \includegraphics[clip,width=0.9\columnwidth]{./Figure_1.pdf} \caption{ Schematics of the gene drive machinery with a perfect conversion efficiency $c=1$. (A) Every time an individual homozygous for the drive construct and a wild-type mate, heterozygotes in the embryo are converted to homozygotes by the mutagenic chain reaction (MCR). (B) Gene drives enhance their inheritance rate beyond that of the conventional Mendelian population genetics and can spread even with a selective disadvantage. } \label{Fig_1} \end{figure} As a hypothetical example of a gene drive applied to a pathogen vector requiring both a vertebrate and insect host, consider plasmodium, carried by mosquitoes and injected with its saliva into humans (Fig. \ref{Fig_1}). Female mosquitoes typically hatch from eggs in small standing pools of water and, after mating, search for a human to feed on. They then lay their eggs and repeat the process, thus spreading the infection over a few gonotrophic cycles. A gene drive could alter the function of a protein manufactured in the salivary gland of female mosquitoes from, say, type $a$, anesthetizing nerve cells when it bites humans, to instead type $A$, clogging up essential chemoreceptors in plasmodium and thus killing these eukaryotes. In the absence of a gene drive, there would be a selective disadvantage or fitness cost $s$ to losing this protein. Even if the fitness cost $s$ were zero, it is unlikely that this new trait would be able to escape genetic drift in large populations. However, as we describe below, the trait could spread easily if linked to a gene drive that converts heterozygotes to homozygotes with efficiency $c$ close to 1 (Fig. \ref{Fig_1}A). Remarkably, high conversion rates have already been achieved with the mutagenic chain reaction (``MCR'') realized by the CRISPR/Cas9 system \cite{cong2013multiplex, jinek2013rna,mali2013rna} for yeast ($c_{\rm{yeast}}>0.995$) \cite{dicarlo2015safeguarding}, fruit flies ($c_{\rm{flies}}=0.97$) \cite{gantz2015mutagenic} and malaria vector mosquito, \emph{Anopheles stephensi} with engineered malaria resistance ($c_{\rm{mosquito}}\geq 0.98$) \cite{gantz2015highly}. However, the gene drives' intrinsic nature of irreversibly altering wild-type populations raises biosafety concerns \cite{esvelt2014concerning}, and calls for confinement strategies to prevent unintentional escape and spread of the gene drive constructs \cite{akbari2015safeguarding}. While various genetic design or containment strategies have been discussed \cite{chan2011insect, henkel2012monitoring, esvelt2014concerning, gantz2015mutagenic}, and a few computational simulations were conducted \cite{huang2011gene,north2013modelling,Eckhoff:2017:10.1073/pnas.1611064114}, the \emph{spatial} spreading of the gene drive alleles has received less attention. To understand such phenomena in a spatial context, we will exploit a methodology developed by N. 
Barton and collaborators, originally in an effort to understand adaptation and speciation of diploid sexually reproducing organisms in genetic hybrid zones \cite{barton1979dynamics, barton1989adaptation, barton2011spatial}. We apply these techniques to a spatial generalization of a model of diploid CRISPR/Cas9 population genetics proposed by Unckless \emph{et al.} \cite{unckless2015modeling}, and highlight two distinct ways in which gene drive alleles can spread spatially. The non-Mendelian (or ``super-Mendelian'' \cite{Noble057281}) population genetics of gene drives are remarkable because individuals homozygous for a gene drive can in fact spread into wild-type populations even if they carry a positive selective disadvantage $s$ (Fig. \ref{Fig_1}B). First, for small selective disadvantages ($0 < s < 0.5$ in our case), the spatial spreading proceeds via a well-known Fisher-Kolmogorov-Petrovsky-Piskunov wave \cite{fisher1937wave, kolmogorov}. Such pulled genetic waves \cite{stokes1976two,lewis2016finding,gandhi2016range} are driven by growth and diffusive dispersal at the leading edge, and are difficult to slow down and stop. However, for somewhat larger selective disadvantages ($0.5 < s < 0.697$) we find that propagation proceeds instead via a pushed genetic wave \cite{stokes1976two,lewis2016finding,gandhi2016range}, where the genetic wave advances via accentuated growth from populations somewhat behind the front that spill over the leading edge. These waves, characterized by a strong Allee effect \cite{lewis1993allee, taylor2005allee}, are more socially responsible than the pulled Fisher waves because: (i) only inoculations whose spatial size and density exceed a critical nucleus, or ``critical propagule''\cite{barton2011spatial} are able to spread spatially, thus providing protection against a premature or accidental release of a gene drive, (ii) the gene drive pushed waves can be stopped by making them uniquely vulnerable to a specific compound (``sensitizing drive'' \cite{esvelt2014concerning}), which is harmless for a wild-type allele, and (iii) appropriately sensitized gene drives can be stopped even by barriers punctuated by defects, analogous to regularly spaced fire breaks used to contain forest fires. Similar pushed or ``excitable'' waves also arise, for example, in neuroscience, in simplified versions of the Hodgkin-Huxley model of action potentials \cite{nelson2004biological}. When the selective disadvantage associated with the gene drive is too large ($s > 0.697$ in our model) the excitable wave reverses direction and the region occupied by the gene drive homozygotes collapses to zero. \begin{figure} \centering \includegraphics[clip,width=0.8\columnwidth]{./Figure_2.pdf} \caption{ Schematic phase diagram of the spatial evolutionary games in one dimension \cite{frey2010evolutionary, korolev2011competition,lavrentovich2014asymmetric}. The parameters $\alpha$ and $\beta$ control interactions between red and green haploid organisms. Positive $\alpha$ means the presence of the green allele favors the red allele, positive $\beta$ enhances the green allele when red is present, etc. (see SI Appendix for a detailed description of the model.) Pulled Fisher wave regimes (controlling, for example, the dynamics of selective dominance in the second and four quadrants) and the pushed excitable wave regimes (third quadrant, competitive exclusion dynamics) are bounded by the black dashed spinodal lines $\alpha=0, \beta < 0$ and $\alpha<0, \beta = 0$. 
These two bistable regimes are separated by the first-order phase transition (PT) line $\alpha=\beta<0$, drawn as a black solid line. } \label{Fig_2} \end{figure} The same mathematical analyses applies to spatial evolutionary games of two competing species in one dimension, which are governed by a class of reaction-diffusion equations that resemble the gene drive system. The fitnesses of the two interacting red and green species ($w_R$, $w_G$) are related to their frequencies ($f(x,t)$, $1-f(x,t)$) by $w_R (x,t) = g + \alpha (1-f(x,t)), w_G (x,t) = g + \beta f(x,t)$, where $g$ is a background fitness, assumed identical for the two alleles for simplicity. The mutualistic regime $\alpha>0, \beta>0$ in the first quadrant of Fig.~\ref{Fig_2} has been studied already \cite{korolev2011competition}, including the effect of genetic drift, with two lines of directed ``percolation'' transitions out of a mutualistic phase. Here, we apply the methods of \cite{barton2011spatial} to study the evolutionary dynamics near the line of first-order transitions that characterize the competitive exclusion regime in the third quadrant of Fig.~\ref{Fig_2}. Because the mathematics parallels the analysis inspired by gene drive systems in the main text, we relegate discussion of this topic to the SI Appendix, which also discusses conversion efficiencies $c<1$, an analogy with nucleation theory, laboratory tests and other matters.\\ \subsection*{Mathematical model of the CRISPR gene drives} We start with a Hardy-Weinberg model \cite{hartl1997principles} and incorporate a mutagenic chain reaction (``MCR'') with $100\%$ conversion rate to construct a model for a well-mixed system. This model is the limiting case of ``$c=1$'' in the work of Unckless \emph{et al} \cite{unckless2015modeling}. Conversion efficiencies $c<1$ can be handled by similar techniques. First, we consider a well-mixed diploid system with a wild-type allele $a$ and a gene drive allele $A$ with frequencies $p=p(t)$ and $q=q(t)$ respectively at time $t$, with $p(t)+q(t)=1$. Within a random mating model, the allele frequencies after one generation time $\tau_g$ are given by \begin{equation} (pa+qA)^2 = p^2 (a,a) + 2pq(a,A)+q^2(A,A), \end{equation} and the ratios of fertilized eggs with diploid types $(a,a)$, $(a,A)$ and $(A,A)$ are $p^2:2pq:q^2$. In a heterozygous $(a,A)$ egg, the CRISPR/Cas9 machinery encoded on a gene drive allele $A$ converts the wild-type allele $a$ into a gene drive allele $A$. Here, we assume a perfect conversion rate $(a,A)\xrightarrow[\rm{MCR}]{c=1}(A,A)$ in the embryo, as has been approximated already for yeast \cite{dicarlo2015safeguarding} and fruit flies \cite{gantz2015mutagenic}. Genetic engineering will typically reduce the fitness of individuals carrying the gene drive alleles compared to wild-type organisms, which have already gone through natural evolution and may be near a fitness maximum. The selective disadvantage of a gene drive allele $s$ is defined by the ratio of the fitness $w_{\rm{wild}}$ of wild-type organisms $(a,a)$ to the fitness $w_{\rm{drive}}$ of $(A,A)$ individuals carrying the gene drive, \begin{equation} \frac{w_{\rm{drive}}}{w_{\rm{wild}}}\equiv1-s,~0\leq s. \end{equation} (In the limit $c\rightarrow 1$ no heterozygous $(a,A)$ individuals are born \cite{unckless2015modeling}.) 
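It is instructive to see how strongly the conversion step alone biases inheritance before selection is included. In the neutral limit $s=0$, the bookkeeping above gives the one-generation map $q' = q^2 + 2pq = 1-(1-q)^2$, so the wild-type allele frequency is squared every generation. A minimal numerical sketch (illustrative only; the starting frequency is an arbitrary choice) is:
\begin{verbatim}
# Sketch: neutral (s = 0) limit of the c = 1 mutagenic chain reaction in a
# well-mixed population.  After random mating and conversion, q' = 1 - (1 - q)^2,
# so the wild-type allele frequency is squared each generation.
q = 0.01                        # initial drive-allele frequency (hypothetical)
for generation in range(1, 11):
    q = 1.0 - (1.0 - q) ** 2
    print(generation, round(q, 4))
# The drive allele exceeds 99% frequency after ~9 generations.
\end{verbatim}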
Taking the fitness into account, the allele frequencies after one generation time $\tau_g$ are \begin{equation} p' : q' = w_{\rm{wild}} p^2 : w_{\rm{wild}} (1-s)(q^2 + 2pq), \end{equation} where $p'\equiv p(t+\tau_g)$ and $q'\equiv q(t+\tau_g)$. Upon approximating $q'-q=q(t+\tau_g)-q(t)$ by $\tau_g \frac{dq}{dt}$, we obtain a differential equation \begin{equation} \begin{split} \tau_g \frac{dq}{dt} &= \frac{(1-s)(q^2 + 2pq)}{p^2+(1-s)(q^2 + 2pq)} - q\\ &=\frac{sq(1-q)(q-q^*)}{1-sq(2-q)},\textnormal{ where}~q^* = \frac{2s-1}{s}, \end{split} \label{eq4} \end{equation} which governs population dynamics of the mutagenic chain reaction with $100\%$ conversion efficiency in a well-mixed system. To take spatial dynamics into account, we add a diffusion term \cite{barton2011spatial} and obtain a deterministic reaction-diffusion equation for the MCR model, namely \begin{equation} \label{rdMCR} \tau_g \frac{\partial q}{\partial t} = \tau_g D \frac{\partial^2 q}{\partial x^2}+ \frac{sq(1-q)(q-q^*)}{1-sq(2-q)}, \end{equation} which will be the main focus of this article. For later discussions, we name the reaction term of the reaction-diffusion equation, \begin{equation} f_{\rm{MCR}}(q,s) = \frac{sq(1-q)(q-q^*)}{1-sq(2-q)}. \label{FMCR} \end{equation} The reaction term reduces to a simpler cubic expression \begin{equation} f_{\rm{cubic}}(q,s)=sq(1-q)(q-q^*) \label{Fcubic} \end{equation} by ignoring $-sq(2-q)$ in the denominator, which is a reasonable approximation if the selective disadvantage $s$ is small. This form of the reaction-diffusion equation has been well studied, as reviewed in \cite{barton2011spatial}.\\ Although population genetics is often studied in the limit of small $s$, $s$ is in fact fairly large in the regime of pushed excitable waves of most interest to us here, $0.5 < s < 1.0$. Hence, we will keep the denominator of the reaction term, as was also done in \cite{barton2011spatial} with a different reaction term. Comparison of results for the full nonlinear reaction term with those for the cubic approximation will give us a sense of the robustness of the cubic approximation. Although it might also be of interest to study corrections to the continuous time approximation arising from higher order time derivatives in $(q'-q)/\tau_g = \frac{\partial q}{\partial t} + \frac{1}{2} \tau_g \frac{\partial^2 q}{\partial t^2}+...$ (contributions from $\tau_g \frac{\partial^2 q}{\partial t^2}$ are formally of order $s^2$ ), this complicated problem will be neglected here; see, however, \cite{turellibartonpre} for a study of the robustness of the continuous time approximation, motivated by a model of dengue-suppressing Wolbachia in mosquitoes. \subsection*{Initiation of the pushed waves} \begin{figure}[t!] \centering \includegraphics[clip,width=0.8\columnwidth]{./Figure_3.pdf} \caption{ (A)~Spatial dynamics of gene drives can be determined by both the selective disadvantage $s$ and (when $0.5<s<0.697$), the size and intensity of the initial condition. (B) The energy landscapes $U(q)$ with various selective disadvantages $s$. i) Pulled Fisher wave regime: When $s$ is small, $s\leq s_{\rm{min}}=0.5$ (lowermost red and yellow curves), fixation of the gene drive allele ($q=1$) is the unique stable state and there is no energy barrier between $q=0$ and $1$. Any finite introduction of a gene drive allele is sufficient to initiate a pulled Fisher population wave that spreads through space to saturate the system. 
ii) Pushed excitable wave regime: When $s$ is slightly larger (green curve), and satisfies $s_{\rm{min}}=0.5<s<s_{\rm{max}}=0.697$, $q=1$ is still the preferred stable state, but an energy barrier at $q=q^*$ appears between $q=0$ and $1$. In this regime, the introduction of the gene drive allele at sufficient concentration and over a sufficiently large spatial extent is required for a pushed wave to spread to global fixation. iii) Wave reverses direction: When $s$ is large, $s>s_{\rm{max}}= 0.697$ (topmost blue and purple curves), $q=0$ is the unique ground state and the gene drive species cannot establish a traveling population wave and so dies out. } \label{Fig_3} \end{figure} The reaction terms $f_{\rm{MCR}}(q,s)$ and $f_{\rm{cubic}}(q,s)$ have three identical fixed points, $q=0,~1$ and $~q^*\big(=\frac{2s-1}{s}\big)$. As discussed in the SI Appendix in connection to classical nucleation theory in physics, and following \cite{barton1979dynamics}, we can define the potential energy function \begin{equation} U(q) = -\frac{1}{\tau_g} \int^{q}_{0} \frac{sq'(1-q')(q'-q^*)}{1-sq'(2-q')} dq' \end{equation} to identify qualitatively different parameter regimes. In a well-mixed system, without spatial structure, the gene drive frequency $q(t)$ obeys Eq.~\ref{eq4}, and evolves in time so that it arrives at a local minimum of $U(q)$. For the spatial model of interest here, $q(x,t)$ shows qualitatively distinct behaviors in three parameter regimes depending on the selective disadvantage $s$ (see Fig.~\ref{Fig_3}\emph{A}). We plot the potential energy functions $U(q)$ in these parameter regimes in Fig.~\ref{Fig_3}\emph{B}.\\ i)~First, when $s<s_{\rm{min}}=0.5$, fixation of a gene drive allele $q(x)=1$ for all $x$ is the unique stable state and there is no energy barrier to reach the ground state starting from $q\approx0$. In this regime, any finite frequency of gene drive allele locally introduced in space (provided it overcomes genetic drift) will spread and replace the wild-type allele. The frequency profile will evolve as a pulled traveling wave $q(x,t)=Q(x-vt)$ with wave velocity $v$. Such a wave was first found by Fisher \cite{fisher1937wave} and by Kolmogorov, Petrovsky and Piskunov \cite{kolmogorov} in the 1930s, in studies of how locally introduced organisms with advantageous genes spatially spread and replace inferior genes. However, the threshold-less initiation of population waves of engineered gene drives with relatively small selective disadvantages seems highly undesirable, since the accidental escape of a single gene drive construct can establish a population wave that spreads freely into the extended environment. ii)~There is a second regime for $0.5<s<0.697$ in which the potential energy function $U(q)$ exhibits an energy barrier between $q=0$ and $q=1$. In this regime, a pushed traveling wave can be excited only when a threshold gene drive allele frequency is introduced over a sufficiently broad region of space that exceeds the size of a critical nucleus, which we investigate in the next section. The existence of this threshold acts as a safeguard against accidental release. In addition, such excitable waves are easier to stop as we will discuss later. It appears that gene drives in this relatively narrow intermediate regime are the most desirable from a biosafety perspective. iii)~When $s>s_{\rm{max}}=0.697$, the fixation of a gene drive allele throughout space is no longer absolutely stable (Fig.~\ref{Fig_3}\emph{B}), and a gene drive population wave cannot be established. 
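These regime boundaries are easy to verify numerically by integrating the reaction term of Eq.~\ref{FMCR} directly. The following sketch (illustrative only; it is not the code used to generate the figures, assumes SciPy, and sets $\tau_g=1$) recovers $s_{\rm{max}}\approx0.697$ from the equal-depth condition $U(0)=U(1)$ and shows that the interior fixed point $q^*=(2s-1)/s$ enters the unit interval only for $s>0.5$:
\begin{verbatim}
# Sketch: numerical check of the regime boundaries of the MCR model (tau_g = 1).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def f_mcr(q, s):
    """Reaction term s q (1 - q)(q - q*) / (1 - s q (2 - q)) with q* = (2s - 1)/s."""
    q_star = (2.0 * s - 1.0) / s
    return s * q * (1.0 - q) * (q - q_star) / (1.0 - s * q * (2.0 - q))

def U(q, s):
    """Potential U(q) = -int_0^q f_mcr(q') dq'."""
    return -quad(f_mcr, 0.0, q, args=(s,))[0]

# s_max solves U(1, s) = 0 (equal-depth minima at q = 0 and q = 1):
s_max = brentq(lambda s: U(1.0, s), 0.55, 0.95)
print(s_max)                                  # ~0.697

# q* = (2s - 1)/s is negative for s < 0.5 (no barrier, pulled wave) and moves
# into (0, 1) for s > 0.5 (barrier, pushed wave):
for s in (0.49, 0.58, 0.70):
    print(s, (2.0 * s - 1.0) / s)
\end{verbatim}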
Indeed, the excitable wave reverses direction for $s>s_{\rm{max}}$. An implicit equation for $s_{\rm{max}}$ results from equating $U(0)=U(1)=0$, which yields \begin{equation} \begin{split} 0 &= \int^{1}_{0} \frac{sq(1-q)(q-q^*)}{1-sq(2-q)}dq,\\ \textnormal{or } 0 &=\frac{-2+s_{\rm{max}}+2\sqrt{-1+\frac{1}{s_{\rm{max}}}}\arcsin(\sqrt{s_{\rm{max}}})}{2s_{\rm{max}}}\\ &\Rightarrow s_{\rm{max}}\approx 0.697, \end{split} \end{equation} where we used $q^*=(2s-1)/s$. When $s>s_{\rm{max}}$, the locally introduced gene drive allele contracts rather than expands relative to the wild-type allele and simply dies out. See SI Appendix for the analogous results with an arbitrary conversion rate ($0<c<1$). \subsection*{Critical nucleus in the pushed wave regime} \begin{figure}[t!] \centering \includegraphics[clip,width=0.8\columnwidth]{./Figure_4.pdf} \caption{ The excitable population wave carrying a gene drive can be established only when the initial concentration is above a threshold distribution and over a region of sufficient spatial extent (the critical nucleus or ``critical propagule'' \cite{barton2011spatial}). Numerical solutions of $\tau_g \frac{\partial q}{\partial t}=\tau_g D \frac{\partial^2 q}{\partial x^2} + \frac{sq(1-q)(q-q^*)}{1-sq(2-q)}$ with $q^*=\frac{2s-1}{s}$ are plotted with time increment $\Delta t = 2.5 \tau_g$. The early time response is shown in red with later times in blue. Selective disadvantage of the gene drive allele relative to the wild-type allele is set to $s=0.58$. In the case illustrated here, the gene drive allele can either die out or saturate the entire system, depending on the width of initial Gaussian population profile of $q(x,0)=ae^{-(x/B)^2}$. (A) With a narrow distribution of the initially introduced gene drive species $(a=0.5, B=3.0\sqrt{\tau_g D})$, the population quickly fizzles out. (B) With a broader distribution of the initial gene drive allele $(a=0.5, B=6.0\sqrt{\tau_g D})$, the gene drive allele successfully establishes a pushed population wave leading to $q(x)=1$ over the entire system. } \label{Fig_4} \end{figure} When the selective disadvantage $s$ is in the intermediate regime, $s_{\rm{min}}=1/2<s<s_{\rm{max}}= 0.697$, we can control initiation of the pushed excitable wave by the initial frequency profile of the gene drive allele $q(x,0)$ as shown in Fig.~\ref{Fig_4}. For example, in Fig.~\ref{Fig_4}\emph{A}, an initially introduced gene drive allele (in the form of a Gaussian) diminishes and dies out since the width of the initial frequency distribution $q(x,0)$ is not sufficient to excite the population wave. In contrast, the results in Fig.~\ref{Fig_4}\emph{B} show the successful establishment of the excitable wave starting from a sufficiently broad (Gaussian) initial distribution of a gene drive allele. Roughly speaking (provided $\frac{1}{2}<s<s_{\rm{max}}$), two conditions must be satisfied to obtain a critical propagule: (1) The initial condition $q(0,0)$ at the center of the inoculant must exceed $q^* = \frac{2s-1}{s}$, the local maximum of the function $U(q)$ plotted in Fig.~\ref{Fig_3}; and (2) The spatial spread $\Delta x$ of the inoculant $q(x,t=0)$ must satisfy $\Delta x \gtrsim \rm{const}\sqrt{D \tau_g}$ where the dimensionless constant depends on $s$. Thus, the initial width should exceed the width of the pushed wave that is being launched. \begin{figure}[b!] 
\centering \includegraphics[width=0.8\columnwidth]{./Figure_5.pdf} \caption{ Initial critical frequency profiles of the mutagenic chain reaction (MCR) allele $q_{\rm{c}}(x)$ just sufficient to excite a pushed genetic wave in 1D (critical propagule). Numerically calculated critical propagules for the MCR model of Eq.~\ref{rdMCR} (solid lines) are compared with analytical results available for the cubic model Eq.~\ref{Fcubic} (dashed lines) \cite{barton2011spatial}. When $s=0.51$, the two equations gives almost identical results, but as $s$ increases the critical propagule shape of the MCR model deviates significantly from that of the cubic model. The critical propagule of the cubic equation consistently overestimates the height of the $q_{\rm{c}}(x)$, since the $sq(2-q)>0$ term in the denominator of the MCR model always increases the growth rate. } \label{Fig_5} \end{figure} We show the spatial concentration profile $q_{\rm{c}}(x)$ that constitutes that (Gaussian) critical nucleus just sufficient to initiate an excitable wave in Fig.~\ref{Fig_5}. The solid lines represent numerically obtained critical nuclei of the MCR model. Note the consistency for $s=0.58$ with the pushed excitable waves shown in Fig.~\ref{Fig_4}. The dashed lines represent analytically derived critical propagules of the cubic model as a reference (see SI Appendix for details). Fig.~\ref{Fig_5} shows that the cubic model overestimates the height of critical propagule, particularly for larger $s$. The difference between the reaction terms of the MCR model $f_{\rm{MCR}}(q)$ (see Eq.~\ref{FMCR}) and that of its cubic approximation $f_{\rm{cubic}}(q)$ (see Eq.~\ref{Fcubic}), arises from the term $-sq(2-q)$ in the denominator of Eq.~\ref{rdMCR}. In the biologically relevant regime $(0<s<1,~0<q<1)$, $sq(2-q)$ is always positive and $f_{\rm{MCR}}(q)>f_{\rm{cubic}}(q)$ is satisfied, which explains why there is a larger critical propagule in the cubic approximation, and the discrepancy is larger for larger $s$. The critical nucleus with a step-function-like circular boundary is studied both numerically and analytically in two dimensions in the SI Appendix. \subsection*{Stopping of pushed, excitable waves by a selective disadvantage barrier} Thus far, we have found that (i)~we can control initiation of the spatial spread of a gene drive provided $s_{\rm{min}}=0.5<s<s_{\rm{max}}=0.697$, and (ii)~the pushed population waves in this regime slow down and eventually stop (and reverse direction) when $s>s_{\rm{max}}$, see SI Appendix. In this section, we examine alternative ways to confine an excitable gene drive wave to attain greater control over its spread in this regime. Imagine exploiting the CRISPR/Cas9 system to encode multiple functionalities into the gene drive machinery \cite{cong2013multiplex, jinek2013rna,mali2013rna,gantz2015mutagenic}. For example, one could produce genetically engineered mosquitoes that are not only resistant to malaria, but also specifically vulnerable to an insecticide that is harmless for the wild-type alleles. Such a gene drive, which is uniquely vulnerable to an otherwise harmless compound, is a sensitizing drive \cite{esvelt2014concerning}. The effect of laying down insecticide in a prescribed spatial pattern on a sensitizing drive can be incorporated in our model by increasing the selective disadvantage to a value $s_b(>s)$ within a ``selective disadvantage barrier'' region. \begin{figure}[b!] 
\centering \includegraphics[clip,width=0.8\columnwidth]{./Figure_6.pdf} \caption{ Numerical simulations of pushed, excitable waves generated by Eq.~\ref{rdMCR} with barriers in one dimension, with time increments $\Delta t = 5.0 \tau_g$. As the waves advance from left to right, the early time response is shown in red with later times in blue. The fitness disadvantage inside the barrier is set to $s_b=0.958$ within a region $25\sqrt{\tau_g D}<x<27\sqrt{\tau_g D}$ (shown as a purple bar). The initial conditions are step-function-like, $q(x,0)=q_0/(1+e^{10(x-x_0)/\sqrt{\tau_g D}})$, with $q_0 = 1.0$ and $x_0=5.0\sqrt{\tau_gD}$, similar to the initial condition Eq.~S30 we used in two dimensions (see SI Appendix). (A) In the case of a Fisher wave with $s =0.479 <s_{\rm{min}}=0.5$, a small number of individuals diffuse through the barrier, which is sufficient to reestablish a robust traveling wave. (B) In the case of the excitable wave $s =0.542>s_{\rm{min}}=0.5$, a small number of individuals also diffuse through the barrier. However, since the tail of the penetrating wave front is insufficient to create a critical nucleus, the barrier causes the excitable wave to die out. } \label{Fig_6} \end{figure} In Fig.~\ref{Fig_6}, we numerically simulate the mutagenic chain reaction model defined by Eq.~\ref{rdMCR} in one dimension with a barrier of strength $s_b=0.958$ placed in a region $25\sqrt{\tau_g D}<x<27\sqrt{\tau_g D}$. When the selective disadvantage outside the barrier is small $(s<0.5)$ and the population wave travels as the pulled Fisher wave, even a tiny fraction of MCR allele diffusing through the insecticide region can easily reestablish the population wave, as shown in Fig.~\ref{Fig_6}\emph{A}. However, when the system is in the pushed wave regime $0.5<s<0.697$, the wave can be stopped provided the spatial profile of the gene drive allele that leaks through does not constitute a critical nucleus, as illustrated in Fig.~\ref{Fig_6}\emph{B}. See the SI Appendix for numerically calculated plots of the critical width and barrier selective disadvantage needed to stop pushed waves for various values of $s$. \subsection*{Excitable Wave Dynamics with Gapped Barriers in Two Dimensions} \begin{figure}[htp] \centering \includegraphics[clip,width=0.9\columnwidth]{./Figure_7.pdf} \caption{ Population waves impeded by a selective disadvantage barrier of strength $s_b = 1.0$ (colored purple) with a gap. This imperfect barrier has a region without insecticide in the middle of width $6\sqrt{\tau_g D}$. (A) The pulled Fisher wave with $s=0.48<0.5$ always leaks through the gap and reestablishes the gene drive wave (colored red and yellow). (B) The pushed wave that arises when $s=0.62>0.5$ is deexcited by a gapped barrier, provided the gap width is comparable to or smaller than the width of the gene drive wave. } \label{Fig_7} \end{figure} In the previous section, we showed that pushed excitable waves can be stopped by a selective disadvantage barrier in one dimension. However, in two dimensions, it may be difficult to make barriers without defects. Hence, we have also studied the effect of a gap in a two-dimensional selective disadvantage barrier. We find that while the gene drive population wave in the Fisher wave regime $s<0.5$ always leaks through the gaps, the excitable wave with $0.5<s<0.697$ can be stopped, provided the gap is comparable or smaller than the width of the traveling wave front. In Fig.~\ref{Fig_7}, we illustrate the gene drive dynamics for two different parameter choices. 
In both Fig.~\ref{Fig_7}\emph{A} and \emph{B}, the strength of the selective disadvantage barrier is set to be $s_b=1.0$ and the width of the gap in the barrier is set to be $6\sqrt{\tau_g D}$. The engineered selective disadvantage $s$ in the non-barrier region differs in the two plots. In Fig.~\ref{Fig_7}\emph{A}, $s=0.48<0.5$, so the gene drive wave propagates as a pulled Fisher wave and easily leaks through the gap. If genetic drift can be neglected, we expect that Fisher wave excitations will leak through any gap, however small. However, when the selective disadvantage outside the barrier is in the pushed wave regime $0.5<s<0.697$, the population wave can be stopped by a gapped selective disadvantage barrier as shown in Fig.~\ref{Fig_7}\emph{B}. To stop a pushed excitable wave, the gap dimensions must be smaller than the front width; alternatively, we can say that the gap must be smaller than the size of the critical nucleus. \subsection*{Discussion} The CRISPR/Cas9 system has greatly expanded the design space for genome editing and construction of mutagenic chain reactions with non-Mendelian inheritance. We analyzed the spatial spreading of gene drive constructs, applying reaction-diffusion formulations that have been developed to understand spatial genetic waves with bistable dynamics \cite{barton1979dynamics, barton1989adaptation, barton2011spatial}. For a continuous time and space version of the model of Unckless \emph{et al.} \cite{unckless2015modeling}, in the limit of $100\%$ conversion efficiency, we found that a critical nucleus or propagule is required to establish a gene drive population wave when the selective disadvantage satisfies $0.5<s<0.697$. Our model led us to study termination of pushed gene drive waves using a barrier that acts only on gene drive homozygotes, corresponding to an insecticide in the case of mosquitoes. In this parameter regime, the properties of pushed waves allow safeguards against the accidental release and spreading of the gene drives. One can, in effect, construct switches that initiate and terminate the gene drive wave. In the future, it would be interesting to study the stochasticity due to finite population size (genetic drift), which is known to play a role in the first quadrant of Fig.~\ref{Fig_2} \cite{korolev2011competition, lavrentovich2014asymmetric}. We expect that genetic drift can be neglected provided $N_{\rm{eff}} \gg 1$, where $N_{\rm{eff}}$ is an effective population size, say, the number of organisms in a well-mixed critical propagule. See the SI Appendix for a brief discussion of genetic drift. It could also be important to study the effect of additional mutations on an excitable gene drive wave, particularly those that move the organism outside the preferred range $0.5<s<0.697$. Finally, we address possible experimental tests of the theoretical predictions. Since it seems inadvisable to conduct field tests without a thorough understanding of the system, laboratory experiments with microbes would be a good starting point. Recently, the transition from pulled to pushed waves was qualitatively investigated with haploid microbial populations \cite{gandhi2016range}. Because the mutagenic chain reaction has already been realized in \emph{S.~cerevisiae} \cite{dicarlo2015safeguarding}, it may also be possible to test the theory in the context of range expansions on a Petri dish, as has already been done for haploid mutualistic yeast strains in \cite{muller2014genetic}.
Here, the frontier approximates a one dimensional stepping stone model, and jostling of daughter cells at the frontier leads to an effective diffusion constant in one dimension \cite{RevModPhys.82.1691, hallatschek2007genetic}. Finally, as illustrated in Fig.~S2, the mathematics of the spatial evolutionary games in one dimension parallels the dynamics of diploid gene drives in the pushed wave regime, providing another arena for experimental tests, including the effects of genetic drift. \subsection*{Numerical Simulations} To simulate the dynamics governed by Eq.~\ref{FMCR} in Figs.~\ref{Fig_4},\ref{Fig_6},\ref{Fig_7} and S6, we used the method of lines and discretized spatial variables to map the partial differential equation to a system of coupled ordinary equations (``ODE''). Then we solved the coupled ODEs with a standard ODE solver. The width of the spatial grids were varied from $\frac{1}{200}\sqrt{\tau_g D}$ to $\frac{1}{20}\sqrt{\tau_g D}$ always making sure that the mesh size was much smaller than the width of the fronts of the pushed and pulled genetic waves we studied. \subsection*{Acknowledgement} We thank N. Barton, S. Block, S. Sawyer, T. Stearns, and M. Turelli for helpful discussions and two anonymous reviewers for useful suggestions. N. Barton also provided a critical reading of our manuscript. Work by HT and DRN was supported by the National Science Foundation, through grants DMR1608501 and via the Harvard Materials Science Research and Engineering Center via grant DMR1435999. HAS acknowledges support from NSF grants MCB1344191 and DMS1614907. \clearpage \renewcommand{\theequation}{S\arabic{equation}} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{page}{1} \captionsetup[figure]{name=Figure} \captionsetup[table]{name=Table} \renewcommand\thefigure{S\arabic{figure}} \renewcommand\thetable{S\arabic{table}} \onecolumngrid \section*{Supporting Information (SI)} \titlespacing\section{3pt}{10pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt} \titlespacing\subsection{3pt}{10pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt} \titlespacing\subsubsection{3pt}{10pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt} \tableofcontents \FloatBarrier\subsection{Nucleation theory of the gene drive population waves} Here we identify different parameter regimes of various types of gene drive waves by establishing an analogy between zero temperature nucleation theory and the reaction-diffusion equation of the prescribed mutagenic chain reaction, \begin{equation} \label{rdMCR2} \frac{\partial q}{\partial t} = D \frac{\partial^2 q}{\partial x^2}+ \frac{1}{\tau_g} \frac{sq(1-q)(q-q^*)}{1-sq(2-q)}, \end{equation} using the methods reviewed in \cite{barton2011spatial}. First, we introduce a potential energy function $U(q)$ \begin{equation} U(q) = -\frac{1}{\tau_g} \int^{q}_{0} \frac{sq'(1-q')(q'-q^*)}{1-sq'(2-q')} dq', q^* = \frac{2s-1}{s}, \end{equation} and rewrite Eq.~\ref{rdMCR2} as \begin{equation} \frac{\partial q}{\partial t} = D \frac{\partial^2 q}{\partial x^2} - \frac{dU(q)}{dq}. 
\end{equation} It is useful to recast the reaction-diffusion dynamics in terms of a functional derivative \begin{equation} \frac{\partial q(x,t)}{\partial t} = -\frac{\delta \mathcal{F}[q(y,t)] }{\delta q(x,t)}, \end{equation} where the functional $\mathcal{F}[q(y,t)]$ is given by \begin{equation} \mathcal{F}[q(y,t)] = \int^{\infty}_{-\infty} \bigg \{ \frac{1}{2} D \Big( \frac{\partial q(y,t)}{ \partial y} \Big)^2 +U[q(y,t)] \bigg \} dy, \end{equation} and we have \begin{equation} \begin{split} &-\frac{\delta \mathcal{F}[q(y,t)] }{\delta q(x,t)} = - \lim_{\epsilon \rightarrow 0} \frac{ \mathcal{F}[q(y,t) + \epsilon \delta(y-x)] - \mathcal{F}[q(y,t)] }{\epsilon}\\ &= - \int_{-\infty}^{\infty} \Big\{ D \frac{\partial q(y,t)}{\partial y} \frac{\partial \delta (y-x)}{\partial y} + \frac{dU[q(y,t)]}{dq} \delta(y-x) \Big\} dy\\ &= D \frac{\partial^2 q(x,t)}{\partial x^2} - \frac{dU[q(x,t)]}{dq}. \end{split} \end{equation} Since $\mathcal{F}(t)$ always decreases in time, \begin{equation} \begin{split} \frac{d \mathcal{F}(t)}{dt} &=\int_{-\infty}^{\infty} \frac{\partial q(x,t)}{\partial t} \frac{\delta \mathcal{F}[q(y,t)]}{\delta q(x,t)}dx\\ &= - \int_{-\infty}^{\infty}\Big( \frac{\partial q(x,t)}{\partial t} \Big)^2 dx \leq 0, \end{split} \end{equation} $\mathcal{F}[q(y,t)]$ plays the role of the free energy in a thermodynamic system. The potential energy function $U(q)$ with various selective disadvantages $s$ is plotted in Fig.~3. $U(1)$ becomes the absolute minimum when $0.5<s$ and population waves behave as pushed waves, because both $U(0)$ and $U(1)$ are locally stable \cite{barton1979dynamics, barton1989adaptation, barton2011spatial}. The pushed gene drive wave stalls out when the two stable points have the same potential energy (blue curve in Fig.~3). The maximum value of the selective disadvantage $s_{\rm{max}}$ supporting the pushed wave of the gene drive allele can be derived by equating $U(0)=U(1)$, which leads to \begin{equation} \begin{split} 0 &=\int^{1}_{0} \frac{sq(1-q)(q-q^*)}{1-sq(2-q)}dq\\ &=\frac{-2+s_{\rm{max}}+2\sqrt{-1+\frac{1}{s_{\rm{max}}}}\arcsin(\sqrt{s_{\rm{max}}})}{2s_{\rm{max}}}.\\ &\Rightarrow s_{\rm{max}} \approx 0.697 \end{split} \end{equation} The excitable gene drive wave of primary interest to us thus arises when the selective disadvantage satisfies \begin{equation} 0.5 < s < 0.697. \end{equation} \subsection{The range of the pushed wave regime with an arbitrary conversion rate} \begin{figure}[htp] \centering \includegraphics[clip,width=0.8\columnwidth]{./Figure_S1.pdf} \caption{ $s_{\rm{min}}$ and $s_{\rm{max}}$ as a function of the conversion rate $c$ when the fitness of heterozygotes individuals is (A) recessive ($h=0$), (B) additive ($h=0.5$) and (C) dominant ($h=1.0$) of gene drives, where the fitness of heterozygotes is $1-hs$. The socially responsible pushed wave regime ($s_{\rm{min}}<s<s_{\rm{max}}$) is always widest when $c=1$, i.e., for $100\%$ conversion efficiency. Note that the results become independent of $h$ when $c=1$. The gene drive wave reverses direction and dies out in the white regions of this diagram. } \label{Fig_S1} \end{figure} In the main text, we assumed perfect conversion efficiency ($c=1$) of the mutagenic chain reaction. However, in reality, some fraction of the reactions can be unsuccessful and the conversion rate $c$ will be $0<c<1$. As a result there will be heterozygous individuals with fitness $1-hs$, where $h$ controls dominance of the gene drive allele. 
When $h=1$, the gene drive allele is dominant and the fitness of the heterozygous genotype is $1-s$. The choices $h=0, 0.5$ correspond to the recessive and additive cases respectively. As derived by Unckless \emph{et al.} \cite{unckless2015modeling}, the reaction term in Eq.~5 is now given by \begin{equation} \begin{split} &q(t+\tau_g) - q(t) =\bar{f}(q)\\ &=\frac{q^2(1-s)+q(1-q)\big[ (1-c) (1-hs) + 2c(1-s) \big]}{q^2 (1-s) +2q(1-q)(1-c)(1-hs)+2q(1-q)c(1-s)+(1-q)^2} - q \end{split} \end{equation} There are again three fixed points $q=0,1,q^*$ where the third fixed point is \begin{equation} q^*=\frac{c+cs(h-2)-hs}{s(1-2c-2h+2ch)}. \end{equation} Following \cite{unckless2015modeling}, we find that $q^*$ first becomes positive for $s>s_{\rm{min}}$, where \begin{equation} s_{\rm{min}} = \frac{c}{2c - (c-1)h}. \end{equation} For $0\leq s \leq s_{\rm{min}}$, $q^*<0$ and the spatial dynamics is again controlled by pulled waves. We can also calculate $s_{\rm{max}}$ by recalculating the potential function analogy discussed in SI, Sec~\textbf{A} and in the main text, \begin{equation} \bar{U}(q)= -\frac{1}{\tau_g} \int_0^q \bar{f}(q') dq', \end{equation} and numerically solving for $\bar{U}(q=0,c,h,s_{\rm{max}})=\bar{U}(q=1,c,h,s_{\rm{max}})$ to obtain $s_{\rm{max}}(c)$ given $h$, with the results shown in Fig. S1. The gene drive spreads spatially as a pushed excitable wave for $s_{\rm{min}} < s < s_{\rm{max}}$. Note that the relevant range of $s$ when $c < 1$ shrinks compared to $c=1$. \FloatBarrier\subsection{Spatial evolutionary games in one dimension} \begin{figure}[htp] \centering \includegraphics[clip,width=0.5\columnwidth]{./Figure_S2.pdf} \caption{ A schematic phase diagram of the spatial evolutionary games in one dimension ignoring genetic drift. The parameters $\alpha$ and $\beta$ describe interactions between red and green genetic variants, with growth rates written as $w_R (x,t) = g + \alpha (1-f(x,t))$ and $w_G (x,t) = g + \beta f(x,t)$ respectively. (The parameter $g>0$ is a background growth rate.) Inserted graphs show schematically the potential energy function $U(f)$, where each of the green and red dot corresponds to $f=0$ and $f=1$ respectively ($0 \leq f \leq 1$). By searching for barriers in $U(f)$ as a function of $\alpha$ and $\beta$, we identify the bistable regimes that require a critical nucleus and pushed excitable waves to reach a stable dynamical state and the pulled Fisher wave regimes which do not require the nucleation process. The two regimes are separated by two solid black lines $\alpha=0$, $\beta<0$, and $\alpha<0$, $\beta=0$, which correspond limits of metastability. The solid line along $\alpha=\beta<0$ between the two bistable states is analogous to a first-order phase transition line (equal depth minima in $U(q)$), along which the excitable genetic wave separating red and green stalls out. } \label{Fig_S2} \end{figure} In this SI section, we show that genetic waves mathematically quite similar to the pushed gene drive waves studied here arise in spatial evolutionary games of two interacting asexual species that are colored red (``$R$'') and green (``$G$'') using the analogy with nucleation theory introduced in the previous SI section. 
We start from the continuum description of the one dimensional stepping stone model (following \cite{RevModPhys.82.1691,korolev2011competition}), \begin{equation} \frac{\partial f(x,t)}{\partial t} = D \frac{\partial^2 f(x,t)}{\partial x^2} + s[f]f(1-f) + \sqrt{D_g f(1-f)} \Gamma (x,t), \label{FullRG} \end{equation} where $f(x,t)$ is the frequency of red species and $D$ is the spatial diffusion constant representing migration. The last term, where $\Gamma(x,t)$ is an Ito correlated Gaussian white noise source and $D_g$, proportional to an inverse effective population size, represents genetic drift. We henceforth neglect genetic drift and set this term to zero. The function $s[f]$ represents the difference in relative reproduction rates between the two species, and is given by \cite{RevModPhys.82.1691} \begin{equation} s[f]=w_{\rm{eff}} = \frac{w_R - w_G}{\frac{1}{2} (w_R + w_G)}, \end{equation} where $w_R$ and $w_G$ are fitnesses of alleles $R$ and $G$. If $g$ is a background reproduction rate, we have \begin{equation} \begin{split} w_R (x,t) &= g + \alpha (1-f(x,t)),\\ w_G (x,t) &= g + \beta f(x,t), \end{split} \end{equation} where the interactions between the two competing variants are characterized by constants $\alpha$ and $\beta$. With the definitions above, we have \begin{equation} s[f]=-\frac{(\alpha + \beta)(f-\frac{\alpha}{\alpha+\beta})}{g+\frac{1}{2}\alpha (1-f) + \frac{1}{2} \beta f}, \end{equation} which leads to a reaction term similar to that in Eq.~5 and introduces an additional fixed point into the dynamics of Eq.~\ref{FullRG} at $f^* = \frac{\alpha}{\alpha + \beta}$ in addition to $f=0,1$. A diagram summarizing the dynamics of this model is shown in Fig.~2. This ``phase diagram'' was worked out including genetic drift in Eq.~\ref{FullRG} which affects the shape and location of the phase transition lines in the first quadrant of Fig.~1. \cite{korolev2011competition}. If the genetic drift term in Eq.~\ref{FullRG} is neglected, the lines labelled ``DP'' in Fig. 2 would coincide with the positive $\alpha$ and $\beta$ axes and would merge at the origin. Upon setting $D_g = 0$ in Eq.~\ref{FullRG}, we employ the argument presented above and define a potential energy function, \begin{equation} U_b (f)= -\int^{f}_0 s[f']f'(1-f') df'. \label{Ub} \end{equation} The schematic picture of $U_b (f)$ in different parameter regimes is drawn in Fig.~\ref{Fig_S2}. The mutualistic regime ($\alpha>0, \beta>0$) has already been studied in detail, including effects of genetic drift \cite{korolev2011competition}. By studying shapes of the potential energy function $U[f]$ we identify two important parameter regimes. In the bistable regime (dark green), there is a finite energy barrier between the two locally stable states and a nucleation process is required to establish an excitable wave. However, in the Fisher wave regimes (light green and light red), there is no energy barrier to reach the unique stable configuration and thus nucleation is not required. The two regimes are separated by the two black solid lines $\alpha=0, \beta<0$ or $\alpha<0, \beta=0$, which are limits of metastability. We also draw a solid black line between the two bistable states along $\alpha=\beta<0$, where the pushed waves stall out. This line is analogous to a line of first-order transitions. 
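The closed-form expressions for this potential quoted below can be cross-checked by direct numerical quadrature of Eq.~\ref{Ub}. The following Python sketch, with arbitrarily chosen interaction parameters (an illustrative assumption, not values used elsewhere in this work), evaluates $U_b(f)$ on a grid and tests whether an interior barrier separates $f=0$ from $f=1$, i.e., whether a critical nucleus is required:
\begin{verbatim}
# Numerical sketch: evaluate the game potential U_b(f) on a grid and test for
# an interior barrier between f = 0 and f = 1.  Parameter values are
# illustrative assumptions.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def potential(alpha, beta, g, n=2001):
    f = np.linspace(0.0, 1.0, n)
    s_of_f = -(alpha + beta) * (f - alpha / (alpha + beta)) / (
        g + 0.5 * alpha * (1.0 - f) + 0.5 * beta * f)
    integrand = s_of_f * f * (1.0 - f)
    U = -cumulative_trapezoid(integrand, f, initial=0.0)
    return f, U

f, U = potential(alpha=-0.3, beta=-0.5, g=1.0)   # mutually antagonistic example
bistable = U.max() > max(U[0], U[-1])            # interior maximum => nucleation needed
print("barrier present (nucleation required):", bistable)
\end{verbatim}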
When $\alpha \neq \beta$, the integral in Eq.~\ref{Ub} for the effective thermodynamic potential is given by \begin{equation} \begin{split} U[f] &= \frac{1}{3(\alpha-\beta)^4}\Bigg( (\alpha-\beta) f \bigg\{ \alpha^3 f (2f-3) + \alpha^2 f (9\beta - 2\beta f +6g)\\ &+ \alpha \Big(\beta^2 \big(12-f(3+2f) \big) + 36\beta g+24g^2 \Big) +\beta \big( \beta^2 f(-3+2f) - 6\beta(-2+f)g+24g^2 \big) \bigg\}\\ &+ 12(\alpha+2g)(\beta+2g) \big( \alpha \beta + (\alpha + \beta)g \big) \log \Big[1-\frac{\alpha-\beta}{\alpha+2g} f \Big] \Bigg). \label{long} \end{split} \end{equation} When $\alpha=\beta$, we can simplify $s[f]$ \begin{equation} s[f] = -\frac{2 \alpha (f-\frac{1}{2})}{g+\frac{1}{2}\alpha}, \end{equation} and the integral gives \begin{equation} U[f] = \frac{2 \alpha }{g+\frac{1}{2}\alpha} \int^{f}_0 f'(1-f') \Big( f'-\frac{1}{2} \Big) df' =-\frac{\alpha}{2g+\alpha} {f}^2 ({f}-1)^2. \end{equation} When $\alpha=-\beta$, $\alpha \ll g$ and $1 \ll \big| \frac{g}{\alpha}+\frac{1}{2} \big|$, we have \begin{equation} s[f] = \frac{\alpha}{g+\frac{1}{2}\alpha (1-2f)} \end{equation} and \begin{equation} \begin{split} &U[f] = -\int^{f}_0 \frac{f'^2 -f'}{f' - \big( \frac{g}{\alpha}+\frac{1}{2} \big)} df'\\ &= -\frac{1}{2}f \Big(f+ \frac{2g}{\alpha} -1 \Big) - \bigg(\frac{g}{\alpha} + \frac{1}{2} \bigg) \bigg(\frac{g}{\alpha} - \frac{1}{2} \bigg) \log \bigg[ 1-\frac{2\alpha f}{\alpha + 2g} \bigg] \label{amib} \end{split} \end{equation} The last term diverges at $f=\frac{g}{\alpha}+\frac{1}{2}$, but we focus on the weak interaction limit $1 \ll \big| \frac{g}{\alpha}+\frac{1}{2} \big|$, where the biologically relevant regime $0 \leq f \leq 1$ will not be affected. If we substitute $\alpha=-\beta$ into Eq.~\ref{long}, we recover Eq.~\ref{amib}, as expected. \FloatBarrier \subsection{Calculation of the critical propagules in one dimension} In this SI section, we describe details of the calculation of the critical propagules shown in Fig.~5. Reaction-diffusion equations in one dimension with a general reaction term $R[q(x,t)]$ can be written as \begin{equation}\label{general} \tau_g \frac{\partial q(x, t)}{\partial t} = \tau_g D \frac{\partial^2 q(x,t)}{\partial x^2} + R[q(x,t)]. \end{equation} The critical propagule profile $q_{\rm{c}}(x)$ can be defined as a stationary solution of Eq.~\ref{general}, i.e., \begin{equation} 0 = \tau_g D \frac{\partial^2 q_c}{\partial x^2} + R[q_c]. \label{ODE} \end{equation} Upon multiplying both sides by $\frac{d q_{\rm{c}}}{d x}$ and integrating we obtain, \begin{equation} \tau_g D \Big ( \frac{d q_{\rm{c}}}{d x} \Big)^2 =2 \int^0_{q} R[\tilde{q}] d\tilde{q}. \end{equation} If we assume a symmetric critical propagule about $x=0$, so that $\frac{d q_{\rm{c}}}{d x} = 0$ at $x=0$, we can obtain $q_{\rm{m}} \equiv q_{\rm{c}}(0) $ from \begin{equation} \int^0_{q_{\rm{m}}} R[\tilde{q}] d\tilde{q}=0. \end{equation} Since the slope $\frac{dx_{\rm{c}} (q)}{dq}$ is given by \begin{equation} \frac{d x_{\rm{c}}(q)}{d q} = \frac{\sqrt{\frac{\tau_g D}{2}}}{\sqrt{ \int^0_{q} R[\tilde{q}] d\tilde{q}}}, \end{equation} we obtain the critical propagule profile $x_{\rm{c}}(q)$ by integrating both sides from $q_{\rm{m}}$ to $q$. The calculations described above can be carried out analytically for the cubic reaction term Eq.~7 and critical propagules for $s=0.66,0.58,0.51$ are plotted in Fig.~5 with dashed lines. For the full MCR equation, the corresponding numerical results are plotted with solid lines. 
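The quadrature just described is straightforward to implement. The Python sketch below carries it out for the full MCR reaction term; the parameter values and output grid are illustrative assumptions and are not the settings used to produce Fig.~5.
\begin{verbatim}
# Sketch of the critical-propagule construction for the full MCR reaction term.
# Parameter values (s, D, tau_g) and the output grid are illustrative assumptions.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

s, D, tau_g = 0.6, 1.0, 1.0
qstar = (2.0 * s - 1.0) / s

def R(q):
    return s * q * (1.0 - q) * (q - qstar) / (1.0 - s * q * (2.0 - q))

def I(q):
    """I(q) = integral of R from q down to 0; positive for 0 < q < q_m."""
    value, _ = quad(R, q, 0.0)
    return value

# Peak frequency q_m of the critical propagule: I(q_m) = 0 with q_m in (q*, 1).
q_m = brentq(I, qstar + 1e-6, 1.0 - 1e-6)

def x_of_q(q):
    """Half-profile |x_c(q)|, from dx/dq = sqrt(tau_g D / 2) / sqrt(I(q))."""
    # the max() guards against round-off of I near the integrable endpoint q_m
    integrand = lambda u: np.sqrt(tau_g * D / 2.0) / np.sqrt(max(I(u), 1e-14))
    value, _ = quad(integrand, q, q_m)
    return value

qs = np.linspace(0.05, q_m - 1e-3, 30)
profile = [x_of_q(q) for q in qs]
print(f"q_m = {q_m:.3f},  |x_c(q=0.05)| = {profile[0]:.2f}  (units of sqrt(tau_g D))")
\end{verbatim}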
\FloatBarrier \subsection{Critical radius and allele concentration in two dimensions} \begin{figure} \centering \includegraphics[clip,width=0.5\columnwidth]{./Figure_S3.pdf} \caption{ In two dimensions the gene drive allele is introduced uniformly over a disk-shaped region of radius $r_0$, with uniform frequency $q_0$ inside, as illustrated in the inset image. We numerically determined the critical frequency $q_0$ and radius $r_0$ just sufficient to initiate an excitable wave in two dimensions.} \label{Fig_S3} \end{figure} In practice, it is important to model the distribution of MCR alleles that must be released locally to initiate a traveling genetic wave in a two-dimensional space. Upon assuming circular symmetry of the traveling wave solution, the reaction-diffusion equation governing the radial frequency profile of the MCR allele $q(r,t)$ reads, in radial coordinates, \begin{equation} \tau_g \frac{\partial q}{\partial t}= \tau_g D \Big( \frac{\partial^2 q}{\partial r^2}+\frac{1}{r} \frac{\partial q}{\partial r} \Big) + \frac{sq(1-q)(q-q^*)}{1-sq(2-q)}. \end{equation} The only correction to the one dimensional case is the derivative term $\frac{1}{r} \frac{\partial q}{\partial r}$, which can be neglected relative to $\frac{\partial^2 q}{\partial r^2}$ in the limit $r\rightarrow \infty$. However, we keep this term in the calculation of the critical nucleus, as it is not negligible where $r$ is comparable to or smaller than the width of the excitable wave being launched. In our numerical calculations, instead of a Gaussian initial condition, it is convenient to introduce the gene drive allele with a uniform frequency $q_0$ over a circular region with radius $r_0$. Indeed, in an actual release of a gene drive organism, it is plausible that the release would be implemented by creating a gene drive concentration $q_0$ in a circular region of radius $r_0$ with a sharp boundary. To model the radial frequency profiles, we used a circularly symmetric steep logistic function as an initial condition, \begin{equation}\label{logistic} q(r,t=0)=\frac{q_0}{1+e^{10(r-r_0)/\sqrt{\tau_g D}}}, \end{equation} instead of a step function, to ensure numerical stability. Fig.~\ref{Fig_S3} shows the parameter regimes where a pushed wave is excited for various selective disadvantages $s$. Pushed waves are successfully launched for initial conditions whose parameters lie above the curves $q_0(r_0)$, shown for a variety of selective disadvantages $s$ in the pushed wave regime. \FloatBarrier \subsection{Line tension, energy difference and analogy with nucleation theory in two dimensions} The scenario studied in the previous section (sharp boundary, adjustable initial drive concentration $q_0$ and inoculation radius $r_0$) seems appropriate for many engineered releases of gene drives, at least in situations with large effective population sizes $N_{\rm{eff}}$, so that genetic drift can be neglected. (See the discussion of genetic drift in SI Sec.~\textbf{J}.) However, when genetic drift is important, stochastic contributions like the term $\sqrt{D_g f(1-f)}\, \Gamma(x,t)$ in, e.g., Eq. \ref{FullRG}, can act on spatial gradients at the interfaces of pushed and pulled waves \cite{polechova2011genetic, polechova2015limits} in a manner somewhat reminiscent of thermal fluctuations near a first-order phase transition. Provided strong genetic drift is able to produce something analogous to local thermal equilibrium after a gene drive release, it is interesting to explore an analogy with classical nucleation theory.
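A minimal method-of-lines sketch of the deterministic release scenario described above (a drive released at frequency $q_0$ out to radius $r_0$, smoothed as in Eq.~\ref{logistic}) is given below; the grid, integration time and parameter values are illustrative assumptions rather than the settings used to generate Fig.~\ref{Fig_S3}. The nucleation argument developed next provides an analytic counterpart to such simulations.
\begin{verbatim}
# Method-of-lines sketch of a circular (radially symmetric) gene-drive release
# in two dimensions.  Grid, domain, final time and (s, q0, r0) are illustrative
# assumptions for this sketch only.
import numpy as np
from scipy.integrate import solve_ivp

s, D, tau_g = 0.6, 1.0, 1.0
qstar = (2.0 * s - 1.0) / s
L, n = 60.0, 600                      # radial extent and number of cells
r = np.linspace(0.5 * L / n, L, n)    # cell-centered grid (avoids r = 0)
dr = r[1] - r[0]

def rhs(t, q):
    qp = np.concatenate(([q[0]], q, [q[-1]]))   # zero-flux (symmetry) boundaries
    d2 = (qp[2:] - 2.0 * q + qp[:-2]) / dr**2
    d1 = (qp[2:] - qp[:-2]) / (2.0 * dr)
    reaction = s * q * (1.0 - q) * (q - qstar) / (1.0 - s * q * (2.0 - q))
    return D * (d2 + d1 / r) + reaction / tau_g

q0, r0 = 0.8, 6.0                     # released frequency and release radius
q_init = q0 / (1.0 + np.exp(10.0 * (r - r0) / np.sqrt(tau_g * D)))
sol = solve_ivp(rhs, (0.0, 200.0), q_init)

launched = sol.y[n // 2, -1] > 0.5    # has the drive swept past the mid-domain radius?
print("wave launched:", launched)
\end{verbatim}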
Nucleation leads to a pushed wave when $s_{\rm{min}}<s<s_{\rm{max}}$. One might then expect the two-dimensional analog of the total energy function discussed in SI Sec.~\textbf{A} for an equilibrated circular droplet with $q_0=1$ and radius $r_0$ in two dimensions to take the form \begin{equation} \begin{split} \mathcal{F}[q(\bm{r})] &= \int d \bm{r} \bigg\{ \frac{1}{2} D \big( \bm{\nabla} q(\bm{r}) \big)^2 + U[q(\bm{r})] \bigg\}\\ &= 2\pi \int_0^{\infty} dr \frac{rD}{2} \Big( \frac{dq}{dr} \Big)^2 + 2 \pi \int_0^{\infty} dr r U[q(r)]\\ &\approx 2\pi r_0 \int_0^{\infty} dr \frac{D}{2} \Big( \frac{dq}{dr} \Big)^2 + \pi r_0^2 \big(U(1) - U (0) \big)\\ & \equiv 2\pi r_0 \gamma - \pi r_0^2 | \Delta U | \end{split} \end{equation} where we have assumed a sharp interface between saturated gene drive and wild-type states. Here, $\Delta U$, the ``energy'' difference between the gene drive and wild type, causes the droplet to expand, and the role of an energy barrier to nucleation is played by the line tension term $\gamma$ \cite{barton1985analysis}. This is indeed the case. For simplicity, we illustrate the nucleation approach with the cubic reaction term given by Eq.~7 in the main text. First, we assume the logistic form of the spatial profile derived in the 1d limit by Barton and Turelli \cite{barton2011spatial} \begin{equation} q(r) = \frac{1}{1+e^{\sqrt{s/2 \tau_g D}(r-r_0)}}, \end{equation} and the line tension term is \begin{equation} \gamma = \int_0^{\infty} dr \frac{D}{2} \Big( \frac{dq}{dr} \Big)^2 = \frac{\sqrt{sD/2\tau_g } (e^{3 r_0 \sqrt{s/2 \tau_g D}} + 3 e^{2 r_0 \sqrt{s/2 \tau_g D}} ) }{12(e^{r_0 \sqrt{s/2 \tau_g D}}+1)^3} \approx \frac{\sqrt{sD /2 \tau_g }}{12}, \end{equation} in the limit of $1 \ll r_0 \sqrt{s/2\tau_g D}$. The energy difference is given by \begin{equation} \Delta U = U(1) - U(0) = \frac{3s-2}{12 \tau_g}, \end{equation} and the critical radius of the nucleus $r_{\rm{c}}$ which corresponds to the saddle point barrier of the free energy landscape is \begin{equation} r_{\rm{c}} = \frac{\sqrt{s \tau_g D/2}}{2-3s} \end{equation} as plotted in Fig.~\ref{Fig_S4}. This result shows the divergence of $r_{\rm{c}}$ in the limit of $s\rightarrow s_{\rm{max}} (=2/3)$ and the above approximation ($r_0 \sqrt{s/ 2 \tau_g D} \gg 1$) becomes exact in this limit. The diverging $r_{\rm{c}} (s)$ shown in Fig.~\ref{Fig_S4} is qualitatively consistent with the behavior found for the simplified gene drive initial condition in two dimensions shown in Fig.~\ref{Fig_S3} in the limit $q_0 \rightarrow 1$ \begin{figure}[t!] \centering \includegraphics[clip,width=0.5\columnwidth]{./Figure_S4.pdf} \caption{ The critical radius of the nuclei $r_{\rm{c}}$ as a function of the selective disadvantage $s$. } \label{Fig_S4} \end{figure} \FloatBarrier\subsection{Wave velocities of the excitable waves} \begin{figure}[htp] \centering \includegraphics[clip,width=0.5\columnwidth]{./Figure_S5.pdf} \caption{ Asymptotic wave velocities $v$ of the excitable waves are plotted as a function of selective disadvantage $s$. The pink circular dots are numerically calculated wave velocities for the MCR model. The blue curve is an analytically derived result for the simple cubic approximation, $ v(s)=(2-3s) \sqrt{D/2\tau_g s} $ \cite{barton2011spatial} and the blue squares are from numerical calculations, which confirm good agreement with the analytical result.} \label{Fig_S5} \end{figure} The reaction-diffusion equation admits traveling wave solutions with a continuous family of velocities. 
The dynamics asymptotically select the slowest of these speeds in the large time limit \cite{van2003front}. The pink circular dots in Fig.~\ref{Fig_S5} are numerically calculated asymptotic wave velocities for the MCR model in the pushed wave regime. We also plot the known wave velocity for the cubic approximation $ v(s)=(2-3s) \sqrt{D/2\tau_g s} $ \cite{barton1979dynamics, barton1989adaptation, barton2011spatial} for comparison. Due to the larger reaction term $f_{\rm{MCR}}(q)>f_{\rm{cubic}}(q)$ (see discussion in Fig.~5), the wave velocity for the MCR model is always larger than that of the cubic approximation at the same selective disadvantage $s$. In both cases, a larger selective disadvantage $s$ decreases the wave velocity, which eventually becomes zero at $s_{\rm{max}} = 0.697$ for the MCR model and at the slightly smaller value $s_{\rm{max}}=2/3$ within the cubic approximation. \\ \FloatBarrier \subsection{Calculation of the speed of the excitable waves} In this section, we review the numerical method for calculating the speed of the excitable waves, following \cite{barton2011spatial}. First, we assume a traveling waveform of the solution \begin{equation} q(x,t)=Q(x-vt)=Q(z),~z\equiv x-vt, \end{equation} with boundary conditions \begin{equation} \begin{split} Q(z)\rightarrow 1~(z \rightarrow - \infty),~Q(z)\rightarrow 0~(z \rightarrow + \infty),\\ \frac{dQ}{dz} \rightarrow0~(z \rightarrow \pm \infty). \end{split} \end{equation} By substituting $Q(z)$ into \begin{equation} \tau_g \frac{\partial q}{\partial t} = \tau_g D \frac{\partial^2 q}{\partial x^2} + R[q], \end{equation} we obtain \begin{equation} 0=\tau_g D \frac{d^2 Q}{d z^2} + v \tau_g \frac{d Q}{d z} + R[Q]. \end{equation} If we define the gradient $G$ as a function of $Q$, $G[Q] \equiv \frac{dQ}{dz}$, we arrive at an ordinary differential equation \begin{equation} 0=\tau_g D G\frac{dG}{dQ} + v \tau_g G + R[Q], \end{equation} with boundary conditions \begin{equation} G[0]=G[1]=0. \end{equation} It is known that there exists a unique velocity $v$ of the excitable wave for which the above differential equation has a solution $G[Q]$ satisfying these boundary conditions \cite{keener1998mathematical}. We used a shooting method to determine such a $v$ and plotted the results in Fig.~\ref{Fig_S5}. \FloatBarrier \subsection{Critical barrier strength} \begin{figure}[b!] \centering \includegraphics[clip,width=0.5\columnwidth]{./Figure_S6.pdf} \caption{ Stopping power of a selective disadvantage barrier in one dimension. Numerical solutions of Eq.~5 are shown with time increment $\Delta t = 10.0 \tau_g$. The early time response is shown in red with later times in blue. The selective disadvantage of the barrier is $s_b$ within the purple bar of width $L=5$ occupying the spatial region $25\sqrt{\tau_g D}<x<30\sqrt{\tau_g D}$ (shaded in blue) and $s=0.625$ otherwise. (A) The excitable wave propagates with constant speed when the barrier is absent, i.e., $s_b = 0.625$. (B) With $s_b =0.688>s=0.625$, the wave significantly slows down at the barrier, but recovers and propagates onwards. (C) The excitable wave is stopped when $s_b =0.708$.} \label{Fig_S6} \end{figure} Fig.~\ref{Fig_S6} shows how the excitable wave can be slowed down and finally stopped by increasing the strength of a selective disadvantage barrier $s_b > s$. As a reference, we first show dynamics of the excitable wave without a barrier ($s_b = 0.625$ matches the selective disadvantage $s=0.625$ outside) in Fig.~\ref{Fig_S6}\emph{A}.
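A one-dimensional method-of-lines sketch of this numerical experiment is shown below; the barrier is implemented simply by making the selective disadvantage position dependent. The discretization, integration times and front-tracking threshold are illustrative assumptions, not the settings used for Fig.~\ref{Fig_S6}.
\begin{verbatim}
# Sketch of the barrier experiment: a pushed MCR wave in one dimension with a
# position-dependent selective disadvantage s(x).  Numerical settings are
# illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

D, tau_g = 1.0, 1.0
L, n = 100.0, 1000
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

s_out, s_b = 0.625, 0.688                    # background and barrier disadvantages
s_x = np.where((x > 25.0) & (x < 30.0), s_b, s_out)
qstar = (2.0 * s_x - 1.0) / s_x

def rhs(t, q):
    qp = np.concatenate(([q[0]], q, [q[-1]]))   # zero-flux boundaries
    lap = (qp[2:] - 2.0 * q + qp[:-2]) / dx**2
    reac = s_x * q * (1.0 - q) * (q - qstar) / (1.0 - s_x * q * (2.0 - q))
    return D * lap + reac / tau_g

q_init = 1.0 / (1.0 + np.exp(5.0 * (x - 8.0)))  # established wave near x = 0
sol = solve_ivp(rhs, (0.0, 800.0), q_init, t_eval=np.arange(0.0, 801.0, 20.0))

for k in range(0, len(sol.t), 8):               # track the front through the barrier
    front = x[np.argmax(sol.y[:, k] < 0.5)]
    print(f"t = {sol.t[k]:5.0f}   front at x = {front:5.1f}")
\end{verbatim}
Whether and when the front crosses the barrier region can then be read off directly, in the spirit of Fig.~\ref{Fig_S6}.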
When a small barrier is erected ($s_b=0.688<0.697$), the excitable wave significantly slows down within the barrier as expected from the results shown in Fig.~\ref{Fig_S5}. However, the wave recovers and propagates through the barrier as in Fig.~\ref{Fig_S6}\emph{B}. When the barrier strength exceeds a critical value (in Fig.~\ref{Fig_S6}\emph{C} we plot the case $s_b = 0.708$) the excitable wave is stopped. \begin{figure}[t!] \centering \includegraphics[clip,width=0.5\columnwidth]{./Figure_S7.pdf} \caption{ Critical width $L$ and the selective disadvantage $s_b$ of a barrier that is just sufficient to stop a pushed gene drive wave in one dimension. The values are numerically obtained by placing the barrier in a region $25\sqrt{\tau_g D}<x<(25+L)\sqrt{\tau_g D}$. Results are plotted for a variety of selective disadvantages $s$ outside the barrier region. Given $s$, the excitable population wave can be stopped by a barrier whose parameters $(s_b, L / \sqrt{\tau_g D})$ lie above the curves. } \label{Fig_S7} \end{figure} In Fig.~\ref{Fig_S7}, we plot the critical width $L$ and selective disadvantage within the one dimensional barrier region $s_b$ just sufficient to stop the excitable population wave of the gene drive species. The values are numerically obtained by placing the barrier in a region $25\sqrt{\tau_g D}<x<(25+L)\sqrt{\tau_g D}$. For example, when the selective disadvantage outside the barrier region is set to be $s \approx 0.65$, the excitable gene drive wave can be stopped by increasing $s$ by $\sim 20\%$ within the barrier region of thickness $\sim \sqrt{\tau_g D/s}$. \FloatBarrier \subsection{Fluctuations due to finite population size} In this section, we estimate effects of fluctuations due to a finite population size using mosquitos as an example. First, we define the effective spatial population size $N_{\rm{eff}}$ to be the number of mosquitos with which an individual might conceivably mate during its generation time $\tau_g$ \cite{hartl1997principles}. Given a diffusion constant $D$, the two dimensional area an individual can explore during its life time $\tau_g$ is $\pi (\sqrt{4D\tau_g})^2$ and the effective population size in two dimensions is \begin{equation} N_{\rm{eff}} \equiv 4 \pi D \tau_g n, \end{equation} where $n$ is the area density of organisms. Here, we estimate $N_{\rm{eff}}$ using parameters appropriate to mosquitos: $\tau_g\sim 10[\text{days}]$ \cite{deredec2011requirements}, $D\sim 0.1 [\text{km}^2/\text{day}]$ and $n\sim 1 [\rm{m}^{-2}] = 10^6 [\rm{km}^{-2}]$ to get $N_{\rm{eff}}\sim 10^5 - 10^6$. With such a large effective population size, we believe that the dynamics can be well described by the deterministic limit explored here. Fluctuations \emph{can} play a role for systems with smaller populations and such effects have been thoroughly investigated in the physics literatures \cite{brunet1997shift, van2003front, brunet2015exactly, cohen2005fluctuation, hallatschek2009fisher}. Pulled waves are more sensitive to fluctuations, with a Fisher wave velocity that changes according to \begin{equation} v=v_F[ 1 - O(1/\ln^2 N_{\rm{eff}})], \end{equation} where $v_F$ is the velocity of the pulled wave in the deterministic limit \cite{brunet1997shift}. \clearpage \twocolumngrid
\section{Introduction}\label{sec:intro} The classic problem of community detection in a network graph corresponds to an optimization problem which is \textit{global}, as it requires knowledge of the \textit{whole} network structure. The problem is known to be computationally difficult to solve, while its approximate solutions have to cope with both accuracy and efficiency issues that become more severe as the network increases in size. Large-scale, web-based environments have indeed traditionally represented a natural scenario for the development and testing of effective community detection approaches. In the last few years, the problem has attracted increasing attention in research contexts related to \textit{complex networks}~\cite{Mucha10,CarchioloLMM10,PapalexakisAI13,Kivela+14,DeDomenico15,Loe15,KimL15,Peixoto15}, whose modeling and analysis are widely recognized as a useful tool to better understand the characteristics and dynamics of multiple, interconnected types of node relations and interactions~\cite{BerlingerioPC13,Magnanibook}. Nevertheless, especially in social computing, one important aspect to consider is that we might often want to identify the personalized network of social contacts of interest to a single user only. To this aim, we would like to determine the expanded neighborhood of that user which forms a densely connected, relatively small subgraph. This is known as the \textit{local community detection} problem~\cite{Clauset05,ChenZG09}, whose general objective is, given limited information about the network, to identify a community structure which is centered on one or a few seed users. Existing studies on this problem have focused, however, on social networks that are built on a single user relation type or context~\cite{ChenZG09,ZakrzewskaB15}. As a consequence, they are not able to profitably exploit the fact that most individuals nowadays have multiple accounts across different social networks, or that relations of different types (i.e., online as well as offline relations) can be available for the same population of a social network~\cite{Magnanibook}. In this work, we propose a novel framework based on the multilayer network model for the problem of local community detection, which overcomes the aforementioned limitations in the literature, i.e., community detection on a multilayer network but from a global perspective, and local community detection but limited to monoplex networks. We have recently brought the local community detection problem into the context of multilayer networks~\cite{ASONAM16}, by providing a preliminary formulation based on an unsupervised approach. A key aspect of our proposal is the definition of similarity-based community relations that exploit both internal and external connectivity of the nodes in the community being constructed for a given seed, while accounting for different layer-specific topological information. Here we push forward our research by introducing a parametric control in the similarity-based community relations for the layer-coverage diversification of the local community being discovered. Our experimental evaluation conducted on three real-world multilayer networks has shown the significance of our approach. \section{Multilayer Local Community Detection} \label{sec:LCD} \subsection{The \textsf{ML-LCD}\xspace method} We refer to the multilayer network model described in~\cite{Kivela+14}. We are given a set of layers $\mathcal{L}$ and a set of entities (e.g., users) $\V$.
We denote with $G_{\mathcal{L}} = (V_{\mathcal{L}}, E_{\mathcal{L}}, \V, \mathcal{L})$ the multilayer graph such that $V_{\mathcal{L}}$ is a set of pairs $v \in \V, L \in \mathcal{L}$, and $E_{\mathcal{L}} \subseteq V_{\mathcal{L}} \times V_{\mathcal{L}}$ is the set of undirected edges. Each entity of $\V$ appears in at least one layer, but not necessarily in all layers. Moreover, in the following we will consider the specific case for which nodes connected through different layers correspond to the same entity in $\V$, i.e., $G_{\mathcal{L}}$ is a multiplex graph. Local community detection approaches generally implement some strategy that at each step considers a node from one of three sets, namely: the community under construction (initialized with the seed node), the ``shell'' of nodes that are neighbors of nodes in the community but do not belong to the community, and the unexplored portion of the network. A key aspect is hence how to select the \textit{best} node in the shell to add to the community to be identified. Most algorithms, which are designed to deal with monoplex graphs, try to maximize a function in terms of the \textit{internal} edges, i.e., edges that involve nodes in the community, and to minimize a function in terms of the \textit{external} edges, i.e., edges to nodes outside the community. By accounting for both types of edges, nodes that are candidates to be added to the community being constructed are penalized in proportion to the number of links to nodes external to the community~\cite{Clauset05}. Moreover, as first analyzed in~\cite{ChenZG09}, considering the internal-to-external \textit{connection density} ratio (rather than the absolute amount of internal and external links to the community) allows for alleviating the issue of inserting many weakly-linked nodes (i.e., \textit{outliers}) into the local community being discovered. In this work we follow the above general approach and extend it to identify local communities over a multilayer network. Given $G_{\mathcal{L}} = (V_{\mathcal{L}}, E_{\mathcal{L}}, \V, \mathcal{L})$ and a seed node $v_0$, we denote with $C \subseteq \V$ the node set corresponding to the local community being discovered around node $v_0$; moreover, when the context is clear, we might also use $C$ to refer to the local community subgraph. We denote with $S = \{v \in \V \setminus C \ | \ \exists ((u,L_i),(v,L_j)) \in E_{\mathcal{L}} \ \wedge \ u \in C\}$ the \textit{shell} set of nodes outside $C$, and with $B = \{ u \in C \ | \ \exists ((u,L_i),(v,L_j)) \in E_{\mathcal{L}} \ \wedge \ v \in S\}$ the \textit{boundary} set of nodes in $C$. Our proposed method, named {\em {\bf M}ulti{\bf L}ayer {\bf L}ocal {\bf C}ommunity {\bf D}etection} (\textsf{ML-LCD}\xspace), takes as input the multilayer graph $G_{\mathcal{L}}$ and a seed node $v_0$, and computes the local community $C$ associated with $v_0$ by performing an iterative search that seeks to maximize the value of the \textit{similarity-based local community function} $LC(C)$, which is obtained as the ratio of an \textit{internal community relation} $LC^{int}(C)$ to an \textit{external community relation} $LC^{ext}(C)$. We shall formally define these later in Section~\ref{sec:funcsim}. Algorithm \textsf{ML-LCD}\xspace works as follows. Initially, the boundary set $B$ and the community $C$ are initialized with the starting seed, while the shell set $S$ is initialized with the neighborhood set of $v_0$ considering all the layers in $\mathcal{L}$.
Afterwards, the algorithm computes the initial value of $LC(C)$ and starts expanding the node set in $C$: it evaluates all the nodes $v$ belonging to the current shell set $S$, then selects the vertex $v^{*}$ that maximizes the value of $LC(C)$. The algorithm checks if \textit{(i)} $v^{*}$ actually increases the quality of $C$ (i.e., $LC(C \cup \{v^{*}\})>LC(C)$) and \textit{(ii)} $v^{*}$ helps to strengthen the internal connectivity of the community (i.e., $LC^{int}(C \cup \{v^*\}) > LC^{int}(C) $). If both conditions are satisfied, node $v^{*}$ is added to $C$ and the shell set is updated accordingly, otherwise node $v^{*}$ is removed from $S$ as it cannot lead to an increase in the value of $LC(C)$. In any case, the boundary set $B$ and $LC(C)$ are updated. The algorithm terminates when no further improvement in $LC(C)$ is possible. \subsection{Similarity-based local community function} \label{sec:funcsim} To account for the multiplicity of layers, we define the multilayer local community function $LC(\cdot)$ based on a notion of similarity between nodes. In this regard, two major issues are how to choose the analytical form of the similarity function, and how to deal with the different, layer-specific connections that any two nodes might have in the multilayer graph. We address the first issue in an unsupervised fashion, by resorting to any similarity measure that can express the topological affinity of two nodes in a graph. Concerning the second issue, one straightforward solution is to determine the similarity between any two nodes focusing on each layer at a time. The above points are formally captured by the following definitions. We denote with $E^C$ the set of edges between nodes that belong to $C$ and with $E_i^C$ the subset of $E^C$ corresponding to edges in a given layer $L_i$. Analogously, $E^B$ refers to the set of edges between nodes in $B$ and nodes in $S$, and $E_i^B$ to its subset corresponding to $L_i$. Given a community $C$, we define the \textit{similarity-based local community function} $LC(C)$ as the ratio between the \textit{internal community relation} and the \textit{external community relation}, respectively defined as: \begin{equation}\label{eq:Def2_Lin} LC^{int}(C)=\frac{1}{|C|}\sum_{v \in C} \sum_{L_i \in \mathcal{L}} \sum_{\substack{(u,v) \in E_i^C \ \wedge \ u \in C}} sim_i(u,v) \end{equation} \begin{equation}\label{eq:Def2_Lex} LC^{ext}(C) = \frac{1}{|B|} \sum_{v \in B} \sum_{L_i \in \mathcal{L}} \sum_{\substack{(u,v) \in E_i^B \ \wedge \ u \in S}} sim_i(u,v) \end{equation} In the above equations, function $sim_i(u,v)$ computes the similarity between any two nodes $u,v$ in the context of layer $L_i$. In this work, we define it in terms of the Jaccard coefficient, i.e., $sim_i(u,v) = \frac{|N_i(u) \cap N_i(v)|}{|N_i(u) \cup N_i(v)|}$, where $N_i(u)$ denotes the set of neighbors of node $u$ in layer $L_i$. \subsection{Layer-coverage diversification bias} When discovering a multilayer local community centered on a seed node, the iterative search process in \textsf{ML-LCD}\xspace that seeks to maximize the similarity-based local community measure explores the different layers of the network. This implies that the various layers might contribute very differently from each other in terms of edges constituting the local community structure. In many cases, it can be desirable to control the degree of heterogeneity of relations (i.e., layers) inside the local community being discovered.
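Before describing how this layer coverage can be controlled, the small Python sketch below makes the unbiased quantities concrete: it computes the per-layer Jaccard similarity and the ratio $LC(C)=LC^{int}(C)/LC^{ext}(C)$ for a tiny, entirely hypothetical multilayer graph (the node names, layer names and edges are invented for illustration and are unrelated to the datasets used in Section~\ref{sec:results}).
\begin{verbatim}
# Illustrative sketch of the per-layer Jaccard similarity and of the
# similarity-based local community function LC(C) = LC_int(C) / LC_ext(C).
# The toy multilayer graph below is entirely hypothetical.

# layer name -> set of undirected edges between entities
layers = {
    "work":    {("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("b", "d")},
    "leisure": {("a", "b"), ("b", "d"), ("d", "e")},
}

def neighbors(v, layer):
    return {u for e in layers[layer] for u in e if v in e and u != v}

def sim(u, v, layer):
    Nu, Nv = neighbors(u, layer), neighbors(v, layer)
    return len(Nu & Nv) / len(Nu | Nv) if Nu | Nv else 0.0

def LC(C):
    nodes = {v for es in layers.values() for e in es for v in e}
    S = {v for v in nodes - C if any(neighbors(v, l) & C for l in layers)}   # shell
    B = {u for u in C if any(neighbors(u, l) & S for l in layers)}           # boundary
    lc_int = sum(sim(u, v, l) for l in layers
                 for u in C for v in neighbors(u, l) & C) / len(C)
    lc_ext = sum(sim(u, v, l) for l in layers
                 for u in B for v in neighbors(u, l) & S) / max(len(B), 1)
    return lc_int / lc_ext if lc_ext > 0 else float("inf")

score = LC({"a", "b", "c"})
print(f"LC(C) = {score:.2f}")
\end{verbatim}
We now turn to how the heterogeneity of layers covered by the community can be controlled.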
In this regard, we identify two main approaches: \begin{itemize} \item \textbf{Diversification-oriented approach.} This approach relies on the assumption that a local community is better defined by increasing as much as possible the number of edges belonging to different layers. More specifically, we might want to obtain a local community characterized by high diversification in terms of presence of layers and variability of edges coming from different layers. \item \textbf{Balance-oriented approach.} Conversely to the previous case, the aim is to produce a local community that shows a certain \textit{balance} in the presence of layers, i.e., low variability of edges over the different layers. This approach relies on the assumption that a local community might be well suited to real cases when it is uniformly distributed among the different edge types taken into account. \end{itemize} Following the above observations, here we propose a methodology to incorporate a parametric control of the layer-coverage diversification in the local community being discovered. To this purpose, we introduce a \textit{bias factor} $\beta$ in \textsf{ML-LCD}\xspace which impacts on the node similarity measure according to the following logic: \begin{equation} \beta= \begin{cases} (0, 1], & \textit{diversification-oriented bias}\\ 0, & \textit{no bias}\\ [-1,0), & \textit{balance-oriented bias} \end{cases} \end{equation} \noindent Positive values of $\beta$ push the community expansion process towards a diversi\-fication-oriented approach, and, conversely, negative $\beta$ lead to different levels of balance-oriented scheme. Note that the \textit{no bias} case corresponds to handling the node similarity ``as is''. Note also that, by assuming values in a continuous range, at each iteration \textsf{ML-LCD}\xspace is enabled to make a decision by accounting for a wider spectrum of degrees of layer-coverage diversification. Given a node $v \in B$ and a node $u \in S$, for any $L_i \in \mathcal{L}$, we define the $\beta$-biased similarity $sim_{\beta, i}(u,v)$ as follows: \begin{eqnarray} sim_{\beta, i}(u,v) = \frac{2sim_i(u,v)}{1+e^{-bf}},\\ bf=\beta[f(C \cup \{u\})-f(C)] \end{eqnarray} \noindent where $bf$ is a \textit{diversification factor} and $f(C)$ is a function that measures the current diversification between the different layers in the community $C$; in the following, we assume it is defined as the standard deviation of the number of edges for each layer in the community. The difference $f(C \cup \{u\})-f(C)$ is positive when the insertion of node $u$ into the community increases the coverage over a subset of layers, thus diversifying the presence of layers in the local community. Consequently, when $\beta$ is positive, the diversification effect is desired, i.e., there is a boost in the value of $sim_{\beta, i}$ (and vice versa for negative values of $\beta$). Note that $\beta$ introduces a bias on the similarity between two nodes only when evaluating the inclusion of a shell node into a community $C$, i.e., when calculating $LC^{ext}(C)$. \section{Experimental Evaluation} \label{sec:results} We used three multilayer network datasets, namely \textit{Airlines} (417 nodes corresponding to airport locations, 3588 edges, 37 layers corresponding to airline companies)~\cite{Cardillo}, \textit{AUCS} (61 employees as nodes, 620 edges, 5 acquaintance relations as layers)~\cite{Magnanibook}, and \textit{RealityMining} (88 users as nodes, 355 edges, 3 media types employed to communicate as layers)~\cite{KimL15}. 
All network graphs are undirected, and inter-layer links are regarded as coupling edges. \begin{table}[t!] \caption{Mean and standard deviation size of communities by varying $\beta$ (with step of 0.1).} \centering \scalebox{0.8}{ \begin{tabular}{|l|l||c|c|c|c|c|c|c|c|c|c|c|} \hline dataset & & -1.0 & -0.9 & -0.8 & -0.7 & -0.6 & -0.5 & -0.4 & -0.3 & -0.2 & -0.1 & \cellcolor{blue!15}0.0 \\ \hline \multirow{ 2}{*}{\textit{Airlines}} & \textit{mean} & 5.73 & 5.91 & 6.20 & 6.47 & 6.74 & 7.06 & 7.57 & 8.10 & 9.13 & 10.33 & \cellcolor{blue!15}11.33 \\ & \textit{sd} & 4.68 & 4.97 & 5.45 & 5.83 & 6.39 & 6.81 & 7.63 & 8.62 & 10.58 & 12.80 & \cellcolor{blue!15}14.78 \\ \hline \multirow{ 2}{*}{\textit{AUCS}} & \textit{mean} & 6.38 & 6.59 & 6.64 & 6.75 & 6.84 & 6.85 & 6.92 & 7.13 & 7.16 & 7.77 & \cellcolor{blue!15}7.90 \\ & \textit{sd} & 1.48 & 1.51 & 1.59 & 1.69 & 1.85 & 1.85 & 1.87 & 2.15 & 2.18 & 2.40 & \cellcolor{blue!15}2.74 \\ \hline \textit{Reality-} & \textit{mean} & 3.21 & 3.24 & 3.25 & 3.25 & 3.32 & 3.32 & 3.34 & 3.34 & 3.34 & 3.37 & \cellcolor{blue!15}3.37 \\ \textit{Mining} & \textit{sd} & 1.61 & 1.64 & 1.66 & 1.66 & 1.73 & 1.73 & 1.74 & 1.74 & 1.74 & 1.77 & \cellcolor{blue!15}1.77 \\ \hline \end{tabular} } \\ \vspace{2mm} \scalebox{0.8}{ \begin{tabular}{|l|l||c|c|c|c|c|c|c|c|c|c|} \hline dataset & & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1.0 \\ \hline \multirow{ 2}{*}{\textit{Airlines}} & \textit{mean} & 9.80 & 9.02 & 8.82 & 8.37 & 8.20 & 7.93 & 7.53 & 7.26 & 7.06 & 7.06 \\ & \textit{sd} & 12.10 & 10.61 & 10.07 & 9.39 & 9.15 & 8.67 & 7.82 & 7.46 & 7.35 & 7.27 \\ \hline \multirow{ 2}{*}{\textit{AUCS}} & \textit{mean} & 8.77 & 8.92 & 8.92 & 8.89 & 8.89 & 8.89 & 8.87 & 8.85 & 8.85 & 8.85 \\ & \textit{sd} & 3.16 & 3.33 & 3.33 & 3.27 & 3.27 & 3.27 & 3.26 & 3.23 & 3.23 & 3.23 \\ \hline \textit{Reality-} & \textit{mean} & 3.38 & 3.39 & 3.39 & 3.39 & 3.36 & 3.36 & 3.32 & 3.18 & 3.17 & 3.17 \\ \textit{Mining} & \textit{sd} & 1.78 & 1.78 & 1.78 & 1.78 & 1.74 & 1.74 & 1.71 & 1.60 & 1.59 & 1.59 \\ \hline \end{tabular} } \label{tab:size} \end{table} \textbf{Size and structural characteristics of local communities.\ } We first analyzed the size of the local communities extracted by \textsf{ML-LCD}\xspace for each node. Table~\ref{tab:size} reports on the mean and standard deviation of the size of the local communities by varying of $\beta$. As regards the \textit{no bias} solution (i.e, $\beta=0.0$), largest local communities correspond to \textit{Airlines} (mean 11.33 $\pm$ 14.78), while medium size communities (7.90 $\pm$ 2.74) are found for \textit{AUCS} and relatively small communities (3.37 $\pm$ 1.77) for \textit{RealityMining}. The impact of $\beta$ on the community size is roughly proportional to the number of layers, i.e., high on \textit{Airlines}, medium on \textit{AUCS} and low on \textit{RealityMining}. For \textit{Airlines} and \textit{AUCS}, smallest communities are obtained with the solution corresponding to $\beta=-1.0$, thus suggesting that the discovery process becomes more xenophobic (i.e., less inclusive) while shifting towards a balance-oriented scheme. Moreover, on \textit{Airlines}, the mean size follows a roughly normal distribution, with most inclusive solution (i.e., largest size) corresponding to the unbiased one. A near normal distribution (centered on $0.2 \leq \beta \leq 0.4$) is also observed for \textit{RealityMining}, while mean size values linearly increase with $\beta$ for \textit{AUCS}. 
To understand the effect of $\beta$ on the structure of the local communities, we analyzed the distributions of per-layer mean \textit{average path length} and mean \textit{clustering coefficient} of the identified communities (results not shown). One major remark is that on the networks with a small number of layers, the two types of distributions tend to follow an increasing trend for balance-oriented bias (i.e., negative $\beta$), which becomes roughly constant for the diversification-oriented bias (i.e., positive $\beta$). On \textit{Airlines}, variability happens to be much higher for some layers, which in the case of mean average path length ranges between 0.1 and 0.5 (as shown by a rapidly decreasing trend for negative $\beta$, followed by a peak for $\beta=0.2$, then again a decreasing trend). \begin{figure}[t!] \centering \subfigure[\textit{Airlines}]{\includegraphics[width=0.32\columnwidth]{layers_per_communities_per_beta_lcd2_airlines.png}} \subfigure[\textit{AUCS}]{\includegraphics[width=0.32\columnwidth]{layers_per_communities_per_beta_lcd2_aucs.png}} \subfigure[\textit{RealityMining}]{\includegraphics[width=0.32\columnwidth]{layers_per_communities_per_beta_lcd2_rm.png}} \caption{Distribution of number of layers over communities by varying $\beta$. Communities are sorted by decreasing number of layers.} \label{fig:betalayers} \end{figure} \textbf{Distribution of layers over communities.\ } We also studied how the bias factor impacts on the distribution of number of layers over communities, as shown in Figure~\ref{fig:betalayers}. This analysis confirmed that using positive values of $\beta$ produces local communities that lay on a higher number of layers. This outcome can be easily explained since positive values of $\beta$ favor the inclusion of nodes into the community which increase layer-coverage diversification, thus enabling the exploration of further layers also in an advanced phase of the discovering process. Conversely, negative values of $\beta$ are supposed to yield a roughly uniform distribution of the layers which are covered by the community, thus preventing the discovery process from including nodes coming from unexplored layers once the local community is already characterized by a certain subset of layers. As regards the effects of the bias factor on the layer-coverage diversification, we analyzed the standard deviation of the per-layer number of edges by varying $\beta$ (results not shown, due to space limits of this paper). As expected, standard deviation values are roughly proportional to the setting of the bias factor for all datasets. Considering the local communities obtained with negative $\beta$, the layers on which they lay are characterized by a similar presence (in terms of number of edges) in the induced community subgraph. Conversely, for the local communities obtained using positive $\beta$, the induced community subgraph may be characterized by a small subset of layers, while other layers may be present with a smaller number of relations. \begin{figure}[t!] 
\centering \subfigure[\textit{Airlines}]{\includegraphics[width=0.45\columnwidth]{jac_log2_filter_airlines.png}} \subfigure[\textit{AUCS}]{\includegraphics[width=0.45\columnwidth]{jac_log2_filter_aucs.png}} \caption{Average Jaccard similarity between solutions obtained by varying $\beta$.} \label{fig:betalayers_jac} \end{figure} \textbf{Similarity between communities.\ } The smooth effect due to the diversification-oriented bias is confirmed when analyzing the similarity between the discovered local communities. Figure~\ref{fig:betalayers_jac} shows the average Jaccard similarity between solutions obtained by varying $\beta$ (i.e., in terms of nodes included in each local community). Jaccard similarities vary in the range $[0.75,1.0]$ for \textit{AUCS} and \textit{Airlines}, and in the range $[0.9,1.0]$ for \textit{RealityMining} (results not shown). For datasets with a lower number of layers (i.e., \textit{AUCS} and \textit{RealityMining}), there is a strong separation between the solutions obtained for $\beta>0$ and the ones obtained with $\beta<0$. On \textit{AUCS}, the local communities obtained using a diversification-oriented bias show Jaccard similarities close to $1$, while there is more variability among the solutions obtained with the balance-oriented bias. Effects of the bias factor are lower on \textit{RealityMining}, with generally high Jaccard similarities. On \textit{Airlines}, the effects of the bias factor are still present but smoother, with gradual similarity variations in the range $[0.75,1.0]$. \section{Conclusion} \label{sec:conclusion} We addressed the novel problem of local community detection in multilayer networks, providing a greedy heuristic that iteratively attempts to maximize the internal-to-external connection density ratio by accounting for layer-specific topological information. Our method is also able to control the layer-coverage diversification in the local community being discovered, by means of a bias factor embedded in the similarity-based local community function. Evaluation was conducted on real-world multilayer networks. As future work, we plan to study alternative objective functions for the ML-LCD problem. It would also be interesting to enrich the evaluation part based on data with ground-truth information. We also envisage a number of application problems for which ML-LCD methods can profitably be used, such as friendship prediction, targeted influence propagation, and more in general, mining in incomplete networks.
\section{Conclusions}\label{s:conclusions} We propose a comprehensive, formal framework for learning explanations as meta-predictors. We also present a novel image saliency paradigm that learns \emph{where} an algorithm \emph{looks} by discovering which parts of an image most affect its output score when perturbed. Unlike many saliency techniques, our method explicitly edits the image, making it interpretable and testable. We demonstrate numerous applications of our method, and contribute new insights into the fragility of neural networks and their susceptibility to artifacts. \section{Experiments}\label{s:experiments} \subsection{Interpretability}\label{s:interpretability} \begin{figure}\centering \includegraphics[width=0.9\linewidth, trim=0 1em 0 1em]{chocolate_masked_example.pdf}\\ \includegraphics[width=0.9\linewidth, trim=0 1em 0 0]{truck_masked_example.pdf} \caption{Interrogating suppressive effects. Left to right: original image with the learned mask overlaid; a boxed perturbation chosen out of interest (the truck's middle bounding box was chosen based on the contrastive excitation backprop heatmap from~\cref{f:comparison}, row 6); another boxed perturbation based on the learned mask (target softmax probabilities for the original and perturbed images are listed above).} \label{f:manual_edit} \end{figure} An advantage of the proposed framework is that the generated visualizations are clearly interpretable. For example, the deletion game produces a minimal mask that prevents the network from recognizing the object. When compared to other techniques (\cref{f:comparison}), this method can pinpoint the reason why a certain object is recognized without highlighting non-essential evidence. This can be noted in \cref{f:comparison} for the CD player (row 7), where other visualizations also emphasize the neighboring speakers, and similarly for the cliff (row 3), the street sign (row 4), and the sunglasses (row 8). Sometimes this shows that only a part of an object is essential: the face of the Pekinese dog (row 2), the upper half of the truck (row 6), and the spoon on the chocolate sauce plate (row 1) are all found to be minimally sufficient parts. While contrastive excitation backprop generated heatmaps that were most similar to our masks, our method introduces a quantitative criterion (i.e., maximally suppressing a target class score), and its verifiable nature (i.e., direct edits to an image) allows us to compare differing proposed saliency explanations and demonstrate that our learned masks are better on this metric. In~\cref{f:manual_edit}, row 2, we show that applying a bounded perturbation informed by our learned mask significantly suppresses the truck softmax score, whereas a boxed perturbation on the truck's back bumper, which is highlighted by contrastive excitation backprop in~\cref{f:comparison}, row 6, actually increases the score from $0.717$ to $0.850$. The principled interpretability of our method also allows us to identify instances when an algorithm may have learned the wrong association. In the case of the chocolate sauce in~\cref{f:manual_edit}, row 1, it is surprising that the spoon is highlighted by our learned mask, as one might expect the sauce-filled jar to be more salient. However, manually perturbing the image reveals that indeed the spoon is more suppressive than the jar. One explanation is that the ImageNet ``chocolate sauce'' images contain more spoons than jars, which appears to be true upon examining some images.
More generally, our method allows us to diagnose highly-predictive yet non-intuitive and possibly misleading correlations identified by machine learning algorithms in the data. \subsection{Deletion region representativeness}\label{s:generality} To test that our learned masks are generalizable and robust against artifacts, we simplify our masks by further blurring them and then slicing them into binary masks by thresholding the smoothed masks by $\alpha \in [0:0.05:0.95]$ (\cref{f:sanity_check_graph}, top; $\alpha \in [0.2,0.6]$ tends to cover the salient part identified by the learned mask). We then use these simplified masks to edit a set of 5,000 ImageNet images with constant, noise, and blur perturbations. Using GoogLeNet~\cite{szegedy2015going}, we compute normalized softmax probabilities\footnote{\label{n:normalize}$p'=\dfrac{p-p_0}{p_0-p_b}$, where $p,p_0,p_b$ are the masked, original, and fully blurred images' scores} (\cref{f:sanity_check_graph}, bottom). The fact that these simplified masks quickly suppress scores as $\alpha$ increases for all three perturbations gives confidence that the learned masks are identifying the right regions to perturb and are generalizable to a set of extracted masks and other perturbations that they were not trained on. \begin{figure}[t]\centering \includegraphics[width=0.9\linewidth]{sanity_check_test_5.pdf} \includegraphics[width=0.9\linewidth,trim=0 1em 0 0]{sanity_check_test_graph_shrink.pdf} \caption{\textbf{(Top)} Left to right: original image, learned mask, and simplified masks for~\cref{s:generality} (not shown: further smoothed mask). \textbf{(Bottom)} Swift softmax score suppression is observed when using all three perturbations with simplified binary masks (top) derived from our learned masks, thereby showing the generality of our masks.} \label{f:sanity_check_graph} \end{figure} \subsection{Minimality of deletions}\label{s:deletion} In this experiment we assess the ability of our method to correctly identify a minimal region that suppresses the object. Given the output saliency map, we normalize its intensities to lie in the range $[0,1]$, threshold it with $h \in [0:0.1:1]$, and fit the tightest bounding box around the resulting heatmap. We then blur the image in the box and compute the normalized\textsuperscript{\ref{n:normalize}} target softmax probability from GoogLeNet~\cite{szegedy2015going} of the partially blurred image. From these bounding boxes and normalized scores, for a given amount of score suppression, we find the smallest bounding box that achieves that amount of suppression. \Cref{f:deletion} shows that, on average, our method yields the smallest minimal bounding boxes when considering suppressive effects of $80\%,90\%,95\%,\text{ and }99\%$. These results show that our method finds a small salient area that strongly impacts the network. \begin{figure}[t]\centering \includegraphics[width=0.9\linewidth]{deletion_game_bar_w_gradcam_i5_occlusion.pdf} \caption{On average, our method generates the smallest bounding boxes that, when used to blur the original images, highly suppress their normalized softmax probabilities (standard error included).} \label{f:deletion} \end{figure} \subsection{Testing hypotheses: animal part saliency}\label{s:parts_exp} From qualitatively examining learned masks for different animal images, we noticed that faces appeared to be more salient than appendages like feet. Because we produce dense heatmaps, we can test this hypothesis.
From an annotated subset of the ImageNet dataset that identifies the keypoint locations of non-occluded eyes and feet of vertebrate animals~\cite{novotny2016have}, we select images from classes that have at least 10 images which each contain at least one eye and foot annotation, resulting in a set of 3558 images from 76 animal classes (\cref{f:animal}). For every keypoint, we calculate the average heatmap intensity of a $5\times 5$ window around the keypoint. For all 76 classes, the mean average intensity at the eyes was lower, and thus the eyes were more salient, than at the feet (see supplementary materials for class-specific results). \begin{figure}[t]\centering \includegraphics[width=0.9\linewidth]{animal_parts_2.pdf} \caption{``tiger'' (left two) and ``bison'' (right two) images with eyes and feet annotations from~\cite{novotny2016have}; our learned masks are overlaid. The mean average feet:eyes intensity ratio for ``tigers'' ($N = 25$) is 3.82, while that for bisons ($N = 22$) is 1.07.} \label{f:animal} \end{figure} \subsection{Adversarial defense} Adversarial examples~\cite{kurakin2016adversarial} are often generated using an optimization procedure complementary to our method, one that learns an imperceptible pattern of noise which causes an image to be misclassified when added to it. Using our re-implementation of the highly effective one-step iterative method ($\epsilon=8$)~\cite{kurakin2016adversarial} to generate adversarial examples, our method yielded visually distinct, abnormal masks compared to those produced on natural images (\cref{f:adv}, left). We train an Alexnet~\cite{krizhevsky2012imagenet} classifier (learning rate $\lambda_{lr} = 10^{-2}$, weight decay $\lambda_{L1} = 10^{-4}$, and momentum $\gamma = 0.9$) to distinguish between clean and adversarial images by using a given heatmap visualization with respect to the top predicted class on the clean and adversarial images (\cref{f:adv}, right); our method greatly outperforms the other methods and achieves a discriminating accuracy of $93.6\%$. Lastly, when our learned masks are applied back to their corresponding adversarial images, they not only minimize the adversarial label but often allow the original, predicted label from the clean image to rise back as the top predicted class. Our method recovers the original label predicted on the clean image 40.64\% of the time and the ground truth label 37.32\% of the time ($N = 5000$). Moreover, 100\% of the time the original, predicted label was recovered as one of the top-5 predicted labels in the ``mask+adversarial'' setting. To our knowledge, this is the first work that is able to recover originally predicted labels without any modification to the training set-up and/or network architecture. \begin{figure}[t]\centering \includegraphics[width=0.23\linewidth]{clean_vs_adv_trombone_example.pdf} \includegraphics[width=0.67\linewidth]{clean_vs_adv_graph.pdf} \caption{\textbf{(Left)} Difference between learned masks for clean (middle) and adversarial (bottom) images ($28 \times 28$ masks shown without bilinear upsampling). \textbf{(Right)} Classification accuracy for discriminating between clean vs.
adversarial images using heatmap visualizations ($N_{trn} = 4000, N_{val} = 1000$).}
\label{f:adv}
\end{figure}
\subsection{Localization and pointing}
Saliency methods are often assessed in terms of weakly-supervised localization and a pointing game~\cite{zhang2016top}, which tests how discriminative a heatmap method is by calculating the precision with which a heatmap's maximum point lies on an instance of a given object class, on harder datasets such as COCO~\cite{lin2014microsoft}. Because the deletion game is meant to discover a minimal salient part and/or spurious correlations, we do not expect it to be particularly competitive on localization and pointing, but we test them for completeness.
For localization, similar to~\cite{zhang2016top,cao2015look}, we predict a bounding box for the most dominant object in each of $\sim$50k ImageNet~\cite{russakovsky2015imagenet} validation images and employ three simple thresholding methods for fitting bounding boxes. First, for value thresholding, we normalize heatmaps to be in the range of $[0,1]$ and then threshold them by their value with $\alpha \in [0:0.05:0.95]$. Second, for energy thresholding~\cite{cao2015look}, we keep the most salient subset of pixels that covers a fraction $\alpha \in [0:0.05:0.95]$ of the heatmap's total energy. Finally, with mean thresholding~\cite{zhang2016top}, we threshold a heatmap by $\tau = \alpha \mu_{I}$, where $\mu_{I}$ is the mean intensity of the heatmap and $\alpha \in [0:0.5:10]$. For each thresholding method, we search for the optimal $\alpha$ value on a heldout set. Localization error is calculated as the fraction of images for which the predicted box has an intersection over union (IOU) below $0.5$ with the ground truth.
\Cref{t:localization} confirms that our method performs reasonably and shows that the three thresholding techniques affect each method differently. Non-contrastive excitation backprop~\cite{zhang2016top} performs best when using energy and mean thresholding; however, our method performs best with value thresholding and is competitive when using the other methods: It beats gradient~\cite{simonyan14deep} and guided backprop~\cite{springenberg2014striving} when using energy thresholding; beats LRP~\cite{bach2015pixel}, CAM~\cite{zhou2016learning}, and contrastive excitation backprop~\cite{zhang2016top} when using mean thresholding (recall from~\cref{f:comparison} that the contrastive method is visually most similar to mask); and outperforms Grad-CAM~\cite{selvaraju2016grad} and occlusion~\cite{zeiler2014visualizing} for all thresholding methods.
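The three box-fitting rules can be summarized by the following Python sketch; the function names and tie-breaking details are illustrative rather than taken verbatim from our evaluation code.
{\small\begin{verbatim}
import numpy as np

def threshold_heatmap(heatmap, alpha, mode="value"):
    """Binarize a [0,1]-normalized heatmap with one of three rules."""
    if mode == "value":      # keep pixels whose value exceeds alpha
        keep = heatmap >= alpha
    elif mode == "energy":   # smallest top-pixel set covering a
        order = np.argsort(heatmap, axis=None)[::-1]   # fraction alpha
        csum = np.cumsum(heatmap.flat[order])          # of total energy
        k = np.searchsorted(csum, alpha * csum[-1]) + 1
        keep = np.zeros(heatmap.size, dtype=bool)
        keep[order[:k]] = True
        keep = keep.reshape(heatmap.shape)
    elif mode == "mean":     # threshold at alpha times the mean value
        keep = heatmap >= alpha * heatmap.mean()
    else:
        raise ValueError(mode)
    return keep

def tightest_box(keep):
    """Tightest bounding box (x0, y0, x1, y1) around the kept pixels."""
    ys, xs = np.nonzero(keep)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()
\end{verbatim}}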
\setlength{\tabcolsep}{3pt}
\begin{table}
\begin{center}
\begin{tabular}{|r||c|c||c|c||c|c|}
\hline
 & \footnotesize{Val-$\alpha$*} & \footnotesize{Err (\%)} & \footnotesize{Ene-$\alpha$*} & \footnotesize{Err} & \footnotesize{Mea-$\alpha$*} & \footnotesize{Err} \\
\hline
\footnotesize{Grad~\cite{simonyan14deep}} & \footnotesize{0.25} & \footnotesize{46.0} & \footnotesize{0.10} & \footnotesize{43.9} & \footnotesize{5.0} & \footnotesize{41.7${}^\mathsection$} \\
\footnotesize{Guid~\cite{springenberg2014striving,mahendran2016salient}} & \footnotesize{0.05} & \footnotesize{50.2} & \footnotesize{0.30} & \footnotesize{47.0} & \footnotesize{4.5} & \footnotesize{42.0${}^\mathsection$} \\
\footnotesize{Exc~\cite{zhang2016top}} & \footnotesize{0.15} & \footnotesize{46.1} & \footnotesize{0.60} & \footnotesize{\textbf{38.7}} & \footnotesize{1.5} & \footnotesize{\textbf{39.0}${}^\mathsection$ }\\
\footnotesize{C Exc~\cite{zhang2016top}} & \footnotesize{---} & \footnotesize{---} & \footnotesize{---} & \footnotesize{---} & \footnotesize{0.0} & \footnotesize{57.0${}^\dagger$} \\
\footnotesize{Feed~\cite{cao2015look}} & \footnotesize{---} & \footnotesize{---} & \footnotesize{0.95} & \footnotesize{38.8${}^\dagger$} & \footnotesize{---} & \footnotesize{---} \\
\footnotesize{LRP~\cite{bach2015pixel}} & \footnotesize{---} & \footnotesize{---} & \footnotesize{---} & \footnotesize{---} & \footnotesize{1.0} & \footnotesize{57.8${}^\dagger$} \\
\footnotesize{CAM~\cite{zhou2016learning}}& \footnotesize{---} & \footnotesize{---} & \footnotesize{---} & \footnotesize{---} &\footnotesize{1.0} &\footnotesize{48.1${}^\dagger$}\\
\footnotesize{Grad-CAM~\cite{selvaraju2016grad}} & \footnotesize{0.30} & \footnotesize{48.1} & \footnotesize{0.70} & \footnotesize{48.0} & \footnotesize{1.0} & \footnotesize{47.5} \\
\footnotesize{Occlusion~\cite{zeiler2014visualizing}} & \footnotesize{0.30} & \footnotesize{51.2} & \footnotesize{0.55} & \footnotesize{49.4} & \footnotesize{1.0} & \footnotesize{48.6} \\
\footnotesize{Mask${}^\ddagger$} & \footnotesize{0.10}&\footnotesize{\textbf{44.0}}& \footnotesize{0.95}& \footnotesize{43.1}& \footnotesize{0.5}& \footnotesize{43.2} \\
\hline
\end{tabular}
\end{center}
\caption{Optimal $\alpha$ thresholds and error rates for the weak localization task on the ImageNet validation set, using saliency heatmaps to generate bounding boxes. ${}^\dagger$The Feedback error rate is taken from~\cite{cao2015look}; the other ${}^\dagger$ rates (contrastive excitation BP, LRP, and CAM) are taken from~\cite{zhang2016top}. ${}^\mathsection$Using~\cite{zhang2016top}'s code, we recalculated these errors, which are within $0.4\%$ of the originally reported rates. ${}^\ddagger$Minimized the top-5 predicted classes' softmax scores and used $\lambda_1 = 10^{-3}$ and $\beta = 2.0$ (examples in supplementary materials).}
\label{t:localization}
\end{table}
For pointing, \cref{t:pointing} shows that our method outperforms the center baseline, gradient, and guided backprop methods, and beats Grad-CAM on the set of difficult images (images for which 1) the total area of the target category is less than $25\%$ of the image and 2) there are at least two different object classes). We noticed qualitatively that our method did not produce salient heatmaps when objects were very small. This is due to the L1 and TV regularization, which yield well-formed masks only for easily visible objects.
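For reference, a single trial of the pointing game amounts to the following check (a Python sketch with illustrative names; the per-class ground-truth masks are assumed to be given as boolean arrays):
{\small\begin{verbatim}
import numpy as np

def pointing_hit(heatmap, gt_mask):
    """True if the heatmap's maximum point lies on the object mask.

    `heatmap` is HxW; `gt_mask` is a boolean HxW mask for the
    target class.
    """
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return bool(gt_mask[y, x])

def pointing_precision(heatmaps, gt_masks):
    """Fraction of (heatmap, mask) pairs whose maximum point hits."""
    hits = [pointing_hit(h, m) for h, m in zip(heatmaps, gt_masks)]
    return float(np.mean(hits))
\end{verbatim}}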
We test two variants of occlusion~\cite{zeiler2014visualizing}, blur and variable occlusion, to examine whether 1) the blur perturbation with smoothed masks is most effective, and 2) using the smallest, highly suppressive mask is sufficient (Occ${}^\mathsection$ and V-Occ in~\cref{t:pointing}, respectively). Blur occlusion outperforms all methods except contrastive excitation backprop, while variable occlusion outperforms all except contrastive excitation backprop and the other occlusion methods, suggesting that our choice of the blur perturbation and our principle of identifying the smallest, highly suppressive mask are sound, even if our implementation struggles on this task (see supplementary materials for examples and implementation details).
\setlength{\tabcolsep}{1pt}
\begin{table}
\begin{center}
\begin{tabular}{|r|c|c|c|c|c|c|c|c|c|c|}
\hline
 & \footnotesize{Ctr} & \footnotesize{Grad}& \footnotesize{Guid} & \footnotesize{Exc} & \footnotesize{CExc} & \footnotesize{G-CAM} & \footnotesize{Occ} & \footnotesize{Occ${}^\mathsection$} & \footnotesize{V-Occ${}^\dagger$} & \footnotesize{Mask${}^\ddagger$} \\
\hline
\footnotesize{All} & \footnotesize{27.93} & \footnotesize{36.40} & \footnotesize{32.68} & \footnotesize{41.78} & \footnotesize{\textbf{50.95}} & \footnotesize{41.10} & \footnotesize{44.50} & \footnotesize{45.41} & \footnotesize{42.31} & \footnotesize{37.49}\\
\footnotesize{Diff} & \footnotesize{17.86} & \footnotesize{28.21} & \footnotesize{26.16} & \footnotesize{32.73} & \footnotesize{\textbf{41.99}} & \footnotesize{30.59} & \footnotesize{36.45} & \footnotesize{37.45} & \footnotesize{33.87} & \footnotesize{30.64}\\
\hline
\end{tabular}
\end{center}
\caption{Pointing game~\cite{zhang2016top} precision on a COCO validation subset ($N \approx 20\text{k}$). ${}^\mathsection$Occluded with circles ($r = 35/2$) softened by $g_{\sigma_m = 10}$ and used to perturb with blur ($\sigma = 10$). ${}^\dagger$Occluded with variable-sized blur circles; from the top $10\%$ most suppressive occlusions, the one with the smallest radius is chosen and its center is used as the point. ${}^\ddagger$Used min.\ top-5 hyper-parameters ($\lambda_1 = 10^{-3}$, $\beta = 2.0$).}
\label{t:pointing}
\end{table}
\section{Introduction}\label{s:intro}
Given the powerful but often opaque nature of modern black box predictors such as deep neural networks~\cite{krizhevsky2012imagenet,kurakin2016adversarial}, there is considerable interest in \emph{explaining} and \emph{understanding} predictors \emph{a-posteriori}, after they have been learned. This remains largely an open problem. One reason is that we lack a formal understanding of what it means to explain a classifier. Most of the existing approaches~\cite{zeiler2014visualizing,springenberg2014striving,mahendran2016salient,mahendran2015understanding,mahendran2016visualizing} often produce intuitive visualizations; however, since such visualizations are primarily heuristic, their meaning remains unclear.
In this paper, we revisit the concept of ``explanation'' at a formal level, with the goal of developing principles and methods to explain any black box function $f$, e.g.\ a neural network object classifier. Since such a function is learned automatically from data, we would like to understand \emph{what} $f$ has learned to do and \emph{how} it does it. Answering the ``what'' question means determining the properties of the map. The ``how'' question investigates the internal mechanisms that allow the map to achieve these properties. We focus mainly on the ``what'' question and argue that it can be answered by providing \emph{interpretable rules} that describe the input-output relationship captured by $f$. For example, one rule could be that $f$ is rotation invariant, in the sense that ``$f(x)=f(x')$ whenever images $x$ and $x'$ are related by a rotation''.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth, trim=0 .9em 0 .9em]{splash_7.pdf}
\end{center}
\caption{An example of a mask learned (right) by blurring an image (middle) to suppress the softmax probability of its target class (left: original image; softmax scores above images).}
\label{f:splash}
\end{figure}
In this paper, we make several contributions. First, we propose the general framework of explanations as meta-predictors (\cref{s:meta}), extending~\cite{turner15a-model}'s work. Second, we identify several pitfalls in designing automatic explanation systems. We show in particular that neural network artifacts are a major attractor for explanations. While artifacts are informative since they explain part of the network behavior, characterizing other properties of the network requires careful calibration of the \emph{generality} and \emph{interpretability} of explanations. Third, we reinterpret network saliency in our framework. We show that this provides a natural generalization of the gradient-based saliency technique of~\cite{simonyan14deep} by \emph{integrating} information over several rounds of backpropagation in order to learn an explanation. We also compare this technique to other methods~\cite{simonyan14deep, springenberg2014striving,zhang2016top,selvaraju2016grad,zeiler2014visualizing} in terms of their meaning and obtained results.
\section{Explaining black boxes with meta-learning}\label{s:method}
A \emph{black box} is a map $f: \mathcal{X}\rightarrow\mathcal{Y}$ from an input space $\mathcal{X}$ to an output space $\mathcal{Y}$, typically obtained from an opaque learning process. To make the discussion more concrete, consider as input color images $x : \Lambda \rightarrow \mathbb{R}^3$ where $\Lambda=\{1,\dots,H\}\times\{1,\dots,W\}$ is a discrete domain.
The output $y\in\mathcal{Y}$ can be a boolean $\{-1,+1\}$ telling whether the image contains an object of a certain type (e.g.\ a \emph{robin}), the probability of such an event, or some other interpretation of the image content.
\subsection{Explanations as meta-predictors}\label{s:meta}
An \emph{explanation} is a rule that predicts the response of a black box $f$ to certain inputs. For example, we can explain the behavior of a \emph{robin} classifier by the rule $Q_1(x;f) = \{ x \in\mathcal{X}_c \Leftrightarrow f(x) = +1 \}$, where $\mathcal{X}_c \subset \mathcal{X}$ is the subset of all the \emph{robin} images. Since $f$ is imperfect, any such rule applies only approximately. We can measure the faithfulness of the explanation as its expected prediction error: $\mathcal{L}_1 = \mathbb{E}[1 - \delta_{Q_1(x;f)}]$, where $\delta_Q$ is the indicator function of event $Q$. Note that $Q_1$ implicitly requires a distribution $p(x)$ over possible images $\mathcal{X}$. Note also that $\mathcal{L}_1$ is simply the expected prediction error of the classifier. If we already know that $f$ was trained as a \emph{robin} classifier, $Q_1$ is not very insightful, but it is interpretable since $\mathcal{X}_c$ is.
Explanations can also make relative statements about black box outcomes. For example, a black box $f$ could be rotation invariant: $Q_2(x,x';f) = \{ x \sim_\text{rot} x' \Rightarrow f(x) = f(x') \}$, where $x \sim_\text{rot} x'$ means that $x$ and $x'$ are related by a rotation. Just like before, we can measure the faithfulness of this explanation as $\mathcal{L}_2 = \mathbb{E}[1-\delta_{Q_2(x,x';f)}|x \sim x']$.\footnote{For rotation invariance we condition on $x \sim x'$ because the probability of independently sampling rotated $x$ and $x'$ is zero, so that, without conditioning, $Q_2$ would be true with probability 1.} This rule is interpretable because the relation $\sim_\text{rot}$ is.
\paragraph{Learning explanations.} A significant advantage of formulating explanations as meta-predictors is that their faithfulness can be measured as prediction accuracy. Furthermore, machine learning algorithms can be used to \emph{discover explanations} automatically, by finding explanatory rules $Q$ that apply to a certain classifier $f$ out of a large pool of possible rules $\mathcal{Q}$. In particular, finding the \emph{most accurate} explanation $Q$ is similar to a traditional learning problem and can be formulated computationally as a \emph{regularized empirical risk minimization} such as:
\begin{equation}\label{e:learn}
\min_{Q\in\mathcal{Q}} \lambda\mathcal{R}(Q) + \frac{1}{n}\sum_{i=1}^n \mathcal{L}(Q(x_i;f),x_i,f), \ x_i \sim p(x).
\end{equation}
Here, the regularizer $\mathcal{R}(Q)$ has two goals: to allow the explanation $Q$ to generalize beyond the $n$ samples $x_1,\dots,x_n$ considered in the optimization and to pick an explanation $Q$ which is simple and thus, hopefully, more interpretable.
\paragraph{Maximally informative explanations.} Simplicity and interpretability are often not sufficient to find good explanations and must be paired with informativeness. Consider the following variant of rule $Q_2$: $Q_3(x,x';f,\theta) = \{ x\sim_\theta x' \Rightarrow f(x) = f(x') \}$, where $x \sim_\theta x'$ means that $x$ and $x'$ are related by a rotation of an angle $\leq \theta$. Explanations for larger angles imply the ones for smaller ones, with $\theta=0$ being trivially satisfied.
The regularizer $\mathcal{R}(Q_3(\cdot;\theta))=-\theta$ can then be used to select a maximal angle and thus find an explanation that is as informative as possible.\footnote{Naively, strict invariance for any $\theta >0$ implies invariance to arbitrary rotations as small rotations compose into larger ones. However, the formulation can still be used to describe rotation insensitivity (when $f$ varies slowly with rotation), or $\sim_\theta$'s meaning can be changed to indicate rotation w.r.t.\ a canonical ``upright'' direction for certain object classes, etc.}
\subsection{Local explanations}\label{s:local}
\begin{figure}\centering
\includegraphics[width=0.45\linewidth]{saliency_0_v2.pdf}
\includegraphics[width=0.45\linewidth]{saliency_14_v2.pdf}
\caption{Gradient saliency maps of~\cite{simonyan14deep}. A red bounding box highlights the object which is meant to be recognized in each image. Note the strong response in apparently non-relevant image regions.}
\label{f:simonyan}
\end{figure}
A \emph{local explanation} is a rule $Q(x;f,x_0)$ that predicts the response of $f$ in a neighborhood of a certain point $x_0$. If $f$ is smooth at $x_0$, it is natural to construct $Q$ by using the first-order Taylor expansion of $f$:
\begin{equation}\label{e:simo}
f(x)\approx Q(x;f,x_0) = f(x_0) + \langle \nabla f(x_0),x - x_0\rangle.
\end{equation}
This formulation provides an interpretation of~\cite{simonyan14deep}'s saliency maps, which visualize the gradient $S_1(x_0) = \nabla f(x_0)$ as an indication of salient image regions. They argue that large values of the gradient identify pixels that strongly affect the network output. However, an issue is that this interpretation \emph{breaks for a linear classifier}: if $f(x) = \langle w, x \rangle + b$, then $S_1(x_0)=\nabla f(x_0) = w$ is independent of the image $x_0$ and hence cannot be interpreted as saliency. The reason for this failure is that~\cref{e:simo} studies the variation of $f$ for arbitrary displacements $\Delta_x = x-x_0$ from $x_0$ and, for a linear classifier, the change is the same regardless of the starting point $x_0$. For a non-linear black box $f$ such as a neural network, this problem is reduced but not eliminated, and can explain why the saliency map $S_1$ is rather diffuse, with strong responses even where no obvious information can be found in the image (\cref{f:simonyan}).
We argue that the meaning of explanations depends in large part on the~\emph{meaning of varying the input $x$ to the black box}. For example, explanations in~\cref{s:meta} are based on letting $x$ vary in image category or in rotation. For saliency, one is interested in finding image regions that impact $f$'s output. Thus, it is natural to consider perturbations $x$ obtained by deleting subregions of $x_0$. If we model deletion by multiplying $x_0$ point-wise by a mask $m$, this amounts to studying the function $f(x_0 \odot m)$.\footnote{$\odot$ is the Hadamard or element-wise product of vectors.} The Taylor expansion of $f$ at $m=(1,1,\dots,1)$ is $ S_2(x_0) = \left.df(x_0 \odot m)/dm\right|_{m=(1,\dots,1)}= \nabla f(x_0) \odot x_0. $ For a linear classifier $f$, this results in the saliency $S_2(x_0) = w \odot x_0$, which is large for pixels for which $x_0$ and $w$ are large simultaneously. We refine this idea for non-linear classifiers in the next section.
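To make the distinction concrete, the following PyTorch sketch computes both $S_1$ (the plain gradient) and $S_2$ (gradient times input) for one image; the function name and the channel reduction by maximum absolute value are illustrative choices and not prescribed by~\cite{simonyan14deep}.
{\small\begin{verbatim}
import torch

def gradient_saliencies(model, x0, target_class):
    """Compute S1 = df/dx at x0 and S2 = (df/dx) * x0 for one image.

    `x0` is a 1x3xHxW tensor; `model` maps it to class scores.
    Both maps are reduced over channels by the max absolute value.
    """
    x = x0.detach().clone().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    s1 = x.grad.detach()   # plain gradient saliency
    s2 = s1 * x0           # gradient times input (mask-deletion Taylor term)
    reduce_channels = lambda s: s.abs().amax(dim=1)[0]
    return reduce_channels(s1), reduce_channels(s2)
\end{verbatim}}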
\section{Saliency revisited}\label{s:sal}
\begin{figure}\centering
\includegraphics[width=.7\linewidth]{perturbations_87.pdf}
\includegraphics[width=.7\linewidth]{perturbation_masks_87.pdf}
\caption{Perturbation types. Bottom: perturbation mask; top: effect of blur, constant, and noise perturbations.}\label{f:perturb}
\end{figure}
\subsection{Meaningful image perturbations}\label{s:perturb}
In order to define an explanatory rule for a black box $f(x)$, one must start by specifying which variations of the input $x$ will be used to study $f$. The aim of saliency is to identify which regions of an image $x_0$ are used by the black box to produce the output value $f(x_0)$. We can do so by observing how the value of $f(x)$ changes as $x$ is obtained by ``deleting'' different regions $R$ of $x_0$. For example, if $f(x_0)=+1$ denotes a \emph{robin} image, we expect that $f(x)=+1$ as well, unless the choice of $R$ deletes the robin from the image. Given that $x$ is a perturbation of $x_0$, this is a local explanation (\cref{s:local}) and we expect the explanation to characterize the relationship between $f$ and $x_0$.
While conceptually simple, there are several problems with this idea. The first one is to specify what it means to ``delete'' information. As discussed in detail in~\cref{s:artifacts}, we are generally interested in simulating naturalistic or plausible imaging effects, leading to more meaningful perturbations and hence explanations. Since we do not have access to the image generation process, we consider three obvious proxies: replacing the region $R$ with a constant value, injecting noise, and blurring the image (\cref{f:perturb}).
Formally, let $m :\Lambda \rightarrow [0,1]$ be a \emph{mask}, associating each pixel $u \in \Lambda$ with a scalar value $m(u)$. Then the perturbation operator is defined as
$$
[\Phi(x_0;m)](u) =
\begin{cases}
m(u)x_0(u) + (1-m(u)) \mu_0, & \text{constant}, \\
m(u)x_0(u) + (1-m(u)) \eta(u), & \text{noise}, \\
\int g_{\sigma_0 m(u)}(v-u) x_0(v)\,dv, & \text{blur}, \\
\end{cases}
$$
where $\mu_0$ is an average color, $\eta(u)$ are i.i.d.\ Gaussian noise samples for each pixel, and $\sigma_0$ is the maximum isotropic standard deviation of the Gaussian blur kernel $g_\sigma$ (we use $\sigma_0 = 10$, which yields a significantly blurred image).
\subsection{Deletion and preservation}\label{s:minimal}
Given an image $x_0$, our goal is to summarize compactly the effect of deleting image regions in order to explain the behavior of the black box. One approach to this problem is to find deletion regions that are maximally informative. In order to simplify the discussion, in the rest of the paper we consider black boxes $f(x)\in\mathbb{R}^C$ that generate a vector of scores for different hypotheses about the content of the image (e.g.\ as a softmax probability layer in a neural network). Then, we consider a ``deletion game'' where the goal is to find the smallest deletion mask $m$ that causes the score $f_c(\Phi(x_0;m)) \ll f_c(x_0)$ to drop significantly, where $c$ is the target class. Finding $m$ can be formulated as the following learning problem:
\begin{equation}\label{e:opt1}
m^\ast = \operatornamewithlimits{argmin}_{m \in [0,1]^\Lambda} \lambda \|\mathbf{1}-m\|_1 + f_c(\Phi(x_0;m))
\end{equation}
where $\lambda$ encourages most of the mask to be turned off (hence deleting only a small subset of $x_0$). In this manner, we can find a highly informative region for the network.
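For concreteness, the perturbation operator $\Phi$ used by this game can be sketched as follows in Python. The blur case approximates the spatially varying Gaussian by interpolating per pixel between the image and a maximally blurred copy and, as in the constant and noise cases, follows the convention that $m(u)=1$ preserves the pixel; the noise standard deviation is illustrative.
{\small\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def perturb(x0, m, mode="blur", sigma0=10.0, mu0=None):
    """Apply a perturbation operator Phi(x0; m) to an HxWx3 image.

    `m` is an HxW mask in [0, 1]; m(u) = 1 keeps the pixel and
    m(u) = 0 fully perturbs it.
    """
    m3 = m[..., None]                       # broadcast over channels
    if mode == "constant":
        mu0 = x0.mean(axis=(0, 1)) if mu0 is None else mu0
        return m3 * x0 + (1.0 - m3) * mu0
    if mode == "noise":
        eta = np.random.normal(0.0, 0.2, size=x0.shape)  # i.i.d. noise
        return m3 * x0 + (1.0 - m3) * eta
    if mode == "blur":
        # Stand-in for the spatially varying Gaussian: mix the image
        # with a copy blurred at the maximal standard deviation sigma0.
        blurred = gaussian_filter(x0, sigma=(sigma0, sigma0, 0))
        return m3 * x0 + (1.0 - m3) * blurred
    raise ValueError(mode)
\end{verbatim}}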
One can also play a symmetric ``preservation game'', where the goal is to find the smallest subset of the image that must be retained to preserve the score, $f_c(\Phi(x_0;m)) \geq f_c(x_0)$: $m^\ast = \operatornamewithlimits{argmin}_m \lambda \|m\|_1 - f_c(\Phi(x_0;m))$. The main difference is that the deletion game removes enough evidence to prevent the network from recognizing the object in the image, whereas the preservation game finds a minimal subset of sufficient evidence.
\paragraph{Iterated gradients.} Both optimization problems are solved by a local search using gradient descent methods. In this manner, our method extracts information from the black box $f$ by computing its gradient, similar to the approach of~\cite{simonyan14deep}. However, it differs in that it extracts this information progressively, over several gradient evaluations, accumulating increasingly more information over time.
\subsection{Dealing with artifacts}\label{s:artifacts}
\begin{figure}\centering
\includegraphics[width=0.9\linewidth,trim=0 .9em 0 .9em]{artifacts_random_noise_9.pdf}
\includegraphics[width=0.9\linewidth,trim=0 .9em 0 .9em]{artifacts_constant_4.pdf}
\caption{From left to right: an image correctly classified with large confidence by GoogLeNet~\cite{szegedy2015going}; a perturbed image that is not recognized correctly anymore; the deletion mask learned with artifacts. Top: A mask learned by minimizing the top five predicted classes by jointly applying the constant, random noise, and blur perturbations. Note that the mask learns to add highly structured swirls along the rim of the cup ($\gamma = 1, {\lambda}_1 = 10^{-5}, {\lambda}_2 = 10^{-3}, \beta = 3$). Bottom: A minimizing-top5 mask learned by applying a constant perturbation. Notice that the mask learns to introduce sharp, unnatural artifacts in the sky instead of deleting the pole ($\gamma = 0.1, {\lambda}_1 = 10^{-4}, {\lambda}_2 = 10^{-2}, \beta = 3$).}
\label{f:artifacts}
\end{figure}
By committing to finding a single representative perturbation, our approach incurs the risk of triggering artifacts of the black box. Neural networks, in particular, are known to be affected by surprising artifacts~\cite{kurakin2016adversarial,nguyen2015deep,mahendran2015understanding}; these works demonstrate that it is possible to find particular inputs that can drive the neural network to generate nonsensical or unexpected outputs. This is not entirely surprising since neural networks are trained discriminatively on natural image statistics. While not all artifacts look ``unnatural'', they nevertheless form a subset of images that is sampled with negligible probability when the network is operated normally.
Although the existence and characterization of artifacts is an interesting problem \emph{per se}, we wish to characterize the behavior of black boxes under normal operating conditions. Unfortunately, as illustrated in~\cref{f:artifacts}, objectives such as~\cref{e:opt1} are strongly attracted by such artifacts, and naively learn subtly-structured deletion masks that trigger them. This is particularly true for the noise and constant perturbations, as they can create artifacts using sharp color contrasts more easily than blur can (\cref{f:artifacts}, bottom row).
We suggest two approaches to avoid such artifacts in generating explanations. The first one is that powerful explanations should, just like any predictor, generalize as much as possible. For the deletion game, this means not relying on the details of a single learned mask $m$.
Hence, we reformulate the problem to apply the mask $m$ stochastically, up to small random jitter.
Second, we argue that masks co-adapted with network artifacts are \emph{not representative of natural perturbations}. As noted before, the meaning of an explanation depends on the meaning of the changes applied to the input $x$; to obtain a mask more representative of natural perturbations we can encourage it to have a simple, regular structure which cannot be co-adapted to artifacts. We do so by regularizing $m$ in total-variation (TV) norm and upsampling it from a low resolution version.
With these two modifications,~\cref{e:opt1} becomes:
\begin{multline}\label{e:opt2}
\min_{m\in[0,1]^\Lambda} \lambda_1 \|\mathbf{1}-m\|_1 + \lambda_2 \sum_{u\in\Lambda} \|\nabla m(u)\|^\beta_\beta \\
+ \mathbb{E}_\tau [ f_c(\Phi(x_0(\cdot - \tau),M)) ],
\end{multline}
where $M(v) = \sum_{u} g_{\sigma_m}(v/s - u) m(u)$ is the upsampled mask and $g_{\sigma_m}$ is a 2D Gaussian kernel. \Cref{e:opt2} can be optimized using stochastic gradient descent.
\paragraph{Implementation details.} Unless otherwise specified, the visualizations shown were generated using Adam~\cite{kingma2014adam} to minimize GoogLeNet's~\cite{szegedy2015going} softmax probability of the target class by using the blur perturbation with the following parameters: learning rate $\gamma = 0.1$, $N = 300$ iterations, ${\lambda}_1 = 10^{-4}$, ${\lambda}_2 = 10^{-2}$, $\beta = 3$, upsampling a mask ($28\times 28$ for GoogLeNet) by a factor of $\delta = 8$, blurring the upsampled mask with $g_{\sigma_{m} = 5}$, and jittering the mask by drawing an integer from the discrete uniform distribution on $[0,\tau)$ where $\tau = 4$. We initialize the mask as the smallest centered circular mask that suppresses the score of the original image by $99\%$ when compared to that of the fully perturbed image, i.e.\ a fully blurred image.
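A condensed PyTorch sketch of this optimization, under the hyper-parameters above, is given below. It assumes a classifier returning logits and a perturbation function such as the one sketched in~\cref{s:perturb}, initializes the mask uniformly rather than with the circular initialization described above, and omits the extra Gaussian smoothing of the upsampled mask; it is therefore an illustration of~\cref{e:opt2}, not our exact implementation.
{\small\begin{verbatim}
import torch
import torch.nn.functional as F

def learn_mask(model, x0, target, perturb, n_iters=300, lr=0.1,
               lam1=1e-4, lam2=1e-2, beta=3.0, mask_size=28, jitter=4):
    """Learn a low-resolution mask whose upsampled, jittered version
    suppresses the target softmax score while staying mostly 'on'
    (L1 term) and smooth (TV term)."""
    h, w = x0.shape[-2:]
    m = torch.full((1, 1, mask_size, mask_size), 0.5, requires_grad=True)
    opt = torch.optim.Adam([m], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        mask = m.clamp(0, 1)
        # Upsample slightly larger than the image, then take a
        # randomly jittered crop of image size.
        M = F.interpolate(mask, size=(h + jitter, w + jitter),
                          mode="bilinear", align_corners=False)
        dy, dx = [int(v) for v in torch.randint(0, jitter, (2,))]
        M = M[..., dy:dy + h, dx:dx + w]
        l1 = lam1 * (1.0 - mask).abs().sum()        # lambda_1 ||1 - m||_1
        tv = lam2 * ((mask[..., :, 1:] - mask[..., :, :-1]).abs()
                     .pow(beta).sum()
                   + (mask[..., 1:, :] - mask[..., :-1, :]).abs()
                     .pow(beta).sum())
        score = F.softmax(model(perturb(x0, M)), dim=1)[0, target]
        (l1 + tv + score).backward()
        opt.step()
    return m.detach().clamp(0, 1)
\end{verbatim}}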
Still, understanding what a classifier does can help to understand how the predictor works too. The ``what'' question is answered by providing \emph{interpretable descriptions} of the mapping $f$. For example, a binary classifier $f$ trained to recognise robins can be expected to be \emph{discriminative} for robins, in the sense that ``if, and only if, image $x$ contains a robin, then $f(x)=+1$''. $f$ could also be rotation invariant, in the sense that ``if images $x$ and $x'$ are related by a rotation, then $f(x)=f(x')$''.

Generalising these examples, an explanation is a rule that predicts the response of the map $f$ to different inputs. Rules with a strong explanatory power are simple, interpretable, general, and informative. For example, the fact that the output of a binary classifier $f$ is either $-1$ or $+1$ is trivial and thus uninformative. The map $f$ perfectly predicts itself, but it does not provide an interpretable or a simple rule. Discrimination is a non-trivial, mildly informative (because we would expect a classifier to be correct) and interpretable rule. Discrimination is also somewhat complex, as it requires defining the set $\mathcal{X}_+ \subset \mathcal{X}$ of images containing the target class (importantly, this definition is given independently of $f$). Rotation invariance is, by comparison, a simpler rule.

It is easy to provide a partial formalisation of such ideas. Discrimination at $x\in\mathcal{X}$ is expressed by the predicate
\[
Q_1(x;f) = \ensuremath\llbracket x\in\mathcal{X}_c \Leftrightarrow f(x) = +1 \ensuremath\rrbracket.
\]
Rotation invariance is expressed as $Q_2(x,x';f) = \ensuremath\llbracket x \sim_\text{rot} x' \Rightarrow f(x) = f(x') \ensuremath\rrbracket$, where $x \sim_\text{rot} x'$ means that images $x$ and $x'$ are related by a rotation. Since the learned map $f$ is usually imperfect, it is unreasonable to expect similar rules to be correct everywhere. If $p(x)$ is a distribution on images, we can measure the validity of such explanations by their prediction errors $1 - P[Q_1(x;f)]$ or $1 - P[Q_2(x,x';f)]$. This suggests that explanatory rules should themselves be interpreted as predictors. Note that the map $f$ is also a predictor, learned from data to solve a certain problem. In our discussion, however, the map $f$ is fixed, and we look to compactly predict and hence explain its output. Note that, differently from most machine learning scenarios, in this scenario one is often interested in \emph{relative predictions} (see rotation invariance).

\subsubsection{Finding explanations}\label{s:finding}

By using the prediction error of an explanation, we can test its validity. Given a large pool of \emph{possible explanations} $Q \in \mathcal{Q}$, we may be interested in determining automatically \emph{which} explanations apply accurately to a given predictor $f$. In particular, finding the \emph{most accurate} explanation $Q\in\mathcal{Q}$ is similar to a traditional learning problem and can be formulated computationally as a \emph{regularised empirical risk minimization} task:
\[
\min_{Q\in\mathcal{Q}} \lambda\mathcal{R}(Q) + \frac{1}{n}\sum_{i=1}^n \big(1 - Q(x_i;f)\big),
\qquad
x_i \sim p(x).
\]
Here, the regularisation term $\mathcal{R}(Q)$ has two goals: to allow the explanation $Q$ to generalize beyond the $n$ samples $x_1,\dots,x_n$ considered in the optimization and to pick an explanation $Q$ which is simple and hence, hopefully, interpretable.
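To make this selection procedure concrete, the following is a minimal sketch (Python/NumPy) of estimating the regularised empirical risk for a small pool of candidate rules applied to a toy black box. The black box, the candidate rules and their complexity scores $\mathcal{R}(Q)$ are illustrative assumptions, not part of the method itself.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy black box: +1 if the top half of the image is brighter than the bottom half.
def f(x):
    return 1 if x[:4].mean() > x[4:].mean() else -1

# Ground-truth class membership, defined independently of f (used by discrimination).
def in_class(x):
    return x[:4].mean() > x[4:].mean()

# Candidate explanations Q(x; f) in {0, 1} and their complexity scores R(Q).
rules = [
    ("discrimination",       lambda x: float((f(x) == 1) == in_class(x)), 2.0),
    ("horizontal-flip inv.", lambda x: float(f(x) == f(x[:, ::-1])),      1.0),
    ("vertical-flip inv.",   lambda x: float(f(x) == f(x[::-1, :])),      1.0),
]

lam = 0.01
samples = [rng.uniform(0.0, 1.0, size=(8, 8)) for _ in range(1000)]   # x_i ~ p(x)

# Regularised empirical risk: lam * R(Q) + (1/n) sum_i (1 - Q(x_i; f)).
results = []
for name, q, complexity in rules:
    err = np.mean([1.0 - q(x) for x in samples])
    results.append((lam * complexity + err, name, err))
for risk, name, err in sorted(results):
    print(f"{name:22s} error = {err:.3f}  regularised risk = {risk:.3f}")
\end{verbatim}
In this toy example horizontal-flip invariance and discrimination both have zero prediction error, and the regulariser breaks the tie in favour of the simpler rule, while vertical-flip invariance is rejected because it does not hold for this particular black box.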
\subsubsection{Other explanation types}\label{s:types}

So far, we have considered (semi-)categorical explanations, but many other interesting cases are possible. For instance, if $f$ is a regressor and the output space $\mathcal{Y}\subset\mathbb{R}^d$ is continuous, categorical predictions are not useful. Instead, explanations can also perform predictions in the same space $\mathcal{Y}$ and their accuracy can be measured by suitable error terms other than the boolean prediction error used above.

Here is a particularly relevant example. Consider a neural network $f$ producing a confidence score in $\mathbb{R}$. Then we can \emph{explain $f$ locally} by means of the function derivatives:
\[
f(x + \delta_x) \approx f(x) + \langle \nabla f(x),\delta_x\rangle.
\]
The explanation is the gradient $\nabla f(x)$ which describes how the map $f$ behaves locally. It corresponds to the \emph{network saliency} of~\cite{simonyan14deep} and, as suggested by those authors, it can be visualized as an \emph{image}.

\paragraph{Derivation of saliency in this framework.}

Here is how the concept of network saliency emerges in our framework. To express the fact that we are seeking a local approximation, we consider a \emph{distribution} $p(x|x_0)$ of images $x$ around a reference image $x_0$. We then approximate $f(x)$ by means of a linear function $\langle w, x-x_0\rangle$ as follows:
\[
\min_w \lambda |w|^2 + E_{p(x|x_0)} \left[ (f(x) - f(x_0) - \langle w, x-x_0 \rangle)^2 \right]
\]
We see that the optimum is given by zeroing the gradient w.r.t.\ $w$:
\[
0= \lambda w + E \left[ (x-x_0) (f(x) - f(x_0) - \langle w, x-x_0 \rangle) \right]
\]
Hence
\[
w = \left\{\lambda I + E[(x-x_0)(x-x_0)^\top]\right\}^{-1} E[(x-x_0) (f(x) - f(x_0))].
\]
Assume that $p(x|x_0)$ is a Gaussian centred on $x_0$ and with covariance $\Sigma_x$. Assume also that $f(x)-f(x_0)= \langle \nabla f(x_0), x-x_0\rangle$. Then
\[
w = (\lambda I + \Sigma_x)^{-1}\Sigma_x \nabla f(x_0).
\]
If the covariance is isotropic, then this is simply a scaled version of Simonyan et al.\ saliency.

\paragraph{Generalization 1 (bad?).}

Consider now a distribution over $x$ obtained by applying a translational jitter $\delta_u \sim \mathcal{N}(0,\sigma_u^2 I)$ to $x_0$:
\[
x(u) = x_0(u) + \langle \nabla x_{0}(u), \delta_u \rangle
\]
\[
E[(x(u)-x_0(u))(x(v) - x_0(v))] = \sigma_u^2 \langle \nabla x_{0}(u), \nabla x_{0}(v) \rangle
\]
Hence in this case $\Sigma_x$ has a particularly simple form. Let $\nabla x$ be the $2 \times N^2$ matrix of the gradients of the image at each pixel; then:
\[
\Sigma_x = \sigma_u^2 (\nabla x)^\top(\nabla x) = \sigma_u^2 U^\top S U
\]
where $U$ is a $2\times N^2$ orthonormal matrix ($U U^\top = I_{2\times 2}$). For the majority of images, we can assume that $S= s^2I$ is diagonal, so that
\[
w = \sigma_u^2 s^2(\lambda I + \sigma_u^2 s^2 U^\top U)^{-1} U^\top U \nabla f(x_0).
\]

\paragraph{Explaining a linear predictor.}

A linear predictor is a simple rule $f(x) = \langle w, x \rangle$. The rule itself is mechanically straightforward: the prediction is the sum of the weighted data components $w_ix_i$. Applying the saliency framework discussed above, the \emph{explanation} of the classifier is simply $w$ itself. This explanation is also independent of the particular input $x$. This is because, since the predictor is linear, any change $\delta_x$ to the pattern results in the same variation $\langle w, \delta_x \rangle$ of the prediction score regardless of the specific value of $x$.
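As a numerical sanity check of this derivation, the following sketch (Python/NumPy; the quadratic toy score and all parameter values are illustrative assumptions) fits $w$ by regularised least squares over Gaussian perturbations of $x_0$ and compares it with the closed form $(\lambda I + \Sigma_x)^{-1}\Sigma_x \nabla f(x_0)$, which for isotropic $\Sigma_x$ is a scaled gradient.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 10

# Smooth toy score f(x) = a.x + 0.5 x.B.x, so that grad f(x0) = a + B x0.
a = rng.normal(size=d)
B = rng.normal(size=(d, d))
B = 0.1 * (B + B.T)
f = lambda x: a @ x + 0.5 * x @ B @ x
x0 = rng.normal(size=d)
grad = a + B @ x0

# Gaussian perturbations around x0 with isotropic covariance sigma^2 I.
sigma, lam, n = 0.05, 1e-3, 20000
X = x0 + sigma * rng.normal(size=(n, d))
y = np.array([f(x) - f(x0) for x in X])
D = X - x0

# Empirical ridge solution of  min_w  lam |w|^2 + E[(f(x) - f(x0) - <w, x - x0>)^2].
w = np.linalg.solve(lam * np.eye(d) + D.T @ D / n, D.T @ y / n)

# Closed form for isotropic Sigma_x: a scaled version of the gradient (saliency).
w_closed = sigma**2 / (lam + sigma**2) * grad

cos = w @ grad / (np.linalg.norm(w) * np.linalg.norm(grad))
print("cosine(w, grad f(x0)) =", round(float(cos), 4))
print("max |w - w_closed|    =", float(np.abs(w - w_closed).max()))  # small, up to sampling noise
\end{verbatim}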
\bibliographystyle{plain}

\section{Old text}

Answering the ``what'' question means describing the properties of the mapping $f$. For example, a classifier $f$ is often characterized in terms of its expected error rate. Other characterizations include: whether $f$ is invariant to a viewpoint change, if there are intra-class variations of the object that may confuse $f$, if $f$ recognizes the object by using contextual cues, and so on. Answering the ``how'' question means clarifying which computational mechanisms are used by $f$ in order to achieve such properties. While the discussion has so far been informal, the next section attempts to formalize these concepts, focusing on the ``what'' question.

\subsection{Explaining what a predictor does}

An explanation of the predictor map $f:\mathcal{X}\rightarrow\mathcal{Y}$ is a succinct and meaningful characterization of some of its properties. For a binary classifier, the most obvious property is to be correct. Formally, if $\mathcal{X}_+$ is the subset of images containing the object and $\mathcal{X}_-$ is its complement, correctness is given by the property:
\[
\forall x\in\mathcal{X}:\quad e_1(x) = [f(x) = 1 \Leftrightarrow x \in \mathcal{X}_+].
\]
Another property is that ``$f$ is invariant to rotations of the image'', i.e.\
\[
\forall x\in\mathcal{X},\ \theta\in[0,2\pi):\quad e_2(x,\theta) = [f(x) = f(R_\theta[x])],
\]
where $R_\theta$ rotates an image by the angle $\theta$. Since predictors are rarely perfect, correct classification, invariance, or any other property may not be strictly satisfied; such explanations may nevertheless be useful if they apply with sufficiently large probabilities $P[e_1(x)]$, $P[e_2(x,\theta)]$. Note that computing such probabilities requires defining background probabilities $p(x)$ over images and/or over parameters such as rotations $\theta$.

Explanations should be informative. There are many uninformative true statements about $f$. For example, $e_3(x) = [|f(x)| \leq 1]$ is trivially satisfied by a binary classifier. Intuitively, a non-trivial explanation is one that applies to a small number of mappings $\mathcal{X}\rightarrow\mathcal{Y}$ and therefore distinguishes $f$ from other possible mappings. Making a quantitative theory out of this intuition is beyond the scope of this work.

Explanations may be expressed in terms of logical statements as done in the examples above, but this is not necessarily the case. More generally, an explanation may be built as an interpretable approximation of the predictor $f$, possibly valid only locally, in the vicinity $\mathcal{N}(x_0)$ of some point $x_0$. For instance:
\[
\forall x \in \mathcal{N}(x_0): \quad e_4(x) = [ \| f(x) - f_e(x) \|^2 \leq \tau ]
\]
where $f_e(x)$ is a mapping that we understand. For instance, if $x \in \mathbb{R}^D$ is an image and $f(x) \in \mathbb{R}$ is the score computed by a convolutional neural network $f:\mathbb{R}^D \rightarrow \mathbb{R}$, then the notion of \emph{network saliency} of~\cite{simonyan14deep}
\[
f_e(x) = \langle \nabla f(x_0), x-x_0\rangle + f(x_0)
\]
is a local linear approximation of the classifier $f$ at image $x_0$. Note that~\cite{simonyan14deep} interpret $\nabla f(x)$ as a saliency measure; here, instead, the same gradient is used to define an approximation of the predictor $f$ that explains its structure locally. The latter use of the predictor gradient is due to~\cite{turner15a-model}.
In the same paper, the authors suggest that, for other types of data for which individual dimensions of $x$ have a meaning, the approximating predictor $f_e$ may be expressed as a small decision tree or another simple interpretable rule. TODO: Find more references to this concept.

Interpretability is an important property of explanations since the main motivation for constructing them is to understand a predictor. For example, the gradient $\nabla f(x_0)$ can be displayed as an image to illustrate intuitively what changes to the image $x_0$ result in a change of the classifier output, and which ones do not. Explanations such as correctness and invariance use notions such as image class or image rotation that are also intuitive and interpretable.

In all examples above, explanations are interpretable rules that make predictions on the output of the mapping $f$. We readily distinguish two types of such predictions:
\begin{itemize}
\item {Absolute predictions} such as correctness expect the mapping $f$ to attain specific values at specific points in the domain.
\item {Relative predictions} such as invariance expect the value $f(x)$ to be related to the value $f(x')$ where points $x'$ and $x$ are related in some way. One can have equivariance relations of the type $f(gx)=gf(x)$, where a certain transformation $g$ acts on both input and output spaces, and invariance $f(gx)=f(x)$ as a special case.
\end{itemize}

\subsection{Invariance}

Most predictors are deemed to be \emph{invariant} to certain variability factors affecting the data. For example, in image recognition, a predictor $f$ is often \emph{translation invariant}, in the sense that $f(x) \approx f(\mathcal{L}_t(x))$ where $\mathcal{L}_t$ is an operator that translates the image $x$ by an amount $t$. More generally, if $g : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ is a bijection such as a rotation, a scaling, or a more general affine distortion of the plane, then one may check for invariances $\forall x \in X: f(x) \approx f(gx)$.

This definition of invariance, which involves only mathematical concepts, is not very useful in practice. Note that a function which is invariant to warps $g$ and $g'$ must be invariant to $g'g$ as well (because $f((g'g)x) = f(g'(gx)) = f(gx) = f(x)$). Thus invariance to image warps, as well as any other class of composable transformations, is only possible for entire subgroups $G$ of the transformation group $\mathbb{R}^2 \rightarrow \mathbb{R}^2$. Transformation groups are generally very large. The most general group $\mathcal{G}(\mathbb{R}^2)$ contains all invertible maps $\mathbb{R}^2 \rightarrow \mathbb{R}^2$ (think of these as permutations of all points of the plane), and usual restrictions consider the subgroups of continuous $\mathcal{C}^0(\mathbb{R}^2)$ or $k$-times differentiable $\mathcal{C}^k(\mathbb{R}^2)$ (diffeomorphisms) maps. Other common restrictions consider Lie groups such as affine transformations, dilations, similarities, isometries (Euclidean transformations), translations, etc. Even if we fix a single transformation, say a translation $g$ by two pixels to the right, the smallest group containing $g$, i.e.\ the cyclic group generated by $g$, contains all iterated translations $g^n$. Since the image $x$ is in practice finite (in resolution and extent), any such group is clearly too large: a predictor cannot be invariant to a translation that moves the object out of the image, or to a scaling that makes it smaller than a single pixel.
Thus, such notions are useful only in an idealized sense, but are not directly applicable to actual predictors. In order to make invariance a more useful concept, one needs to add to mathematical constructions a number of statistical assumptions. For example, note that, if $g$ is a transformation of the plane, $gx$ is undefined if $x$ is a digital image, which has a finite resolution and extent. The finite resolution of $x$ is usually addressed by making the \emph{statistical} assumption that $x$ is the sampled version of a band-limited image, so that one can use interpolation. The finite extent of the image is addressed by continuing the image by using padding or reflection.

We may ask, are such assumptions reasonable? One approach to answer this question, which will be considered in more detail later, is to realize that transformations of the image arise as proxies to transformations occurring in the physical world; for example, a visual classifier should be invariant to motions of the 3D camera, which can sometimes be approximated by certain image transformations. In this sense, the assumptions above are not particularly good: translating a physical camera will reveal new information (by unoccluding portions of the scene or by revealing small details) which cannot be, and in fact is not, recovered by emulating transformations in image space.

One can define invariance in very general terms by considering any partition of the set of images $X$, which in turn induces an equivalence relation $x \sim x'$ between images. The mapping $f$ is invariant with respect to $\sim$ when
\[
x\sim x' \quad\Rightarrow\quad f(x) = f(x').
\]

\subsection{The decision support of image predictors}

We now focus our attention on predictors $f :\mathcal{X}\rightarrow\mathcal{Y}$ operating on the set $X = \mathbb{R}^{H\times W \times C}$ of $H \times W$ digital images with $C$ color channels. Deep convolutional neural networks are the most important family of such predictors. Compared to a general predictor, one obvious property of an image predictor is the fact that the input data has a \emph{spatial structure}. Most visual objects are local phenomena, in the sense that their projection on the image has a finite extent. One natural question is whether a predictor focuses on the extent of such objects, or whether it uses contextual cues in the image. As noted above, given an example image $x_0$, one may study the \emph{sensitivity} of the predictor function $f(x)$ to local changes in the input $x_0$.
\begin{itemize}
\item $S(\Omega) = E[|f(x_0 + \eta_\Omega) - f(x_0)|]$.
\item $\operatorname{supp}_\tau f = \cup_{S(\Omega) \geq \tau} \Omega$.
\end{itemize}
\begin{itemize}
\item Definition of saliency/what is a good explanation? How do you measure the quality of an explanation? Why do we want to explain what the networks are doing/learning? (i.e. correlation not causation, human examples; systematic bias -- cardigan example)
\item Single-shot change vs summary of multiple changes
\item Optimization vs bulk change (box occluder)
\item Meaning of editing operation
\item Deletion vs preservation of object
\item Mask regularization
\item Eliminate mask-introduced artifacts (i.e. regularize so that the network prediction is well-distributed across classes)
\end{itemize}

\section{Unraveling black-box decisions}

Let $x \in \mathcal{X}$ denote an image. In general, a grayscale image is a function $x : D \rightarrow \mathbb{R}$ where $D = \{1,2,\dots,N\}^2$ is a two-dimensional $N\times N$ lattice.
It is also possible to consider 1D images (time signals). We are given a black-box binary classifier function $f : \mathcal{X} \rightarrow \{+1,-1\}$ separating the set of images into positive and negative. Usually the positive class, i.e.\ the subset of images that are assigned positive label, is much smaller than the negative one. Given a positive image $x$, such that $f(x)=+1$, we would like to find an \emph{explanation} of why the classifier assigns a positive label to $x$. Defining explanation is challenging. We consider several approaches below.

\subsection{Explanation as simplified interpretable classification rules}

We consider first the approach of~\cite{turner15a-model}. For them, an explanation $e$ of a binary classifier $f$ is also a binary classifier $e : \mathcal{X}\rightarrow\{+1,-1\}$, but with a particularly simple and interpretable structure (e.g.\ a conjunction of a few binary tests). The collection of explanation functions $e$ is denoted $\mathcal{E}$. Each explanation is regarded as an ``approximation'' $e \approx f$ of the black-box binary classifier $f$ that one wants to understand. The quality of the approximation is measured based on how well the responses of $e$ and $f$ agree \emph{on average} with respect to a certain background data distribution $p(x)$. The agreement can be measured, for example, as the mutual information $I(e,f)$.

In order to explain a \emph{particular} decision $f(x)=+1$ for a certain data point $x$, one looks for the explanation $e\in\mathcal{E}$ that is the best approximation to $f$ in general while matching the output of $f$ at $x$ (i.e.\ $e(x) = f(x) = +1$). Hence, the explanation is a semi-local interpretable approximation of the classifier. Note that, in this framework but likely in general, while one seeks an explanation for a particular data point $x$, the latter must somehow reflect the response of the classifier $f$ on other points as well. In the formulation above, for example, it is trivial to construct a simple rule that matches any data point $x$ (e.g.\ $e(z) = \delta_{\{z=x\}}$); instead, a useful explanation must be interpretable while at the same time being general, i.e.\ predictive of as many other decisions $f(z)=e(z)$ as possible.

This formulation requires a mechanism for generating interpretable explanations. The authors of~\cite{turner15a-model} propose to use a small decision tree or conjunction of simple tests, which in turn must be based on interpretable data statistics. This idea is difficult to apply to images. However,~\cite{turner15a-model} demonstrates a simple image-based experiment. There, explanations are linear classifiers $e(x;w,b) = \operatornamewithlimits{sign}( \langle w,x \rangle +b)$ whose weight vectors are eigenfaces $w \in \{w_1,\dots,w_n\}$. In this framework, a face classification $f(x)=1$ is explained by choosing $k \in \{1,\dots,n\}$ and $b \in \mathbb{R}$ such that
\[
\max_{k,b} I(e(z;w_k,b),f(z))_{z\sim p} \quad \text{such that\ } e(x;w_k,b) = +1
\]
Such explanations are in effect linear classifiers. They are interpretable in the sense that they produce pixel weight masks or heatmaps. This has a connection with the standard notion of network saliency:
\[
f(z) \approx e(z;w,b) = \langle w,z \rangle + b,
\]
where
\[
w = \frac{df(x)}{dx}, \qquad b = f(x) - \langle w, x \rangle.
\]
Note that $w$ then defines a local linear approximation of the classifier $f$ around $x$. Generalizing these considerations, we may think of these as \emph{explanation via local classifier simplification}.
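For concreteness, here is a small numerical sketch of this selection rule (Python/NumPy). The random ``eigenfaces'', the linear toy classifier standing in for $f$, and the grid over $b$ are illustrative assumptions and not the experimental setup of~\cite{turner15a-model}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n_faces, n_samples = 64, 5, 5000

# Toy black box f: a fixed linear classifier with outputs in {+1, -1}.
w_true = rng.normal(size=d)
def f(Z):
    return np.where(Z @ w_true > 0.0, 1, -1)

# Dictionary of candidate directions ("eigenfaces") w_1, ..., w_n.
W = rng.normal(size=(n_faces, d))
def e(Z, w, b):
    return np.where(Z @ w + b > 0.0, 1, -1)

def mutual_information(u, v):
    """Mutual information (nats) of two {+1,-1} arrays, from the empirical joint."""
    mi = 0.0
    for a in (-1, 1):
        for c in (-1, 1):
            p = np.mean((u == a) & (v == c))
            if p > 0:
                mi += p * np.log(p / (np.mean(u == a) * np.mean(v == c)))
    return mi

Z = rng.normal(size=(n_samples, d))   # background samples z ~ p(z)
x = rng.normal(size=(1, d))           # the data point whose decision is to be explained
fx = int(f(x)[0])

best = None
for k in range(n_faces):
    for b in np.linspace(-20.0, 20.0, 81):   # grid search over the bias
        if int(e(x, W[k], b)[0]) != fx:
            continue                          # constraint: e must agree with f at x
        mi = mutual_information(e(Z, W[k], b), f(Z))
        if best is None or mi > best[0]:
            best = (mi, k, b)

if best is None:
    print("no candidate satisfies the constraint at x")
else:
    print("best eigenface k=%d, b=%.2f, I(e,f)=%.3f nats" % (best[1], best[2], best[0]))
\end{verbatim}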
\subsection{Explanation support}

With images and similar distributed signals $x : D \rightarrow \mathbb{R}$, often only a subset $\Omega \subset D$ of the pixels should be relevant for a given decision $f(x)=1$. A typical example is object recognition: only foreground pixels, i.e.\ the pixels belonging to the object, should be directly responsible for the decision, whereas background pixels should provide at most indirect evidence. The question, then, is how one can identify which pixels $x$ are ``directly responsible'' for a given decision.

This apparently innocuous question is fraught with difficulties. The starting point is simple enough: if only pixels $\Omega$ are responsible for a decision, then: 1) deleting the pixels in $D \setminus \Omega$ should not change the decision while 2) deleting the pixels in $\Omega$ should change the decision. However:
\begin{itemize}
\item ``Deleting'' pixels is not clearly defined. We may delete them by setting them to zero, to noise, to some other image, by smoothly infilling the hole, and so on.
\item Pixels at the boundary of the object are often very informative, as the silhouette carries much of the evidence. Since the silhouette is recognized by contrasting foreground and background pixels, clearly both pixel types are responsible for the recognition of the object. For example, consider a black disk on a white background. Neither the white nor the black infill is very informative; rather, the shape is recognized by observing the transition from white to black.
\item Deleting a small subset of strategically-placed pixels may be enough to make the object invisible to the network. For instance, one may just delete the boundary and preserve the interior, or perhaps delete every other row. While not all object pixels are deleted, the network does fail to recognize the object.
\item Conversely, conserving a subset of strategically-placed pixels may be enough to allow the network to still recognize the object. For example, preserving the object silhouette when deleting only the interior, or preserving only a distinctive part of the object while deleting the rest.
\item Deletion, just like any other image editing operation, may introduce artifacts. E.g.\ an image of an object can be carved out by deleting bits of a uniform background. More subtle artifacts in the way neural networks parse images may be triggered by highly-specialized (overfitted) forms of image editing driven for example by gradient descent.
\end{itemize}

\begin{example}
Let's consider a 1D function $x: [0,1] \rightarrow \mathbb{R}$. Consider a classifier
$$
f(x) =
\begin{cases}
1, & \exists u\in [0,1] : x(u) > 0, \\
0, & \text{otherwise}.
\end{cases}
$$
I.e.\ the classifier fires if the ``image'' $x$ has at least one pixel $u$ such that $x(u) > 0$. Now we see that \emph{any} such pixel is a support for the decision $f(x)=+1$, in the sense that we can change the value of any other pixel obtaining a second image $x'$ such that $f(x')=+1$ as well. However, a better explanation is probably inclusive of all such sufficient pixels.
\end{example}

\paragraph{Multi-category classification}

In reality, most black-box algorithms are not solving simple binary classification problems. In the multi-category classification setting (e.g.\ ImageNet classification), there may be competition between categories.

\begin{example}
Consider a multi-category classifier that classifies a shape as one of three possibilities: a triangle, a square, or a circle.
\end{example}

\section{Related work}\label{s:related}

Our work builds on~\cite{simonyan14deep}'s gradient-based method, which backpropagates the gradient for a class label to the image layer. Other backpropagation methods include DeConvNet~\cite{zeiler2014visualizing} and Guided Backprop~\cite{springenberg2014striving,mahendran2016salient}, which builds on DeConvNet~\cite{zeiler2014visualizing} and~\cite{simonyan14deep}'s gradient method to produce sharper visualizations. Another set of techniques incorporates network activations into their visualizations: Class Activation Mapping (CAM)~\cite{zhou2016learning} and its relaxed generalization Grad-CAM~\cite{selvaraju2016grad} visualize the linear combination of a late layer's activations and class-specific weights (or gradients for~\cite{selvaraju2016grad}), while Layer-Wise Relevance Propagation (LRP)~\cite{bach2015pixel} and Excitation Backprop~\cite{zhang2016top} backpropagate a class-specific error signal through a network while multiplying it with each convolutional layer's activations.

With the exception of~\cite{simonyan14deep}'s gradient method, the above techniques introduce different backpropagation heuristics, which results in aesthetically pleasing but heuristic notions of image saliency. They are also not model-agnostic, with most being limited to neural networks (all except~\cite{simonyan14deep,bach2015pixel}) and many requiring architectural modifications~\cite{zeiler2014visualizing,springenberg2014striving,mahendran2016salient,zhou2016learning} and/or access to intermediate layers~\cite{zhou2016learning,selvaraju2016grad,bach2015pixel,zhang2016top}.

A few techniques examine the relationship between inputs and outputs by editing an input image and observing its effect on the output. These include greedily graying out segments of an image until it is misclassified~\cite{zhou2014object} and visualizing the classification score drop when an image is occluded at fixed regions~\cite{zeiler2014visualizing}. However, these techniques are limited by their approximate nature; we introduce a differentiable method that allows for the effect of the joint inclusion/exclusion of different image regions to be considered.

Our research also builds on the work of~\cite{turner15a-model,ribeiro2016should,cao2015look}. The idea of explanations as predictors is inspired by the work of~\cite{turner15a-model}, which we generalize to new types of explanations, from classification to invariance. The Local Interpretable Model-Agnostic Explanation (LIME) framework~\cite{ribeiro2016should} is relevant to our local explanation paradigm and saliency method (sections~\ref{s:local},~\ref{s:sal}) in that both use a function's output with respect to inputs from a neighborhood around an input $x_0$ that are generated by perturbing the image. However, their method takes much longer to converge ($N = 5000$ vs.\ our $300$ iterations) and produces a coarse heatmap defined by fixed super-pixels. Similar to how our paradigm aims to learn an image perturbation mask that minimizes a class score, feedback networks~\cite{cao2015look} learn gating masks after every ReLU in a network to maximize a class score. However, our masks are plainly interpretable as they directly edit the image, while~\cite{cao2015look}'s ReLU gates are not and cannot be directly used as a visual explanation; furthermore, their method requires architectural modification and may yield different results for different networks, while ours is model-agnostic.
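To make the occlusion baseline mentioned above concrete, here is a minimal sketch (Python/PyTorch) of an occlusion sensitivity map in the spirit of~\cite{zeiler2014visualizing}; the patch size, stride, gray fill value, model loading and the random stand-in image are illustrative assumptions rather than the exact protocol of that work.
\begin{verbatim}
import numpy as np
import torch
import torchvision

model = torchvision.models.googlenet(weights="DEFAULT").eval()
x0 = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed input image
target = 282                      # class index whose evidence we probe
patch, stride, fill = 45, 15, 0.5

with torch.no_grad():
    p0 = torch.softmax(model(x0), dim=1)[0, target].item()
    n = (224 - patch) // stride + 1
    heat = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            x = x0.clone()
            x[..., i*stride:i*stride+patch, j*stride:j*stride+patch] = fill
            p = torch.softmax(model(x), dim=1)[0, target].item()
            heat[i, j] = p0 - p   # large drop => occluded region is important
print("largest score drop at patch (row, col):",
      np.unravel_index(heat.argmax(), heat.shape))
\end{verbatim}
Such fixed-grid occlusion only probes one region at a time, which is exactly the limitation that the differentiable mask of~\cref{e:opt1} is designed to overcome.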
\begin{figure*}\centering \includegraphics[width=\linewidth]{compare_saliency_12_w_titles.pdf} \includegraphics[width=\linewidth]{compare_saliency_10.pdf} \includegraphics[width=\linewidth]{compare_saliency_11.pdf} \includegraphics[width=\linewidth]{compare_saliency_17.pdf} \includegraphics[width=\linewidth]{compare_saliency_151.pdf} \includegraphics[width=\linewidth]{compare_saliency_30.pdf} \includegraphics[width=\linewidth]{compare_saliency_79.pdf} \includegraphics[width=\linewidth]{compare_saliency_322.pdf} \includegraphics[width=\linewidth]{compare_saliency_186.pdf} \includegraphics[width=\linewidth]{compare_saliency_156.pdf} \includegraphics[width=\linewidth]{compare_saliency_206.pdf} \caption{Comparison with other saliency methods. From left to right: original image with ground truth bounding box, learned mask subtracted from 1 (our method), gradient-based saliency~\cite{simonyan14deep}, guided backprop~\cite{springenberg2014striving,mahendran2016salient}, contrastive excitation backprop~\cite{zhang2016top}, Grad-CAM~\cite{selvaraju2016grad}, and occlusion~\cite{zeiler2014visualizing}.} \label{f:comparison} \end{figure*} \section{Scratch} \begin{table} \begin{center} \begin{tabular}{|r|c|c|c|c|c|c|c|c|} \hline & \footnotesize{~\cite{simonyan14deep}} & \footnotesize{~\cite{springenberg2014striving,mahendran2016salient}} & \footnotesize{~\cite{zhang2016top}} & \footnotesize{C-~\cite{zhang2016top}} & \footnotesize{~\cite{cao2015look}} & \footnotesize{~\cite{bach2015pixel}} & \footnotesize{~\cite{zhou2016learning}}& \footnotesize{Mask} \\ \hline\hline \footnotesize{Value-$\alpha$*} & \footnotesize{0.25} & \footnotesize{0.05} & \footnotesize{0.15} & \footnotesize{---} & \footnotesize{---} & \footnotesize{---} & \footnotesize{---} &\footnotesize{0.10} \\ \footnotesize{Err (\%)} & \footnotesize{47.0} & \footnotesize{51.2} & \footnotesize{47.1} & \footnotesize{---} & \footnotesize{---} & \footnotesize{---} & \footnotesize{---} &\footnotesize{45.1} \\ \hline\hline \footnotesize{Energy-$\alpha$*} & \footnotesize{0.10} & \footnotesize{0.30} & \footnotesize{0.60} & \footnotesize{---} & \footnotesize{0.95} & \footnotesize{---} & \footnotesize{---} & \footnotesize{0.95} \\ \footnotesize{Err (\%)} & \footnotesize{44.9} & \footnotesize{48.0} & \footnotesize{39.9} & \footnotesize{---} & \footnotesize{38.8${}^\dagger$} & \footnotesize{---} & \footnotesize{---} & \footnotesize{44.3} \\ \hline\hline \footnotesize{Mean-$\alpha$*} & \footnotesize{5.0} & \footnotesize{4.5} & \footnotesize{1.5} & \footnotesize{0.0} & \footnotesize{---} & \footnotesize{1.0} & \footnotesize{1.0} & \footnotesize{0.5} \\ \footnotesize{Err (\%)} & \footnotesize{41.7${}^\mathsection$} & \footnotesize{42.0${}^\mathsection$} & \footnotesize{39.0${}^\mathsection$} & \footnotesize{57.0${}^\dagger$} & \footnotesize{---} & \footnotesize{57.8${}^\dagger$} & \footnotesize{48.1${}^\dagger$} & \footnotesize{44.3}\\ \hline \end{tabular} \end{center} \caption{Optimal alpha thresholds and error rates from the weak localization task on the ImageNet validation set using saliency heatmaps to generate bounding boxes. *Feedback error rate are taken from~\cite{cao2015look}; contrastive excitation backprop, LRP and CAM error rates are taken from~\cite{zhang2016top}. 
${}^\mathsection$Using public code provided in~\cite{zhang2016top}, we recalculated these error rates, which are within $0.4\%$ of the originally reported rates.}
\label{t:localization}
\end{table}

Generalizing these examples, an explanation is a rule that predicts the response of the map $f$ to certain inputs. Rules with a strong explanatory power are simple, general, interpretable, and informative. For example, a tautology such as ``the output of the binary classifier $f$ is either $-1$ or $+1$'' is uninformative. Using $f$ as a rule to predict its own values is equally useless as it is neither simple nor interpretable. Discrimination is interpretable but only mildly informative because we would expect a classifier to be correct for its task. The discrimination rule above is complex as it requires defining the set $\mathcal{X}_+ \subset \mathcal{X}$ of images containing the target class. Importantly, while the set $\mathcal{X}_+$ can be very complex, it is defined independently of $f$ \emph{a-priori}. Rotation invariance is, by comparison, a simpler rule.

Since the learned map $f$ is usually imperfect, it is unreasonable to expect explanatory rules to be strictly correct. To allow some slack, we can allow the rule to work correctly for most inputs, or with a certain degree of accuracy, or both. For example, rules such as discrimination can be expressed by predicates
\begin{align*}
Q_1(x;f) &= \ensuremath\llbracket x \in\mathcal{X}_c \Leftrightarrow f(x) = +1 \ensuremath\rrbracket, \\
Q_2(x,x';f) &= \ensuremath\llbracket x \sim_\text{rot} x' \Rightarrow f(x) = f(x') \ensuremath\rrbracket,
\end{align*}
where $x \sim_\text{rot} x'$ means that images $x$ and $x'$ are related by a rotation. Given a distribution $p(x)$ over images, the \emph{faithfulness} of the explanations can be measured as the probabilities $P[Q_1(x;f)]$ and $P[Q_2(x,x';f)|x \sim x']$ that the predicates are true. Predictions need not be binary either; for example, if the output of $f$ is continuous, then faithfulness of rotation invariance could be measured by the expected squared residual $E[(f(x) - f(x'))^2|x \sim x']$ (where lower means more faithful).
\section{Supplementary Materials}
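A minimal sketch of the mask-learning procedure of~\cref{e:opt2} is included here for reference. It assumes PyTorch and a pretrained GoogLeNet from \texttt{torchvision}; the Gaussian blur, the random stand-in image, the omitted preprocessing and the clamping used to keep $m\in[0,1]$ are illustrative choices rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

x0 = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed input image
target = 282                      # class index c to be explained

def blur(x, sigma=10, ksize=51):
    """Gaussian blur used as the reference perturbation."""
    g = torch.exp(-(torch.arange(ksize, dtype=torch.float32) - ksize // 2) ** 2
                  / (2.0 * sigma ** 2))
    g = g / g.sum()
    k = (g[:, None] * g[None, :]).view(1, 1, ksize, ksize).repeat(x.shape[1], 1, 1, 1)
    return F.conv2d(x, k, padding=ksize // 2, groups=x.shape[1])

m = torch.full((1, 1, 28, 28), 0.5, requires_grad=True)   # low-resolution mask
opt = torch.optim.Adam([m], lr=0.1)
lam1, lam2, beta, tau = 1e-4, 1e-2, 3, 4

for step in range(300):
    opt.zero_grad()
    mc = m.clamp(0, 1)
    M = F.interpolate(mc, size=(224, 224), mode="bilinear", align_corners=False)
    dy, dx = torch.randint(0, tau, (2,)).tolist()       # random jitter
    xj = torch.roll(x0, shifts=(dy, dx), dims=(2, 3))
    x_pert = M * xj + (1.0 - M) * blur(xj)               # blur perturbation Phi
    score = torch.softmax(model(x_pert), dim=1)[0, target]
    tv = ((M[..., 1:, :] - M[..., :-1, :]).abs() ** beta).sum() \
       + ((M[..., :, 1:] - M[..., :, :-1]).abs() ** beta).sum()
    loss = lam1 * (1.0 - mc).abs().sum() + lam2 * tv + score
    loss.backward()
    opt.step()
\end{verbatim}
Here the jitter is applied to the image while the low-resolution mask stays fixed, which is one of several equivalent ways of realising the expectation over $\tau$ in~\cref{e:opt2}.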
\section{Introduction \label{Intro}}

Properties of strongly coupled plasmas are of high interest for laser-induced dense plasmas (warm dense matter, WDM) and astrophysical systems. Up to now, the description of such partially ionized plasmas (PIP) is still a challenge in plasma physics because the quantum-statistical treatment of electron-atom ($e$--$a$) collisions in dense matter is complex. Transport properties are an important input for simulations. In this work, the electrical conductivity is investigated for dense noble gases at temperatures of $T \approx 1 {\rm eV}$ where the plasma is non-ideal and partially ionized.

Fully ionized plasmas (FIP) are analysed by the kinetic theory approach. The Boltzmann equation is solved, e.g., by Spitzer \cite{Spitzer53} using a Fokker-Planck equation, or by Brooks-Herring \cite{Brooks} and Ziman \cite{Ziman} using the relaxation time approximation; in this way, commonly known analytical formulas for the electrical conductivity are obtained. The electron-ion ($e$--$i$) collisions are well described in a wide parameter range, whereas the treatment of $e$--$a$ collisions is not trivial within an analytical approach. Within linear response theory (LRT), the treatment and inclusion of different collision mechanisms is transparent. In recent years, the influence of electron-electron ($e$--$e$) collisions at arbitrary degeneracy was discussed in Born approximation for a fully ionized plasma (FIP) by Reinholz {\it et al.} \cite{R4} and Karakhtanov \cite{Karakhtanov16}, leading to a systematic interpolation between the Spitzer and the Ziman limit, see also Esser {\it et al.} \cite{Esser98}. A correction factor is obtained which, in contrast to the Spitzer value, depends on density and temperature.

Nevertheless, these formulas, valid for a FIP, are not sufficient for the description of dense partially ionized noble gases, because bound states play an important role. Their treatment in a quantum-statistical approach is possible within a chemical picture. Here, the electron-ion bound states are identified as atomic species. We consider a PIP containing atoms, electrons and singly charged ions ($n_{\rm e}=n_{\rm i}$) with a heavy particle density $n_{\rm heavy} = n_{\rm a}+n_{\rm i}$. For the considered densities multiply ionized ions are not relevant; a generalization is possible. Results for hydrogen are analyzed in \cite{Roepke89,Reinholz95,Redmer97}. The scattering mechanisms between electrons and atoms are immediately included in the correlation functions. The $e$--$a$ collisions are in particular relevant for non-degenerate plasmas if the system is below the Mott density. Above the Mott density the bound states are dissolved.

For the analysis of $e$--$a$ collisions, two important points have to be addressed. In the chemical picture, the $e$--$a$ correlation function depends on the composition of the plasma, which can be determined from the mass-action laws. The Saha equations have been studied for a long time by many authors, e.g. Ebeling {\it et al.}, see \cite{Ebeling82,Ebeling88}, and F\"orster \cite{Foerster92} using Pad\'e techniques. The program package COMPTRA, for details see \cite{Kuhlbrodt04}, allows the calculation of composition and transport properties in nonideal plasmas. It has been successfully applied to metal plasmas, see also \cite{Kuhlbrodt00,Kuhlbrodt01}. In contrast to this, experimental data for noble gases have been understood only qualitatively, see \cite{Kuhlbrodt05}.
A temperature dependent minimum in the conductivity can be observed as a consequence of the composition, but discrepancies with the experiment can be up to two orders of magnitude. Kuhlbrodt {\it et al.} \cite{Kuhlbrodt05} attribute this to the use of the polarization potential for the $e$--$a$ collisions. In \cite{Adams07}, Adams {\it et al.} verified this statement by comparing the calculated momentum-transfer cross sections using a polarization potential with measured data. Subsequently, using the experimental data obtained for the isolated $e$--$a$ momentum-transfer cross section and the composition calculated by COMPTRA04, Adams {\it et al.} obtained quantitatively good agreement with experimental results for the conductivity of noble gases.

Recently, results from swarm-derived cross sections using different Boltzmann solvers and Monte Carlo simulations have been obtained and compared with experimental data for the isolated $e$--$a$ scattering process, see \cite{SimuAr,SimuHeNe,SimuKrXe}. Especially for low energies, a good agreement has been found. However, plasma effects like screening cannot be included by using the experimental transport cross sections for isolated systems.

An alternative to the polarization potential is the construction of a so-called optical potential \cite{Vanderpoorten1975,Paikeday76,SurGosh1982,Pangantiwar1989}. Such a potential model has been successfully applied to the description of the $e$--$a$ momentum-transfer cross sections of all noble gases in a wide energy region, see \cite{Adibzadeh05}. The downside is the large number of free parameters and the fact that no unique expression for all noble gases is known. In this work, we show that a modification in the approximation of the local exchange term leads to a universal expression for the cross sections of all noble gases. Adjusting only one free parameter, we find good agreement with the experimental data.

In order to take into account the plasma environment, the optical potential for isolated systems is further modified by a systematic treatment of static screening. Specific screening effects on the $e$--$a$ potential for a hydrogen plasma in the second order of perturbation, neglecting the exchange effects, have been discussed by Karakhtanov \cite{Karakhtanov2006}. Similar characteristics, e.g. a repulsive $e$--$a$ potential at large distances, are found for the noble gases in this work.

\section{Electrical conductivity in T matrix approximation}
\label{sec:GMA}

Within the LRT, transport coefficients of Coulomb systems are obtained from equilibrium correlation functions. According to the Zubarev formalism \cite{ZMR2,Roepke13} the static electrical conductivity is represented by
\begin{align}
\label{eq:cond}
\sigma^{(L)} = -\frac{e^2\beta}{m^2 \Omega}
\begin{vmatrix}
0 & N_{11} & \cdots & N_{1L} \\
N_{11} & d_{11} & \cdots & d_{1L} \\
\vdots & \vdots & \ddots & \vdots \\
N_{L1} & d_{L1} & \cdots & d_{LL}
\end{vmatrix}
/
\begin{vmatrix}
d_{11} & \cdots & d_{1L} \\
\vdots & \ddots & \vdots \\
d_{L1} & \cdots & d_{LL}
\end{vmatrix} \, .
\end{align}
$N_{ll'}=\frac{1}{3}(\textbf{P}_l,\textbf{P}_{l'})$ is the Kubo scalar product
\begin{align}
N_{ll'} &= N_{l'l} = \frac{n_{\rm e} \Omega_{\rm N} m}{\beta} \frac{\Gamma(\frac{l+l'+3}{2})}{\Gamma(\frac{5}{2})} \frac{I_{\frac{l+l'-1}{2}}(\alpha)}{I_{\frac{1}{2}}(\alpha)} \, , \label{eq:Nll}
\end{align}
where $\Omega_{\rm N}$ is the normalization volume, $\Gamma(x)$ is the Gamma function, $I_{\nu}(\alpha)$ are the Fermi integrals and $\alpha=\beta \mu_{\rm e}^{\rm id}$ is the degeneracy of electrons. The degeneracy $\alpha$ is related to the degeneracy parameter $\Theta=(3\pi^2 n_{\rm e})^{-2/3} (2mk_{\rm B}T/\hbar^2)$. The plasma is further characterized by the coupling parameter $\Gamma=(4\pi n_{\rm e}/3)^{1/3}e^2/(4\pi\epsilon_0 k_{\rm B}T)$.

Considering the dominant contributions by the electron current we use the generalized moments $\textbf{P}_l$ as relevant observables \cite{Roepke13}:
\begin{equation}
\textbf{P}_l = \sum \limits_k \hbar \textbf{k}(\beta E_k)^{\frac{l-1}{2}}n_k \, ,
\end{equation}
with $\beta=(k_{\rm B}T)^{-1}$, $\hbar {\bf k}$ the momentum of the electrons and $E_k=\hbar^2k^2/2m$ the kinetic energy of the electrons. The generalized force-force correlation functions $d_{ll'}=\frac{1}{3} \braket{\dot{\textbf{P}}_l; \dot{\textbf{P}}_{l'}}_{i\varepsilon}$ contain the generalized forces $\dot{\bf P}_l=i [H,{\bf P}_l]/\hbar=i [V,{\bf P}_l]/\hbar$. Separating the interaction potential into the $e$--$i$, $e$--$e$ and $e$--$a$ contributions ($V=V_{\rm ei}+V_{\rm ee}+V_{\rm ea}$), the generalized force-force correlation functions are split into the different scattering mechanisms
\begin{align}
d_{ll'} &= d_{l'l} = d_{ll'}^{\rm ei} + d_{ll'}^{\rm ee} + d_{ll'}^{\rm ea} \, .
\end{align}
The contributions $d_{ll'}^{{\rm e}c}$ are calculated in T matrix approximation, see also \cite{Meister82,Redmer97,Roepke88}. For electron-ion and electron-atom collisions we treat the ions and atoms classically. Within the adiabatic limit the correlation functions $d_{ll'}^{\rm ei}$ and $d_{ll'}^{\rm ea}$ can be simplified to ($c\neq {\rm e}$)
\begin{align}
d_{ll'}^{{\rm e}c} &= \frac{\hbar^3 n_c \Omega_{\rm N}}{3\pi^2m} \int\limits_0^{\infty} dk \, k^5 (\beta E_k)^{\frac{l+l'}{2}-1} f_k(1- f_k) Q_{\rm T}^{{\rm e}c}[V_{{\rm e}c}] \, ,
\end{align}
with the Fermi distribution function for electrons $f_k$. $Q_{\rm T}^{{\rm e}c}[V_{{\rm e}c}]$ is the momentum-transfer cross section:
\begin{align}
Q_{\rm T}^{{\rm e}c} &= 2\pi \int\limits_{-1}^1 d(\cos\vartheta) \, \left[ 1- \cos\vartheta \right] \left( \frac{d \sigma^{{\rm e}c}}{d \Omega} \right) \, .
\end{align}
The interaction between electrons and ions in the plasma is described by a dynamically screened Coulomb potential, see \cite{Roepke88}. In the adiabatic limit the dynamically screened Coulomb potential is replaced by a statically screened Coulomb potential (Debye potential)
\begin{align}
V_{\rm ei}(r) &= V_{\rm D}(r) = -\frac{e^2}{4\pi \epsilon_0} \frac{e^{-\kappa r}}{r} \, ,
\end{align}
with the inverse screening length $\kappa^2=(2\Lambda_{\rm e}^{-3}I_{-1/2}(\alpha)+n_{\rm i})\beta e^2/\epsilon_0$ and the thermal wavelength $\Lambda_{\rm e}=(2\pi\beta\hbar^2/m)^{1/2}$. A recent discussion of the ionic contribution to the correlation functions is given by Rosmej \cite{Rosmej16}. The $e$--$e$ contribution $d_{ll'}^{\rm ee}$ is more complex, see \cite{Redmer97,R4}.
In the region of partially ionized noble gases the free-electron density in the plasma is low ($\Theta>1$), and the $e$--$e$ correlation function can be simplified to
\begin{align}
d_{33}^{{\rm ee}} &= \frac{8 n_{\rm e}^2 \Omega_{\rm N}}{3\sqrt{\pi}\beta} \sqrt{\frac{m}{\beta}} \int\limits_0^{\infty} dP \, P^7 e^{-P^2} Q_{\rm v}^{{\rm ee}}[V_{\rm ee}] \, .
\end{align}
$Q_{\rm v}^{{\rm ee}}[V_{\rm ee}]$ is the viscosity cross section:
\begin{align}
Q_{\rm v}^{{\rm ee}} &= 2\pi \int\limits_{-1}^1 d(\cos\vartheta) \, \left[ 1- \cos^2\vartheta \right] \left( \frac{d \sigma^{{\rm ee}}}{d \Omega} \right) \, .
\end{align}
In general, the $e$--$e$ interaction is also described by a dynamically screened Coulomb potential. For a partially ionized plasma the free electron density $n_{\rm e}$ is low and the replacement of the dynamical screening by a static screening is applicable, $V_{\rm ee}(r)=-V_{\rm D}(r)$, see \cite{Roepke88}. The effect of dynamical screening, treated via an effective screening radius \cite{Roepke88}, is discussed in \cite{R4} and leads to differences in the results of less than $2 \%$. Alternatively, dynamical screening can be taken into account by a Gould-DeWitt scheme, see \cite{GdW}.

The cross sections are calculated via the partial wave decomposition ($c={\rm i,a}$)
\begin{align}
Q_{\rm T}^{{\rm e}c}(k) &= \frac{4\pi}{k^2} \sum_{\ell = 1}^{\infty} \ell \sin^2\left[ \delta_{\ell-1}^{{\rm e}c}(k) - \delta_{\ell}^{{\rm e}c}(k) \right] \, , \\
\begin{split}
Q_{\rm v}^{\rm ee}(k) &= \frac{4\pi}{k^2} \sum_{\ell = 1}^{\infty} \left[ 1+ \frac{(-1)^{\ell}}{2}\right] \frac{\ell(\ell+1)}{2\ell +1} \times \\ & \qquad \qquad \quad \sin^2\left[ \delta_{\ell-1}^{\rm ee}(k) - \delta_{\ell+1}^{\rm ee}(k) \right] \, ,
\end{split}
\end{align}
with the scattering phase shifts $\delta_{\ell}$. The phase shift calculations are performed using Numerov's method. The convergence of these cross sections depends on the temperature. With increasing temperature the number of relevant phase shifts increases. For high temperatures $T>10^6 {\rm K}$ the Born approximation (BA) is applicable:
\begin{align}
\label{eq:dqdwBorn}
\left(\frac{d\sigma^{{\rm e}c}}{d\Omega}\right)_{\rm BA} &= \frac{\mu^2_{{\rm e}c}\Omega^2_{\rm N}}{4 \pi^2 \hbar^4} | \tilde V_s(q) |^2 \left( 1 - \frac{\delta_{{\rm e}c}}{2} \left| \frac{\tilde V_s(q')}{\tilde V_s(q)} \right| \right) \, ,
\end{align}
with the Fourier transformed potential $\tilde V(q)=\Omega_{\rm N}^{-1} \int d^3{\bf r} \, e^{i{\bf qr}} V(r)$, the reduced mass $\mu_{{\rm e}c}= m_{\rm e}m_c/(m_{\rm e}+m_c)$, and the transferred momenta $q={\bf |q|=|k-k'|}=2k\sin(\vartheta/2)$ and $q'={\bf |q'|=|k+k'|}=2k\cos(\vartheta/2)$.

\section{Electron-atom interaction in dense plasma}

\subsection{Optical potential for isolated systems}

Within the Green functions technique \cite{RRZ87} the interaction between electrons and atoms is expanded up to the second order of perturbation, see also \cite{Karakhtanov2006}
\begin{align}
\label{eq:2ndOrderP}
V_{\rm ea} &= V^{(1)} + V^{(2)} + V_{\rm ex} \, .
\end{align}
In \cite{RRZ87,Karakhtanov2006} the non-local exchange part $V_{\rm ex}$ is neglected. In this work we consider the role of exchange in a local approximation, see Sec.~\ref{sec:ex}. The first order term $V^{(1)}$ describes the Coulomb interaction of the free electron with the nucleus as well as with the bound electrons.
This part is commonly known as the Hartree-Fock potential $V^{(1)} \equiv V_{\rm HF}(r)$ given by
\begin{align}
\label{eq:VHF}
V_{\rm HF}(r) &= -\frac{e^2}{4\pi \epsilon_0 r} \int\limits_r^{\infty} dr' \, 4\pi r' \rho(r') \left( r' - r \right) \, ,
\end{align}
where $\rho(r)$ is the bound electron density in the target atom, which is here calculated taking the atomic Roothaan-Hartree-Fock wave functions \cite{Clementi74}. For large distances $V_{\rm HF}(r)$ decreases exponentially. In contrast to this result, the large-distance behavior of the interaction between electrons and neutral particles was obtained by Born and Heisenberg \cite{BornHeisenberg1924} as $V_{\rm ea}(r \rightarrow \infty) \propto -r^{-4}$. Using the Green functions technique, Redmer {\it et al.} found this behavior in the second order of perturbation, which is related to the polarization potential $V^{(2)} \equiv V_{\rm P}$, see \cite{RRZ87}. For the polarization potential we take the form used by Paikeday \cite{Paikeday2000}:
\begin{align}
\label{eq:VPP}
V_{\rm P}(r) &= -\frac{e^2 \alpha}{8\pi \epsilon_0 (r+r_0)^4} \, ,
\end{align}
where $\alpha$ is the dipole polarizability and $r_0$ is a cut-off parameter which we take here as a free parameter on the scale of the Bohr radius $a_0$. In general the polarization potential depends on the energy. However, Paikeday obtained a weak dependence on the energy, which is negligible for our considerations, see \cite{Paikeday2000}. Besides the form of Eq.~(\ref{eq:VPP}), many different analytical forms for the polarization potential are in use, see \cite{Paikeday76,RRZ87,Bottcher71,Schrader79}.

The cut-off parameter $r_0$ is also treated in different ways. In \cite{RRZ87} this parameter is taken to give the correct value for $V_{\rm P}(0)$. Mittleman and Watson \cite{Mittleman59} derived an analytical formula considering semi-classical electrons, $r_0=(\alpha a_0/(2Z^{1/3}))^{1/4}$. Nevertheless, for short distances the finite polarization part is negligible compared to the dominant divergent Coulomb interaction in the Hartree-Fock term. So it seems desirable to choose $r_0$ optimally for intermediate distances rather than for the short-distance case $r=0$. Alternatively, the cut-off parameter can be adjusted to the experimental data for the differential cross section for electron-atom collisions at intermediate energies, as done by Paikeday, see \cite{Paikeday2000}. For positron-atom collisions (no exchange part), Schrader adjusted the cut-off parameter to experimental data for the scattering length, see \cite{Schrader79}. In this work the cut-off parameter is determined by a correct low-energy behavior for the lighter elements helium and neon and by a correct description of the position and height of the Ramsauer minimum for the heavier elements argon, krypton and xenon. In Sec.~\ref{sec:QT}, the results are compared with the experimental data for the momentum-transfer cross section. The values for the cut-off parameter $r_0$ are given in Tab.~\ref{tab:r0}.
\begin{center}
\begin{table}[h]
\begin{tabular}{ | c | c c c c c |}
\hline
$r_0 [a_0]$ & He & Ne & Ar & Kr & Xe \\
\hline
present work & 1.00 & 1.00 & 0.86 & 0.92 & 1.00 \\
Mittleman \cite{Mittleman59} & 0.86 & 0.89 & 1.21 & 1.27 & 1.38 \\
Paikeday \cite{Paikeday2000} & 0.92 & 1.00 & 2.89 & 3.40 & - \\
Schrader \cite{Schrader79} & 1.77 & 1.90 & 2.23 & 2.37 & 2.54 \\
\hline
\end{tabular}
\caption{Cut-off parameter $r_0$ in units of $a_0$.}
\label{tab:r0}
\end{table}
\end{center}

With Eqs.~(\ref{eq:VHF},\ref{eq:VPP}) the electron-atom interaction Eq.~(\ref{eq:2ndOrderP}) is referred to as the optical potential \cite{Vanderpoorten1975,Pangantiwar1989}
\begin{align}
\label{eq:Vopt}
V_{\rm opt}(r) &= V_{\rm HF}(r) + V_{\rm P}(r) + V_{\rm ex}(r) \, ,
\end{align}
where the exchange contribution is approximated by a local field.

\subsection{Approximation of the exchange potential}
\label{sec:ex}

In the optical potential model the exchange contribution is considered in a local field approximation. The generally accepted exchange potential is derived in a semiclassical approximation (SCA) \cite{Furness1973,RileyTruhlar1976,Yau78}:
\begin{align}
\begin{split}
V_{\rm ex}^{\rm SCA}(r,K_{\rm RT}(r)) &= \frac{\hbar^2}{4m} \left\{ K_{\rm RT}^2(r) \right. \\ & \quad \left. - \sqrt{K_{\rm RT}^4(r) + \frac{4me^2}{\hbar^2 \epsilon_0} \rho(r) } \right\} \, ,
\end{split}
\end{align}
with the local electron momentum $K(r)$ in the version introduced by Riley and Truhlar \cite{RileyTruhlar1976, Yau78}
\begin{align}
\label{eq:KRT}
K_{\rm RT}^2(r) &= k^2+ \frac{2m}{\hbar^2} [|V_{\rm HF}(r)|+|V_{\rm P}(r)|] \, .
\end{align}
As an alternative to the SCA, Mittleman and Watson \cite{Mittleman60} derived a local exchange potential for a free-electron gas, see also \cite{RileyTruhlar1976},
\begin{align}
\label{eq:Vex}
V_{\rm ex}^{\rm M}(r,K(r)) &= - \frac{e^2}{4\pi \epsilon_0} \frac{2}{\pi} K_{\rm F}(r) F(K(r)/K_{\rm F}(r)) \, ,
\end{align}
with the Fermi momentum $K_{\rm F}(r)= \left[3\pi^2 \rho(r)\right]^{1/3}$ and the function
\begin{align}
F(\eta) &= \frac{1}{2} + \frac{1-\eta^2}{4\eta} \ln\left| \frac{\eta+1}{\eta-1} \right| \, .
\end{align}
The treatment of the local electron momentum $K(r)$ differs between the approaches. In contrast to Riley and Truhlar, who set $K(r)=K_{\rm RT}(r)$, Eq.~(\ref{eq:KRT}), Hara assumed the inclusion of the ionization potential $I$ in the local momentum \cite{Hara1967}
\begin{align}
K_{\rm H}^2(r) &= k^2+2 \frac{2m}{\hbar^2}I+K_{\rm F}^2(r) \, .
\end{align}
It is discussed by Hara \cite{Hara1967} as well as by Riley and Truhlar \cite{RileyTruhlar1976} that $K_{\rm H}(r)$ leads to a wrong asymptotic behavior of the exchange potential for large distances. Some modifications are performed in \cite{SurGosh1982,RileyTruhlar1976}. In this work the local momentum of Riley and Truhlar is modified by
\begin{align}
\label{eq:Kex}
K_{\rm RRR}^2(r) &= k^2+ \frac{2m}{\hbar^2} [|V_{\rm HF}(r)|+|V_{\rm P}(r)|+|V_{\rm ex}^{\rm M}(r,0)|] \, .
\end{align}
In contrast to Riley-Truhlar's free-electron-gas exchange approximation (FER) we include the momentum-free exchange part $V_{\rm ex}^{\rm M}(r,0)=-\frac{e^2}{4\pi \epsilon_0}\frac{2}{\pi} K_{\rm F}(r)$ into the local momentum of Eq.~(\ref{eq:Vex}).
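To illustrate how Eqs.~(\ref{eq:VHF}), (\ref{eq:VPP}), (\ref{eq:Vex}) and (\ref{eq:Kex}) combine into the optical potential of Eq.~(\ref{eq:Vopt}), the following is a minimal numerical sketch (Python/NumPy, Hartree atomic units). The hydrogen-like bound-electron density, the polarizability value and the radial grid are placeholder assumptions standing in for the Roothaan-Hartree-Fock densities of \cite{Clementi74}, so the numbers are only indicative.
\begin{verbatim}
import numpy as np

# Hartree atomic units: hbar = m_e = e^2/(4 pi eps_0) = a_0 = 1.
r = np.linspace(1e-3, 20.0, 4000)
dr = r[1] - r[0]

# Placeholder bound-electron density: Z_eff electrons in a hydrogen-like 1s shell.
Z_eff = 2.0
rho = Z_eff * (Z_eff ** 3 / np.pi) * np.exp(-2.0 * Z_eff * r)

def v_hartree_fock(r, rho):
    # Eq. (eq:VHF): V_HF(r) = -(1/r) int_r^inf 4 pi r' rho(r') (r' - r) dr'
    out = np.empty_like(r)
    for i, ri in enumerate(r):
        rp, dp = r[i:], rho[i:]
        out[i] = -np.sum(4.0 * np.pi * rp * dp * (rp - ri)) * dr / ri
    return out

def v_polarization(r, alpha=1.38, r0=1.00):
    # Eq. (eq:VPP); alpha ~ 1.38 a_0^3 and r_0 = 1.00 a_0 correspond to helium.
    return -alpha / (2.0 * (r + r0) ** 4)

def f_exchange(eta):
    # F(eta) of Eq. (eq:Vex); the removable singularity at eta = 1 has the limit 1/2.
    eta = np.clip(eta, 1e-8, None)
    with np.errstate(divide="ignore", invalid="ignore"):
        out = 0.5 + (1.0 - eta ** 2) / (4.0 * eta) \
              * np.log(np.abs((eta + 1.0) / (eta - 1.0)))
    return np.where(np.isfinite(out), out, 0.5)

k = 0.5                                  # wave number of the scattered electron (1/a_0)
V_HF = v_hartree_fock(r, rho)
V_P = v_polarization(r)
K_F = (3.0 * np.pi ** 2 * rho) ** (1.0 / 3.0)
V_ex0 = -(2.0 / np.pi) * K_F             # momentum-free exchange part V_ex^M(r, 0)
K_RRR = np.sqrt(k ** 2 + 2.0 * (np.abs(V_HF) + np.abs(V_P) + np.abs(V_ex0)))  # (eq:Kex)
V_ex = -(2.0 / np.pi) * K_F * f_exchange(K_RRR / K_F)                         # (eq:Vex)
V_opt = V_HF + V_P + V_ex                # Eq. (eq:Vopt)

print("V_opt at r = 1 a_0:", float(np.interp(1.0, r, V_opt)), "Hartree")
\end{verbatim}
Replacing the placeholder density by the tabulated Roothaan-Hartree-Fock densities yields the potential that enters the phase-shift calculation of Sec.~\ref{sec:GMA}.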
\begin{widetext} \begin{center} \begin{table}[h] \begin{tabular}{ | c | c c c c c c c | c c c c c c c | c c c c c c c |} \hline Helium & & & & $\delta_0$ & & & & & & & $\delta_1$ & & & & & & & $\delta_2$ & & & \\ $ka_0$ & & (1) & (2) & (3) & (4) & (5) & & & (1) & (2) & (3) & (4) & (5) & & & (1) & (2) & (3) & (4) & (5) & \\ \hline 0.10 &&2.994 &2.993 &3.006 &2.957 &3.337 & & & & & & & & & & & & & & & \\ 0.25 &&2.776 &2.770 &2.783 &2.691 &3.043 & & & & & & & & & & & & & & &\\ 0.50 &&2.436 &2.412 &2.422 &2.304 &2.648 & & &0.043 &0.045 &0.076 &0.023 &0.218 & & & & & & & &\\ 0.75 &&2.139 &2.093 &2.111 &2.001 &2.332 & & &0.110 &0.101 &0.146 &0.064 &0.316 & & &0.005 &0.006 &0.009 &0.004 &0.013 &\\ 1.00 &&1.890 &1.835 &1.856 &1.769 &2.071 & & &0.183 &0.159 &0.205 &0.116 &0.359 & & &0.014 &0.015 &0.020 &0.010 &0.025 &\\ 1.50 &&1.522 &1.473 &1.491 &1.446 &1.654 & & &0.284 &0.247 &0.279 &0.212 &0.367 & & &0.042 &0.041 &0.047 &0.035 &0.053 &\\ \hline \hline Neon & & & & $\delta_0$ & & & & & & & $\delta_1$ & & & & & & & $\delta_2$ & & & \\ $ka_0$ & & (1) & (2) & (3) & (4) & (5) & & & (1) & (2) & (3) & (4) & (5) & & & (1) & (2) & (3) & (4) & (5) & \\ \hline 0.20 &&6.072 &6.104 &6.179 &6.028 &7.716 & & & & & & & & & & & & & & & \\ 0.30 &&5.965 &6.004 &6.069 &5.902 &7.056 & & & & & & & & & & & & & & &\\ 0.40 &&5.857 &5.899 &5.947 &5.779 &6.648 & & & & & & & & & & & & & & &\\ 0.50 &&5.748 &5.789 &5.820 &5.659 &6.361 & & &3.040 &3.052 &3.030 &2.917 &3.196 & & &0.004 &0.005 &0.008 &0.002 &0.016 & \\ 0.70 && & & & & & & &2.933 &2.949 &2.909 &2.785 &3.124 & & & & & & & &\\ 0.80 && & & & & & & &2.873 &2.890 &2.845 &2.724 &3.073 & & & & & & & &\\ 0.90 &&5.321 &5.349 &5.333 &5.215 &5.626 & & &2.812 &2.828 &2.781 &2.667 &3.017 & & & & & & & &\\ 1.00 &&5.219 &5.243 &5.222 &5.114 &5.480 & & &2.751 &2.766 &2.719 &2.615 &2.958 & & &0.065 &0.059 &0.075 &0.034 &0.127 &\\ \hline \hline Argon & & & & $\delta_0$ & & & & & & & $\delta_1$ & & & & & & & $\delta_2$ & & & \\ $ka_0$ & & (1) & (2) & (3) & (4) & (5) & & & (1) & (2) & (3) & (4) & (5) & & & (1) & (2) & (3) & (4) & (5) & \\ \hline 0.10 &&9.274 &9.266 &9.374 &9.249 &11.858 & & &6.279 &6.276 &6.279 &6.275 &6.318 & & & & & & & & \\ 0.25 &&9.045 &9.015 &9.141 &8.987 &10.628 & & &6.227 &6.193 &6.213 &6.192 &6.381 & & &0.002 &0.002 &0.004 &0.001 &0.031 &\\ 0.50 &&8.647 &8.580 &8.658 &8.561 &9.250 & & &6.001 &5.901 &5.938 &5.909 &6.214 & & &0.045 &0.035 &0.061 &0.024 &0.815 &\\ 0.75 &&8.249 &8.164 &8.206 &8.162 &8.536 & & &5.702 &5.583 &5.613 &5.604 &5.901 & & &0.277 &0.185 &0.284 &0.150 &1.963 &\\ 1.00 &&7.875 &7.790 &7.808 &7.798 &8.030 & & &5.411 &5.305 &5.320 &5.333 &5.575 & & &0.860 &0.581 &0.782 &0.536 &2.008 &\\ 1.50 &&7.252 &7.170 &7.163 &7.182 &7.288 & & &4.923 &4.861 &4.854 &4.892 &4.996 & & &1.644 &1.539 &1.614 &1.594 &1.877 &\\ \hline \end{tabular} \caption{e-He, e-Ne and e-Ar partial wave phase shifts $\delta_{\ell}$ (in rad) for $\ell=0,1,2$ omitting polarization part $V_{\rm P}=0$. \\ (1): static-exchange approximation (SEA); \\ (2): present work (RRR); \\ (3): semiclassical exchange approximation (SCA); \\ (4): Hara's free-electron-gas exchange approximation (FEH); \\ (5): Riley-Truhlar's free-electron-gas exchange approximation (FER).} \label{tab:PSex} \end{table} \end{center} \end{widetext} The quality of the proposed local exchange approximations is verified due to a comparison of the scattering phase shifts with the static-exchange approximation (SEA) using the non-local exchange term. 
The SEA is the solution of the scattering process in first-order perturbation theory, neglecting polarization effects. For a consistent comparison we omit the polarization potential ($V_{\rm P}=0$) in this subsection. In Tab.~\ref{tab:PSex}, phase-shift calculations for the different proposed local exchange potentials are compared with calculations using the non-local exchange term, performed for helium by Duxler {\it et al.} \cite{Duxler71}, for neon by Thompson \cite{Thompson66}, and for argon by Thompson as well as by Pindzola and Kelly \cite{Pindzola74}. Hara's free-electron-gas exchange approximation underestimates the effects of exchange, whereas Riley-Truhlar's free-electron-gas exchange approximation overestimates them. For helium and argon this result was already obtained by Riley and Truhlar \cite{RileyTruhlar1976}. The best agreement with the SEA phase shifts is obtained with the popular SCA potential as well as with the proposed exchange potential RRR, Eqs.~(\ref{eq:Vex},\ref{eq:Kex}). Both potentials lead to the same high-energy asymptotic limit. For small wave numbers $k a_0 \leq 0.5$ the proposed exchange potential gives the best results, with an error $\leq 0.1 \, {\rm rad}$. In particular for helium and neon the proposed potential works very well. The exchange part is of greater interest for these two noble gases than for argon, because their polarizability is one order of magnitude smaller than that of argon; the polarization part, in turn, is more important for argon than for helium and neon. Consequently, in explicit calculations the momentum-transfer cross sections of helium and neon are more strongly influenced by the exchange part than that of argon. \subsection{Plasma effects} The dominant plasma effect is the screening of the electron-atom potential. The screened optical potential is given by the screening of each contribution \begin{align} \label{eq:sVopt} V_{\rm opt}^{\rm s}(r) &= V_{\rm HF}^{\rm s}(r) + V_{\rm P}^{\rm s}(r) + V_{\rm ex}^{\rm s}(r) \, . \end{align} The effect of screening was extensively analysed for the polarization potential, and a screening factor $e^{-2\kappa r} (1+\kappa r)^2$ was obtained by Redmer and R\"opke \cite{RR85} \begin{align} \label{eq:sVPP} V_{\rm P}^{\rm s}(r) &= V_{\rm P}(r) \, e^{-2\kappa r} (1+\kappa r)^2 \, . \end{align} The Hartree-Fock part for isolated systems is determined by the Coulomb interaction of the incoming electron with the core as well as with the shell electrons. Screening effects for this part can be taken into account by replacing the fundamental Coulomb interaction by a Debye potential. For a partially ionized hydrogen plasma this was done by Karakhtanov \cite{Karakhtanov2006}. The screened Hartree-Fock potential is \begin{align} \label{eq:sVHF} V_{\rm HF}^{\rm s}(r) &= -\frac{Ze^{-\kappa r}}{r} + I_1 + I_2 + I_3 \, ,\\ I_1 &= \frac{1}{2\kappa r} e^{-\kappa r} \int\limits_0^{r} \frac{\rho(r_1)}{r_1} e^{\kappa r_1} \, dr_1 \, , \\ I_2 &= - \frac{1}{2\kappa r} e^{-\kappa r} \int\limits_0^{\infty} \frac{\rho(r_1)}{r_1} e^{-\kappa r_1} \, dr_1 \, , \\ I_3 &= \frac{1}{2\kappa r} e^{\kappa r} \int\limits_r^{\infty} \frac{\rho(r_1)}{r_1} e^{-\kappa r_1} \, dr_1 \, , \end{align} which is derived in App.~\ref{AppA}. For large distances this potential becomes repulsive, see App.~\ref{AppB}.
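The screened Hartree-Fock potential is straightforward to evaluate numerically. A minimal sketch follows (Python, Hartree atomic units); it starts from the Debye-screened Hartree integral of App.~\ref{AppA} and carries out the angular integral analytically, which recombines the three integrals $I_1$--$I_3$ into a single radial integral. The 1s-type density \texttt{n3d} is only a hypothetical stand-in for a realistic atomic density.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Z = 1.0
def n3d(r):
    # hypothetical hydrogen-like 1s density, normalised to Z electrons
    return Z**3 / np.pi * np.exp(-2.0 * Z * r)

def v_hf_screened(r, kappa, rmax=40.0):
    # Debye-screened Hartree-Fock potential of App. A (Hartree atomic units);
    # the angular integral has been carried out analytically
    integrand = lambda rp: rp * n3d(rp) * (np.exp(-kappa * abs(r - rp))
                                           - np.exp(-kappa * (r + rp)))
    shell, _ = quad(integrand, 0.0, rmax, limit=200)
    return -Z * np.exp(-kappa * r) / r + 2.0 * np.pi / (kappa * r) * shell

# consistency check: for kappa -> 0 the unscreened result is approximately
# recovered, V_HF(r) = -(1 + 1/r) exp(-2r) for the 1s density above
print(v_hf_screened(2.0, 1e-4), -(1.0 + 1.0 / 2.0) * np.exp(-4.0))
\end{verbatim}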
In App.~\ref{AppB} the asymptotic behavior at large distances $r \rightarrow \infty$ of the screened Hartree-Fock part is derived for intermediate screening parameters $0<\kappa a_0<1$ \begin{align} \label{eq:asVHF} V_{\rm HF}^{\rm s}(r) &= \frac{Ze^2}{4\pi \epsilon_0} \frac{e^{-\kappa r}}{r} (\kappa a_0)^2 \, {\cal C}_0 \, , \\ {\cal C}_0 &= \frac{Z^{-1}}{6} \int\limits_0^{\infty} \left( \frac{r_1}{a_0}\right)^2 4\pi r_1^2 \rho(r_1) \, dr_1 \, . \end{align} ${\cal C}_0$ is a characteristic quantity for each element. For hydrogen we obtain ${\cal C}_0^{\rm H} = 1/2$, which is in agreement with the results in \cite{Karakhtanov2006}. For the noble gases we find ${\cal C}_0^{\rm He} = 0.1974$, ${\cal C}_0^{\rm Ne} = 0.1563$, ${\cal C}_0^{\rm Ar} = 0.2411$, ${\cal C}_0^{\rm Kr} = 0.1830$ and ${\cal C}_0^{\rm Xe} = 0.1933$, see Tab.~\ref{tab:coef} in App.~\ref{AppB}. For the screened exchange part we replace the Hartree-Fock and polarization parts in Eq.~(\ref{eq:Kex}) by their screened counterparts, Eqs.~(\ref{eq:sVHF}) and (\ref{eq:sVPP}). \begin{figure}[htb] \begin{center} \includegraphics[width=0.99\linewidth,clip=true]{ArSPotneu-crop.pdf} \caption{(Color online) Electron-argon interaction described by the screened optical potential Eq.~(\ref{eq:sVopt}) for $\kappa a_0 =0.1$ and $\kappa a_0 = 0.05$ in comparison with the unscreened optical potential, Eq.~(\ref{eq:Vopt}). The asymptotes Eq.~(\ref{eq:asVHF}) are also shown.} \label{fig:soptAr} \end{center} \end{figure} In contrast to the unscreened case, in which the polarization part is the main contribution to the optical potential at large distances, we find that the Hartree-Fock term is dominant, so that the screened optical potential becomes repulsive for all noble gases. In Fig.~\ref{fig:soptAr} this effect is illustrated for argon. Karakhtanov \cite{Karakhtanov2006} obtained the same effect for partially ionized hydrogen. At short distances screening effects are not visible. For the momentum-transfer cross section we therefore expect only small differences at higher energies, but large differences and a qualitatively different behavior at low energies. The results are given in Sec.~\ref{sec:sQT}. \section{Results} \subsection{Momentum-transfer cross section} \label{sec:QT} The momentum-transfer cross section for the electron-atom interaction is determined by the phase shifts of the optical potential, using the values of the cut-off parameter $r_0$ in Tab.~\ref{tab:r0}. In Fig.~\ref{fig:QTall} the results are compared with experimental data. As a first result, the shape of the momentum-transfer cross section for all noble gases is in agreement with the experimental data. We conclude that the simple structure of helium as well as the specific structure of the other noble gases is well described by the optical potential. For low and intermediate energies the results are in good agreement with the experimental data. For instance, the shoulder for neon is reproduced, as are the Ramsauer minima obtained for argon, krypton and xenon. For large wavenumbers the discrepancies between the calculated and measured values increase for argon and krypton.
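For orientation, the momentum-transfer cross section follows from the phase shifts via the standard partial-wave expression $Q_{\rm T}(k)=\frac{4\pi}{k^2}\sum_{\ell}(\ell+1)\sin^2\!\left[\delta_{\ell+1}(k)-\delta_{\ell}(k)\right]$. A minimal sketch of this final step is given below (Python; lengths in units of $a_0$, phase shifts in rad); the truncation of higher partial waves is an assumption that is only adequate at small wave numbers.

\begin{verbatim}
import numpy as np

def q_transfer(k, deltas):
    # momentum-transfer cross section in units of a0^2;
    # deltas = [delta_0, delta_1, ..., delta_lmax]; partial waves beyond
    # lmax are neglected (their phase shifts are set to zero)
    d = np.append(np.asarray(deltas, dtype=float), 0.0)
    ell = np.arange(len(d) - 1)
    return 4.0 * np.pi / k**2 * np.sum((ell + 1) * np.sin(d[1:] - d[:-1])**2)

# illustration with the e-He SEA phase shifts at k a0 = 1.0 from Tab. (tab:PSex)
print(q_transfer(1.0, [1.890, 0.183, 0.014]))
\end{verbatim}

For the heavier noble gases and larger wave numbers considerably more partial waves have to be included before the sum converges.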
\begin{widetext} \begin{figure}[htb] \begin{center} \includegraphics[width=0.4\linewidth,clip=true]{HeliumD-crop.pdf} \includegraphics[width=0.4\linewidth,clip=true]{NeonD-crop.pdf} \includegraphics[width=0.4\linewidth,clip=true]{ArgonD-crop.pdf} \includegraphics[width=0.4\linewidth,clip=true]{KryptonD-crop.pdf} \includegraphics[width=0.4\linewidth,clip=true]{XenonD-crop.pdf} \caption{(Color online) Momentum-transfer cross section for the noble gases. The calculations are performed using the optical potential Eq.~(\ref{eq:Vopt}) of the present work, Eqs.~(\ref{eq:VHF}), (\ref{eq:VPP}), (\ref{eq:Vex}) and (\ref{eq:Kex}), in comparison with the optical potential used in \cite{Adibzadeh05}. Also shown are experimental data \cite{Crompton70He,Milloy77He,Register80He,Williams79HeNeAr,Robertson72Ne,Register84Ne,Milloy77Ar,Gibson96Ar,Srivastava81ArKr,Panajo97Ar,England88Kr,Danjo88Kr,Koizumi86Xe,Gibson98Xe,Register86Xe}.} \label{fig:QTall} \end{center} \end{figure} \end{widetext} For comparison with other theoretical results, calculations by Adibzadeh and Theodosiou \cite{Adibzadeh05} are shown. There, the optical potential is used in a different way than in the present work: for each element, different exchange and polarization potentials are adjusted, involving four parameters. The discrepancies with the experimental data are of the same order as for the present results, which involve only one adjusted parameter. For the heavier noble gases argon, krypton and xenon, Adibzadeh and Theodosiou obtain a smaller maximum than the proposed form of the optical potential. This difference can be explained by the different choice of the exchange and polarization terms. \subsection{Screening effects on the momentum-transfer cross section} \label{sec:sQT} Fig.~\ref{fig:sQTAr} shows the momentum-transfer cross section calculated with the screened optical potential for helium and argon. As known from the $e$--$i$ interaction, also for $e$--$a$ collisions the plasma environment affects the transport of slow electrons more strongly than that of fast electrons. First we consider argon. For dilute plasmas, $\kappa a_0 < 0.025$, the momentum-transfer cross section decreases rapidly for slow electrons, and the Ramsauer minimum disappears at $\kappa a_0 \approx 0.025$. This is a consequence of the reduced zeroth scattering phase shift $\delta_0(k)$ for the repulsive screened optical potential at large distances. At intermediate and high energies, $ka_0 \geq 0.2$, the deviations from the momentum-transfer cross section of isolated systems become small. The influence on the correlation functions is smaller than $1 \%$ for $T \approx 10^4 {\rm K}$. For dense plasmas, $\kappa a_0> 0.025$, the repulsive screened optical potential reduces the zeroth scattering phase shift $\delta_0(k)$ to values less than $3\pi$, so that the momentum-transfer cross section $Q_{\rm T}^{\rm ea}(k)$ increases. The new minimum which appears in the momentum-transfer cross section $Q_{\rm T}^{\rm ea}(k)$ results from the higher phase shifts and is thus different from the Ramsauer effect. The minimum depth increases with the screening parameter $\kappa$ and the minimum position shifts to higher wave numbers. Furthermore, the position of the maximum is also shifted to higher wavenumbers and its height decreases. The influence on the correlation functions is smaller than $2 \%$ for $T \approx 10^4 {\rm K}$. The effect becomes more relevant at low temperatures.
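The reduction of $\delta_0(k)$ by the repulsive screened tail can be illustrated with the variable-phase (Calogero) equation for the s-wave phase shift. The following sketch (Python, Hartree atomic units) uses a purely hypothetical short-range model potential plus a repulsive tail of the form of Eq.~(\ref{eq:asVHF}) with the argon values $Z=18$ and ${\cal C}_0=0.2411$; it is meant only to show the qualitative trend and does not reproduce the actual optical-potential results.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def delta0(k, V, rmax=80.0):
    # s-wave phase shift from the variable-phase (Calogero) equation
    # d(delta)/dr = -(2 V(r)/k) sin^2(k r + delta),  delta(0) = 0
    rhs = lambda r, d: [-2.0 * V(r) / k * np.sin(k * r + d[0])**2]
    sol = solve_ivp(rhs, (1e-6, rmax), [0.0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

V_model = lambda r: -np.exp(-r) / r                    # toy attractive potential
def tail(r, kappa):                                    # cf. Eq. (eq:asVHF), argon values
    return 18.0 * 0.2411 * kappa**2 * np.exp(-kappa * r) / r

k = 0.3
print(delta0(k, V_model),
      delta0(k, lambda r: V_model(r) + tail(r, 0.1)))  # the repulsive tail lowers delta_0
\end{verbatim}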
\begin{widetext} \begin{figure}[htb] \begin{center} \includegraphics[width=0.4\linewidth,clip=true]{HeQTSCneuB-crop.pdf} \,\,\,\,\,\, \includegraphics[width=0.4\linewidth,clip=true]{ArQTall5-crop.pdf} \caption{(Color online) Screening effects on the momentum-transfer cross section for helium and argon at various screening parameters $\kappa$.} \label{fig:sQTAr} \end{center} \end{figure} \end{widetext} We obtain similar results for helium. As a consequence of the first scattering phase shift $\delta_1$, a minimum appears in the momentum-transfer cross section. For higher densities the minimum depth increases and its position shifts to higher wave numbers, as for argon. At $\kappa a_0 \approx 0.1$ the minimum changes to a saddle point and, for higher screening parameters $\kappa$, to an inflection point. For low temperatures, $T \approx 5 \, 000 \, {\rm K}$, the influence on the correlation functions is around $10 \%$. \subsection{Correlation functions} For partially ionized systems the composition is a necessary ingredient for calculating the influence of the electron-atom contribution. The composition of the noble gases for a given temperature $T$ and mass density $\rho$ is calculated with Comptra04 \cite{Kuhlbrodt04}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\linewidth,clip=true]{CompoNEU-crop.pdf} \caption{(Color online) Ionization degree of the partially ionized noble gases at $T=20 \, 000 {\rm K}$.} \label{fig:Comp} \end{center} \end{figure} Fig.~\ref{fig:Comp} shows the ionization degree $\alpha_{\rm e}=n_{\rm e}/n_{\rm heavy}$ of the noble gases He, Ne, Ar, Kr and Xe at $T=20 \, 000 \, {\rm K}$ for heavy particle densities $n_{\rm heavy} = \rho N_{\rm A}/M=n_{\rm a}+n_{\rm i}$ in the range $10^{18} {\rm cm}^{-3} < n_{\rm heavy} < 10^{22} {\rm cm}^{-3}$ on the left side and as a function of the degeneracy parameter $\Theta$ in the range $1 <\Theta < 1000$ on the right side. At low densities the ionization degree is $\approx 1$, so we expect only a small contribution of $e$--$a$ collisions to the electrical conductivity. For intermediate densities the ionization degree has a minimum, and we expect a dominant contribution of $e$--$a$ collisions. For high densities, bound states are dissolved by the Mott effect and the plasma becomes fully ionized, $n_{\rm a} \approx 0$. Because of Pauli blocking, the $e$--$e$ contributions can then also be neglected, so that only the $e$--$i$ contribution is important for the conductivity and the Ziman formula is applicable. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\linewidth,clip=true]{dlld2-crop.pdf} \caption{(Color online) Correlation functions $d_{11}$ and $d_{33}$ in units of $d=\frac{4}{3}\sqrt{2\pi m \beta} [(n_{\rm e}e^2)/(4\pi\epsilon_0)]^2 \Omega_{\rm N}$ for the different considered scattering mechanisms $e$--$i$ (black), $e$--$e$ (brown), $e$--He (red), $e$--Ne (orange), $e$--Ar (green), $e$--Kr (blue) and $e$--Xe (violet) at $T = 20 \, 000 {\rm K}$. (Two scales.)} \label{fig:CorrF} \end{center} \end{figure} The different contributions to the correlation functions are shown in Fig.~\ref{fig:CorrF}. Despite the small ionization degrees, the charged-particle contributions yield the dominant parts for helium at $\Theta>600$ and for neon at $\Theta > 60$ in $d_{11}$ (only $e$--$i$) and $d_{33}$ ($e$--$i$ and $e$--$e$), because the $e$--$a$ interaction (small $Q_{\rm T}^{\rm ea}$) is weak in contrast to the charged-particle interaction. The $e$--$e$ contribution enters explicitly in $d_{33}$.
For low densities the ratio between the $e$--$i$ and the $e$--$e$ contribution is $d_{33}^{\rm ei}/d_{33}^{\rm ee}>\sqrt{2}Z$. \subsection{Electrical conductivity for partially ionized noble gases} \begin{widetext} \begin{figure}[htb] \begin{center} \includegraphics[width=0.4\linewidth,clip=true]{LeitfHeAr-crop.pdf} \includegraphics[width=0.4\linewidth,clip=true]{LeitfT20kKPIPall-crop.pdf} \includegraphics[width=0.4\linewidth,clip=true]{LeitfArPIP-crop.pdf} \includegraphics[width=0.4\linewidth,clip=true]{LeitfXePIP-crop.pdf} \caption{Conductivity of partially ionized noble gases using the (screened) optical potential for the $e$--$a$ interaction (full lines). \\ (a): at $20 \, 000 {\rm K}$ for helium (red) and argon (green) in comparison with the fully ionized plasma model, $n_{\rm a}=0$ (dashed-dotted), a calculation neglecting the $e$--$a$ interaction, $V_{\rm ea}=0$ (dashed), and experimental data \cite{Dudin98He,FortovHe,Ivanov76NeArXe} (circles); \\ (b): at $20 \, 000 {\rm K}$ for helium (red), neon (orange), argon (green), krypton (blue) and xenon (violet) in comparison with the experimental data \cite{Dudin98He,FortovHe,Ivanov76NeArXe} (circles); \\ (c): for argon at various temperatures $T=10 \, 000 {\rm K}$ (cyan), $15 \, 000 {\rm K}$ (brown) and $20 \, 000 {\rm K}$ (black) in comparison with calculations using the polarization potential \cite{Kuhlbrodt05} (dashed) and experimental data \cite{Ivanov76NeArXe, Shilkin03ArXe} (filled circles, filled squares); \\ (d): for xenon at various temperatures $T=10 \, 000 {\rm K}$ (cyan), $15 \, 000 {\rm K}$ (brown) and $25 \, 000 {\rm K}$ (black) in comparison with calculations using the polarization potential \cite{Kuhlbrodt05} (dashed) and experimental data \cite{Ivanov76NeArXe, Shilkin03ArXe, Mintsev79Xe} (filled circles, filled squares, filled triangles). } \label{fig:PIPTra} \end{center} \end{figure} \end{widetext} Using the total momentum of the electrons ${\bf P}_1$ and the heat current ${\bf P}_3$ as relevant observables, the conductivity, Eq.~(\ref{eq:cond}), is calculated from the correlation functions $d_{11}$, $d_{13}$ and $d_{33}$ and the composition obtained with Comptra04. Theoretical results for the conductivity as a function of the heavy particle density $n_{\rm heavy}$ are compared with experimental data in Fig.~\ref{fig:PIPTra}. In the upper panels (a,b) of Fig.~\ref{fig:PIPTra}, the strong reduction of the conductivity as a consequence of $e$--$a$ collisions is shown at $T\approx 20 \, 000 \, {\rm K}$. Corresponding to Figs.~\ref{fig:Comp} and \ref{fig:CorrF}, the reduction is stronger for the lighter elements. The characteristic minimum in the conductivity and a systematic behavior are observed for all noble gases. The experimental data for the conductivity are described by the PIP model using the optical potential. Similarly good results were found by Adams {\it et al.} \cite{Adams07}, who used the experimental momentum-transfer cross sections to describe the $e$--$a$ collisions. In the lower panels (c,d) of Fig.~\ref{fig:PIPTra}, the conductivity as a function of $n_{\rm heavy}$ is presented at various temperatures for argon and xenon. Experimental data are compared with calculations based on the optical potential, Eq.~(\ref{eq:sVopt}), and with calculations performed by Kuhlbrodt {\it et al.} \cite{Kuhlbrodt05} based on the polarization potential for the description of the $e$--$a$ collisions. Also for the lower temperature $T=10 \, 000$ K, the experimental data for the conductivity are described by the PIP model using the optical potential. The qualitative behavior of the conductivity, e.g.
the characteristic minimum, is also observed by Kuhlbrodt {\it et al.} \cite{Kuhlbrodt05}, but the differences amount to up to two orders of magnitude; the polarization potential is not sufficient to describe the $e$--$a$ collisions. \section{Conclusion} Within the LRT, general expressions for plasma properties are obtained starting from a quantum-statistical approach. Introducing the chemical picture and the composition given by the ionization equilibrium, the atoms (bound states) are treated as a new species. Semi-empirical expressions for the effective potentials, in particular for the $e$--$a$ interaction, can be derived from first-principles approaches. We introduce the optical potential to describe the $e$--$a$ interaction. A more systematic approach using the Green function method would give an interaction which is nonlocal and energy dependent. The optical potential used here is motivated by perturbation theory up to second order, which combines the Coulomb interaction, relevant at short distances, with the polarization potential, relevant at large distances. We used an exchange contribution determined by an effective local field. This contribution is determined by the condition that the characteristic scattering phase shifts of the non-local first-order perturbation theory, the static-exchange approximation, are reproduced. We show that a unique form of the optical potential can be introduced to describe the momentum-transfer cross section for all noble gases with high precision. Only one parameter, $r_0$, is adjusted to describe the $e$--$a$ scattering mechanism. Beyond the Born approximation, specific effects such as the Ramsauer-Townsend minimum appear within a T-matrix approach. The optical potential model is extended to describe a dense plasma environment in which screening effects arise. The effects on the momentum-transfer cross sections are shown; in particular, the Ramsauer-Townsend minimum is modified by plasma effects. Furthermore, the electrical conductivity is calculated in good agreement with experimental data. We found that for plasma conditions $T \approx 1 \, {\rm eV}$ and $n_{\rm heavy} \approx 10^{21} {\rm cm}^{-3}$ the screening effects on the conductivity are smaller than $2 \%$. This explains why the calculations of the conductivity using the experimental momentum-transfer cross sections \cite{Adams07} are in good agreement with the measured data. In general, when exploring WDM but also ultra-cold gases, the medium (plasma) effects may become more significant for the description of transport properties. In future work, improved screening (dynamical screening), Pauli blocking, the static structure factor, and degeneracy effects on the quantum-mechanical T matrix can also be treated within our quantum-statistical approach. \section*{Acknowledgements} The authors acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) within the Collaborative Research Center SFB 652. \begin{appendix} \section{Screened Hartree-Fock potential} \label{AppA} The Hartree-Fock potential between an electron and an atom describes the Coulomb interaction of the incoming electron with the core and the shell electrons and is given by \begin{align} V_{\rm HF}(r) &= \frac{e^2}{4\pi \epsilon_0} \left[ -\frac{Z}{r} + \int \frac{1}{\bf |r-r'|}\rho(r') \, d^3{\bf r'} \right] \, . \end{align} For the plasma system the Coulomb interaction is screened.
We replace the Coulomb interaction by a Debye potential \begin{align} V_{\rm HF}^{\rm s}(r) &= \frac{e^2}{4\pi \epsilon_0} \left[ - \frac{Ze^{-\kappa r}}{r} + \int \frac{e^{-\kappa {\bf |r-r'|}}}{\bf |r-r'|}\rho(r') \, d^3{\bf r'} \right] \, . \end{align} The second term in the square brackets can be expressed as \begin{align*} 2\pi \int\limits_0^{\infty} dr' \, r'^2 \rho(r') \int\limits_{-1}^{1} dz \, \frac{e^{-\kappa \sqrt{r^2+r'^2-2rr'z}}}{\sqrt{r^2+r'^2-2rr'z}} \, . \end{align*} Substituting $y=\sqrt{r^2+r'^2-2rr'z}$ we obtain \begin{align*} 2\pi \int\limits_0^{\infty} dr' \, r'^2 \rho(r') \int\limits_{|r-r'|}^{r+r'} dy \, \frac{e^{-\kappa y}}{rr'} \, . \end{align*} Performing the integral over $y$, the integral over $r'$ can be split, and we obtain Eq.~(\ref{eq:sVHF}), which for $\kappa \rightarrow 0$ reduces to the well-known result Eq.~(\ref{eq:VHF}). \section{Asymptotic behavior for the screened Hartree-Fock potential} \label{AppB} Expanding the exponential functions in the integrands of $I_1$ and $I_2$ we obtain: \begin{align} I_1 &= \frac{e^{-\kappa r}}{2\kappa r} \int\limits_0^{r} \frac{\rho(r_1)}{r_1} \left\{1+ \kappa r_1 + \frac{\kappa^2 r_1^2}{2} +... \right\} \, dr_1 \, , \\ I_2 &= - \frac{e^{-\kappa r}}{2\kappa r} \int\limits_0^{\infty} \frac{\rho(r_1)}{r_1} \left\{ 1 - \kappa r_1 + \frac{\kappa^2 r_1^2}{2} -+... \right\} \, dr_1 \, , \\ I_1+I_2 &> \frac{Ze^{-\kappa r}}{r} - \frac{1}{2\kappa r} e^{-\kappa r} \int\limits_r^{\infty} \frac{\rho(r_1)}{r_1} \left\{ 1 + \kappa r_1 + \kappa^2 r_1^2 \right\} \, . \label{eq:I1I2} \end{align} For large distances $r \rightarrow \infty$, $I_3$ is larger than the last term in Eq.~(\ref{eq:I1I2}), so that $V_{\rm HF}^{\rm s}(r)$ becomes repulsive. Expanding the exponential functions for $I_1$ and $I_2$ up to the third order $(\kappa r_1)^3$ we find \begin{align} \begin{split} V_{\rm HF}^{\rm s}(r)&= \frac{e^2}{4\pi \epsilon_0}\frac{e^{-\kappa r}}{\kappa r} \int\limits_0^{\infty} \frac{\rho(r_1)}{r_1} \left\{\frac{\kappa^3 r_1^3}{3!} + \frac{\kappa^5 r_1^5}{5!} + ... \right\} \, dr_1 \\ & \quad- \frac{e^2}{4\pi \epsilon_0} \frac{1}{\kappa r} \int\limits_r^{\infty} \frac{\rho(r_1)}{r_1} \sinh\left[\kappa (r_1 -r)\right] \, dr_1 \, . \end{split} \end{align} For $0 < \kappa a_0 < 1$, the dominant contribution of the first term is of order ${\cal O} (\kappa^2 e^{-\kappa r}/r)$. The last term decreases faster for large distances $r \rightarrow \infty$, and we obtain \begin{align} V_{\rm HF}^{\rm s}(r) &= \frac{Z e^2}{4\pi \epsilon_0 } \frac{e^{-\kappa r}}{r} \, \sum_{k=0}^{\infty} (\kappa a_0)^{2k+2} {\cal C}_k \, , \\ {\cal C}_k &= \frac{Z^{-1}}{(2k+3)!} \int\limits_0^{\infty} \left( \frac{r_1}{a_0}\right)^{2k+2} \rho(r_1) \, dr_1 \, . \end{align} The numerical values of the coefficients ${\cal C}_k$ are listed in Tab.~\ref{tab:coef}. Because of the fast convergence, only ${\cal C}_0$ is needed for inverse screening lengths $0<\kappa a_0 < 1$, which verifies Eq.~(\ref{eq:asVHF}).
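The coefficients ${\cal C}_k$ are easy to check numerically. A minimal sketch follows (Python; lengths in units of $a_0$, with the density written as the three-dimensional density $n(r)$ so that the radial weight $4\pi r^2$ appears explicitly, as in Eq.~(\ref{eq:asVHF})); it reproduces ${\cal C}_0^{\rm H}=1/2$ for a hydrogen 1s density.

\begin{verbatim}
import math
import numpy as np
from scipy.integrate import quad

def C_k(k, n3d, Z, rmax=60.0):
    # C_k = Z^{-1}/(2k+3)! * integral of r^{2k+2} 4 pi r^2 n(r) dr  (r in a_0)
    integrand = lambda r: r**(2 * k + 2) * 4.0 * np.pi * r**2 * n3d(r)
    val, _ = quad(integrand, 0.0, rmax, limit=200)
    return val / (Z * math.factorial(2 * k + 3))

n_H = lambda r: np.exp(-2.0 * r) / np.pi    # hydrogen 1s density in atomic units
print(C_k(0, n_H, 1.0))                      # -> 0.5, i.e. C_0^H = 1/2
\end{verbatim}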
\begin{center} \begin{table}[h] \begin{tabular}{ | c | c c c c c |} \hline $k$ & ${\cal C}_k^{\rm He}$ & ${\cal C}_k^{\rm Ne}$ & ${\cal C}_k^{\rm Ar}$ & ${\cal C}_k^{\rm Kr}$ & ${\cal C}_k^{\rm Xe}$ \\ \hline 0 & 0.1974 & 0.1563 & 0.2411 & 0.1830 & 0.1933 \\ 1 & 0.0324 & 0.0228 & 0.0672 & 0.0568 & 0.0727 \\ 2 & 0.0050 & 0.0033 & 0.0170 & 0.0165 & 0.0262 \\ 3 & 0.0007 & 0.0005 & 0.0044 & 0.0044 & 0.0088 \\ 4 & 0.0001 & 0.0001 & 0.0013 & 0.0011 & 0.0028 \\ \hline \end{tabular} \caption{Coefficients ${\cal C}_k$ for the noble gases.} \label{tab:coef} \end{table} \end{center} \end{appendix}
\section*{Acknowledgments} We are grateful to our collaborators in the ALICE and the STAR experiments for enlightening discussions and suggestions. We thank J. Liao for helpful comments from a theoretical standpoint. We also thank W.-B. He and C. Zhong for their effort in maintaining the computing resources. This work is supported by the National Key Research and Development Program of China (Nos. 2018YFE0104600, 2016YFE0100900), the National Natural Science Foundation of China (Nos. 11890710, 11890714, 12061141008, 11975078, 11421505), the Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB34030000) and the Guangdong Major Project of Basic and Applied Basic Research (No. 2020B0301030008). Q.-Y. S. is also sponsored by the Shanghai Rising-Star Program (20QA1401500). \bibliographystyle{utphys}
\section{Introduction} \label{INTRO} There is a well-established connection between elementary particle physics and representation theory, as first noted in the 1930s by Wigner \cite{epw}. It links the properties of particles to the structure of Lie groups and Lie algebras. An outstanding example is the Poincar\'e group, whose irreducible unitary representations provide the quantum mechanical state spaces of the various elementary particles. According to the axioms of quantum mechanics, a system composed of independent particles is described by the tensor product of their individual state spaces, which means by a product representation of the Poincar\'e group. In fact, the Standard Model (of particle physics) is based on product representations in connection with a Fock space formulation, which allows describing a variable number of particles. On the basis of these product representations, the electromagnetic, weak, and strong interactions are modelled by ``interaction terms'' that couple the particles to gauge fields and thus establish correlations between the particles. Another way of establishing correlations is to reduce a product representation to a direct sum of irreducible multi-particle representations. An irreducible representation of the Poincar\'e group is characterised and labelled by fixed values of two Casimir operators. It describes an isolated multi-particle system, characterised by well-defined and conserved total linear and angular momenta. The particles of a multi-particle system described by such a representation are no longer independent, but are correlated by the constancy of the Casimir operators. Both types of correlation, those by coupling to the gauge fields and those by fixing the values of the Casimir operators, can be described as an interaction mediated by the virtual exchange of momentum between the particles. I will show that irreducible two-particle representations of the Poincar\'e group describe an interaction that is equal in structure and strength to the electromagnetic interaction. \section{Irreducible unitary representations} \label{IRRE} \subsection{Basic properties} Following the presentation in \cite{sss1}, I will briefly recall the basic properties of the irreducible unitary representations of the Poincar\'e group. (The speed of light $c$ and Planck's constant $\hbar$ are set to 1.) The Poincar\'e group is defined by the commutation relations of the infinitesimal generators of translations and Lorentz transformations, $p_\mu$ and $M_{\mu\nu}$, \begin{eqnarray} \left[M_{\mu\nu}, p_\sigma \right] &=& i\,(g_{\nu\sigma} p_\mu - g_{\mu\sigma} p_\nu), \label{1-1} \\ \left[p_\mu, p_\nu \right] &=& 0, \label{1-2} \\ \left[M_{\mu\nu}, M_{\rho\sigma}\right] &=& -i\,(g_{\mu\rho} M_{\nu\sigma} - g_{\nu\rho} M_{\mu\sigma} + g_{\mu\sigma} M_{\rho\nu} - g_{\nu\sigma} M_{\rho\mu}), \label{1-3} \end{eqnarray} where $g_{\mu\nu}$ is the Minkowski metric tensor $g = (g_{\mu\nu}) = $ diag$(1,-1,-1,-1)$. In quantum mechanical applications, the infinitesimal generators $p_\mu$ and $M_{\mu\nu}$ are represented by self-adjoint linear operators, acting on a Hilbert space, which is the state space of the representation. These operators are identified with the following observables: their eigenvalues are real numbers that correspond to the measured values of linear momentum and angular momentum. (I will use the same notations for these operators as for the corresponding infinitesimal generators.) 
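As a quick cross-check of Equation (\ref{1-3}), the commutation relations of the $M_{\mu\nu}$ can be verified numerically in the finite-dimensional vector (defining) representation, $(M_{\mu\nu})^{\alpha}{}_{\beta} = i\,(\delta^{\alpha}_{\mu} g_{\nu\beta} - \delta^{\alpha}_{\nu} g_{\mu\beta})$. The following sketch (Python) is purely illustrative; this matrix representation is of course not unitary and serves here only to check the algebra.

\begin{verbatim}
import itertools
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])

# Lorentz generators in the vector representation:
# (M_{mu nu})^a_b = i (delta^a_mu g_{nu b} - delta^a_nu g_{mu b})
M = np.zeros((4, 4, 4, 4), dtype=complex)
for mu, nu, a, b in itertools.product(range(4), repeat=4):
    M[mu, nu, a, b] = 1j * ((a == mu) * g[nu, b] - (a == nu) * g[mu, b])

def comm(A, B):
    return A @ B - B @ A

# check Equation (1-3) for all index combinations
for mu, nu, r, s in itertools.product(range(4), repeat=4):
    rhs = -1j * (g[mu, r] * M[nu, s] - g[nu, r] * M[mu, s]
                 + g[mu, s] * M[r, nu] - g[nu, s] * M[r, mu])
    assert np.allclose(comm(M[mu, nu], M[r, s]), rhs)
print("Lorentz commutation relations verified in the vector representation.")
\end{verbatim}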
The Poincar\'e group has two Casimir operators \begin{equation} P = p^\mu p_\mu\;\; \mbox{and} \;\; W = -w^\mu w_\mu\; , \; \mbox{ with } \; w_\sigma = \frac{1}{2} \epsilon_{\sigma \mu \nu \lambda} M^{\mu \nu}p^\lambda; \label{1-4} \end{equation} here $\epsilon_{\sigma \mu \nu \lambda}$ is the antisymmetric tensor of rank 4 and $w_\sigma$ is the Pauli--Lubanski pseudovector. The commutation relations of $w_\sigma$ can be derived from the commutation relations (\ref{1-1}--\ref{1-3}): \begin{eqnarray} \left[M_{\mu\nu}, w_\rho \right] &=& i\,(g_{\nu\rho} w_\mu - g_{\mu\rho} w_\nu) , \label{1-10} \\ \left[w_\mu, p_\nu \right] &=& 0 , \label{1-11} \\ \left[w_\mu, w_\nu \right] &=& i \epsilon_{\mu \nu \rho \sigma} w^\rho p^\sigma . \label{1-12} \end{eqnarray} With (\ref{1-10}--\ref{1-12}), it can then be verified that the Casimir operators (\ref{1-4}) commute with all the infinitesimal generators $p_\mu$ and $M_{\mu\nu}$. A representation is irreducible if the Hilbert space has no nontrivial invariant subspaces. In this case, the Casimir operators are multiples of the identity and their scalar values can be used to classify the irreducible representations. The eigenvalues of $P$ correspond to a squared mass $m^2$. In a single-particle representation, relation $P = p^\mu p_\mu = m^2$ defines the ``mass shell'' of the particle. In the following, I will only consider representations with $m^2 > 0$. In vector notation, the Casimir operator $P$ can be written as \begin{equation} P = p_0^2 - \mathbf{p}^2, \label{1-7} \end{equation} where the vector $\mathbf{p}$ denotes the three spatial components of $p_\mu$. The interpretation of the eigenvalues of the Casimir operator $W$ is less obvious. The pseudovector $w_\sigma$ can be written as \begin{equation} w_0 = \mathbf{p} \cdot \mathbf{M}, \;\; \mathbf{w} = p_0\mathbf{M} - \mathbf{p} \times \mathbf{N}, \label{1-5} \end{equation} where $\mathbf{M}=(M_{32}, M_{13}, M_{21})$ denotes the three components of the angular momentum and $\mathbf{N}=(M_{01}, M_{02}, M_{03})$ are the three boost operators. It is convenient to make a Lorentz transformation to the rest frame in which $\mathbf{p}=\mathbf{0}$, $p_0=m$. Then (\ref{1-5}) reduces to \begin{equation} w_\mu = m\,(0, M_{23}, M_{31}, M_{12}) \label{1-6} \end{equation} and the Casimir operator $W$ takes on the simple form \begin{equation} W = m^2\,\mathbf{M}^2. \label{1-8} \end{equation} The eigenvalues of $\mathbf{M}^2$ are $s(s+1)$, where $s = 0, \frac{1}{2}, 1, \frac{3}{2}, 2, \dots$ The value of $s$, together with the eigenvalue of a component of $M_{ik}$, say $M_{12}$ with eigenvalue $s_3$, where $s_3 = -s, -s+1,\dots, s-1, s$, label a complete set of eigenstates of the angular momentum $\mathbf{M}$. In single-particle representations, $\mathbf{M}$ corresponds to the spin of the particles. Within the state space of an irreducible representation, a basis can be defined whose states are labelled by their eigenvalues with respect to a complete set of commuting operators; for example, the operators of the linear momentum $p_\mu$ and a component of $w_\mu$, say $w_3$. Because the Casimir operator $P = p^\mu p_\mu$ has a constant value, only the three spatial components $\mathbf{p}$ of $p_\mu$ are independent. Given that by Equation (\ref{1-8}) the eigenvalue of $W$ fixes the value $s$ of the angular momentum, we additionally need the eigenvalue $s_3$ of $M_{12}$ to label a complete set of basis states (in Dirac's bra-ket notation) $\left|\mathbf{p}, s_3 \right>$. 
There are two irreducible representations for each value of $P$ and $W$, one for each sign of $p_0/|p_0|$, which commutes with all the infinitesimal generators and is therefore an invariant of the group. \subsection{Product representations and their reduction} According to the axioms of quantum mechanics, a system of $N$ independent particles is described by the tensor product of single-particle representations (see, for instance, \cite{jmj}). In such a product representation, the total linear momentum and the angular momentum can be defined by \begin{equation} p_\mu = \sum^N_{i=1} p^i_\mu \;\; \mbox{ and } \;\; M_{\mu \nu} = \sum^N_{i=1} M^i_{\mu \nu} , \label{2-1} \end{equation} where $p^i_\mu$ and $M^i_{\mu \nu}$ are operators of the representation of particle i. It is easy to verify that the multi-particle operators (\ref{2-1}) satisfy the same commutation relations (\ref{1-1}--\ref{1-3}) as the single-particle operators. Therefore, a product representation has the same number of Casimir operators as a single-particle representation and expressed by the operators (\ref{2-1}) they have the same form. By construction, a product representation of the Poincar\'e group is not irreducible. It can, however, be reduced to a direct sum of irreducible representations, as first described by Joos \cite{hj}. Reduction essentially means sorting states according to the eigenvalues of the Casimir operators. The different irreducible representations can then be classified by the eigenvalues of the Casimir operators $P$ and $W$ built from the multi-particle operators defined in (\ref{2-1}). Therefore, a multi-particle system described by an irreducible representation has an invariant effective mass $m_{\mathrm{tot}}$, with $m_{\mathrm{tot}}^2 = P$, and an invariant product of total linear momentum and angular momentum, which in the rest frame can be written as $m_{\mathrm{tot}}^2\,\mathbf{M}^2 = W$. The angular momentum $\mathbf{M}$ now consists of the particle spins and the orbital angular momentum that corresponds to a rotation of the N-particle system as a whole. In the following, for reasons of simplicity, I will ignore spin variables. Therefore, the angular momentum will be identical to the orbital angular momentum and the individual particle states will be labelled solely by their linear momenta. It should be emphasised that by Equation (\ref{1-8}) the orbital angular momentum is tightly anchored in the Casimir operator $W$, which means that it is an inherent property of an irreducible representation of the Poincar\'e group. It is the multi-particle equivalent of the spin of an elementary particle. As in the single-particle case, a basis of eigenstates $\left|\mathbf{p}, s_3 \right>$ of the (total) linear momentum and a component of the angular momentum can be chosen. Due to the double function of the infinitesimal generators as observables and as generators of symmetry transformations, an eigenstate of the orbital angular momentum has a rotationally symmetric structure. This means that in irreducible $N$-particle representations, the individual particle states are forced into a geometrically correlated configuration. This is in sharp contrast to what would be expected from a classical point of view for a system of ``independent'' particles. \subsection{Two-particle representations} In the following, I will study the consequences of this correlation for two-particle systems. 
Consider two independent particles with momenta $\mathbf{p}_1$ and $\mathbf{p}_2$, prepared in a (tensor) product state $\left|\mathbf{p}_1\right>\!\left|\mathbf{p}_2 \right>$. Following a common habit, I shall write this product state also in the form $\left|\mathbf{p}_1,\mathbf{p}_2 \right>$. The product states are normalised via the inner product $\left<..|..\right>$ by \begin{equation} \left<\mathbf{p}_1,\mathbf{p}_2 | \mathbf{p}'_1,\mathbf{p}'_2\right> = \delta(\mathbf{p}_1 - \mathbf{p}'_1) \, \delta(\mathbf{p}_2 - \mathbf{p}'_2) . \label{2-2} \end{equation} By superimposing these product states, the basis of eigenstates $\left|\mathbf{p},s_3\right>$ of the total linear momentum and the angular momentum is formed. Because of their rotational symmetry, the basis states are a superposition of product states, such that along with any pure product state $\left|\mathbf{p}_1,\mathbf{p}_2\right>$, the rotated versions of this state also contribute to the superposition. This consideration largely determines the structure of the basis states. They are a momentum-entangled, rotationally symmetrical superposition of product states $\left|\mathbf{p}_1,\mathbf{p}_2\right>$ with the same total momentum $\mathbf{p} = \mathbf{p}_1 + \mathbf{p}_2$ \begin{equation} \left|\mathbf{p},s_3\right>\; = \;\int_\Omega\!d^3\mathbf{p}_1 d^3\mathbf{p}_2\; \delta(\mathbf{p} - \mathbf{p}_1 - \mathbf{p}_2)\; c_{\mathbf{p}, s_3}(\mathbf{p}_1,\mathbf{p}_2) \, \left|\mathbf{p}_1,\mathbf{p}_2\right> . \label{2-4} \end{equation} The integration area $\Omega$ is part of the two-particle mass shell \begin{equation} (p_1 + p_2)^\mu (p_1 + p_2)_\mu = m_{\mathrm{tot}}^2 . \label{2-5} \end{equation} With the alternative variable set $\mathbf{k}$ = $\mathbf{p}_1 - \mathbf{p}_2$ and $\mathbf{p}$ = $\mathbf{p}_1 + \mathbf{p}_2$ we have $\mathbf{p}_1 = \frac{1}{2}(\mathbf{p+k})$ and $\mathbf{p}_2 = \frac{1}{2}(\mathbf{p-k})$; therefore the state (\ref{2-4}) can be given in the form \begin{equation} \left|\mathbf{p},s_3\right>\; = \;\int_\Omega\!d^3\mathbf{k}\; \left|\frac{\mathbf{p+k}}{2}\right> \, c_{\mathbf{p}, s_3}(\mathbf{k}) \, \left|\frac{\mathbf{p-k}}{2}\right> , \label{2-6} \end{equation} where I have written the product state $\left|\mathbf{p}_1,\mathbf{p}_2\right>$ explicitly as a tensor product. This form suggests a familiar physical interpretation: The state describes two particles {\it coupled by the field} $c_{\mathbf{p}, s_3}(\mathbf{k})$, which controls (causes) a {\it virtual exchange of momentum} $\mathbf{k}$ between the particles. This is formally the same as the description of the interaction mechanism of the Standard Model, where a similar field is understood as a gauge field and the exchanged momenta as virtual gauge bosons. Because all basis states have the same form, ``virtual exchange of momentum'' is a general structural element of irreducible two-particle representations of the Poincar\'e group. \subsection{Normalisation of two-particle states} The two-particle state (\ref{2-4}) still needs to be normalised according to the standard rule of quantum mechanics: If $N$ normalised orthogonal states are superimposed, then the resulting state must be re-normalised with the factor $N^{-\frac{1}{2}}$. If the summation is replaced by an integration, then the normalisation factor is $\omega = V\!(\Omega)^{-\frac{1}{2}}$, where $V$ is the volume of the region of integration $\Omega$. A general state (i.e. 
not necessarily an eigenstate) of the irreducible two-particle representation, normalised by the factor $\omega$, can then be written in the form \begin{equation} \left|\phi\right>\; = \;\int_\Omega\!\omega\,d^3\mathbf{p}_1 d^3\mathbf{p}_2\; c_\phi(\mathbf{p}_1,\mathbf{p}_2) \, \left|\mathbf{p}_1,\mathbf{p}_2\right>. \label{2-8} \end{equation} With the normalisation condition of product states (\ref{2-2}), the inner product evaluates to \begin{eqnarray} \left<\phi|\phi\right> &=& \int_\Omega\!\omega^2\,d^3\mathbf{p}_1 d^3\mathbf{p}_2 d^3\mathbf{p'}_1 d^3\mathbf{p'}_2\; c^*_\phi(\mathbf{p}_1,\mathbf{p}_2) c_\phi(\mathbf{p'}_1,\mathbf{p'}_2) \left<\mathbf{p}_1,\mathbf{p}_2 | \mathbf{p}'_1,\mathbf{p}'_2\right> \;\;\;\; \label{2-10} \\ &=& \int_\Omega\!\omega^2\,d^3\mathbf{p}_1 d^3\mathbf{p}_2\;c^*_\phi(\mathbf{p}_1,\mathbf{p}_2) c_\phi(\mathbf{p}_1,\mathbf{p}_2) \;\;!= 1.\label{2-11} \end{eqnarray} The last equation (\ref{2-11}) gives the normalisation condition for the coefficients $c_\phi(\mathbf{p}_1,\mathbf{p}_2)$. I will call the infinitesimal volume element $\omega\,d^3\mathbf{p}_1 d^3\mathbf{p}_2$ of the integral (\ref{2-8}) the {\it normalised volume element} on $\Omega$. For reasons of transparency, I will extract the normalisation factor $\omega$ from the state and write a normalised two-particle state in the following as $\omega \left|\phi\right>$. In Section 4, I will calculate the numerical value of the normalisation factor $\omega = V(\Omega)^{-\frac{1}{2}}$. This calculation is anything but trivial due to the specific topology and the non-Euclidean metric of the two-particle mass shell. \section{A gedanken experiment} \label{STAN} The following scattering experiment illustrates the physical effects of the virtual momentum exchange described by Equation (\ref{2-6}). Suppose two independently prepared particles, described by a product state $\left|\mathbf{p}_1,\mathbf{p}_2\right>$, pass through a measuring device capable of measuring their orbital angular momentum while leaving the total momentum unchanged. The device will measure an angular momentum $s$, and leave the two-particle system in the state $\omega \left|\mathbf{p},s_3\right>$. This state is then analysed by independently measuring the outgoing particle momenta $\mathbf{p}'_1$ and $\mathbf{p}'_2$. The total transition amplitude between an incoming product state $\left|\mathbf{p}_1,\mathbf{p}_2\right>$ and an outgoing product state $\left|\mathbf{p}'_1,\mathbf{p}'_2\right>$ is given by \begin{equation} S = \omega^2 \left<\mathbf{p}'_1, \mathbf{p}'_2|\mathbf{p},s_3\right>\! \left<\mathbf{p},s_3|\mathbf{p}_1,\mathbf{p}_2\right> . \label{3-1} \end{equation} Since in the intermediate state the information about the incoming particle momenta is lost, the outgoing particle momenta will, in general, be different from the incoming momenta. In other words, we will observe a scattering with an amplitude determined by $\omega^2$, which here acts like a coupling constant. This scattering mechanism can be seen as the two-particle analogue of diffraction of a plane wave at a pinhole. While a pinhole selects a spherical elementary wave, this gedanken experiment selects an eigenstate of the orbital angular momentum. \section{Calculating the normalisation factor} \label{CALC} The construction of the normalised infinitesimal volume element on the two-particle mass shell was first described in \cite{sm3}. This construction involved rather abstract geometrical considerations on the basis of the Lie ball in 5 dimensions. 
It remained, therefore, non-transparent with respect to its physical contents and consequently not really understood. The present paper provides a simpler, more detailed, and, above all, physically transparent description of that construction. \subsection{The two-particle mass shell $\mathcal{R}_{shell}$} \label{MASSSH} Let $p_1$ and $p_2$ be the 4-momenta of two particles with masses $m_1$ and $m_2$. They satisfy the mass-shell relations \begin{equation} {p_1}^2 = m_1^2 \;\; \mbox{ and } \;\; {p_2}^2 = m_2^2 . \label{4-1} \end{equation} The total momentum $p$ and the relative momentum $q$ are defined by \begin{equation} p = p_1 + p_2 \;\; \mbox{ and } \;\; q = p_1 - p_2 \label{4-2} \end{equation} and satisfy \begin{equation} p\,q = m_1^2 - m_2^2 . \label{4-3} \end{equation} Equation (\ref{4-3}) allows rotating the vector $q$ relatively to $p$ by the action of $SO(3)$, as long as no other restrictions apply. For an irreducible two-particle representation, \begin{equation} p^2 = m_{\mathrm{tot}}^2 \label{4-4} \end{equation} and \begin{equation} q^2 = 2 m_1^2 + 2 m_2^2 - m_{\mathrm{tot}}^2, \label{4-5} \end{equation} where $m_{\mathrm{tot}}^2$ is the constant value of the Casimir operator $P$. Unless a particle is combined with an anti-particle, $q$ is space-like. Then $q^2 < 0$ and by Equation (\ref{4-5}) \begin{equation} m_{\mathrm{tot}}^2 > 2 m_1^2 + 2 m_2^2, \label{4-8} \end{equation} corresponding to scattering states. If a particle is combined with an anti-particle, i.e. with $p_2$ on the negative half of its mass shell, $q$ is time-like. Then $q^2 > 0$ and \begin{equation} m_{\mathrm{tot}}^2 < 2 m_1^2 + 2 m_2^2, \label{4-9} \end{equation} indicating the possibility of bound states. Equations (\ref{4-4}) and (\ref{4-5}) can be combined to form the equation of the two-particle mass shell \begin{equation} p^2 + q^2 = 2 m_1^2 + 2 m_2^2 . \label{4-6} \end{equation} The symmetries of this equation require special attention. Rotations around the 3-momentum $\mathbf{p}$ as the axis leave $p$ invariant, but change $q$. Therefore, these rotations define an internal degree of freedom with an $SO(2)$ symmetry. The $SO(2)$ moves within $SO(3,1)$ as $p$ moves through the hyperboloid (\ref{4-4}). The two-particle mass shell has, therefore, the structure of a circle bundle over the mass hyperboloid of $p$. The circle fibres parametrise the internal rotational degree of freedom of a two-particle state. Together, $SO(3,1)$ and the local $SO(2)$ generate the irreducible parameter space \begin{equation} \mathcal{R}_{shell} = \{\mathbf{p} \in \mathbb{R}^3, \mathbf{q} \in \mathbb{R}^2; \, \mathbf{p}^2 + \mathbf{q}^2 = p_0^2 + q_0^2 - 2 m_1^2 - 2 m_2^2 \} \; \label{4-7} \end{equation} defined in terms of the independent parameters of the two particles. The local $SO(2)$ degree of freedom is in contrast to the $SO(3)$ degree of freedom of $q$ in Equation (\ref{4-3}), which takes into account only the constancy of the Casimir operator $P$, but disregards the second Casimir operator $W$. In the following section, I will examine how the geometry of $\mathcal{R}_{shell}$ is related to standard geometrical elements, such as spheres and balls. \subsection{The homogeneous domain $\mathcal{R}_{\mbox{\tiny{IV}}}$} \label{HOMO} The space $\mathcal{R}_{shell}$ has an $SO(3,1)$ symmetry under rotations and boost operations of $p$, an $SO(2,1)$ symmetry under rotations and boost operations of $q$, and a merely apparent $SO(2)$ symmetry under rotations in the $p_0$--$q_0$ plane. 
If we could ``rotate'' (as explained below) a component of $\mathbf{p}$ into a component of $\mathbf{q}$, then the equation would have a full $SO(5,2)$ symmetry. As it is, the physically relevant symmetries are the $SO(3,1)$ symmetry and the local $SO(2)$ symmetry. The fact that the group $SO(5,2)$ differs from their combined subgroups $SO(3,1), SO(2,1)$, and $SO(2)$ by only one additional rotation group $SO(2)$ suggests that the volume of the infinitesimal volume element on $\mathcal{R}_{shell}$ can be derived from the corresponding volume element on a symmetric space for the full $SO(5,2)$. Such spaces, more precisely, bounded homogeneous domains, have been studied by Cartan \cite{eca}. Cartan has proved that there exist 6 types of bounded homogeneous domains. One of these domains, $\mathcal{R}_{\mbox{\tiny{IV}}}$, is suited for our purposes. The domain $\mathcal{R}_{\mbox{\tiny{IV}}}$ of $n$-dimensional complex vectors $z = x + iy$ can be realised on the Lie ball (cf. Hua \cite{hua1}) \begin{equation} D^n = \{z \in \mathbb{C}^n; 1 + |zz'|^2-2\bar{z}z' > 0, |zz'| < 1 \} . \label{5-1} \end{equation} Its boundary (the Shilov boundary) is given by \begin{equation} Q^n = \{\xi = x\,e^{i\theta}; x \in \mathbb{R}^n, xx'=1 \}, \; 0<\theta<\pi . \label{5-2} \end{equation} The vector $z'$ is the transpose of $z$, $\bar{z}$ is the complex conjugate of $z$. Unfortunately, from the physicist's point of view, the realisation of $\mathcal{R}_{\mbox{\tiny{IV}}}$ on the Lie ball lacks transparency. However, as Hua \cite{hua2} has shown, $\mathcal{R}_{\mbox{\tiny{IV}}}$ can also be regarded as a homogeneous space of $2 \times n$ real matrices \begin{equation} A = \left| \begin{array}{lllll} x_1 & x_2 & x_3 & ... & x_n \vspace{0.2cm} \\ y_1 & y_2 & y_3 & ... & y_n \end{array} \right |\; \end{equation} built from the real and imaginary parts of $z$. In matrix form, the homogeneous domain $\mathcal{R}_{\mbox{\tiny{IV}}}$ is defined by \begin{equation} D^n = I^{(2)} - A A' > 0 , \label{5-3} \end{equation} where $I^{(2)}$ is the unit matrix in two dimensions and $A'$ the transposed of $A$. The boundary of $D^n$ is \begin{equation} Q^n = I^{(2)} - A A' = 0 . \label{5-4} \end{equation} The realisation of the symmetry group $SO(5,2)$ on the bounded $D^5$ involves a Cayley transformation, which maps the unbounded $\mathbb{R}^5\times\mathbb{R}^5$ onto $D^5$. In physical applications, the transformation of the infinitely extended momentum space onto a bounded domain is more confusing than helpful. Therefore, I will use the natural realisation of $SO(5,2)$ on the unbounded $\mathbb{R}^5\times\mathbb{R}^5$ with $D^5$ as the associated unit ball. To establish the connection with $\mathcal{R}_{shell}$, consider a 3-dimensional vector $\mathbf{x}$, proportional to $\mathbf{p}$, and a 2-dimensional vector $\mathbf{y}$, proportional to $\mathbf{q}$, both rewritten as vectors in $\mathbb{R}^5$, \begin{equation} \mathbf{x} = (x_1,\,x_2,\,x_3,\;0,\;0) \; \mbox{ and } \; \mathbf{y} = (0,\;0,\;0,\,y_4,\,y_5). \label{5-5} \end{equation} These vectors can be combined into the $2 \times 5$ matrix \begin{equation} A = \left| \begin{array}{lllll} x_1 & x_2 & x_3 & \,0 & \,0 \vspace{0.2cm} \\ \,0 & \,0 & \,0 & y_4 & y_5 \end{array} \right | . \label{5-6} \end{equation} The matrix product $A A'$ can be evaluated as \begin{equation} A A' = \left| \begin{array}{ll} \mathbf{x}^2 & \,0 \vspace{0.2cm} \\ \,0 & \mathbf{y}^2 \end{array} \right|. 
\label{5-7} \end{equation} Hence, the unit ball $D^5$ can be rewritten as the direct product of two unit balls \begin{equation} \mathbf{x}^2 < 1 \;\; \mbox{ and } \;\; \mathbf{y}^2 < 1 \label{5-8} \end{equation} on $\mathcal{R}_{shell}$. The meaning of a ``rotation'' of a component of $\mathbf{p}$ into a component of $\mathbf{q}$ is now obvious: if it were possible to rotate the component $x_3$ into $x_4$ and $y_4$ into $y_3$ by the operations of a single additional $SO(2)$ group, then both vectors, $\mathbf{x}$ and $\mathbf{y}$, would be converted into true 5-dimensional vectors and, together with the existing $SO(3)$ symmetry of $\mathbf{p}^2$ and the $SO(2)$ symmetry of $\mathbf{q}^2$, the whole $D^5$ would be covered. The Euclidean volumes of the unit ball $D^5$ and the unit sphere $Q^5$ have been calculated in Hua \cite{hua1}, with the results \begin{equation} V\!(D^5) = \frac{\pi^5}{2^4\, 5!}\;,\;\;\; V\!(Q^5) = \frac{8 \pi^3}{3} . \label{5-9} \end{equation} \subsection{The normalised volume element on $\mathcal{R}_{shell}$} \label{NORMA} Now we are prepared to answer the question: Given a normalised spherical volume element on $\mathcal{R}_{\mbox{\tiny{IV}}}$, what will be the corresponding Euclidean volume element on $\mathcal{R}_{shell}$? The volume of the unit ball $D^5$ can be expressed by an integral over $D^5$. This integral can be split into an integral over the unit sphere $Q^5$ and a second integral over the radial direction of $D^5$: \begin{equation} V\!(D^5) = \int_0^1 dr\;r^4\!\int_{Q^5} d^4x . \label{6-2} \end{equation} The infinitesimal volume element $d^4x$ in the integral over $Q^5$, resulting in the volume $V\!(Q^5)$, is normalised by the inverse of the square root of this volume. The unit sphere $Q^5$ can be understood as generated by the action of $SO(5)$, whereas the corresponding area on $\mathcal{R}_{shell}$ can be understood as generated by the combined actions of $SO(3)$ and a local $SO(2)$, equivalent to the actions of four rotations with orthogonal axes, that is, the group $SO(4)$. Hence, compared to the volume element $d^4x$ on $Q^5$, the corresponding volume element on $\mathcal{R}_{shell}$ not only has a different size, but also a dimension that has been reduced by one. The ratio of these two volumes is given by the volume of the symmetric space $SO(5)/SO(4)$, which is the unit sphere in five dimensions $S^4$. Dividing $d^4x$ by $V\!(S^4)$ reduces the volume calculated by the integral over $D^5$ to the corresponding volume on $\mathcal{R}_{shell}$. The infinitesimal volume element $dr\,d^4x$ is still a spherical volume element with equal scalings in four directions, but with a different scaling in the radial direction. To give the infinitesimal volume element the form of an isotropic Cartesian volume element, as is used in the integral of the two-particle state (\ref{2-8}), we first have to replace $D^5$ by a cuboid with the same volume $V\!(D^5)$ and an edge length of 1 in a formerly radial direction. On this cuboid, the integration over $r$ delivers a factor of 1 to the volume of the cuboid. Together with the integration over the four other directions, the infinitesimal volume elements add up to the volume $V\!(D^5)$ of the original Lie ball. Therefore, each of the four volume elements $dx_1, dx_2, dx_3, dx_4$, now considered as part of a Cartesian volume element, contributes a factor of $V\!(D^5)^{\frac{1}{4}}$ to this volume. To make the five-dimensional volume element an isotropic one, $dr$ must be rescaled by the factor $V\!(D^5)^{\frac{1}{4}}$. 
Finally, the Jacobian that relates $p$ and $q$ to $p_1$ and $p_2$ (cf. Equation (\ref{4-2})) contributes an additional factor of 2. Taking all factors together results in the volume term \begin{equation} \omega^2 = 2\,V\!(D^5)^{\frac{1}{4}} \, / \, (V\!(Q^5)\,V\!(S^4)) . \label{6-3} \end{equation} Its square root, the factor $\omega$, normalises the Euclidean infinitesimal volume element on $\mathcal{R}_{shell}$. It is determined, at first, for a single point on the unit sphere $Q^5$, but since $\mathcal{R}_{\mbox{\tiny{IV}}}$ is a homogeneous symmetric domain, the same factor is obtained everywhere on $\mathcal{R}_{\mbox{\tiny{IV}}}$. Hence, $\omega$ is a constant: its numerical value is characteristic for an irreducible two-particle representation of the Poincar\'e group. I have to come back to the gedanken experiment resulting in the transition amplitude (\ref{3-1}). In this scattering experiment, only the value of the first Casimir operator $P$ is determined by the incoming product state. Therefore, all intermediate states $\left|\mathbf{p},s_3\right>$ with the same values of $\mathbf{p}$ and $(s, s_3)$, but different values of the second Casimir operator $W$, may contribute to the transition amplitude. Their amount derives from comparing the range of $q$ within a product representation, as covered by the action of $SO(3)$ (cf.\,Equation (\ref{4-3})), with its range within an irreducible representation, as covered by the action of $SO(2)$. This ratio is given by $4\pi$, which is the volume of the symmetric space $SO(3)/SO(2)$, which is equal to the unit sphere in three dimensions $S^2$. If we take into account all irreducible representations, the constant $\omega^2$ in the transition amplitude (\ref{3-1}) must be adjusted by this ratio, leading to the actual coupling constant \begin{equation} 4\pi\,\omega^2 = 8\pi\,V\!(D^5)^{\frac{1}{4}} \, / \, (V\!(Q^5)\,V\!(S^4)) . \label{6-4} \end{equation} Inserting the explicit values (taken from Hua \cite{hua1}) \begin{equation} V\!(D^5) = \frac{\pi^5}{2^4\, 5!},\;\; V\!(Q^5) = \frac{8 \pi^3}{3},\;\; V\!(S^4) = \frac{8 \pi^2}{3} \end{equation} leads to \begin{equation} 4\pi\,\omega^2 \;= \; \frac{9}{16 \pi^3} \left(\frac{\pi}{120}\right)^{1/4} \label{6-5} = \; 1/137.03608245 , \vspace{0.2cm} \end{equation} which closely matches the CODATA value $1/137.035999139$ \cite{cod} of the electromagnetic fine-structure constant $\alpha$. Note that the theoretical value (\ref{6-5}) refers to a universe with only two particles; it certainly does not include ``hadronic corrections''. This agreement establishes a connection between the fine-structure constant and the normalisation constant of the two-particle states by \begin{equation} \alpha \simeq 4\pi\,\omega^2 . \label{6-6} \vspace{0.2cm} \end{equation} The gedanken experiment in Section 3, which identifies $\omega^2$ as a coupling constant, shows that this relation is not just a numerical coincidence. The representation of $\alpha$ by the term $8\pi\,V\!(D^5)^{\frac{1}{4}} \, / \, (V\!(Q^5)\,V\!(S^4))$ was accidentally found in 1971 by the Swiss mathematician Armand Wyler (a former student of Heinz Hopf), while he was examining some symmetric domains. It has become known as ``Wyler's formula'' for the fine-structure constant. Wyler \cite{aw} published his finding, hoping to attract the interest of physicists. Unfortunately, Wyler was not able to put his observation into a convincing physical context. 
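Numerically, the geometric combination in Equation (\ref{6-4}) and the closed form of Equation (\ref{6-5}) are easy to verify; a short sketch (Python) evaluates both expressions:

\begin{verbatim}
import math

pi = math.pi
V_D5 = pi**5 / (2**4 * math.factorial(5))   # volume of the Lie ball D^5
V_Q5 = 8 * pi**3 / 3                        # volume of the Shilov boundary Q^5
V_S4 = 8 * pi**2 / 3                        # volume of the unit sphere S^4

alpha_geom   = 8 * pi * V_D5**0.25 / (V_Q5 * V_S4)   # Equation (6-4)
alpha_closed = 9 / (16 * pi**3) * (pi / 120)**0.25   # Equation (6-5)

print(1 / alpha_geom, 1 / alpha_closed)   # both approximately 137.036
\end{verbatim}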
Therefore, the physics community dismissed his formula as meaningless numerology (see, for example, \cite{meh}). Now it has become evident that Wyler's formula directly links the signature of the electromagnetic interaction with the geometric footprint of the fibred two-particle mass shell. \section{Conclusions} \label{CONC} The simple and seemingly insignificant question about the normalisation of two-particle states of irreducible two-particle representations of the Poincar\'e group has led to the relation $\alpha = 4\pi\, \omega^2$ between the normalisation factor $\omega$ and the electromagnetic fine structure constant $\alpha$. Within these states, the same factor---which is now considered to be a coupling constant---determines the strength of an interaction between the particles, caused by the constraints of an irreducible representation. The matching values of the coupling constants strongly suggest that this interaction and the phenomenological electromagnetic interactions are one and the same. The mathematical similarity of the basic interaction mechanisms---here ``interaction by exchange of momentum'', there ``interaction by exchange of gauge bosons''---indicates that in the classical limit the ``exchange of momentum'' will also lead to the Maxwell equations, especially since the structure of the Maxwell equations is largely determined by the Lorentz symmetry. This would mean that on the quantum mechanical level, the electromagnetic interaction is a geometry-induced interaction that is based solely on the structure of the Poincar\'e group. These insights into the geometry of two-particle states should have far-reaching consequences for the interpretation of the Standard Model and, in particular, for its extension by a quantum theory of gravity. \section*{Acknowledgements} \label{ACKN} This work would not have been possible without Wyler's discovery. Without his formula, I would never have embedded the two-particle mass shell into one of Cartan's homogeneous domains. Without this embedding, I would not have found the link between the geometry of the two-particle mass shell and $\alpha$. And without the connection to $\alpha$ it would never have come to my mind that two ``independent'' particles ``interact'' with each other when configured as a two-particle system with well-defined total linear and angular momentum.
\section{Introduction} Accessible and affordable technology, in conjunction with open educational resources, can promote equal opportunities for childhood education \cite{yoshie2021-unesco}. However, teaching state-of-the-art technologies such as Artificial Intelligence and Robotics (AIR) is a current challenge for low-income and often politically or culturally marginalized countries. Additionally, creating the right environment to promote inclusivity and diversity when teaching AIR has been little investigated. Astobiza \etal in 2019, for instance, reported the need for collaborations between industry and a multidisciplinary group of researchers to address concerns about the paradigm of inclusivity in robotics \cite{MonasterioAstobiza2019}. In that sense, Astobiza \etal suggested that inclusive robotics should be based on two points: "1) they should be easy to use, and 2) they must contribute to making accessibility easier in distinct environments" \cite{MonasterioAstobiza2019}. Peixoto \etal in 2018 reported the use of robots as a tool to promote diversity, leading to improved competences in communication, teamwork, leadership, problem solving, resilience and entrepreneurship \cite{PeixotoCastro2018, PeixotoGonzalez2018}. Recently, Pannier \etal in 2020 pointed out the challenges of increasing the participation of women and underrepresented minorities in the areas of Mechatronics and Robotics Engineering, as well as of creating a community of educators to promote diversity and inclusion \cite{Pannier2020}. Similarly, Pannier \etal mentioned that the prevalence of free and open-source software and hardware has made mechatronics and robotics more accessible to a more diverse population. Pannier \etal also touched on the importance of offering workshops to a wide range of underrepresented students, inspiring other programs and creating outreach activities and workshops for students and trainers \cite{Pannier2020}. In March 2021, we introduced air4children, Artificial Intelligence and Robotics for Children, as a way (a) to address aspects of inclusion, accessibility, equity and fairness and (b) to create affordable child-centred materials in AI and Robotics (AIR) in developing countries \cite{montenegro2021air4children}. In this work, we address the challenges of piloting and organising workshops in communities in developing countries where little to nothing is known about the demographics, education levels and socio-economic factors that affect the teaching of AIR. Consider, for instance, the town of Xicohtzinco, Tlaxcala, M\'exico, our case study: Xicohtzinco has a total population of 14,197 (6,762 males and 7,435 females) \cite{inegi2022} and 19 schools, including seven kindergartens (3 public and 4 private), seven primaries (4 public and 3 private), four secondaries (2 public and 2 private) and one public high school \cite{siged2022}. However, neither the census \cite{inegi2022} nor the education information site \cite{siged2022} provides further information about the teaching of technological subjects such as AI and Robotics. We therefore hypothesised that piloting air4children workshops in a town such as Xicohtzinco might lead us to a better understanding of the needs and challenges of promoting diversity and inclusion with state-of-the-art technologies and open educational resources. This short paper is organised as follows: Section II presents resources to promote diversity and inclusion in AI and Robotics for children. 
Section III presents the design of workshops for children from 6 to 8 years old. Section IV presents the outcomes of a four-lesson pilot workshop for 14 children, including the engagement of instructors and coordinators. We present the results of the workshop and close with conclusions, limitations and future work. \section{Resources to promote Diversity and Inclusion in AI and Robotics for children} \subsection{Free and open-source software, open-source hardware and open educational resources} In March 2021, we presented examples of educational resources intended to be "affordable, educational and fun": (a) Otto DIY, an educational open-source robot, and (b) the JetBot platform, an open-source educational robot for creating new AI projects \cite{montenegro2021air4children}. Similarly, Open Educational Resources (OER), which aim to provide "teaching and learning materials that are available without access fees", appear to be a promising direction to afford innovation through OER-enabled pedagogy \cite{Clinton-Lisell2021}. However, Wiley \etal in 2014 contrasted the positives and negatives of OERs: among the benefits, OERs make the course development process quicker and easier and make materials easier for people to find; among the challenges, they highlight the difficulty of making programs financially self-sustainable, among many other difficulties \cite{Wiley2014}. Hence, in this work, we consider the Otto Humanoid a good option because of its affordability (a cost of 200 euros), its block-diagram programming interface, and its multiple sensors and actuators (servos and an LCD matrix display), all aligned with open-source software and hardware and OER principles \cite{OttoDIY:2016}. \subsection{Ensuring education and Inclusive Learning} Recently, Opertti \etal in 2021 discussed ideas from the forum "Ensuring education and Inclusive Learning for Educational Recovery 2021" \cite{opertti2021-unesco}. Such ideas to ensure inclusive learning can be summarised as: (1) personalisation of education, including the recognition of specific learning expectations and needs, (2) designing an inclusive, empathetic and participatory curriculum for a plural and open participation of a diversity of actors and institutions, (3) appropriation of technology as a community resource to strengthen ties between students, educators, families and communities, (4) empowering knowledge, learning, collaboration, trust and listening among peers, and (5) the visualisation of schools as lifelong learning spaces. Such ideas would therefore help to promote diversity and inclusion in teaching AI and Robotics to children. \subsection{Alternative education programs with new technologies} Alternative education programs such as Montessori, Waldorf and Reggio Emilia consider children as active authors of their own development \cite{edwards2002, MontessoriBOOK1969}. In the last five years such programs have started to include topics on AI, robotics and computational thinking in their curricula \cite{elkin2014, Aljabreen2020}. For instance, Aljabreen pointed out the adoption of new technologies and how early childhood education is being re-conceptualised \cite{Aljabreen2020}. Elkin et al. in 2014 explored how robots can be used in the Montessori curriculum \cite{elkin2014}. Similarly, Elkin et al. cautioned that new curricula that include technology should not deviate from the main purpose of the Montessori classroom \cite{elkin2014}. 
Drigas and Gkeka in 2016 reviewed the application of information and communication technologies in the Montessori Method, noting that the manipulatives, objects used to develop motor skills or to understand mathematical abstractions, cover the cultural areas, language, mathematics and the sensorial area, but little to none of the technological areas \cite{DrigasGkeka2016}. Drigas and Gkeka also reviewed Montessori materials of the 21st century, including interactive systems with sounds and lights, touch applications to enhance visual literacy, and materials for the development of computational thinking and constructions of the physical world \cite{DrigasGkeka2016}. These works indicate that combining such manipulatives with robotics might lead to rich scenarios for exploring motor skill development, visualisation and computational thinking. Recently, Scippo and Ardolino reported a longitudinal study of the use of computational thinking across the five years of primary school in a Montessori school \cite{ScippoArdolino2021}. Scippo and Ardolino pointed out the importance of aligning the Montessori material with the computational thinking activities. That said, these authors identified various challenges in incorporating new technologies into the curriculum, raising further questions about creating curricula that are accessible to a more diverse population, as has been done in other areas such as open educational resources. \section{Designing Diversity and Inclusion Workshops} To design diverse and inclusive workshops we considered: (a) free and open-source software, open-source hardware and open educational resources, (b) ensuring education and inclusive learning, and (c) alternative education programs with new technologies. With the combination of such resources, we proposed a four-lesson workshop with three-fold aims: (a) to promote diversity and inclusion while teaching AI and Robotics to children through recreational and engaging activities, (b) to encourage children to discover and increase their interest in AI and Robotics with open-source hardware and software, and (c) to develop the Montessori concept of 'concrete to abstract' to make abstract concepts clearer with hands-on learning materials. \begin{figure}[t] \centerline{\includegraphics[width=\linewidth]{drawing-v04.png}} \caption{Curriculum of the pilot workshop with four lessons. Lesson 01 introduces the course (L01), lesson 02 provides the basics of anatomy (L02), lesson 03 covers algorithms (L03), and lesson 04 wraps up and showcases the children's projects (L04). The arrows in the figure illustrate the connection from the final to the initial part of each lesson and how all the lessons were connected to the final section of the workshop. } \label{fig:curriculum} \end{figure} \paragraph{Lesson 01: Breaking the ice and motivations} The educational goal of this lesson was to develop the children’s curiosity about AI and Robotics while emphasizing the importance of interpersonal connections that will evolve into collaborative work in the following lessons. The lesson started with a recreational activity in which each student and teacher introduced themselves with their name, favorite food and a robot-related superpower or ability they would like to have. This lesson also covers basic concepts and examples of AI and Robotics in different fields and daily life, how the brain works, and how the human senses and body parts relate to the way a robot is built, works and performs activities. 
\paragraph{Lesson 02: Human senses and coding my first robot} The main purpose of this lesson was to understand the fundamentals of Robotics. Children began to work with more abstract concepts, developing problem-solving skills as well as cooperative working relationships. The first activity, outside the classroom, was a true-or-false game, where the teacher said a sentence about AI and Robotics and the children jumped in front of a rope if the sentence was true, or behind the rope if it was false. In the second activity, the instructor explained the human senses and their relationship to inputs and outputs. After that, instructors explained examples of sequences and code. In the last activity, participants were asked to solve a tangram in groups, with a group leader providing instructions to the team-mates as an analogy to coding a robot with algorithms. \paragraph{Lesson 03: Playing with reaction-action activities} The educational goal of this lesson is to cover the concept of causes and consequences with daily-life examples and the computational thinking of robots. Hence, this lesson starts with a matching game consisting of figures and their shadows, where participants develop their comparison skills by matching similar or different robots. This lesson covered how a robot works with sensors, processors, actuators and programming. The “find the effect” activity was also introduced, where participants have to relate pictures of causes and consequences, for example, “the cause is the rain and the consequence is a rainbow”. Afterwards, we worked with the Otto humanoids, programming the presence sensor so that the robot moves, dances and displays text on its 8x8 matrix (a simplified sketch of this reaction-action logic is shown further below). \paragraph{Lesson 04: Develop your own AIR} The fourth lesson aimed to summarise what was covered in the previous lessons, emphasising the relationship between human body anatomy (brain, neurons and body parts) and humanoid robots (computer, sensors and actuators). This lesson covered real-world applications of AI and Robotics, including medicine, space robotics and smart cities. Three projects were prepared and introduced, one to each team, in which every participant had a role. Each team prepared a short speech about their application using AI and Robotics. Figure \ref{fig:curriculum} illustrates the four lessons of the workshop. \section{Piloting Diversity and Inclusion Workshops} To pilot the four-lesson workshop, we invited 14 participants (6 female and 8 male) with an age range from 6 to 11 years old (average age of 7.64) (Figure \ref{fig:pilot}). For the workshop, three instructors with three years of teaching experience and two coordinators with ten years of teaching experience volunteered to deliver the four 90-minute lessons (as shown in the proposed curriculum, Fig.~\ref{fig:curriculum}). During the first three lessons of the workshop, children used the knowledge they gained to relate fundamental human body anatomy (brain, neurons, body and senses) to robot parts (microcontroller, motors, sensors) based on open-source hardware and software. In the final lesson, children showcased their final work, promoting a sense of achievement and engaging not only their minds but also their social-emotional well-being. In all lessons, instructors encouraged and engaged every participant in individual and group activities. 
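To give a concrete sense of the reaction-action behaviour that children programmed on the Otto humanoid in Lesson 03, the following sketch renders that logic in Python-style pseudocode. It is an illustration only: the workshop itself used the robot's block-based programming interface, the function names (read_presence_sensor, dance, show_text) are hypothetical stand-ins rather than the actual Otto API, and the sensor is simulated with random readings so that the snippet runs on its own.
\begin{verbatim}
import random
import time

def read_presence_sensor():
    # Hypothetical stand-in for the Otto presence sensor:
    # returns a simulated distance reading in centimetres.
    return random.uniform(5, 60)

def dance():
    print("Otto dances!")

def show_text(message):
    print("matrix shows:", message)

# Reaction-action loop: a nearby person (the cause) makes
# the robot dance and greet (the consequence).
for _ in range(5):
    distance = read_presence_sensor()
    if distance < 20:
        dance()
        show_text("HOLA")
    else:
        show_text("...")
    time.sleep(0.1)
\end{verbatim}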
\begin{figure}[tbp] \centerline{\includegraphics[width=\linewidth]{drawing-v00.png}} \caption{ Instructors demonstrating fundamentals of AI and Robotics (A and B). Children engaging with classmates, robots and instructors (B and C). } \label{fig:pilot} \end{figure} We noted, however, that each lesson was originally planned to be 90 minutes long and that we had not considered breaks or the participants' energy levels, so a 15-minute break was incorporated from the second to the fourth lesson. Additionally, we piloted surveys for (a) children, with ten questions about their understanding of and feelings towards different types of robots, and (b) parents, with 30 questions about their understanding of AI and Robotics and their awareness of technological advances in AI and Robotics. The aim of the surveys was not to report their results but only to understand how participants and parents feel about being surveyed and how the logistics of surveys would work with more participants. We noticed that participants required support, as a few participants were not familiar with reading surveys, so the ten questions were spread into five questions over two sessions. On the other hand, parents felt that the surveys were lengthy, taking more than 60 minutes, and we also realised that a paper-based survey requires more work, as scans and transcripts are needed. \section{Conclusions, limitations and future work} In this paper, we posed the challenges of promoting Diversity and Inclusion to teach "Artificial Intelligence and Robotics for Children" in developing countries, considering open-source hardware and software resources and principles of Montessori education. We think that the goals of each lesson were intended to reach beyond the learning of a single concept and to contribute to developing skills of inclusion and diversity that children can take and apply to other areas of their lives. For the pilot of the workshop, we considered a small sample of 14 children with an average age of 7.64 years from the town of Xicohtzinco, Tlaxcala, M\'exico. The workshops were free of cost as a way to encourage the participation and inclusion of anyone. During the pilot workshop, children were enthusiastic about learning the fundamentals of AIR by coding, designing and playing with open-sourced robots. The instructors embraced the different set of skills each child had by working in small groups and supporting the students during all the activities. However, we noted that grouping four participants with one instructor did not create an engaging experience, as each group had only one robot and one computer, and the space and number of participants sometimes left one participant out of reach of the robot-computer setup. In terms of limitations, the pilot surveys only helped us to identify the gender and age of the participants, and no other insights such as the needs of the target group were considered. Similarly, this work did not consider metrics to quantify the impact of the workshop, but only aimed to identify needs that might be addressed in future work. The workshops were free of cost, but no sustainable model was considered for this pilot experiment. As future work, we are planning to run another pilot in late 2022 or early 2023 with more lessons and perhaps more participants, considering the addition of a study design and a pre-survey to identify the needs of the workshop participants. 
For the curriculum of the workshops, we are planning to improve the activities to be more engaging, diverse and inclusive and to provide further evidence on how alternative education programs (e.g. Montessori, Waldorf, Reggio Emilia \cite{edwards2002}; and the "synthesis program" \cite{synthesis2022}) with new technologies might lead to potential new avenues of inclusivity and diversity. \section*{Acknowledgment} To Marta P\'erez and Donato Badillo for their support in organising the pilot of the workshops. To Rocio Montenegro for her contributions to the design of the Montessori curriculum for the workshops. To Donato Badillo Per\'ez, Antonio Badillo Per\'ez and Diego Coyotzi Molina for volunteering as instructors of the workshops. To Leticia V\'azquez for her support with the logistics and feedback to improve the workshops. To Adriana P\'erez Fortis for her contributions and discussions in preparing the draft pilot surveys for the parents and children. To El\'ias M\'endez Zapata for his support and feedback on the hardware design of the robot. To Dago Cruz for his feedback and discussions on the design of the workshops. To Angel Mandujano, Elva Corona and others who have contributed with feedback and support to keep the AIR4children project alive. \section*{Contributions} Antonio Badillo-Per\'ez: contributed to the design and write-up of lesson 02. Donato Badillo-Per\'ez: contributed to the design and write-up of lesson 01. Diego Coyotzi-Molina: contributed to the design and write-up of lesson 03. Dago Cruz: contributed proofreading, editing and feedback. Rocio Montenegro: contributed to the write-up of the design and piloting of the workshops. Leticia V\'azquez: write-up and refinement of the conclusions. Miguel Xochicale: contributed the open-source and reproducible workflow, and the drafting, write-up, editing and submission of the paper. \bibliographystyle{IEEEtran} \input{main.bbl} \end{document}
\section{Introduction} \emph{Stochastic gradient descent} (SGD) is one of the workhorses of modern machine learning due to its efficiency and scalability in training and its good generalization to unseen test data. From the optimization perspective, the efficiency of SGD is well understood. For example, to achieve the same level of optimization error, SGD requires fewer gradient computations than its deterministic counterpart, i.e., batch \emph{gradient descent} (GD) \citep{bottou2007tradeoffs,bottou2018optimization}, and therefore saves the total amount of running time. However, the generalization ability (e.g., excess risk bounds) of SGD is far less clear, especially from a theoretical perspective. \emph{Single-pass} SGD, a less practical SGD variant where each training example is used only once, has been extensively studied in theory. In particular, a series of works establishes tight excess risk bounds of single-pass SGD in the setting of learning least squares \citep{bach2013non,dieuleveut2017harder,jain2017markov,jain2017parallelizing,neu2018iterate,ge2019step,zou2021benign,wu2021last}. In practice, though, one often runs SGD with \emph{multiple passes} over the training data and outputs the final iterate, which is referred to as \emph{multi-pass SGD} (or simply SGD in the rest of this paper when there is no confusion). Compared to single-pass SGD, which has a limited number of optimization steps, multi-pass SGD allows the algorithm to perform an arbitrary number of optimization steps, which is more powerful in optimizing the empirical risk and thus leads to a smaller bias error \citep{pillaud2018statistical}. Despite the extensive application of multi-pass SGD in practice, only a few theoretical techniques have been developed to study its generalization. One of them is the method of \emph{uniform stability} \citep{elisseeff2005stability,hardt2016train}, which is defined as the change of the model outputs when applying a small change to the training dataset. However, the stability-based generalization bound is a worst-case guarantee, which is relatively crude and does not show a difference between GD and SGD (e.g., \citet{chen2018stability} showed that GD and SGD have the same stability parameter in the convex smooth setting). On the contrary, one easily observes a generalization difference between SGD and GD even in learning the simplest least square problem (see Figure \ref{fig:risk-comparison}). In addition, \citet{lin2017optimal,pillaud2018statistical,mucke2019beating} explored risk bounds for multi-pass SGD using the \emph{operator methods} that were originally developed for analyzing single-pass SGD. Their bounds are sharp in the minimax sense for a class of least square problems that satisfy a certain \emph{source condition} (which restricts the norm of the optimal parameter) and \emph{capacity condition} (or effective dimension, which restricts the spectrum of the data covariance matrix). Still, their bounds are not problem-dependent, and could be pessimistic for benign least square instances. In this paper, our goal is to establish sharp algorithm-dependent and problem-dependent excess risk bounds of multi-pass SGD for least squares. Our focus is the \emph{interpolation regime}, where the training data can be perfectly fitted by a linear interpolator (which holds almost surely when the number of parameters $d$ exceeds the number of training examples $n$). We assume the data has a sub-Gaussian tail \citep{bartlett2020benign}. 
Our main contributions are summarized as follows: \begin{itemize}[leftmargin=*] \item We show that for any iteration number and stepsize, the excess risk of SGD can be exactly decomposed into the excess risk of GD (with the same stepsize and iteration number) and the so-called \textit{fluctuation error}, which is attributed to the accumulative variance of stochastic gradients in all iterations. This suggests that GD (with optimally tuned hyperparameters) always achieves smaller excess risk than SGD for least square problems. \item We further establish problem-dependent bounds for the excess risk of GD and the fluctuation error, stated as a function of the eigenspectrum of the data covariance, iteration number, training sample size, and stepsize. Compared to the bounds proved in prior works \citep{lin2017optimal,pillaud2018statistical,mucke2019beating}, our bounds hold under a milder assumption on the data covariance and ground-truth model. Moreover, our bounds can be applied to a wider range of iteration numbers $t$, i.e., for any $t>0$, in contrast to the prior results that will explode when $t\rightarrow \infty$. \item We develop a new suite of proof techniques for analyzing the excess risk of multi-pass SGD. Particularly, the key to our analysis is describing the error covariance based on the tensor operators defined by the second-order and fourth-order moments of the empirical data distribution (i.e., sampling with replacement from the training dataset), rather than the operators used in the single-pass SGD analysis that are defined based on the (population) data distribution \citep{jain2017markov,zou2021benign} (i.e., sampling from the data distribution), together with a sharp characterization on the properties of the operators. \end{itemize} Our developed excess risk bounds for SGD and GD have important implications on the complexity comparison between GD and SGD: to achieve the same order of excess risk, while SGD may need more iterations than GD, it can have fewer stochastic gradient evaluations than GD. For example, consider the case that the data covariance matrix has a polynomially decaying spectrum with rate $i^{-(1+r)}$, where $r>0$ is an absolute constant. In order to achieve the same order of excess risk, we have the following comparison in terms of iteration complexity and gradient complexity\footnote{We define the gradient complexity as the number of required stochastic gradient evaluations to achieve a target excess risk, which is closely related to the total computation time.}: \begin{itemize}[leftmargin=*] \item \textit{Iteration Complexity:} SGD needs to take $\tilde \cO(n^{\max\{0.5, \frac{r}{r+1}\}})$ more iterations than GD, with optimally tuned iteration number and stepsize. \item \textit{Gradient Complexity:} SGD needs $\tilde \cO(n^{\max\{0.5, \frac{1}{r+1}\}})$ less stochastic gradient evaluations than GD. \end{itemize} \begin{figure} \centering \subfigure[\small{$\lambda_i=i^{-1}\log^{-2}(i+10)$}]{\includegraphics[width=0.45\textwidth]{figures/risk_compare_alpha1.pdf}} \subfigure[\small{$\lambda_i=i^{-2}$}]{\includegraphics[width=0.45\textwidth]{figures/risk_compare_alpha2.pdf}} \caption{\small Excess risk comparison between SGD and GD with large and small stepsizes. The true parameter $\wb^*$ is randomly drawn from $\cN(0, \Ib)$ and the model noise variance $\sigma^2=1$. The problem dimension is $d=256$, and we randomly draw $n = 128$ training data. We consider two data covariance with eigenspectrum $\lambda_i=i^{-1}\log^{-2}(i+10)$ and $\lambda_i=i^{-2}$. 
For SGD, the reported risk is averaged over $100$ repeats of the algorithm's randomness. The large stepsize is $\eta = 0.2$ and the small stepsize is $\eta = 0.02$.\label{fig:risk-comparison}} \vspace{-.5cm} \end{figure} \paragraph{Notations.} For a scalar $n>0$, we use $\poly(n)$ to denote some positive high-degree polynomial function of $n$. For two positive-valued functions $f(x)$ and $g(x)$, we write $f(x)\lesssim g(x)$ if $f(x) \le c g(x)$ for some constant $c>0$, we write $f(x) \gtrsim g(x)$ if $g(x) \lesssim f(x)$, and $f(x) \eqsim g(x)$ if both $f(x)\lesssim g(x)$ and $g(x) \lesssim f(x)$ hold. We use $\tilde \cO(\cdot)$ to hide polylogarithmic factors in the standard big-$\cO$ notation. \section{Related Work} \paragraph{Optimization.} Regarding optimization efficiency, the benefits of SGD are well understood \citep{bottou2007tradeoffs,bottou2018optimization,ma2018power,bassily2018exponential,vaswani2019fast,vaswani2019painless}. For example, for strongly convex losses (which can be relaxed to certain growth conditions), GD has lower iteration complexity, but SGD enjoys lower gradient complexity \citep{bottou2007tradeoffs,bottou2018optimization}. More recently, it has been shown that SGD can converge at an exponential rate in the interpolating regime \citep{ma2018power,bassily2018exponential,vaswani2019fast,vaswani2019painless}; therefore SGD can match the iteration complexity of GD. Nevertheless, all the above results are regarding the optimization performance; our focus in this paper is to study the generalization performance of SGD (and GD). \paragraph{Risk Bounds for Multi-Pass SGD.} The risk bounds of multi-pass SGD have also been studied from the operator perspective \citep{rosasco2015learning,lin2017optimal,pillaud2018statistical,mucke2019beating}. The work by \citet{rosasco2015learning} focused on \emph{cyclic SGD}, i.e., SGD with multiple passes but a fixed ordering of the training data. Their results are limited to small stepsizes ($\eta = \cO(1/n)$), while ours allow a constant stepsize. Similar to \citet{lin2017optimal,pillaud2018statistical,mucke2019beating}, we decompose the population risk of SGD iterates into a risk term caused by batch GD iterates and a fluctuation error term between SGD and GD iterates. But our methods of bounding the fluctuation error are different (as discussed below). Moreover, our results are based on different assumptions: \citet{lin2017optimal,pillaud2018statistical,mucke2019beating} assumed strong finiteness of the optimal parameter, and their results only apply to data covariances with a specific type of spectrum (nearly polynomially decaying ones); in contrast, our results assume a Gaussian prior on the optimal parameter (which might not admit a finite norm), and they cover more general data covariances (including those with polynomially decaying spectrum). \citet{lei2021generalization} studied risk bounds for multi-pass SGD with general convex losses. When applied to least square problems, their bounds are cruder than ours. \paragraph{Uniform Stability.} Another approach for characterizing the generalization of multi-pass SGD is through \emph{uniform stability} \citep{hardt2016train,chen2018stability,kuzborskij2018data,zhang2021stability,bassily2020stability}. There are mainly two differences between this and our approach. 
First, we directly bound the excess risk of SGD, whereas uniform stability can only bound the generalization error; an additional triangle inequality is needed to relate the excess risk to the generalization error plus the optimization error (plus the approximation error), and this inequality can easily be loose (consider the algorithmic regularization effects). Secondly, the uniform stability bound is also crude. For example, in the non-strongly convex setting, the uniform stability bound for SGD/GD scales linearly with the total optimization length (i.e., the sum of stepsizes), which grows with $t$ \citep{hardt2016train,chen2018stability,kuzborskij2018data,zhang2021stability,bassily2020stability} (this is minimaxly unavoidable according to \citet{zhang2021stability,bassily2020stability}). Notably, \citet{bassily2020stability} extended the uniform stability approach to the non-convex and smooth setting. We leave such an extension of our method as future work. \section{Problem Setup} Let $\xb$ be a feature vector in a Hilbert space $\cH$ (its dimension is denoted by $d$, which is possibly infinite) and $y \in \RR$ be its response, and assume that they jointly follow an unknown population distribution $\cD$. In linear regression problems, the \emph{population risk} of a parameter $\wb$ is defined by \[ L_{\cD}(\wb) := \frac{1}{2}\EE_{(\xb, y)\sim \cD} (\la \xb, \wb \ra - y)^2, \] and the \emph{excess risk} is defined by \begin{equation}\label{eq:excess_risk} \risk(\wb) := L_{\cD}(\wb) - \min_{\wb} L_{\cD}(\wb) = \frac{1}{2} \|\wb-\wb^*\|_{\Hb}^2, \quad \ \text{where}\ \Hb := \EE_{\cD} [\xb \xb^\top] . \end{equation} In the statistical learning setting, the population distribution $\cD$ is unknown, and one is provided with a set of $n$ training samples, $\cS = (\xb_i, y_i)_{i=1}^n$, that are drawn independently at random from the population distribution. We also use $\Xb := (\xb_1, \dots, \xb_n)^\top$ and $\yb := (y_1, \dots, y_n)^\top$ to denote the concatenated features and labels, respectively. The linear regression problem aims to find a parameter, based on the training set $\cS$, that affords a small excess risk. \paragraph{Multi-Pass SGD.} We are interested in solving the linear regression problem using multi-pass \emph{stochastic gradient descent} (SGD). The algorithm generates a sequence of iterates $(\wb_t)_{t\ge 1}$ according to the following update rule: the initial iterate is $\wb_0 = \boldsymbol{0}$ (which can be assumed without loss of generality); then at each iteration, an example $(\xb_{i_t}, y_{i_t})$ is drawn from $\cS$ uniformly at random, and the iterate is updated by \begin{align*} \wb_{t+1} = \wb_t - \eta\cdot \xb_{i_t} (\xb_{i_t}^\top\wb_t - y_{i_t}), \end{align*} where $\eta > 0$ is a constant stepsize (i.e., learning rate). \paragraph{GD.} Another popular algorithm is \emph{gradient descent} (GD). For clarity of notation, we use $(\hat{\wb}_t)_{t \ge 1}$ to denote the GD iterates, which follow the updates \begin{align*} \hat{\wb}_{t+1} = \hat{\wb}_t - \eta\cdot \frac{1}{n} \sum_{i=1}^n \xb_i (\xb_i^\top \hat{\wb}_t - y_i),\quad \hat\wb_0=\boldsymbol{0}, \end{align*} where $\eta > 0$ is a constant stepsize. \paragraph{Notations and Assumptions.} We use $\Hb := \EE[\xb\otimes\xb]$ to denote the population data covariance matrix. The eigenvalues of $\Hb$ are denoted by $(\lambda_i)_{i \ge 1}$, sorted in non-increasing order. 
Given the training data $(\Xb, \yb)$, we define $\bepsilon=\yb-\Xb\wb^*$ as the vector of model noise, $\Ab = \Xb\Xb^\top$ as the gram matrix, and $\bSigma = n^{-1}\Xb^\top\Xb$ as the empirical covariance. Then the \textit{minimum-norm solution} is defined by \begin{align*} \hat\wb : = (\Xb^\top\Xb)^{\dagger}\Xb^\top\yb = \Xb^\top\Ab^{-1}\yb. \end{align*} It is clear that with appropriate stepsizes, both the SGD and GD algorithms converge to $\hat{\wb}$ \citep{gunasekar2018characterizing,bartlett2020benign}. The assumptions required by our theorems are summarized below. \begin{assumption}\label{assump:data_distribution} For the linear regression problem: \begin{enumerate}[label=\Alph*] \item The components of $\Hb^{-1/2}\xb$ are independent and $1$-subGaussian.\label{assump:item:subgaussian_data} \item The response $y$ is generated by $y := \la\wb^*, \xb\ra + \xi$, where $\wb^*$ is the ground truth weight vector and $\xi$ is a noise variable independent of $\xb$. Furthermore, the additive noise satisfies $\EE[\xi]=0$, $\EE[\xi^2]=\sigma^2$.\label{assump:item:well_specified_noise} \item The ground truth $\wb^*$ follows a Gaussian prior $\cN(\boldsymbol{0},\ \omega^2\cdot\Ib)$, where $\omega^2$ is a constant. \label{assump:item:gaussian_prior} \item The minimum-norm solution $\hat{\wb}$ linearly interpolates all training data, i.e., \(y_i = \hat{\wb}^\top \xb_i \) for every \( i=1,\dots,n\). \label{assump:item:interpolator} \end{enumerate} \end{assumption} Assumptions \ref{assump:data_distribution}\ref{assump:item:subgaussian_data} and \ref{assump:item:well_specified_noise} are standard for analyzing overparameterized linear regression problems \citep{bartlett2020benign,tsigler2020benign}. Assumption \ref{assump:data_distribution}\ref{assump:item:gaussian_prior} is also widely adopted in analyzing least square problems (see, e.g., \citet{ali2019continuous,dobriban2018high,xu2019number}). Finally, Assumption \ref{assump:data_distribution}\ref{assump:item:interpolator} holds almost surely when $d > n$, i.e., when the number of parameters exceeds the number of data points. In the following, the presented risk bounds will hold (i) with high probability with respect to the randomness of sampling the feature vectors $\Xb$, and (ii) in expectation with respect to the randomness of the multi-pass SGD algorithm, the randomness of sampling the additive noise $\bepsilon$, and the randomness of the true parameter $\wb^*$ as a prior. For this purpose, we will use $\EE_{\sgd}, \EE_{\wb^*}$ to refer to taking expectations with respect to the SGD algorithm and the prior distribution of $\wb^*$, respectively. \section{Main Results} Our first theorem shows that, under the same stepsize and number of iterates, SGD always generalizes worse than GD. \begin{theorem}[Risk decomposition]\label{thm:risk_decomposition} Suppose that Assumption \ref{assump:data_distribution}\ref{assump:item:interpolator} holds. Then the excess risk of SGD can be decomposed as \[ \EE_{\sgd} \big[\risk(\wb_t)\big] = \risk(\hat{\wb}_t) + \FlucError(\wb_t). \] Moreover, the fluctuation error is always positive. \end{theorem} \paragraph{A Risk Comparison.} Theorem \ref{thm:risk_decomposition} shows that, in the interpolation regime, SGD affords a strictly larger excess risk than GD, given the same hyperparameters (stepsize $\eta$ and number of iterates $t$). Therefore, despite a possibly higher computational cost, the optimally tuned GD \emph{dominates} the optimally tuned SGD in terms of the generalization performance. 
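As a quick illustration of this comparison, the following minimal Python sketch simulates both algorithms on a synthetic instance; it assumes Gaussian features with a diagonal covariance and a polynomially decaying spectrum, and it is not the exact experimental protocol behind Figure~\ref{fig:risk-comparison}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma, eta, T = 256, 128, 1.0, 0.02, 2000

lam = 1.0 / np.arange(1, d + 1) ** 2            # spectrum lambda_i = i^{-2}
w_star = rng.normal(size=d)                      # true parameter, as in the prior
X = rng.normal(size=(n, d)) * np.sqrt(lam)       # rows ~ N(0, H), H = diag(lam)
y = X @ w_star + sigma * rng.normal(size=n)

def risk(w):                                     # excess risk 0.5 * ||w - w*||_H^2
    return 0.5 * np.sum(lam * (w - w_star) ** 2)

# GD on the empirical least-squares objective
w_gd = np.zeros(d)
for _ in range(T):
    w_gd -= eta * X.T @ (X @ w_gd - y) / n

# multi-pass SGD with replacement, averaged over a few runs
sgd_risks = []
for _ in range(20):
    w = np.zeros(d)
    for _ in range(T):
        i = rng.integers(n)
        w -= eta * X[i] * (X[i] @ w - y[i])
    sgd_risks.append(risk(w))

print("GD risk :", risk(w_gd))
print("SGD risk:", np.mean(sgd_risks))
\end{verbatim}
With the same stepsize and iteration number, the averaged SGD risk typically comes out above the GD risk, in line with the decomposition above.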
This observation is verified empirically by the experiments in Figure~\ref{fig:risk-comparison}. Theorem \ref{thm:risk_decomposition} relates the risk of SGD iterates to that of GD iterates. This idea has appeared in earlier literature \citep{lin2017optimal,pillaud2018statistical,mucke2019beating}. Our next theorem characterizes the fluctuation error of SGD (with respect to GD). \begin{theorem}[Fluctuation error bound]\label{thm:upperbound_fluctuation} Suppose that Assumptions \ref{assump:data_distribution}\ref{assump:item:subgaussian_data}, \ref{assump:item:well_specified_noise} and \ref{assump:item:interpolator} all hold. Then for every $n\ge 1$, $t\ge 1$ and $\eta\le c/\tr(\Hb)$ for some absolute constant $c$, with probability at least $1-1/\poly(n)$, it holds that \begin{align*} & \FlucError (\wb_t) \lesssim \notag\\ & \bigg[\log(t)\cdot\bigg(\frac{\tr(\Hb)\log(n)}{t}+\frac{k^\dagger\log^{5/2}(n)}{n^{1/2}t}\bigg)+\frac{\log^{5/2}(n)\eta}{n^{1/2}}\cdot\sum_{i>k^\dagger}\lambda_i\bigg] \cdot\min\big\{\|\hat\wb\|_2^2,\ t\eta\cdot \|\hat{\wb} \|_{\bSigma}^2\big\}, \end{align*} where $k^\dagger \ge 0$ is an arbitrary index (can be infinity). \end{theorem} We first explain the factor $\min\big\{\|\hat\wb\|_2^2,\ t\eta\cdot \| \hat{\wb} \|_{\bSigma}^2\big\}$ in our bound. First of all, when the interpolator $\hat\wb$ has a small $\ell_2$-norm, the quantity is automatically small. Furthermore, $\| \hat{\wb} \|_{\bSigma}^2 \lesssim \omega^2 \lesssim 1$ easily holds under mild assumptions on $\wb^*$, e.g., Assumption \ref{assump:data_distribution}\ref{assump:item:gaussian_prior}. Then, for finite $t$ one can bound the factor with $\min\big\{\|\hat\wb\|_2^2,\ t\eta\cdot \| \hat{\wb} \|_{\bSigma}^2\big\} \lesssim \omega^2 \eta t$. More interestingly, for SGD with constant stepsize ($\eta \eqsim 1/\tr(\Hb)$) and infinite optimization steps ($t \to \infty$), our risk bound can still vanish, while all risk bounds in prior works \citep{lin2017optimal,pillaud2018statistical,mucke2019beating} are vacuous. To see this, one can first set $k^\dagger = \infty$ and $t \to \infty$ in Theorem \ref{thm:upperbound_fluctuation}, so the fluctuation error vanishes. Secondly, note that GD with constant stepsize converges to the minimum-norm interpolator $\hat{\wb}$, so the risk of GD converges to the risk of $\hat{\wb}$, which is known to vanish for data covariance that enables ``benign overfitting'' \citep{bartlett2020benign}. Combining these with Theorem \ref{thm:risk_decomposition} gives a generalization bound of SGD with constant stepsize and infinite optimization steps. To complement the above results, we provide the following finite-time risk bound for GD. Nonetheless, we emphasize that any risk bound for GD can be plugged into Theorems \ref{thm:risk_decomposition} and~\ref{thm:upperbound_fluctuation} to obtain a risk bound for SGD. \begin{theorem}[GD risk]\label{thm:GD_earlystop} Suppose that Assumptions \ref{assump:data_distribution}\ref{assump:item:subgaussian_data}, \ref{assump:item:well_specified_noise} and \ref{assump:item:gaussian_prior} all hold. 
Then for every $n\ge 1$, $t \ge 1$ and $\eta < {1} / {\norm{\Hb}_2}$, with probability at least $1-1/\poly(n)$, it holds that \begin{equation*} \EE_{\wb^*, \bepsilon} [ \risk( \hat{\wb}_t) ] \lesssim \omega^2 \cdot \Bigg( \frac{\tilde{\lambda}^2 }{n^2 } \cdot \sum_{i\le k^*} \frac{1}{\lambda_i} + \sum_{i>k^*} \lambda_i \Bigg) + \sigma^2\cdot \rbr{\frac{k^*}{n} + \frac{n}{\tilde{\lambda}^2} \sum_{i>k^*}\lambda_i^2 }, \end{equation*} where $k^* := \min\{k: n\lambda_{k+1} \le \frac{n}{\eta t} + \sum_{i > k} \lambda_i \}$ and $\tilde{\lambda} := \frac{n}{\eta t} + \sum_{i > k^*} \lambda_i$. \end{theorem} The bound presented in Theorem \ref{thm:GD_earlystop} is comparable to that for ridge regression established by \citet{tsigler2020benign} and will be much better than the bound for single-pass SGD when the signal-to-noise ratio is large \citep[Theorem 5.1]{zou2021benefits}, e.g., $\omega^2 \gg \sigma^2$. In fact, Theorem \ref{thm:GD_earlystop} is proved via a reduction to ridge regression results (see Section \ref{sec:gd_risk} for more details). In particular, the quantity $n/(\eta t)$ for GD is analogous to the regularization parameter $\lambda$ for ridge regression \citep{yao2007early,raskutti2014early,wei2017early,ali2019continuous}. As a final remark, the assumption that $\wb^*$ follows a Gaussian prior is the main caveat of Theorem \ref{thm:GD_earlystop} (it is not required by \citet{tsigler2020benign} for ridge regression). The Gaussian prior on $\wb^*$ is known to allow a connection between early stopped GD and ridge regression \citep{ali2019continuous}. We conjecture that this assumption is not necessary and potentially removable. Combining Theorems \ref{thm:risk_decomposition}, \ref{thm:upperbound_fluctuation} and \ref{thm:GD_earlystop}, we obtain the following excess risk bound for multi-pass SGD in interpolating least square problems: \begin{corollary}\label{coro:expected_SGD_error} Suppose that Assumptions \ref{assump:data_distribution}\ref{assump:item:subgaussian_data}, \ref{assump:item:well_specified_noise}, \ref{assump:item:gaussian_prior} and \ref{assump:item:interpolator} all hold. Then with probability at least $1-1/\poly(n)$, it holds that \begin{align*} &\EE_{\mathrm{SGD}, \wb^*,\bepsilon}\big[\cE(\wb_t)\big] \lesssim \omega^2 \cdot \Bigg( \frac{\tilde{\lambda}^2 }{n^2 } \cdot \sum_{i\le k^*} \frac{1}{\lambda_i} + \sum_{i>k^*} \lambda_i \Bigg) + \sigma^2\cdot \rbr{\frac{k^*}{n} + \frac{n}{\tilde{\lambda}^2} \sum_{i>k^*}\lambda_i^2 }\notag\\ &\quad +\big[\omega^2\tr(\Hb) +\sigma^2\big]\eta\cdot \bigg[\log(t)\cdot\bigg(\tr(\Hb)\log(n)+\frac{k^*\log^{5/2}(n)}{n^{1/2}}\bigg)+\frac{\log^{5/2}(n)t\eta}{n^{1/2}}\cdot\sum_{i>k^*}\lambda_i\bigg], \end{align*} where $k^* := \min\{k: n\lambda_{k+1} \le \frac{n}{\eta t} + \sum_{i > k} \lambda_i \}$ and $\tilde{\lambda} := \frac{n}{\eta t} + \sum_{i > k^*} \lambda_i$. \end{corollary} \paragraph{Comparison with Existing Results.} We now discuss differences and connections between our bound and existing ones for multi-pass SGD \citep{lin2017optimal,pillaud2018statistical,mucke2019beating}. First, we highlight that our bound is \emph{problem-dependent} in the sense that the bound is stated as a function of the spectrum of the data covariance; in contrast, existing papers only provide a minimax analysis for multi-pass SGD. Secondly, we rely on a different set of assumptions from the aforementioned papers. 
In particular, \citet{pillaud2018statistical} require a \emph{source condition} on the data covariance, and \citet{lin2017optimal,mucke2019beating} require an \emph{effective dimension} (defined by the data covariance) to be small, but our results are more general regarding the data covariance. Moreover, we assume $\wb^*$ follows a Gaussian prior (Assumption \ref{assump:data_distribution}\ref{assump:item:gaussian_prior}), whereas existing works require a \emph{source condition} on $\wb^*$, which is not directly comparable. The following corollary characterizes the risk of multi-pass SGD for data covariances with a polynomially decaying spectrum. \begin{corollary}\label{coro:SGD_error_polydecay} Suppose that Assumptions \ref{assump:data_distribution}\ref{assump:item:subgaussian_data}, \ref{assump:item:well_specified_noise}, \ref{assump:item:gaussian_prior} and \ref{assump:item:interpolator} all hold. Assume the spectrum of $\Hb$ decays polynomially, i.e., $\lambda_i = i^{-1-r}$ for some absolute constant $r>0$; then with probability at least $1-1/\poly(n)$, it holds that \begin{align*} \EE_{\wb^*,\bepsilon}[\cE(\hat\wb_t)]&\lesssim \omega^2 \cdot (t\eta)^{-r/(r+1)} + \sigma^2\cdot \frac{(t\eta)^{1/(r+1)}}{n},\notag\\ \EE_{\sgd, \wb^*,\bepsilon}[\cE(\wb_t)] &\lesssim\omega^2 \cdot (t\eta)^{-r/(r+1)} + \sigma^2\cdot \frac{(t\eta)^{1/(r+1)}}{n}\notag\\ &\qquad+(\omega^2 + \sigma^2)\cdot \eta\cdot\log(t)\cdot\bigg[\log(n)+\frac{\log^{5/2}(n)}{n^{1/2}}\cdot (t\eta)^{1/(r+1)}\bigg]. \end{align*} \end{corollary} Corollary \ref{coro:SGD_error_polydecay} provides concrete excess risk bounds for SGD and GD, based on which we can compare SGD and GD in terms of their iteration and gradient complexities. For simplicity, in the following discussion, we assume that $\omega^2 \eqsim \sigma^2 \eqsim 1$. Then choosing $t \eta \eqsim n$ minimizes the upper bound on the GD risk and yields the $O(n^{-r / (r+1)})$ rate. Here GD can employ a constant stepsize. Similarly, SGD can match GD's rate, $O(n^{-r / (r+1)})$, by setting $ t\eta \eqsim n$ and \begin{equation}\label{eq:sgd_opt_lr} \eta \lesssim \log^{-1} (t) \cdot \min\{\log^{-1}(n)\cdot n^{-\frac{r}{r+1}}, \ \log^{-\frac{5}{2}}(n) \cdot n^{ -\frac{1}{2} }\}. \end{equation} The above stepsize choice implies that SGD (fixed stepsize, last iterate) can only work with a small stepsize. \paragraph{Iteration Complexity.} We first compare GD and SGD in terms of the iteration complexity. To reach the optimal rate, GD can employ a constant stepsize and set the number of iterates to be $t \eqsim n$. However, in order to suppress the fluctuation error, the stepsize of SGD cannot be large, as required by \eqref{eq:sgd_opt_lr}. More precisely, in order to match the optimal rate, SGD needs to use a small stepsize, $\eta \eqsim n /t$, with a large number of iterates, \begin{equation*} t \eqsim \begin{cases} \log(n) \cdot n^{1+\frac{r}{r+1}} = \tilde{\cO}( n^{1+\frac{r}{r+1}}), & r>1; \\ \log^{3.5}(n) \cdot n^{1.5} = \tilde{\cO}(n^{1.5}), & r \le 1. \end{cases} \end{equation*} It can be seen that the iteration complexity of SGD is much worse than that of GD. This result is empirically verified by Figure \ref{fig:iter-comparison} (a). \paragraph{Gradient Complexity.} We next compare GD and SGD in terms of the gradient complexity. Recall that at each iterate, GD computes $n$ gradients but SGD only computes $1$ gradient. 
Therefore, to reach the optimal rate, the total number of gradients computed by GD needs to be $\Theta( n^2) $, but that computed by SGD is only $\tilde{\cO} (n^{\max\{(2r+1)/ (r+1), 1.5 \}})$. Thus, the gradient complexity of SGD is better than that of GD by a factor of $\tilde \cO(n^{\min\{0.5, 1/(r+1)\}})$. This result is empirically verified by Figure \ref{fig:iter-comparison} (b). \begin{figure} \centering \subfigure[\small{Iteration Complexity vs. Risk}]{\includegraphics[width=0.45\textwidth]{figures/iter_compare.pdf}} \subfigure[\small{Gradient Complexity vs. Risk}]{\includegraphics[width=0.45\textwidth]{figures/grad_compare.pdf}} \caption{\small Iteration and gradient complexity comparison between SGD and GD. The curves report the minimum number of steps/gradients for each algorithm (with an optimally tuned stepsize) to achieve a targeted risk. The experimental setup is the same as that in Figure \ref{fig:risk-comparison}. } \label{fig:iter-comparison} \vspace{-.6cm} \end{figure} \section{Overview of the Proof Technique} Our proof technique is inspired by the operator methods for analyzing single-pass SGD \citep{bach2013non,dieuleveut2017harder,jain2017markov,jain2017parallelizing,neu2018iterate,ge2019step,zou2021benign,wu2021last}. In particular, these methods track an error \emph{matrix}, $(\wb_t-\wb^*)\otimes(\wb_t-\wb^*)$, which keeps richer information than the error norm $\|\wb_t-\wb^*\|_2^2$. For single-pass SGD, where each data point is used only once, the resulting iterates enjoy a simple dependence on the history that allows an easy calculation of the expected error matrix (with respect to the randomness of data generation). However, for multi-pass SGD, a data point might be used multiple times, which prevents us from tracking the expected error matrix directly. Instead, a trackable analogue of the error matrix is the \emph{empirical error matrix}, $(\wb_t- \hat \wb)\otimes(\wb_t- \hat \wb)$, where $\hat{\wb}$ is the minimum-norm interpolator. More precisely, note that \begin{align}\label{eq:update_sgd_error_vector} \wb_{t+1} -\hat\wb&= \wb_t -\hat\wb -\eta\cdot (\xb_{i_t}\xb_{i_t}^\top\wb_t - \xb_{i_t}\xb_{i_t}^\top\hat \wb) = (\Ib - \eta \xb_{i_t}\xb_{i_t}^\top) (\wb_{t} -\hat\wb). \end{align} Therefore the expected (over the algorithm's randomness) empirical error matrix enjoys a simple update rule: \begin{align*} \text{let}\ \Eb_t := \EE_{\sgd}\big[(\wb_{t} -\hat\wb)(\wb_t -\hat\wb)^\top\big],\ \text{then}\ \Eb_{t+1} = \EE_{i_t} \big[(\Ib - \eta \xb_{i_t} \xb_{i_t}^\top ) \Eb_t (\Ib - \eta \xb_{i_t} \xb_{i_t}^\top )\big]. \end{align*} Let $\bSigma := \frac{1}{n} \Xb^\top \Xb$ be the empirical covariance matrix. We then follow the operator method \citep{zou2021benign} to define the following operators on symmetric matrices: \[ \cG := (\Ib - \eta \bSigma) \otimes (\Ib - \eta \bSigma),\ \ \cM := \EE_{\sgd} [\xb_{i_t}\otimes \xb_{i_t}\otimes \xb_{i_t} \otimes \xb_{i_t}], \ \ \tilde{\cM} := \bSigma \otimes \bSigma. \] One can verify that, for a symmetric matrix $\Jb$, the following holds: \begin{align*} \cG\circ\Jb :&= (\Ib-\eta\bSigma)\Jb(\Ib-\eta\bSigma),\ \ \cM\circ\Jb:= \EE_{\sgd} [\xb_{i_t}\xb_{i_t}^\top\Jb\xb_{i_t}\xb_{i_t}^\top],\ \ \tilde \cM\circ\Jb:=\bSigma\Jb\bSigma. \end{align*} Moreover, the following properties of the defined operators are essential in the subsequent analysis: \begin{itemize}[leftmargin=*] \item \textbf{PSD mapping:} for every PSD matrix $\Jb$, $\cM\circ\Jb$, $(\cM-\tilde \cM)\circ\Jb$ and $\cG\circ\Jb$ are all PSD matrices. 
\item \textbf{Commutative property:} for two PSD matrices $\Bb_1$ and $\Bb_2$, we have \begin{align*} \la\cG \circ\Bb_1, \Bb_2\ra = \la\Bb_1, \cG \circ\Bb_2\ra,\ \la\cM \circ\Bb_1, \Bb_2\ra = \la\Bb_1, \cM \circ\Bb_2\ra, \ \la\tilde\cM \circ\Bb_1, \Bb_2\ra = \la\Bb_1, \tilde\cM \circ\Bb_2\ra \end{align*} \end{itemize} Based on these operators, we can obtain a closed-form update rule for $\Eb_t$: \begin{align} \Eb_{t} &= \EE_{i_{t-1}} (\Ib - \eta \xb_{i_{t-1}} \xb_{i_{t-1}}^\top ) \Eb_{t-1} (\Ib - \eta \xb_{i_{t-1}} \xb_{i_{t-1}}^\top )\notag\\ & = \cG\circ\Eb_{t-1} + \eta^2\cdot(\cM-\tilde\cM)\circ\Eb_{t-1}\notag\\ &=\underbrace{\cG^{t}\circ\Eb_0}_{\bTheta_1} + \underbrace{\eta^2\cdot \sum_{k=0}^{t-1}\cG^{t-1-k}\circ(\cM-\tilde\cM)\circ \Eb_{k}}_{\bTheta_2}.\label{eq:update_form_Et} \end{align} Here the first term \[\bTheta_1 := (\Ib-\eta\bSigma)^t\Eb_0(\Ib-\eta\bSigma)^t = (\hat{\wb}_t - \hat{\wb})(\hat{\wb}_t - \hat{\wb})^\top\] is exactly the error matrix of the GD iterates (with stepsize $\eta$ and iteration number $t$), and the second term $\bTheta_2$ is a \textit{fluctuation matrix} that captures the deviation of an SGD iterate $\wb_t$ from the corresponding GD iterate $\hat{\wb}_t$. We remark that the expected error matrix $\Eb_t$ contains all the information about $\wb_t$ needed for our analysis. We next prove Theorem \ref{thm:risk_decomposition}, from which the role of $\Eb_t$ will become clear. \subsection{Risk Decomposition: Proof of Theorem \ref{thm:risk_decomposition}} The following fact is clear from the update rule \eqref{eq:update_sgd_error_vector}. \begin{fact}\label{fact:sgd_expectation} The GD iterates satisfy \( \hat{\wb}_{t+1} -\hat\wb = (\Ib - \eta \bSigma) (\hat{\wb}_{t} -\hat\wb) \) and \( \EE_{\sgd}[\wb_t-\hat\wb] = \hat\wb_t - \hat\wb \). \end{fact} Based on Fact \ref{fact:sgd_expectation} and \eqref{eq:update_form_Et}, we have \begin{align*} &\EE_{\sgd}[(\wb_t - \wb^*)(\wb_t - \wb^*)^\top] \notag\\ &= \Eb_t + (\hat\wb - \wb^*)(\hat\wb_t - \hat\wb)^\top + (\hat\wb_t - \hat\wb)(\hat\wb - \wb^*)^\top + (\hat\wb-\wb^*)(\hat\wb-\wb^*)^\top\notag\\ &= \bTheta_1 + (\hat\wb - \wb^*)(\hat\wb_t - \hat\wb)^\top + (\hat\wb_t - \hat\wb)(\hat\wb - \wb^*)^\top + (\hat\wb-\wb^*)(\hat\wb-\wb^*)^\top +\bTheta_2\notag\\ & = (\hat\wb_t - \wb^*)(\hat\wb_t - \wb^*)^\top + \bTheta_2, \end{align*} where $\bTheta_1$ and $\bTheta_2$ are defined in \eqref{eq:update_form_Et} and the last equality is due to $\bTheta_1 = (\hat\wb_t-\hat\wb)(\hat\wb_t-\hat\wb)^\top$. Also note that \begin{align*} \EE_{\sgd}[\cE(\wb_t)] = \frac{1}{2}\EE_{\sgd}\big[\|\wb_t - \wb^*\|_\Hb^2\big] = \frac{1}{2}\big\la\EE_{\sgd}[(\wb_t - \wb^*)(\wb_t - \wb^*)^\top], \Hb\big\ra. \end{align*} Combining these two identities proves Theorem \ref{thm:risk_decomposition}: \begin{align} \label{eq:risk_decomposition_detailed} \EE_{\sgd} [ \cE(\wb_t)] = \underbrace{\frac{1}{2}\|\hat\wb_t - \wb^*\|_{\Hb}^2}_{\text{GD error}} + \underbrace{\frac{\eta^2}{2}\cdot \sum_{k=0}^{t-1}\big\la\cG^{t-1-k}\circ(\cM-\tilde\cM)\circ \Eb_{k}, \Hb\big\ra}_{\text{Fluctuation error }}. \end{align} Finally, the fluctuation error is also positive because both $\cG$ and $\cM - \tilde\cM$ are PSD mappings. 
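As a sanity check on this decomposition (it is not part of the proof), one can run the recursion for $\Eb_t$ exactly on a small synthetic instance and compare $\cE(\hat\wb_t)$ plus the resulting fluctuation term against a Monte-Carlo estimate of $\EE_{\sgd}[\cE(\wb_t)]$. The Python sketch below does this, assuming Gaussian features with a diagonal covariance; the two printed numbers should agree up to Monte-Carlo error.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, n, eta, T, runs = 20, 10, 0.05, 100, 2000

lam = 1.0 / np.arange(1, d + 1) ** 2              # spectrum of H (diagonal here)
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d)) * np.sqrt(lam)        # rows ~ N(0, H)
y = X @ w_star + 0.5 * rng.normal(size=n)
H = np.diag(lam)

w_hat = X.T @ np.linalg.solve(X @ X.T, y)         # minimum-norm interpolator
risk = lambda w: 0.5 * (w - w_star) @ H @ (w - w_star)

# GD iterate and the exact recursion for E_t = E_sgd[(w_t - w_hat)(w_t - w_hat)^T],
# starting from w_0 = 0, i.e. E_0 = w_hat w_hat^T.
I = np.eye(d)
w_gd, E = np.zeros(d), np.outer(w_hat, w_hat)
for _ in range(T):
    w_gd = w_gd - eta * X.T @ (X @ w_gd - y) / n
    E = np.mean([(I - eta * np.outer(x, x)) @ E @ (I - eta * np.outer(x, x))
                 for x in X], axis=0)

# fluctuation error = 0.5 * <E_T - (gd_T - w_hat)(gd_T - w_hat)^T, H>
fluct = 0.5 * np.trace(H @ (E - np.outer(w_gd - w_hat, w_gd - w_hat)))

# Monte-Carlo estimate of E_sgd[risk(w_T)] for multi-pass SGD with replacement
mc = []
for _ in range(runs):
    w = np.zeros(d)
    for _ in range(T):
        i = rng.integers(n)
        w = w - eta * X[i] * (X[i] @ w - y[i])
    mc.append(risk(w))

print("risk(GD) + fluctuation:", risk(w_gd) + fluct)
print("Monte-Carlo SGD risk  :", np.mean(mc))
\end{verbatim}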
\subsection{Bounding the Fluctuation Error: Proof of Theorem \ref{thm:upperbound_fluctuation}}\label{sec:bounding_fluc_err} There are several challenges in the analysis of the fluctuation error: (1) it is difficult to characterize the matrix $(\cM - \tilde \cM) \circ\Eb_k$ since the matrix $\Eb_k$ is unknown; (2) the operator $\cG$ involves an exponentially decaying term with respect to the empirical covariance matrix $\bSigma$, which does not commute with the population covariance matrix $\Hb$. To address the first problem, we use the PSD mapping and commutative properties of the operators $\tilde \cM$, $\cG$, $\cM$ and obtain the following result. \begin{align}\label{eq:formula_fluctuationErr} \mathrm{FluctuationError} \le \frac{\eta^2}{2}\cdot \sum_{k=0}^{t-1}\la\cM\circ\cG^{t-1-k}\circ\Hb, \Eb_k\ra. \end{align} Now, the input of the operator $\cM\circ \cG^{t-1-k}$ is no longer an unknown matrix but a fixed one (i.e., $\Hb$), and the remaining effort focuses on characterizing $\cM\circ \cG^{k}\circ\Hb$. Applying the definitions of $\cM$ and $\cG$ implies \begin{align*} \cM\circ\cG^k\circ\Hb = \EE_{i}\big[\xb_i\xb_i^\top (\Ib-\eta\bSigma)^k\Hb(\Ib-\eta\bSigma)^k\xb_i\xb_i^\top\big]. \end{align*} Our idea is to first prove a uniform upper bound on the quantity $\xb_i^\top (\Ib-\eta\bSigma)^k\Hb(\Ib-\eta\bSigma)^k\xb_i$ for all $i\in[n]$, denoted by $U(k, \eta, n)$; it then naturally follows that \begin{align}\label{eq:upperbound_fourthmoment} \cM\circ\cG^k\circ\Hb\preceq U(k, \eta, n)\cdot \EE_{i}[\xb_i\xb_i^\top] = U(k,\eta, n)\cdot\bSigma, \end{align} so we only need to characterize the inner product $\la\Eb_k,\bSigma\ra$ in \eqref{eq:formula_fluctuationErr}, which can be understood as the optimization error at the $k$-th iteration. In order to precisely characterize $U(k, \eta, n)$, we encounter the second problem: the population covariance $\Hb$ and the empirical covariance $\bSigma$ do not commute, so the exponentially decaying term $(\Ib-\eta\bSigma)^k$ cannot fully decrease $\Hb$, since some components of $\Hb$ may lie in the small-eigenvalue directions of $\bSigma$. Therefore, we consider the following decomposition \begin{align*} \xb_i^\top (\Ib-\eta\bSigma)^k\Hb(\Ib-\eta\bSigma)^k\xb_i = \underbrace{\xb_i^\top (\Ib-\eta\bSigma)^k\bSigma(\Ib-\eta\bSigma)^k\xb_i}_{\Theta_1} + \underbrace{\xb_i^\top (\Ib-\eta\bSigma)^k(\Hb-\bSigma)(\Ib-\eta\bSigma)^k\xb_i}_{\Theta_2}. \end{align*} For $\Theta_1$, the decaying term $(\Ib-\eta\bSigma)^k$ commutes with $\bSigma$ and can thus successfully decrease it. For $\Theta_2$, we view the difference $\Hb-\bSigma$ as the component of $\Hb$ that cannot be effectively decreased by $(\Ib-\eta\bSigma)^k$, which becomes small as $n$ increases. More specifically, we obtain the following upper bound on $\Theta_1$. \begin{lemma}\label{lemma:upperbound_Theta1} If the stepsize satisfies $\eta\le c/\tr(\Hb)$ for some small absolute constant $c$, then with probability at least $1-1/\poly(n)$, it holds that \begin{align*} \Theta_1 \lesssim \tr(\Hb) \cdot \log(n)\cdot \min\bigg\{\frac{1}{(k+1)\eta}, \|\Hb\|_2\bigg\}. \end{align*} \end{lemma} For $\Theta_2$, we rewrite $\xb_i$ as $\eb_i^\top\Xb$, where $\eb_i\in\RR^n$ and $\Xb\in\RR^{n\times d}$; then \begin{align}\label{eq:upperbound_Theta2_main} \Theta_2 &= \eb_i^\top \Xb(\Ib-\eta\bSigma)^k(\Hb-\bSigma)(\Ib-\eta\bSigma)^k\Xb^\top\eb_i\notag\\ &\le \|\eb_i^\top\Xb(\Ib-\eta\bSigma)^k\|_2^2\cdot\|\Hb-\bSigma\|_2. 
\end{align} Then, since $\Xb$ and $\bSigma$ have the same column eigenspectrum, we can fully unleash the decaying power of the term $(\Ib-\eta\bSigma)^k$ on $\Xb$. Further note that the row space of $\Xb$ is uniformly distributed (corresponding to the indices of the training data) and independent of $\eb_i$. This implies that we can adopt standard concentration arguments with covering on the $n$ fixed vectors $\{\eb_i\}_{i=1,\dots,n}$ to prove a sharp high-probability upper bound (compared to the naive worst-case upper bound). Consequently, we state the upper bound on $\Theta_2$ in the following lemma. \begin{lemma}\label{lemma:upperbound_Theta2} For every $i\in[n]$, with probability at least $1-1/\poly(n)$, the following holds for every $k^*\in[d]$: \begin{align} \Theta_2\lesssim \frac{ \log^{5/2}(n)}{n^{1/2}}\cdot \bigg(\frac{k^*}{(k+1)\eta} + \sum_{i>k^*}\lambda_i\bigg). \end{align} \end{lemma} \subsection{Bounding the Risk of GD: Proof of Theorem \ref{thm:GD_earlystop}}\label{sec:gd_risk} Recall that \( \hat{\wb} = \Xb^\top (\Xb \Xb^\top)^{-1}\yb = \Xb^\top \Ab^{-1}\yb \), where $\Ab := \Xb \Xb^\top$ is the Gram matrix. Then we can reformulate $\hat\wb_t$ as \begin{align*} \hat{\wb}_t &= \hat{\wb} - (\Ib - \eta \bSigma)^t (\hat{\wb}_0 - \hat{\wb})= \big( \Ib - (\Ib - \eta \bSigma)^t \big)\Xb^\top \Ab^{-1}\yb= \Xb^\top \big( \Ib - (\Ib - \eta n^{-1}\Ab)^t \big) \Ab^{-1}\yb. \end{align*} Denoting $\tilde{\Ab} := \Ab \big( \Ib - (\Ib - \eta n^{-1}\Ab)^t \big)^{-1}$, the excess risk of $\hat\wb_t$ is \begin{align}\label{eq:bias_var_decomposition_GD} \cE(\hat\wb_t) &= \frac{1}{2}\big\| \Xb^\top \tilde{\Ab}^{-1} \yb - \wb^* \big\|^2_{\Hb}= \underbrace{\frac{1}{2}\big\|\big(\Ib-\Xb^\top\tilde\Ab^{-1}\Xb\big)\wb^*\big\|_\Hb^2}_{\mathrm{BiasError}} + \underbrace{\frac{1}{2}\big\|\Xb^\top\tilde\Ab^{-1}\bepsilon\big\|_\Hb^2}_{\mathrm{VarError}} . \end{align} The remainder of the proof relates the excess risk of early-stopped GD to that of ridge regression with certain regularization parameters. In particular, note that the excess risk of the ridge regression solution with parameter $\lambda$ is $\frac{1}{2}\|\Xb^\top(\Ab+\lambda\Ib)^{-1}\yb-\wb^*\|_\Hb^2$. It then remains to establish the relationship between $\tilde\Ab$ and $\Ab+\lambda\Ib$, which is done in the following lemma. \begin{lemma}\label{lemma:tildeA-bounds} For any $\eta\le c/\lambda_1$, with $c$ some absolute constant, and any $t>0$, we have \[ \frac{1}{2} \bigg( \Ab + \frac{n}{\eta t} \Ib \bigg) \preceq \tilde{\Ab} \preceq\Ab + \frac{2 n}{t\eta}\cdot \Ib. \] \end{lemma} The lower bound on $\tilde \Ab$ is then applied to upper bound the variance error of GD in \eqref{eq:bias_var_decomposition_GD}, which is at most four times the variance error achieved by ridge regression with $\lambda = n/(\eta t)$. The upper bound on $\tilde \Ab$ is applied to upper bound the bias error of GD, which is at most the bias error achieved by ridge regression with $\lambda = 2n/(\eta t)$. Finally, we can apply the prior work \citep[Theorem 1]{tsigler2020benign} on the excess risk analysis for ridge regression to complete the proof for bounding the bias and variance errors separately. \section{Conclusion and Discussion} In this paper, we establish an instance-dependent excess risk bound of multi-pass SGD for interpolating least-squares problems.
The key takeaways include: (1) the excess risk of SGD is \textit{always} worse than that of GD, given the same choice of stepsize and iteration number; (2) in order to achieve the same level of excess risk, SGD requires more iterations than GD; and (3) nevertheless, the gradient complexity of SGD can be better than that of GD. The proposed technique for analyzing multi-pass SGD could be of broader interest. Several interesting problems are left for future exploration: \paragraph{A problem-dependent excess risk lower bound} could be useful to help understand the sharpness of our excess risk upper bound for multi-pass SGD. The challenge here mainly stems from the fact that the empirical covariance matrix $\bSigma$ does not commute with the population covariance matrix $\Hb$. In particular, one needs to develop an even sharper characterization of the quantity $\cM\circ\cG^k\circ\Hb$ (see Section \ref{sec:bounding_fluc_err}); more precisely, a sharp lower bound on $\xb_i^\top (\Ib-\eta\bSigma)^k\Hb(\Ib-\eta\bSigma)^k\xb_i$ is required. \paragraph{Multi-pass SGD without replacement} is a more practical SGD variant than the multi-pass SGD with replacement studied in this work. The key difference is that the former does not sample the training data independently (since each data point must be used the same number of times). In terms of optimization complexity, it has already been demonstrated in theory that multi-pass SGD without replacement (e.g., SGD with single shuffle or random shuffle) outperforms multi-pass SGD with replacement \citep{haochen2019random,safran2020good,ahn2020sgd}. In terms of generalization, it is still open whether or not the former can be better than the latter, since a sharp excess risk analysis for multi-pass SGD without replacement is still lacking. The techniques presented in this paper can shed light on this direction.
\section{Introduction and Context} \label{sec:intro} Accurate prediction of signal propagation is a crucial task in modern digital circuit design. Although the highest precision is obtained by analog simulations, e.g., using SPICE, they suffer from excessive simulation times. Digital timing analysis techniques, which rely on (i) discretizing the analog waveform at certain thresholds and (ii) simplified interconnect resp. gate delay models, are hence utilized to verify most parts of a circuit. Prominent examples of the latter are pure (constant input-to-output delay $\Delta$) and inertial delays (constant delay $\Delta$, pulses shorter than an upper bound are removed) \cite{Ung71}. To accurately determine $\Delta$, which stays constant for all simulation runs, highly elaborate estimation methods like CCSM~\cite{Syn:CCSM} and ECSM~\cite{Cad:ECSM} are required. Single-history delay models, like the \emph{\gls{ddm}} \cite{BJV06}, have been proposed as a more accurate alternative. Here, the input-to-output delay $\delta(T)$ depends on a single parameter, the previous-output-to-input delay $T$ (see \cref{fig:single_history}). Still, F\"ugger \textit{et~al.}\ \cite{FNS16:ToC} showed that none of the existing delay models, including DDM, is faithful, as they fail to correctly predict the behavior of circuits solving the canonical \emph{short-pulse filtration} (SPF) problem. F\"ugger \textit{et~al.}~\cite{FNNS19:TCAD} introduced the \textit{\gls{idm}} and showed that, unlike all other digital delay models known so far, it faithfully models glitch propagation. The distinguishing property of the \gls{idm} is that the delay functions for rising ($\delta_\uparrow$) and falling ($\delta_\downarrow$) transitions form an involution, i.e., $-\delta_\uparrow(-\delta_\downarrow(T))=T$. A simulation framework based on ModelSim, the Involution Tool, confirmed the accuracy of the model predictions for several simple circuits~\cite{OMFS20:INTEGRATION}. Nevertheless, the \gls{idm} shows, at the moment, several shortcomings which impair its composability: \noindent (I) Ensuring the involution property requires specific (``matching'') \emph{discretization threshold voltages} $V_{th}^{in*}$ and $V_{th}^{out*}$ to discretize the analog waveforms at in- and output. These are not only unique for every single gate in the circuit but also difficult to determine. (II) The discretization threshold voltages may vary from gate to gate, i.e., the matching output discretization threshold voltage $V_{th}^{out*}$ of a given gate $G_1$ and the matching input discretization threshold voltage $V_{th}^{in*}$ of a successor gate $G_2$ are not necessarily the same. Consequently, just adding the delay predictions of the IDM for $G_1$ and $G_2$ cannot be expected to accurately model the overall delay of their composition. This is particularly true for circuits designs where different transistor threshold voltages \cite{Ase98} are used for tuning the delay-power trade-off \cite{NBPP10} or reliability \cite{IB09}. \noindent (III) Intermediate voltages, caused by creeping or oscillatory metastability, are expressed differently for various values of $V_{th}^{out*}$: Ultimately, a single analog trajectory may result in either zero, one or a whole train of digital transitions. \begin{figure}[t] \centering \scalebox{0.8}{\input{single_history.tex}} \caption{The delay $\delta_\uparrow$ as function of $T$. 
Taken from~\cite{OMFS20:INTEGRATION}.} \label{fig:single_history} \end{figure} \noindent \medskip \textbf{Main contributions:} In the present paper, we address all these shortcomings. \begin{enumerate} \item[1)] Based on the empirical analysis of the impact of $V_{th}^{in}$ and $V_{th}^{out}$ on the delay functions of a real circuit, we introduce CIDM channels, which are essentially made up of a typically asymmetric (for rising/falling transitions) pure delay shifter followed by an IDM channel (a PI channel). CIDM channels can model gates with arbitrary discretization thresholds $V_{th}^{in*}$ and $V_{th}^{out*}$ and simplify the characterization of their delay functions. Moreover, the CIDM exposes cancellations at the interconnect between gates, rather than removing them silently as in the IDM. Consequently, digital timing simulators can, for example, record trains of canceled pulses and report them to the user accordingly. \item[2)] We prove that CIDM channels are \emph{not} equivalent to IDM channels per se, in the sense that their delay functions are not involutions. However, the fact that both essentially contain the same components allows us to map a CIDM circuit description to an IDM one: For every circuit modeled with compatible CIDM channels, modeling matching discretization threshold voltages, there is an equivalent decomposition into IP channels, consisting of an IDM channel followed by an arbitrary pure delay shifter. Somewhat surprisingly, the mathematical properties of involution delay functions reveal that IP channels are instances of IDM channels. Consequently, the faithfulness of the IDM carries over literally to the CIDM. \item[3)] We present the theoretical foundations and an implementation of a simulation framework for CIDM, which was incorporated into the Involution Tool~\cite{OMFS20:INTEGRATION}.\footnote{The original Involution Tool is accessible via \url{https://github.com/oehlinscher/InvolutionTool}; our extended version is provided at \url{https://github.com/oehlinscher/CDMTool}.} We also provide a proof of correctness of our simulation algorithm, which shows that it always terminates after having produced the unique execution of an arbitrary circuit composed of gates with compatible CIDM channels. \item[4)] We conducted a suite of experiments, both on a custom inverter chain with varying matching threshold voltages and on a standard inverter chain. In the former case, we observed an impressive improvement in modeling accuracy of CIDM over IDM in many cases. \end{enumerate} \noindent \textbf{Paper organization:} We start with some basic properties of the existing \gls{idm} in \cref{sec:idm}. In \cref{sec:vth}, we empirically analyze the impact of changing $V_{th}^{out*}$ and $V_{th}^{in*}$ on the delay functions. \cref{sec:gidm} introduces and justifies the CIDM, while \cref{sec:gidmisidm} proves its faithfulness. In \cref{sec:algorithm}, we describe the CIDM simulation algorithm and its implementation, and prove its correctness. \cref{sec:experiments} provides the results of our experiments. Some conclusions in \cref{sec:conclusion} round off our paper. \section{Involution Delay Model Basics} \label{sec:idm} In this section, we briefly discuss the IDM. For further details, the interested reader is referred to the original publication~\cite{FNNS19:TCAD}.
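To make the involution property, and \cref{lem:delta:min:idm} below, concrete, the following small Python sketch constructs a strictly causal involution pair: $\delta_\uparrow$ is a concave, strictly increasing saturation curve, and $\delta_\downarrow(T) := -\delta_\uparrow^{-1}(-T)$ is defined as its involution partner. The functional form and the parameters are assumptions made purely for this example and do not correspond to any characterized gate.
\begin{verbatim}
import numpy as np

D, tau, c = 2.0, 1.0, 0.5        # illustrative parameters (e.g., in ns)

def delta_up(T):                 # concave, strictly increasing, delta_up(0) > 0
    return D * (1.0 - np.exp(-(T + c) / tau))

def delta_do(T):                 # involution partner: -delta_up^{-1}(-T)
    return tau * np.log(1.0 + T / D) + c

T = np.linspace(-0.4, 5.0, 7)
print(np.max(np.abs(-delta_up(-delta_do(T)) - T)),   # numerically zero
      np.max(np.abs(-delta_do(-delta_up(T)) - T)))   # numerically zero

lo, hi = 0.0, c                  # bisection for delta_up(-delta_min) = delta_min
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if delta_up(-mid) > mid else (lo, mid)
d_min = 0.5 * (lo + hi)
print(d_min, delta_up(-d_min), delta_do(-d_min))      # all three values agree
\end{verbatim}
Both involution identities hold up to machine precision, and the bisection recovers a single $\delta_{min}>0$ at which $\delta_\uparrow$ and $\delta_\downarrow$ cross the 2\textsuperscript{nd} median, as asserted by \cref{lem:delta:min:idm}.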
The essential benefit of using delay functions which are involutions is their ability to perfectly cancel zero-time input glitches: In \cref{fig:single_history}, it is apparent that the rising input transition causes a rising transition at the output, after delay $\delta_\uparrow(T)$. Now suppose that there is an additional falling input transition immediately after the rising one, actually occurring at the same time. Since this constitutes a zero-time input glitch, which is almost equivalent to no pulse at all, the output should behave as if it was not there at all. For this purpose it is required that the delay of the additional falling input transition to go \emph{back in time}, i.e., to exactly hit the predicted time of the previous falling output transition: Note carefully that just canceling the rising output transition, by generating the falling output transition at or before the rising one, would not suffice, as the calculation of the parameter $T$ for the next transition depends on the time of the previous output transition. It is not difficult to check that this going back is indeed achieved when $-\delta_\uparrow(-\delta_\downarrow(T))=T$ and $-\delta_\downarrow(-\delta_\uparrow(T))=T$ holds. As the IDM also requires the delay functions to be strictly increasing and concave, the involution property enforces them to be symmetric w.r.t.\ the 2\textsuperscript{nd} median $y=-x$. Lemma~3 in \cite{FNNS19:TCAD}, restated as \cref{lem:delta:min:idm} below, shows that \emph{strictly causal} involution channels, characterized by strictly increasing, concave delay functions with $\delta_\uparrow(0)>0$ and $\delta_\downarrow(0)>0$, give raise to a unique $\delta_{min}>0$ that (i) resides on the 2\textsuperscript{nd} median $y=-x$ and (ii) is shared by $\delta_\uparrow$ and $\delta_\downarrow$ due to the involution property. \begin{lemma}[{\cite[Lem.~3]{FNNS19:TCAD}}] \label{lem:delta:min:idm} A strictly causal involution channel has a unique~$\delta_{min}>0$ defined by $\delta_\uparrow(-\delta_{min}) = \delta_{min} = \delta_\downarrow(-\delta_{min})$. \end{lemma} In \cite{FNNS19:TCAD}, F\"ugger \textit{et~al.}\ have shown that self-inverse delay functions arise naturally in a (generalized) standard analog model that consists of a pure delay, a slew-rate limiter with generalized switching waveforms and an ideal comparator (see Fig.~\ref{fig:analogy}). First, the binary-valued input~$u_i$ is delayed by $\delta_{min}>0$, which assures causal channels, i.e., $\delta_{\uparrow/\downarrow}(0)>0$. For every transition on $u_d$, the generalized slew-rate limiter switches to the corresponding waveform ($f_\downarrow/f_\uparrow$ for a falling/rising transition). Note that the value at $u_r$ (the analog output voltage) does not jump, i.e., is a continuous function. Finally, the comparator generates the output $u_o$ by discretizing the value of this waveform w.r.t.\ the discretization threshold $V_{th}^{out}$. Using this representation, the need for $\delta_\uparrow(T), \delta_\downarrow(T) < 0$ can be explained by the necessity to cover sub-threshold pulses, i.e., ones that do not reach the output discretization threshold. In this case, the switching waveform has to be followed into the past to cross $V_{th}^{out}$, resulting in the seemingly acausal behavior. \begin{figure}[tb] \centerline{ \input{channel_model.tex}} \centerline{ \input{channel_time.tex}} \caption{Analog channel model representation (upper part) with a sample execution (bottom part). 
Adapted from~\cite{FNNS19:TCAD}.} \label{fig:analogy} \end{figure} \section{Discretization Threshold Voltages} \label{sec:vth} In this section, we will empirically explore the relation of gate delays and discretization threshold voltages by means of simulation results.\footnote{Our simulations have been performed for a buffer in the \SI{15}{\nm} NanGate library. However, since we are only reasoning about qualitative aspects common to all technologies, the actual choice has no significance. } In most of the following observations, we assume that a given physical (analog) gate is to be characterized as a zero-time Boolean gate with a succeeding IDM channel that models the delay. In order to accomplish concrete values, discretization threshold voltages $V_{th}^{in}$ at the input and $V_{th}^{out}$ at the output of a gate have to be fixed, and the pure delay component $\delta_{min}$ of the IDM channel as well as the delay functions $\delta_\uparrow$ and $\delta_\downarrow$ are determined accordingly. \begin{definition} The input and output discretization voltages $V_{th}^{in}$ and $V_{th}^{out}$ are called \emph{matching} for a gate, if the induced delay functions $\delta_\uparrow(T)$, $\delta_\downarrow(T)$ fulfill the condition $\delta_\uparrow(-\delta_{min})= \delta_{min}=\delta_\downarrow(-\delta_{min})$. To stress that a pair of input and output discretization threshold voltages is matching, they will be denoted as $V_{th}^{in*}$ and $V_{th}^{out*}$. \end{definition} We will now characterize properties of matching discretization threshold voltages. They depend on many factors, including transistor threshold voltages~\cite{Ase98} and the symmetry of the pMOS vs.\ nMOS stack. Since varying these parameters is commonly used in advanced circuit design to trade delay for power consumption~\cite{NBPP10,GG14} and reliability~\cite{IB09}, as well as for implementing special gates (e.g.\ logic-level conversion \cite{SRMS98}), the range of suitable discretization threshold voltages could differ significantly among gates. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{buffer_vthin.pdf} \caption{The relationship among $V_{th}^{in*}$, $\delta_{min}$ and $V_{th}^{out*}$.} \label{fig:im_char} \end{figure} Considering these circumstances it seems impossible that output and input descretization values coincide among connected gates. However, the following observation shows that there is an unlimited number of matching discretization threshold pairs for \gls{idm}: \begin{observation} \label{lem:vth:inout} For every choice of $V_{th}^{in}$, there is exactly one matching $V_{th}^{out}$. Fixing either of them uniquely determines the other and, in addition, also the pure delay $\delta_{min}$. \end{observation} \begin{proof}[Justification] Let us fix $V_{th}^{out}$ and investigate how $V_{th}^{in}$ and $\delta_{min}$ can be determined. For this purpose, we consider an analog pulse at $V_{out}$ that barely touches $V_{th}^{out}$, i.e., results in a zero-time glitch in the digital domain. There is a \emph{unique} positive and a \emph{unique} negative analog output pulse with this shape, which is both confirmed by simulation results and analytic considerations on the underlying system of differential equations \cite{M17:TR}. Now shift the positive and negative pulses in time such that their output voltages touch $V_{th}^{out}$, one from below and the other from above, at time $t_o$ (see \cref{fig:im_char}). 
Due to the condition $\delta_\downarrow(-\delta_{min})=\delta_{min}=\delta_\uparrow(-\delta_{min})$, this implies that the falling transition of the positive pulse and the rising transition of the negative pulse at the input must both cross $V_{th}^{in}$ at time $t_i = t_o - \delta_{min}$. Thus, fixing $V_{th}^{out*}$ uniquely determines the matching $V_{th}^{in*}$ and $\delta_{min} = t_o - t_i$. \end{proof} Actually determining the matching $V_{th}^{out*}$ for a given $V_{th}^{in*}$ and vice versa is a challenging task. For a start, let us investigate the static case with $f_s$ being the static transfer function of a gate. In this setup, an output derivative $V_{out}'=0$ is achieved for all values fulfilling the condition $V_{out} = f_s(V_{in})$ since $f_s$ represents the stable states of the gate. To obtain high accuracy when discretizing the analog signal, one typically chooses the output threshold such that the respective output waveform for a full-range input pulse has a steep slope at this point. While $V_{th}^{out*} = V_{DD} /2$ is in general a good choice, the corresponding $V_{th}^{in*}$ will differ significantly between balanced and high-threshold inverters, for example. Besides these static considerations, for a dynamic input, coupling capacitances cause a current at the output, which must be compensated via the gate-source voltages of the transistors as well. Obviously, the required overshoot w.r.t.\ $V_{th}^{in}$, and hence the time until this value is reached, depends on many parameters like the size of the coupling capacitances and the slope of the input signal. It is also worth mentioning that CMOS logic shows an inverse proportionality between in- and output, i.e., increasing the input increases the conductivity to \texttt{GND}, which drains the output, and vice versa. This implies that the current induced by the alternation of the input goes against the change of the output current, which effectively slows down the switching operation, and thus increases the pure delay $\delta_{min}>0$. \cref{lem:vth:inout} has a severe consequence for the simulation of circuits in any model, like IDM, where \cref{lem:delta:min:idm} holds: \begin{observation}\label{obs:singlefixesall} Fixing either $V_{th}^{in}$ or $V_{th}^{out}$ for a single gate $G$ fixes the threshold voltages of all gates in the circuit when it is simulated in a model where \cref{lem:vth:inout} holds. \end{observation} \begin{proof}[Justification] For a single gate, the observation follows from the unique relationship between $V_{th}^{in*}$ and $V_{th}^{out*}$ by \cref{lem:vth:inout}. For gates in a path, the ability to just add up their delays requires output and next input discretization thresholds to be the same. \end{proof} Since the detailed relation of $V_{th}^{in*}$ and $V_{th}^{out*}$ according to \cref{lem:vth:inout} depends on the individual gate, this means that the discretization threshold voltages across a circuit may vary in \emph{a priori} arbitrary ways, depending on the interconnect topology and the gate properties. In the extreme, the discretization thresholds can exceed the acceptable range for some gates, and even lead to unsatisfiable assignments in the case of circuits containing feedback-loops. In any case, it may take a large effort to properly characterize every gate such that the dependencies among discretization thresholds are fulfilled. By contrast, an ideally composable delay model uses a uniform discretization threshold such as $V_{th}^{out}=V_{th}^{in} = V_{DD} /2$. 
To investigate if \gls{idm} allows such a uniform choice, we proceed with \cref{lem:vth:non:match}: \begin{observation} \label{lem:vth:non:match} Characterizing a gate with non-matching discretization thresholds $V_{th}^{in}$ and $V_{th}^{out*}$, where matching $V_{th}^{in*}$ and $V_{th}^{out*}$ lead to an IDM channel with pure delay $\overline{\delta}_{min}$, results in delay functions $\delta_\uparrow(T)$, $\delta_\downarrow(T)$, which satisfy $\delta_\uparrow(-\delta^\uparrow_{min}) = \delta^\uparrow_{min}$ and $\delta_\downarrow(-\delta^\downarrow_{min}) = \delta^\downarrow_{min}$ for $\delta^\uparrow_{min} = \overline{\delta}_{min} + \Delta^+ \neq \delta^\downarrow_{min}=\overline{\delta}_{min} + \Delta^-$. $\Delta^+$ and $\Delta^-$ have opposite sign, with $\Delta^+>0$ for $V_{th}^{in} < V_{th}^{in*}$. \end{observation} \begin{proof}[Justification] The observation follows from refining the argument used for confirming \cref{lem:vth:inout}, where it was shown how matching $V_{th}^{in*}$ and $V_{th}^{out*}$ are achieved. For the non-matching case, we increase resp.\ decrease $V_{th}^{in}$, starting from $V_{th}^{in*}$, while keeping everything else, i.e., electronic characteristics, waveforms and $V_{th}^{out}$, unchanged. As illustrated in \cref{fig:im_char} for $V_{th}^{in}<V_{th}^{in*}$, it still takes $\overline{\delta}_{min}$ from hitting $V_{th}^{in*}$ (at time $t_o-\overline{\delta}_{min}$) to seeing a zero time glitch (at time $t_o$) at the output. The falling transition has already crossed $V_{th}^{in*}$ when it hits on $V_{th}^{in}$, whereas the rising transition still has some way to go: Denoting the switching waveforms of the preceding gate (driving the input) by $f_\uparrow$ and $f_\downarrow$, the pure delay for the rising resp.\ falling transition evaluates to $\delta^\uparrow_{min}= \overline{\delta}_{min}+\Delta^+$ and $\delta^\downarrow_{min} = \overline{\delta}_{min}+\Delta^-$ with \begin{equation} \Delta^+ = f_\uparrow^{-1}(V_{th}^{in*}) - f_\uparrow^{-1}(V_{th}^{in}),\ \ \ \Delta^- = f_\downarrow^{-1}(V_{th}^{in*}) - f_\downarrow^{-1}(V_{th}^{in})\label{eq:fdupmin}. \end{equation} Consequently, $\delta_\uparrow(-\delta^\uparrow_{min}) = \delta^\uparrow_{min}$ and $\delta_\downarrow(-\delta^\downarrow_{min}) = \delta^\downarrow_{min}$ indeed holds. Finally, since $f_\uparrow$ must obviously rise and $f_\downarrow$ must fall, it follows that if $\Delta^+>0$ (the case in \cref{fig:im_char}) then $\Delta^- <0$. \end{proof} \begin{figure}[tb] \centering \includegraphics[width=0.7\columnwidth]{buffer_vdd_half.pdf} \caption{Characterizing a gate with $V_{th}^{in}=V_{th}^{out}=V_{DD}/2$.} \label{fig:inverter_gidm} \end{figure} \cref{fig:inverter_gidm} shows the derived delay function for non-matching discretization thresholds. Clearly visible are the different pure delays $\delta^\uparrow_{min}\neq \delta^\downarrow_{min}$. Please note, that in our justification of \cref{lem:vth:non:match}, we focused on $\delta_{min}$ and how it changes with varying discretization threshold voltages. If the rising and falling switching waveforms were always the same, as is assumed in the analog channel model \cref{fig:analogy}, this would result in delay functions that are fixed in shape and are simply shifted along the 2\textsuperscript{nd} median. The actual delay functions of gates, obtained by analog simulations, for example, exhibit additional deviations (for $T \neq \delta^{\uparrow/\downarrow}_{min}$), however, since the shape of the input switching waveforms also vary. 
Consequently, the difference between $V_{th}^{in*}$ and $V_{th}^{in}$ will not always be passed in constant time. Finally, the dependency of the IDM on the particular choice of the discretization threshold voltages also reveals a different problem: \begin{observation}\label{lem:noosc} Different choices of $V_{th}^{out}$ can significantly change the digital model prediction of the IDM. \end{observation} \begin{proof}[Justification] Sub-threshold pulses are automatically removed by the comparator in \cref{fig:analogy}, i.e., they are completely invisible in the digital signal that is fed to the successor gate. Consequently analog traces, that do not cross $V_{th}^{out}$, e.g., high-frequency oscillations at intermediate voltage levels, may be totally suppressed: Assume an oscillatory behavior of a gate output with minimal voltage $V_0$ and maximal $V_1$. These oscillations would only be reflected in the digital discretization if $V_{th}^{out} \in (V_0, V_1)$. \end{proof} \section{Composable Involution Delays} \label{sec:gidm} In this section, we define our \emph{Composable Involution Delay Model} (CIDM), which allows to circumvent the problems presented in the previous section: According to \cref{lem:vth:non:match}, using non-matching thresholds introduces a pure delay shift. The major building blocks of our CIDM are hence \emph{PI channels}, which consist of a pure delay shifter with different shifts $\Delta^+$ and $\Delta^-$ for rising and falling transitions \footnote{A pure delay shifter with $\Delta^+\neq\Delta^-$ causes a constant extension/compression of up/down input pulses by $\pm(\Delta^+ - \Delta^-)$.} followed by an \gls{idm} channel. In order to also alleviate the problem of invisible oscillations identified in \cref{lem:noosc}, we re-shuffle the internal architecture of the original involution channels shown in \cref{fig:analogy} in order to expose trains of canceled transitions on the interconnecting wires. \begin{theorem}[PI channel properties]\label{thm:pcchannel} Consider a channel PI formed by the concatenation of a pure delay shifter $(\Delta^+,\Delta^-)$ with $\Delta^+,\Delta^- \in \mathbb{R}$ followed by an involution channel $c$, given via $\overline{\delta}_\uparrow(.)$ and $\overline{\delta}_\downarrow(.)$ with minimum delay $\overline{\delta}_{min}$. Then PI is \emph{not} an involution channel, but rather characterized by delay functions defined as \begin{equation} \delta_\uparrow(T)=\Delta^++\overline{\delta}_\uparrow(T+\Delta^+) \qquad \delta_\downarrow(T)=\Delta^-+\overline{\delta}_\downarrow(T+\Delta^-).\label{eq:defpcup} \end{equation} These functions satisfy \begin{align} \delta_\uparrow\bigl(-\delta_\downarrow(T) - (\Delta^+ - \Delta^-)\bigr) &= -T + (\Delta^+ - \Delta^-)\label{eq:dupddopc}\\ \delta_\downarrow\bigl(-\delta_\uparrow(T) + (\Delta^+ - \Delta^-)\bigr) &= -T - (\Delta^+ - \Delta^-)\label{eq:ddoduppc}\\ \delta_\uparrow(-\delta^\uparrow_{min}) &= \delta^\uparrow_{min}\label{eq:dupmindef}\\ \delta_\downarrow(-\delta^\downarrow_{min}) &= \delta^\downarrow_{min}\label{eq:ddomindef} \end{align} for $\delta^\uparrow_{min} = \overline{\delta}_{min} + \Delta^+$ and $\delta^\downarrow_{min} = \overline{\delta}_{min} + \Delta^-$. \end{theorem} \begin{proof} Consider an input signal consisting of a simple negative pulse, as depicted in \cref{fig:single_history}. Let $t_i'$ resp. $t_i$ be the time of the falling resp.\ rising input transition, $t_p'$ resp. 
$t_p$ the time of the falling resp.\ rising transition at the output of the pure delay shifter, and $t_o'$ resp. $t_o$ the time of the falling resp.\ rising transition after the involution channel. With $\overline{T}=t_p-t_o'$, we get $\overline{\delta}_\uparrow(\overline{T})=t_o-t_p$ as well as $t_p=t_i+\Delta^+$ and $t_p'=t_i'+\Delta^-$. For the delay function $\delta_\uparrow(T)$ of the PI channel, if we set $T=t_i-t_o' = t_i -t_p + t_p -t_o' = -\Delta^+ + \overline{T}$, we find \begin{align} \delta_\uparrow(T)&=t_o-t_i = t_o - t_p + t_p - t_i = \Delta^++\overline{\delta}_\uparrow(\overline{T})\nonumber\\ &= \Delta^++\overline{\delta}_\uparrow(T+\Delta^+)\label{eq:gduppc} \end{align} as asserted. By setting $T=-\overline{\delta}_{min} - \Delta^+$ and using $\overline{\delta}_\uparrow(-\overline{\delta}_{min})=\overline{\delta}_{min}$ the equality $\delta_\uparrow(-\overline{\delta}_{min} - \Delta^+) = \Delta^+ + \overline{\delta}_{min}$ is achieved, which confirms \cref{eq:dupmindef}. By analogous reasoning for an up-pulse at the input, which results in the same equations as above with $\Delta^+$ exchanged with $\Delta^-$ and $\overline{\delta}_\uparrow(\overline{T})$ with $\overline{\delta}_\downarrow(\overline{T})$, we also get \begin{align} \delta_\downarrow(T)&=t_o-t_i = t_o - t_p + t_p - t_i = \Delta^-+\overline{\delta}_\downarrow(\overline{T})\nonumber\\ &= \Delta^-+\overline{\delta}_\downarrow(T+\Delta^-)\label{eq:gddopc} \end{align} as asserted. Setting $T=-\overline{\delta}_{min} - \Delta^-$ and using $\overline{\delta}_\downarrow(-\overline{\delta}_{min})=\overline{\delta}_{min}$ confirms \cref{eq:ddomindef} as well. Using a simple parameter substitution allows to transform \cref{eq:gduppc} and \cref{eq:gddopc} to \begin{align} \overline{\delta}_\uparrow(\overline{T}) &= \delta_\uparrow(\overline{T}-\Delta^+) - \Delta^+ \\ \overline{\delta}_\downarrow(\overline{T}) &= \delta_\downarrow(\overline{T}-\Delta^-) - \Delta^-. \end{align} Utilizing these in the involution property of~$\overline{\delta}_\uparrow$ and~$\overline{\delta}_\downarrow$ provides \begin{align} \overline{T} &= -\overline{\delta}_\uparrow\bigl(-\overline{\delta}_\downarrow(\overline{T})\bigr)\nonumber\\ &= -\delta_\uparrow\bigl(-\overline{\delta}_\downarrow(\overline{T})-\Delta^+\bigr) + \Delta^+\nonumber\\ &= -\delta_\uparrow\bigl(-\bigl(\delta_\downarrow(\overline{T}-\Delta^-)-\Delta^-\bigr)-\Delta^+\bigr) + \Delta^+\nonumber\\ &= -\delta_\uparrow\bigl(-\delta_\downarrow(\overline{T}-\Delta^-) + \Delta^- - \Delta^+\bigr) + \Delta^+\nonumber. \end{align} If we substitute $T = \overline{T} - \Delta^-$ in the last line, we arrive at \begin{equation} T - (\Delta^+ - \Delta^-) = -\delta_\uparrow\bigl(-\delta_\downarrow(T) - (\Delta^+ - \Delta^-)\bigr), \end{equation} which confirms \cref{eq:dupddopc}. Doing the same for the reversed involution property of $c$, provides \begin{align} \overline{T} &= -\overline{\delta}_\downarrow\bigl(-\overline{\delta}_\uparrow(\overline{T})\bigr) \nonumber\\ &= -\delta_\downarrow\bigl(-\overline{\delta}_\uparrow(\overline{T})-\Delta^-\bigr) + \Delta^-\nonumber\\ &= -\delta_\downarrow\bigl(-\bigl(\delta_\uparrow(\overline{T}-\Delta^+)-\Delta^+\bigr)-\Delta^-\bigr) + \Delta^-\nonumber\\ &= -\delta_\downarrow\bigl(-\delta_\uparrow(\overline{T}-\Delta^+) + \Delta^+ - \Delta^-\bigr) + \Delta^-\nonumber. 
\end{align} If we substitute $T = \overline{T} - \Delta^+$ in the last line, we arrive at \begin{equation} T + (\Delta^+ - \Delta^-) = -\delta_\downarrow\bigl(-\delta_\uparrow(T) + \Delta^+ - \Delta^-\bigr), \end{equation} which confirms \cref{eq:ddoduppc}. \end{proof} Eq.~\cref{eq:defpcup} implies that $\delta_\uparrow(.)$ resp.\ $\delta_\downarrow(.)$ are the result of shifting $\overline{\delta}_\uparrow(.)$ resp.\ $\overline{\delta}_\downarrow(.)$ along the 2\textsuperscript{nd} median by $\Delta^+$ resp.\ $\Delta^-$. It is apparent from \cref{fig:inverter_gidm}, though, that the choice of $\Delta^+$, $\Delta^-$ cannot be arbitrary, as it restricts the range of feasible values for $T$ via the domain of $\overline{\delta}_\uparrow(.)$ resp.\ $\overline{\delta}_\downarrow(.)$ (see \cref{def:compatibility} for further details). This becomes even more apparent in the analog channel model. \cref{fig:channel_model_idm2gidm}~(a) shows an extended block diagram of an IDM channel where we applied two changes: First, we added a (one-input, one-output) zero-time Boolean gate~$G$. Second, we split the comparator at the end into a thresholder~$T\!h$ and a cancellation unit~$C$. The \emph{thresholder unit}~$T\!h$ outputs, for each transition on $u_d$, a corresponding $V_{th}$-crossing time of $u_r$, independently of whether it will actually be reached or not. For subthreshold pulses, the transition might even be scheduled in the past. The \emph{cancellation unit}~$C$ only propagates transitions that are in the correct temporal order. The components~$T\!h$ and~$C$ together implement the same functionality as a comparator. At the beginning of the channel, the Boolean gate $G$ (we assume a single-input gate for now) evaluates the input signal $u_i$ in zero time and outputs $u_g$, which is subsequently delayed by the pure delay shifter $\Delta^{+/-}$. Here lies the cause of the problem: If either $\Delta^+<0$ or $\Delta^-<0$ it is possible that transitions $u_p$ are in reversed temporal order which, after being delayed by the constant pure delay $\overline{\delta}_{min}$, have to be processed in this fashion by the slope delimiter. The latter is, however, only defined on traces encoded via the alternating Boolean signal transitions' \emph{\gls{wst}}, which occur in a strictly increasing temporal order and mark the points in time when the switching waveforms shall be changed. Placing the delay shifter in front of the gate, as shown in \cref{fig:channel_model_idm2gidm}~(b), does not change the situation, since the gate also expects transitions in the correct temporal order (note that this is not equal to \gls{wst} since the pure delay is still missing). \begin{figure}[tb] \centering \scalebox{0.7}{\input{channel_idm2gidm.tex}} \caption{Candidate channel models for \gls{cidm}.} \label{fig:channel_model_idm2gidm} \end{figure} One possibility to avoid transitions in the wrong temporal order at the Boolean gate is to move the canceling unit to the front, as shown in \cref{fig:channel_model_idm2gidm}~(c). This solves our present problems but has the consequence, that now transitions are interchanged among gates using the \emph{\gls{tct}} encoding: The \gls{tct} encoding gives, in sequence order, the points in time when the analog switching waveform would have crossed $V_{th}$ (it is not required that it actually does). Consequently, a signal given in \gls{tct} also exposes canceled transitions. 
Actually, this is very convenient for us, since it allows us to implicitly detect oscillations independently of the chosen output threshold and thus solves design challenge~(III) from the introduction. Not all signals in \cref{fig:channel_model_idm2gidm} can actually be mapped to \gls{tct} or \gls{wst}; by suitably recombining the components in our CIDM channel, however, these encodings will be sufficient for our purposes. More specifically, \gls{tct} will be created by the scheduler $S$, subsequently modified by the delay shifter, altered by the cancellation unit $C$, evaluated by the Boolean gate and finally transformed to \gls{wst} by $\overline{\delta}_{min}$. Now we are finally ready to formally define a \gls{cidm} channel (see \cref{fig:channel_model_gidm}). Note that, although a PI channel differs significantly in its internal structure from a \gls{cidm} channel, they are equivalent with respect to \cref{thm:pcchannel}. \begin{definition} A \gls{cidm} channel comprises, in succession, a pure delay shifter, a cancellation unit, a Boolean gate, a pure-delay unit, a shaping unit and a thresholding unit [see \cref{fig:channel_model_idm2gidm}(c)]. \end{definition} One may wonder whether CIDM channels could also be partitioned in a different fashion. The answer is yes: several other partitions are possible. For example, one could transmit signal $u_g$ and move the slew-rate limiter and the scheduler to the succeeding channel. This would, however, mean that the properties of a single CIDM channel depend on the properties of both the predecessor and the successor gate, which complicates the process of channel characterization and parametrization based on design parameters. \begin{figure}[tb] \centering \scalebox{0.75}{\input{channel_model_gidm.tex}} \caption{Channel model for \gls{cidm}.} \label{fig:channel_model_gidm} \end{figure} The main practical advantage of a \gls{cidm} channel, which is a generalization of an \gls{idm} channel (just set $\Delta^-=\Delta^+=0$), is the additional degree of freedom for gate characterization, in conjunction with the fact that a single channel encapsulates a single gate. \section{Glitch Propagation in the CIDM} \label{sec:gidmisidm} Since CIDM channels do not satisfy the involution property, the question of faithful glitch propagation arises. After all, the proof of faithfulness of the IDM \cite{FNNS19:TCAD} rests on the continuity of IDM channels, which has been proved only for involution delay functions. In this section, we will show that, for every modeling of a circuit with our CIDM channels, there is an equivalent modeling with IDM channels. Consequently, the faithfulness of the IDM carries over to the CIDM. For this purpose, we consider two successive \gls{cidm} channels and investigate the \emph{logical channel}, i.e., the interconnection between two gates $G_1$ and $G_2$, as shown in \cref{fig:channel_model_proofs}. For conciseness, we integrate $\overline{\delta}_{min}$, the slew-rate limiter and the scheduler $S$ into a new block $DSS$, and $\Delta^{+/-}$ followed by~$T\!h$ into the new block $PT\!h$. Using this notation, the logical channel consists of the $DSS$ block of the predecessor gate $G_1$ and the $PT\!h$ block of the successor gate $G_2$. Overall, this is just an IDM channel followed by a pure delay shifter, which will be denoted in the sequel as an \emph{IP channel}.
The following \cref{thm:cpchannel} proves the somewhat surprising fact that every IP channel satisfies the properties of an involution channel: \begin{theorem}[IP channel properties]\label{thm:cpchannel} Consider an IP channel formed by an involution channel given via $\overline{\delta}_\uparrow(.)$, $\overline{\delta}_\downarrow(.)$, followed by a pure delay shifter $(\Delta^+,\Delta^-)$ with $\Delta^+,\Delta^- \in \mathbb{R}$. Then, it is an involution channel, characterized by some delay functions $\delta_\uparrow(.)$, $\delta_\downarrow(.)$. \end{theorem} \begin{proof} Consider an input signal consisting of a single negative pulse, as depicted in \cref{fig:single_history}. Let $t_i'$ resp. $t_i$ be the time of the falling resp.\ rising input transition, $t_c'$ resp. $t_c$ the time of the falling resp.\ rising transition at the output of the involution channel, and $t_o'$ resp. $t_o$ the time of the falling resp.\ rising transition after the pure delay shifter. With $\overline{T}=t_i-t_c'$, we get $\overline{\delta}_\uparrow(\overline{T})=t_c-t_i$ as well as $t_o'=t_c'+\Delta^-$ and $t_o=t_c+\Delta^+$. By setting $T=t_i-t_o' = t_i -t_c' + t_c' -t_o' = \overline{T} -\Delta^-$ for the delay function $\delta_\uparrow(T)$ of the IP channel we find \begin{align} \delta_\uparrow(T)&=t_o-t_i = t_o - t_c + t_c - t_i = \Delta^++\overline{\delta}_\uparrow(\overline{T})\nonumber\\ &= \Delta^++\overline{\delta}_\uparrow(T+\Delta^-).\label{eq:gdup} \end{align} By analogous reasoning for a an up-pulse at the input, which results in the same equations as above with $\Delta^+$ exchanged with $\Delta^-$ and $\overline{\delta}_\uparrow(\overline{T})$ with $\overline{\delta}_\downarrow(\overline{T})$, we also get \begin{align} \delta_\downarrow(T)&=t_o-t_i = t_o - t_c + t_c - t_i = \Delta^-+\overline{\delta}_\downarrow(\overline{T})\nonumber\\ &= \Delta^-+\overline{\delta}_\downarrow(T+\Delta^+).\label{eq:gddo} \end{align} Equations \eqref{eq:gdup} and \eqref{eq:gddo} are equivalent to \begin{align} \overline{\delta}_\uparrow(\overline{T}) &= \delta_\uparrow(\overline{T}-\Delta^-) - \Delta^+ \\ \overline{\delta}_\downarrow(\overline{T}) &= \delta_\downarrow(\overline{T}-\Delta^+) - \Delta^- \end{align} which can be used in the involution property of~$\overline{\delta}_\uparrow$ and~$\overline{\delta}_\downarrow$ to achieve \begin{align} \overline{T} &= -\overline{\delta}_\uparrow\bigl(-\overline{\delta}_\downarrow(\overline{T})\bigr) \nonumber\\ &= -\delta_\uparrow\bigl(-\overline{\delta}_\downarrow(\overline{T})-\Delta^-\bigr) + \Delta^+\nonumber\\ &= -\delta_\uparrow\bigl(-\bigl(\delta_\downarrow(\overline{T}-\Delta^+)-\Delta^-\bigr)-\Delta^-\bigr) + \Delta^+\nonumber\\ &= -\delta_\uparrow\bigl(-\delta_\downarrow(\overline{T}-\Delta^+)\bigr) + \Delta^+\label{equ:surprise} \end{align} which confirms that the IP channel is indeed an involution channel. \end{proof} We note that $\delta_{min}$ of the IP channel is usually different from $\overline{\delta}_{min}$ of the constituent IDM channel. Indeed, \cref{eq:gdup} above shows that $\delta_{min}$ is defined by $\delta_{min} - \Delta^+ = \overline{\delta}_\uparrow(-\delta_{min} + \Delta^-)$, for example, which reveals that we may (but need not) have $\delta_{min} \neq \overline{\delta}_{min}$. 
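As a quick numerical cross-check of \cref{thm:cpchannel}, the following Python sketch composes the purely illustrative involution pair from the sketch in \cref{sec:idm} with an arbitrarily chosen pure delay shifter and confirms that the resulting IP delay functions again satisfy the plain involution property; the printed $\delta_{min}$ can be compared with $\overline{\delta}_{min}$ of the embedded channel, in line with the remark above. All parameter values are assumptions for illustration only.
\begin{verbatim}
import numpy as np

D, tau, c = 2.0, 1.0, 0.5                 # illustrative IDM channel
Dp, Dm = 0.15, -0.10                      # assumed shifts Delta^+, Delta^-

def bar_up(T):                            # \bar\delta_up of the IDM channel
    return D * (1.0 - np.exp(-(T + c) / tau))

def bar_do(T):                            # \bar\delta_down (involution partner)
    return tau * np.log(1.0 + T / D) + c

def d_up(T):                              # IP channel, cf. eq:gdup
    return Dp + bar_up(T + Dm)

def d_do(T):                              # IP channel, cf. eq:gddo
    return Dm + bar_do(T + Dp)

T = np.linspace(-0.3, 4.0, 9)
print(np.max(np.abs(-d_up(-d_do(T)) - T)))     # involution residual: ~ 0

lo, hi = 0.0, 1.0                         # delta_min of the IP channel
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if d_up(-mid) > mid else (lo, mid)
print(0.5 * (lo + hi))                    # compare with bar(delta)_min
\end{verbatim}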
In addition, the IP channel is strictly causal only if $\Delta^+$, $\Delta^-$ satisfy certain conditions: From \cref{eq:gdup} and \cref{eq:gddo} and the required conditions $\delta_\uparrow(0)>0 \Leftrightarrow \delta_\downarrow(0) >0$, we get \begin{equation} \delta_\uparrow(0) = \Delta^+ + \overline{\delta}_\uparrow(\Delta^-) > 0 \Leftrightarrow \delta_\downarrow(0) = \Delta^- + \overline{\delta}_\downarrow(\Delta^+) > 0.\label{eq:gidmcausality} \end{equation} At this point, the question arises whether it can be ensured that the logical channels in \cref{fig:channel_model_proofs} are strictly causal. The answer is yes, provided the interconnected gates are \emph{compatible}, in the sense that the joined $PT\!h$ block of $G_2$ and the $DSS$ block of $G_1$ are compatible w.r.t.\ \cref{lem:vth:non:match}. More specifically, the pure delays $\Delta^+$, $\Delta^-$ embedded in $PT\!h$ of $G_2$ should ideally match (the switching waveforms of) the $DSS$ block in $G_1$: According to \cref{eq:fdupmin}, $-\Delta^+$ is the time the rising input waveform of $G_2$ requires to change from $V_{th}^{in*}$ to the actual threshold voltage $V_{th}^{in}$, while $-\Delta^-$ denotes the same for the falling input. Assuming $\Delta^+>0$ and $\Delta^-<0$, it can be seen from \cref{eq:gddo} that the overall delay function for falling transitions is derived by shifting the original one to the left and downwards (cp.\ \cref{fig:inverter_gidm}). Note that the 2\textsuperscript{nd} median is crossed at the same location since $\overline{\delta}_\downarrow(\overline{\delta}_{min}+\Delta^+) = \overline{\delta}_{min} - \Delta^-$, and thus results in $\overline{\delta}_{min}=\delta_{min}>0$ (which implies causality). The case of $\Delta^+<0$ and $\Delta^->0$ can be argued analogously, starting from \eqref{eq:gdup}. These considerations justify the following definition: \begin{figure}[tb] \centering \scalebox{0.75}{\input{channel_model_proofs.tex}} \caption{Channel model for proofs of the \gls{cidm}. Signals in blue have data type WST, those in green VCT.} \label{fig:channel_model_proofs} \end{figure} \begin{definition}[Compatibility of CIDM channels]\label{def:compatibility} Two interconnected CIDM channels are called \emph{compatible}, if the logical channel between them is strictly causal. \end{definition} Consequently, a logical channel connecting $G_1$ and $G_2$ is strictly causal if $\Delta^+$, $\Delta^-$ has been determined in accordance with \cref{lem:vth:non:match}. If this is not observed, non-causal effects like an output pulse crossing $V_{th}^{out}$ without the corresponding input pulse crossing $V_{th}^{in}$ could appear. Every chain of gates properly modeled in the CIDM can be represented by a chain of Boolean gates interconnected by causal IDM channels, with a ``dangling'' $PT\!h$ block at the very beginning and a $DSS$ block at the very end. Whereas the latter is just an IDM channel, this is not the case for the former. Fortunately, this does not endanger the applicability of the existing IDM results either: As stated in property C2) of a circuit in \cite[Sec.~III]{FNNS19:TCAD}, the original IDM assumes 0-delay channels for connecting an output port of a predecessor circuit $C_1$ with an input port of a successor circuit $C_2$. In the case of using CIDM for modeling $C_1$ and $C_2$, this amounts to combining the $DSS$ block of gate that drives the output port of $C_1$ with the $PT\!h$ block of the input of the gate in $C_2$ that is attached to the input port. 
Note that the analogous reasoning also applies to any feedback loop in a circuit. On the other hand, for an ``outermost'' input port of a circuit, we can just require that the connected gate must have a threshold voltage matching the external input signal, such that $\delta^\uparrow_{min}=\delta^\downarrow_{min}=0$ for the dangling $PT\!h$ component. Finally, in hierarchical simulations, where the output ports of some circuit are connected to the input ports of some other circuits, the situation explained in the previous paragraph reappears. As a consequence, all the results and all the machinery developed for the original IDM~\cite{FNNS19:TCAD} could in principle be applied also to circuits modeled with CIDM channels. Both impossibility results and possibility results, and hence faithfulness, hold, and even the IDM digital timing simulation algorithm, as well as the Involution Tool~\cite{OMFS20:INTEGRATION}, could be used without any change. Using the CIDM for circuit modeling is nevertheless superior to using the IDM, because its additional degree of freedom facilitates a more accurate characterization of the involved channels w.r.t.\ real circuits. We also note that it is possible to conduct a proof for the possibility of unbounded SPF directly in the CIDM (see \cref{sec:possibility}) as well. \section{Simulating Executions of Circuits}\label{sec:algorithm} In this section, we provide \cref{algo:iter} for timing simulation in the CIDM. Since our algorithm is supposed to be run in the Involution Tool \cite{OMFS20:INTEGRATION}, which utilizes Mentor\textsuperscript{\textregistered} ModelSim\textsuperscript{\textregistered} (version 10.5c), our implementation had to be adapted to its internal restrictions. The general idea is to replace the gates from standard libraries by custom gates represented by CIDM channels. According to \cref{fig:channel_model_gidm}, a custom gate $C$ consists of three main components: (i) a pure delay shifter $P_I=\Delta^{+/-}$ for each input $I$ (cancellation is done automatically by ModelSim), (ii) a Boolean function representing the embedded gate $G$, and (iii) an IDM channel $c$. Note that the output of $G$ is in \gls{wst} format, which facilitates direct comparison of the events ($evGO$) occurring in our CIDM simulation with the gate outputs obtained in classic simulations based on standard libraries. Unfortunately, the input and output of a CIDM channel, i.e., the input of component $P_I$ and the output of component $c$, is of type \gls{tct}, which is incompatible with discrete event simulations: In ModelSim signal transitions are represented by events, which are processed in ascending order of their scheduled time. Consequently, they cannot be scheduled in the past, which may, however, be needed for a transition $(t_n,x_n,o_n)$ with $o_n<0$. Therefore, for every CIDM channel $C$ we maintain a dedicated file $F(C)$, in which the simulation algorithm writes all the transitions $(t_n,x_n,o_n)$ generated by $C$. The event $evTI(C)$ that ModelSim inherently assigns to the output signal of $C$ is only used as a \emph{transition indicator}: For every edge $(C,I,\Gamma)$, it instantaneously triggers the occurrence of the ``duplicated'' ModelSim event $evTI(C,I,\Gamma)$, which signals to $I$ of $\Gamma$ the event that there is a new transition in the file $F(C)$. For an input port $i$, exactly the same applies, except that the input transitions in $F(i)$ and the transition indicator are externally supplied. 
By contrast, both the inputs and output of the gate $G$ embedded in a CIDM channel $C$, i.e., the output of component $P_I$ and the input of component $c$, are of type \gls{wst}. Consequently, we can directly use the ModelSim events $evGI(I,C)$ resp.\ $evGO(C)$ assigned to the output of $P_I$ for input $I$ of $C$ resp.\ the output of $G$ of $C$ for conveying \gls{wst} transitions. Still, cancellations that would cause occurrence times $t'<t_{now}$ must be prohibited explicitly (see \cref{algo:line:addpure} in \cref{algo:iter}). Our simulation algorithm uses the following functions: \begin{itemize} \item $(t, ev, Par) \gets getNextEvent()$: Returns the event $ev$ with the smallest scheduled time $t$ and possibly some additional parameters $Par$. If $t'$ denotes the time of the previous event, then $t\geq t'$ is guaranteed to hold. The possible types of events are $ev \in \{evTI(\Gamma,I,C), evGI(I,C), evGO(C)\}$, where $C$ is the channel the event belongs to, $I$ one of its inputs, and $\Gamma$ the vertex that feeds $I$. If multiple different events are scheduled for the same time $t$, they occur in the order first $evTI(\Gamma,I,C)$, then $evGI(I,C)$ and finally $evGO(C)$. If multiple instance of the same event are scheduled for the same time $t$, then only the event that has been scheduled last actually occurs. \item $sched(ev, (t, x))$: Schedules a new event $ev$ signaling the transition $x \in \{0,1,toggle\}$ at time $t$; the case $x=toggle$ means $x=1-x'$ for $x'$ denoting the last (= previous) transition. If the current simulation time is $t_{now}$, only $t \geq t_{now}$ is allowed. \item $init(ev, (-\infty, x))$: Initializes the initial state of the event $ev$, without scheduling it. \item $value \gets s(ev,t)$: Returns the value of the state function for event $ev\in\{evGI(I,C),evGO(C)\}$ at time $t$. \item $F(C) \gets (t, x, o)$: Adds a new \gls{tct} transition $(t,x,o)$ generated by the output of $C$ to the file $F(C)$, which buffers the \gls{tct} transitions of $C$. \item $(t', x) \gets F(\Gamma)$ reads the most recently added \gls{tct} transition $(t,x,o)$ from the file $F(\Gamma)$ and returns it as $(t',x)=(t+o,x)$. \item $Init(\Gamma)$: For both channels and input ports, the a fixed initial value $(-\infty,Init(\Gamma))$. \item $x \gets G.f(x_1,\dots,x_{G.d})$: Applies the combinatoric function $G.f$ (of the $G.d$-ary gate $G$ embedded in some channel $C$) to the list of logic input values $x_1,\dots,x_{G.d}$, and returns the result $x$. \item $postprocess()$: Implements internal bookkeeping. For example, it allows to export the \gls{tct} signals contained in our files $F(\Gamma)$ in $\gls{wst}$ format, for direct comparison with standard simulation results. \end{itemize} \cref{algo:iter} conceptually starts at time $t=-\infty$ and first takes care of ensuring a clean initial state. The simulation of the actual execution commences at $t=0$, where the conceptual ``reset'' that froze the initial state is released: Every channel whose initial state differs from the computation of its embedded gate in the initial state causes a corresponding transition of the gate output at $t=0$, which will be processed subsequently in the main simulation loop. 
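To illustrate the core per-transition computation, the following minimal Python sketch mirrors the $evGO(C)$ branch of \cref{algo:iter} below: a \gls{wst} gate-output transition $(t,x)$ is combined with the most recent \gls{tct} transition stored in $F(C)$ to produce the next \gls{tct} transition $(t,x,o)$. The delay functions and shifts are the same illustrative placeholders as in the earlier sketches, not the characterization of an actual gate.
\begin{verbatim}
import math

DP, DM = 0.15, -0.10                       # assumed Delta^+, Delta^-

def bar_up(T):                             # illustrative embedded IDM channel
    return 2.0 * (1.0 - math.exp(-(T + 0.5)))

def bar_do(T):
    return math.log(1.0 + T / 2.0) + 0.5

def delta_up(T):                           # characterized CIDM (PI) delay functions
    return DP + bar_up(T + DP)

def delta_do(T):
    return DM + bar_do(T + DM)

def ev_go(file_c, t, x):
    """Append the TCT transition caused by the WST gate-output event (t, x)."""
    t_prev = file_c[-1][0] + file_c[-1][2]  # occurrence time of last transition
    T = t - t_prev
    o = delta_up(T - DP) - DP if x == 1 else delta_do(T - DM) - DM
    file_c.append((t, x, o))
    return t + o                            # occurrence time of the new transition

F_C = [(-math.inf, 0, 0.0)]                 # initial TCT transition of channel C
for t, x in [(1.0, 1), (1.3, 0), (1.32, 1)]:
    print((t, x), "occurs at", round(ev_go(F_C, t, x), 4))
\end{verbatim}
Note how the second transition receives a negative offset and occurs before its predecessor, i.e., it cancels it: the corresponding pulse would not have reached the output threshold, yet it remains visible as a canceled pair in $F(C)$.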
\begin{algorithm} \footnotesize \begin{algorithmic}[1] \State \Comment{Executed at simulation time $t=-\infty$:} \ForAll{channels $C$} \State $F(C) \gets (-\infty,Init(C),0)$ \Comment{init file} \State $init(evTI(C),(-\infty,0))$ \label{line:initTI}\Comment{init toggle indicator} \State $init(evGO(C),(-\infty,Init(C)))$ \label{line:initGO}\Comment{init gate output} \ForAll{incoming edges $(\Gamma,I,C)$ of $C$:} \State $init(evGI(I,C), (-\infty, Init(\Gamma)))$ \label{line:initGI}\Comment{init gate inputs} \EndFor \EndFor \State \Comment{Executed at simulation time $t=0$:}\hfill \ForAll{channels $C$, with $d$-ary gate $G$ and incoming edges $(\Gamma_1,I_1,C), \dots, (\Gamma_d,I_d,C)$} \State $x = G.f(s(evGI(I_1,C),0),\dots,s(evGI(I_{d},C),0))$ \If{$x \neq Init(C)$} \State $sched(evGO(C),(0,x))$ \Comment{add reset transition} \EndIf \EndFor \State \Comment{Main simulation loop:}\hfill \State $(t, ev, Par) \gets getNextEvent()$ \While{$t\leq \tau$} \If{$ev = evTI(\Gamma,I,C)$}\Comment{$evTI$ go first} \State $(\Delta^+, \Delta^-, G) \gets Par$ \State $(t', x) \gets F(\Gamma)$\label{algo:line:readtt} \State $\delta_{min} \gets calcDelta(G.f, x, \Delta^+, \Delta^-)$ \State $sched(evGI(I,C), (\max\{t,(t' + \delta_{min})\},x)$\label{algo:line:addpure} \ElsIf{$ev = evGI(I,C)$}\Comment{$evGI$ come next} \State $(G) \gets Par$ \State $x \gets s(evGO(C),t)$\Comment{Current gate output} \State $y \gets G.f(s(evGI(I_1,C),t),\dots,s(evGI(I_{G.d},C),t))$ \If{$x\neq y$}\label{line:GOchangecheck} \State $sched(evGO(C),(t,y))$ \EndIf \ElsIf{$ev = evGO(C)$}\Comment{and finally $evGO$} \State $(x,\delta_\uparrow(.), \delta_\downarrow(.), \Delta^+, \Delta^-) \gets Par$ \State $(t', x') \gets F(C)$ \State $T \gets t - t'$ \If{$x = 1$} \State $o \gets \delta_\uparrow(T - \Delta^+) - \Delta^+$ \Else \State $o \gets \delta_\downarrow(T - \Delta^-) - \Delta^-$ \EndIf \State $F(C) \gets (t,x,o)$ \State $sched(evTI(C),(t,toggle))$ \Comment{triggers $evTI(C,I,\Gamma')$} \EndIf \State $(t, ev, Par) \gets getNextEvent()$ \EndWhile \State $postprocess()$\label{algo:line:postproc} ~\\ \Procedure{$calcDelta$}{$func, x, \Delta^+, \Delta^-$} \If{$func = not$}\ \If{$x = 1$} \Return $\Delta^-$ \Else\ \Return $\Delta^+$ \EndIf \ElsIf{$func = id$} \If{$x = 1$} \Return $\Delta^+$ \Else\ \Return $\Delta^-$ \EndIf \Else\ \State assert($\Delta^+ = \Delta^-$) \State \Return $\Delta^+$ \EndIf \EndProcedure \end{algorithmic} \caption{CIDM circuit simulation algorithm} \label{algo:iter} \end{algorithm} The algorithm uses the procedure $calcDelta$ for computing the actual $\Delta^+$, $\Delta^-$ employed in the $P_I$ component of a channel, which unfortunately depend on the embedded gate $G$: For an inverter, for example, a rising transition at the input leads to a falling transition at the output, so $\delta^\downarrow_{min}$ must be used. This is unfortunately not as easy for multi-input gates $G$, since the effect of any particular input transition \emph{on the output} needs to be known in advance. This is difficult in the case of an XOR gate, for example, as it depends on the (future) state of the other inputs. Currently, we hence only support $\Delta^+ = \Delta^-$ for multi-input gates in our simulation algorithm. \medskip Last but not least, we briefly mention a small but important addition to the Involution Tool, which we have implemented recently. 
Without knowledge about the delay channel, for example $\delta_{\infty}$ or $f_\uparrow$/$f_\downarrow$, it is impossible to determine whether digital transitions are too close, i.e., whether pulses degrade. Note that this is not a problem of \gls{cidm} alone, but also appears with the widely used inertial delay, since the user is not able to tell how far away a certain pulse is from removal (based on the digital results alone, of course). For this purpose, we added the possibility to plot the analog trajectory at selected locations of the circuit. In more detail, we place full-range rising and falling waveforms such that the discretization thresholds are crossed at the predicted points in time. The switching between those is, according to our analog channel model, instantaneous and thus only continuous in value. Nevertheless, the resulting waveform provides designers with the possibility to quickly identify critical timing issues in the circuit, which may need further investigation by means of accurate analog simulation. \medskip \cref{thm:simulationcorrectness} below will show that \cref{algo:iter} indeed computes valid executions for a circuit, provided all its logical channels (cp.\ \cref{fig:channel_model_proofs}) are strictly causal. As argued in \cref{sec:gidmisidm}, this is the case if all interconnected CIDM channels are compatible (see \cref{def:compatibility}), which will be assumed for the remainder of this section. Let $\delta_{min}^C>0$ be the smallest $\delta_{min}$ of any logical channel in the circuit. We start with some formal definitions for the CIDM. \bfno{Signals.} Since we are dealing with two data types for signals in CIDM, we need to distinguish \gls{wst} and \gls{tct}. For \gls{wst}, we just re-use the original signal definition of the IDM, namely, a list of alternating transitions represented as tuples $(t_n,x_n)$, $n \geq 0$, where $t_n \in \mathbb{R} \cup \{-\infty\}$ denotes the time when a transition occurs, and $x_n \in \{0,1\}$ whether it is rising (1) or falling (0). More formally, every \gls{wst} signal $s=((t_n,x_n))_{n\geq 0}$ must satisfy the following properties: \begin{enumerate} \item[S1)] the initial transition is at time~$t_0=-\infty$. \item[S2)] the sequence $(x_n)_{n\geq 0}$ must be alternating, i.e., $x_{n+1} = 1- x_n$. \item[S3)] the sequence $(t_n)_{n\geq 0}$ must be strictly increasing and, if infinite, grow without a bound. \end{enumerate} To every \gls{wst} signal $s$ corresponds a \emph{state function} $s: \mathbb{R}\cup \{-\infty\} \to \{0,1\}$, with $s(t)$ giving the value of the most recent transition before or at $t$; it is hence constant in the half-open interval $[t_n,t_{n+1})$. For \gls{tct}, we add an offset to the tuples used in \gls{wst}. A signal here is a list of alternating transitions represented as tuples $(t_n,x_n,o_n)$, $n \geq 0$, where $t_n \in \mathbb{R} \cup \{-\infty\}$ denotes the time when a transition is \emph{scheduled}, and $x_n \in \{0,1\}$ whether it is rising (1) or falling (0). The offset $o_n \in \mathbb{R}\cup \{-\infty\}$ specifies when the transition actually \emph{occurs} in the signal, namely, at time $t_n'=t_n + o_n$. Formally, every \gls{tct} signal $s=((t_n,x_n,o_n))_{n\geq 0}$ must satisfy the following properties: \begin{enumerate} \item[S1)] to S3) are the same as for \gls{wst}. \item[S4)] $o_0=0$ and the sequence $(t_n')_{n\geq 2}$ must satisfy $t_{n-2}' \leq t_n'$.
\end{enumerate} If $t_n'\leq t_{n-1}'$, then we say that the transition $(x_n,t_n,o_n)$ \emph{cancels} the transition $(x_{n-1},t_{n-1},o_{n-1})$. Property S4) implies that only the most recent, i.e., previous, transition could be canceled. Since all non-canceled transitions are hence alternating and have strictly increasing occurrence times, we can define the \gls{tct} state function accordingly: To every \gls{tct} signal $s$ corresponds a function $s: \mathbb{R} \cup \{-\infty\}\to \{0,1\}$, with $s(t)$ giving the value of the most recent non-canceled transition occurring before or at $t$. Note that this implies that the \gls{tct} state function $s_{tct}(t)$ at the input of a cancellation unit $C$ and the \gls{wst} state function $s_{wst}(t)$ at its output are the same. \smallskip \bfno{Circuits.} Circuits are obtained by interconnecting a set of input ports and a set of output ports, forming the external interface of a circuit, and a set of combinational gates represented by CIDM channels. Recall from \cref{fig:channel_model_gidm} that gates are embedded within a channel here. We constrain the way components are interconnected in a natural way by requiring that any channel input and output port is attached to only one input port or channel output port. Formally, a {\em circuit\/} is described by a directed graph where: \begin{enumerate} \item[C1)] A vertex $\Gamma$ can be either an {\em input port}, an {\em output port}, or a {\em channel}. \item[C2)] The \emph{edge} $(\Gamma,I,\Gamma')$ represents a $0$-delay connection from the output of $\Gamma$ to a fixed input $I$ of $\Gamma'$. \item[C3)] Input ports have no incoming edges. \item[C4)] Output ports have exactly one incoming edge and no outgoing one. \item[C5)] A channel that embeds a $d$-ary gate has $d$ inputs $I_1,\dots, I_d$, fed by incoming edges, ordered by some fixed order. \item[C6)] A channel $C$ that embeds a $d$-ary gate $G$ maps $d$ \gls{tct} input signals $s_{I_1},\dots, s_{I_d}$ to a \gls{tct} output signal $s_C = f_C(s_{I_1},\dots, s_{I_d})$, according to the CIDM channel function $f_C(.)$ of $C$ (which also comprises the Boolean gate function $G.f$). \end{enumerate} \smallskip \bfno{Executions.} An {\em execution\/} of circuit~${C}$ is a collection of signals~$s_\Gamma$ for all vertices~$\Gamma$ of~${C}$ that respects the channel functions and input port signals. Formally, the following properties must hold: \begin{enumerate} \item[E1)] If~$i$ is an input port, then there are no restrictions on~$s_i$. \item[E2)] If~$o$ is an output port, then~$s_o = s_C$, where~$C$ is the unique channel connected to~$o$. \item[E3)] If vertex~$C$ is a channel with~$d$ inputs $I_1,\dots,I_d$, ordered according to the fixed order condition C5), and channel function~$f_C$, then $s_C = f_C(s_{\Gamma_1},\dots, s_{\Gamma_d})$, where $\Gamma_1,\dots,\Gamma_d$ are the vertices the inputs $I_1,\dots,I_d$ of $C$ are connected to via edges $(\Gamma_1,I_1,C), \dots, (\Gamma_d,I_d,C)$. \end{enumerate} First, we show that all \gls{tct} signals occurring in the CIDM, namely, $u_i^I$, $u_p^I$ (and $u_s$, which is, however, the same as $u_i^{I'}$ of a successor channel) in \cref{fig:channel_model_gidm}, also satisfy property S4). Note that all that a pure delay shifter component $P_I=(\Delta^+,\Delta^-)$ does is to add $\Delta^+$ resp. $\Delta^-$ to the offset component of every \gls{tct} rising resp.\ falling transition in $u_i^I$ to generate $u_p^I$.
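The following minimal Python sketch (ours, for illustration only; the list-of-tuples representation is not the actual data structure used in the Involution Tool) implements the cancellation rule and the resulting \gls{tct} state function literally as defined above.
\begin{verbatim}
# A TCT signal is a list of (t, x, o) with occurrence time t + o.
def resolve_cancellations(tct):
    surviving = []  # list of (occurrence time, value)
    for (t, x, o) in tct:
        occ = t + o
        # by S4) only the immediately preceding transition can be
        # cancelled, so popping at most one entry suffices
        if surviving and occ <= surviving[-1][0]:
            surviving.pop()
        surviving.append((occ, x))
    return surviving

def state(tct, t):
    # value of the most recent non-cancelled transition at time <= t
    value = None
    for occ, x in resolve_cancellations(tct):
        if occ <= t:
            value = x
        else:
            break
    return value

# Example: the rising transition (scheduled at 10, occurring at 12)
# is cancelled by the later falling one occurring already at 11.5,
# so the state function never leaves 0.
sig = [(float("-inf"), 0, 0), (10.0, 1, 2.0), (11.0, 0, 0.5)]
assert state(sig, 13.0) == 0
\end{verbatim}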
\begin{lemma}[\gls{tct} correctness]\label{lem:tctcorrectness} Consider a strictly causal IDM channel fed by a \gls{wst} input signal. If its output is interpreted as a \gls{tct} signal, it satisfies properties S1)--S4). \end{lemma} \begin{proof} Properties S1)--S3) are immediately inherited from S1)--S3) of the \gls{wst} input signal. Assume that $(t_{n+1},x)$, $n\geq 1$, w.l.o.g.\ with $x=0$, is the first transition in the input signal of the channel that causes the corresponding output $t_{n+1}'$ to violate S4). Let $t_{n-1}'$, $t_n'=t_n+\delta_\uparrow(t_n-t_{n-1}')$ and $t_{n+1}'=t_{n+1}+\delta_\downarrow(t_{n+1}-t_n')$ be the times the corresponding output transitions occur. By our assumption, $t_{n+1}' < t_{n-1}'$. Since $t_{n+1}\geq t_n$, we find by using monotonicity of $\delta_\downarrow(.)$ and the involution property that \begin{align} t_{n+1}' &= t_{n+1} + \delta_\downarrow(t_{n+1}-t_n-\delta_\uparrow(t_n-t_{n-1}')) \nonumber\\ &\geq t_{n+1} + \delta_\downarrow(-\delta_\uparrow(t_n-t_{n-1}'))\nonumber\\ &= t_{n+1}-t_n+t_{n-1}' \geq t_{n-1}'\nonumber, \end{align} which provides the required contradiction. The proof for $x=1$ is analogous. \end{proof} \cref{lem:tctcorrectness} in conjunction with the strict causality established for consistent CIDM channels in \cref{sec:gidmisidm} reveals that we can indeed use the inherent cancellation of out-of-order ModelSim events for implementing the cancellation unit $C$ in \cref{algo:line:addpure} of \cref{algo:iter}. Note that we must explicitly prohibit scheduling a canceling event before $t_{now}$, however. The following \cref{lem:evGOseq} shows that the gate input signals $evGI(I,C)$ and gate output signals $evGO(C)$ as well as every $evTI(\Gamma,I,C)$ are indeed of type \gls{wst}. This result is mainly implied by the internal workings of ModelSim and strict causality. \begin{lemma}[\gls{wst} correctness]\label{lem:evGOseq} Every signal $evTI(\Gamma,I,C)$, $evGI(I,C)$ and $evGO(C)$ maintained by \cref{algo:iter} for a circuit comprising only consistent CIDM channels is of type \gls{wst}. \end{lemma} \begin{proof} Condition S1) follows from \cref{line:initTI}--\cref{line:initGI}. Condition S2) follows for $evTI$ from using the toggle mode and for $evGO$ from \cref{line:GOchangecheck}. For $evGI$, it is implied by the fact that the signal $u_i^I$ of $C$ maintained via $F(\Gamma)$ and $evTI(\Gamma,I,C)$ is alternating in the absence of cancellations, as it is generated by the alternating signal $evGO$ in $\Gamma$. Since, by \cref{lem:tctcorrectness}, cancellations do not destroy this alternation for consistent CIDM channels either, the claim follows. As far as condition S3) is concerned, $t_{n+1}-t_n > 0$ for $n\geq 0$ follows from the inherent properties of ModelSim, which ensure that only the last of several instances of the same event $ev$ scheduled for the same time $t$ actually occurs. However, we also need to prove that $t_n$ grows without bound if there are infinitely many transitions in the signal. We first consider $evGO(C)$ and assume, for a contradiction, that the limit of the monotonically increasing sequence of occurrence times is $\lim_{n\to\infty} t_n = L < \infty$. Let $C=C_0$. By the pigeonhole principle, there must be an input $I$ of $C_0$ which causes a subsequence of infinitely many of the transitions $evGO(C_0)$. Consider the channel $C_1$ which feeds this $I$, and consider any $t_n^0$. This transition is caused by the, say, $k$-th transition of $evGI(I,C_0)$ and hence of $evTI(C_1,I,C_0)$, which also occur at time $t_n^0$.
Obviously, their cause is the $k$-th transition of $evGO(C_1)$, which occurs at time $t_k^1$. Clearly, the latter did not cancel the $k-1$-st transition of $evTI(C_1,I,C_0)$, otherwise the $k$-th transition of $evTI(C_1,I,C_0)$ would not have happened either. From \cite[Lem.~4]{FNNS19:TCAD}, it follows that $t_n^0 - t_k^1 > \delta_{min} \geq \delta_{min}^C$, where $\delta_{min}$ is the one of the logical channel between the gate of $C_1$ and the gate of $C_0$ (which is an IDM channel). Consequently, all the infinitely many transitions of $evGO(C_1)$ that cause the subsequence of transitions of $evGO(C_0)$ must actually occur by time $L-\delta_{min}^C$. If we delete all the transitions in the signal $evGO(C_1)$ after time $L-\delta_{min}^C$, we can repeat exactly the same argument inductively. Repeating this $i$ times, we end up with some channel $C_i$ with infinitely many transitions of $evGO(C_i)$ by time $L-i\delta_{min}^C$. Clearly, for $i > L/\delta_{min}^C$, this is impossible, which provides the required contradiction. Since infinitely many transitions of either $evTI(\Gamma,I,C)$ and $evGI(I,C)$ in finite time can be traced back to infinitely many transitions in finite time of $evGO(\Gamma)$, the above contradiction also rules out these possibilities, which completes our proof. \end{proof} We get the following corollaries: \begin{lemma}[File consistency]\label{lem:consistencyTIandF} For every channel $C$, $evTI(C)=evTI(C,I,\Gamma)$ and $F(C)$ can never become inconsistent, i.e., all successors $\Gamma$ of $C$ can read $F(C)$ when processing $evTI(C,I,\Gamma)$ in \cref{algo:line:readtt} before $evTI(C)$ is toggled again. \end{lemma} \begin{proof} Assume that the processing of $evGO(C)$ of channel $C$, at time $t$, writes a new transition to $F(C)$ and toggles $evTI(C)$. This causes the scheduling of $evTI(C,I,\Gamma)$ for time $t$ for $\Gamma$. In order to overwrite $F(C)$ before $\Gamma$ could read it, a canceling transition must have been written to $F(C)$ by time $t$. This cannot happen strictly before $t$, though, as then both the original $evGO(C)$ and $evTI(C)$ would have been canceled. It cannot happen at $t$ either, since this is prohibited by \cref{lem:consistencyTIandF}. Hence, we have established the required contradiction. \end{proof} \begin{theorem}[Correctness of simulation]\label{thm:simulationcorrectness} For any $0 \leq \tau < \infty$, the simulation \cref{algo:iter} applied to a circuit with compatible CIDM channels always terminates with a unique execution up to time $\tau$. \end{theorem} \begin{proof} According to \cref{lem:evGOseq}, the simulation time $t$ of the main simulation loop grows without bound, unless the execution consists of finitely many events only. By \cref{lem:tctcorrectness} and \cref{lem:consistencyTIandF}, the simulation algorithm correctly simulates the behavior of CIDM channels and thus indeed generates an execution $E$ of our circuit. To show that $E$ is unique, assume that there is another execution $E'$ that satisfies E1)--E3). As it must differ from $E$ in some transition generated for some channel $C$, $E'$ cannot adhere to the correct channel function $f_C$, hence violates E3). \end{proof} \section{Experiments} \label{sec:experiments} In this section, we validate our theoretical results by means of simulation experiments. This requires two different setups: (i) To validate the CIDM, we incorporated \cref{algo:iter} in our Involution Tool \cite{OMFS20:INTEGRATION} and compared its predictions to other models. 
(ii) To establish the mandatory prerequisite for these experiments, namely, an accurate characterization of the delay functions, we employed a fairly elaborate analog simulation environment. \begin{sloppypar} Relying on the \SI{15}{\nm} Nangate Open Cell Library featuring FreePDK15$^\text{TM}$ FinFET models~\cite{Nangate15} ($V_{DD}=\SI{0.8}{\V}$), we developed a Verilog description of our circuits and used the Cadence\textsuperscript{\textregistered} tools Genus\textsuperscript{TM} and Innovus\textsuperscript{TM} (version 19.11) for optimization, placement and routing. We then extracted the parasitic networks between gates from the final layout, which resulted in accurate SPICE\ models that were simulated with Spectre\textsuperscript{\textregistered} (version 19.1). These results were used both for gate characterization and as a golden reference for our digital simulations. \end{sloppypar} Like in \cite{FNNS19:TCAD}, our main target circuit is a custom inverter chain. In order to highlight the improved modeling accuracy of CIDM, it consists of seven alternating high- and low-threshold inverters. They were implemented by increasing the channel length of the p- and nMOS transistors, respectively, which varies the transistor threshold voltages \cite[Fig.~2]{Ase98}. For comparison purposes, we conducted experiments with a standard inverter chain as well. Regarding gate characterization for \gls{idm}, we used two different approaches. Recall from \cref{obs:singlefixesall} that fixing a single discretization threshold pins the value of all consistent $\delta_{min}$, $V_{th}^{in}$ and $V_{th}^{out}$ throughout the circuit. In the variant of \gls{idm} called IDM*, we chose $V_{th}^{out*}=V_{DD}/2$ for the last inverter in the chain, and determined the actual value of its matching $V_{th}^{in*}$ by means of analog simulations. To obtain consistent discretization thresholds for the whole circuit, we repeated this characterization, starting from $V_{th}^{out*}=V_{th}^{in*}$ for the next inverter up the chain; a sketch of this procedure is given below. We thereby obtained values in the range $[0.301,0.461]\ \si{\V}$, with $V_{th}^{in*}=\SI{0.455}{\V}$ for the first gate. Obviously, characterizing a circuit in this fashion is very time-consuming, as only a single gate in a path can be processed at a time. \begin{figure}[tb] \centering \includegraphics[width=0.98\columnwidth]{results_area_comb.pdf} \caption{Accuracy, expressed as the normalized total deviation area of the digital predictions, relative to SPICE\ for the standard inverter chain (top) and high/low threshold inverter chain (bottom). Lower bars indicate better results.} \label{fig:results_area_comb} \end{figure} Moreover, forks (that is, joins) are problematic for this characterization, since the input characterization thresholds of two of its ends most certainly do not coincide. Reversing the direction of characterization, i.e., starting at the front and propagating towards the end, would solve this problem but adds a similar difficulty at the inputs of multi-input gates. Needless to say, feedback loops most probably make any such attempt impossible.
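The back-to-front IDM*\ characterization just described can be summarized by the following Python sketch; the helper \texttt{find\_matching\_input\_threshold}, which stands in for the analog (Spectre) characterization runs, as well as all other names are hypothetical and only serve to illustrate the procedure.
\begin{verbatim}
VDD = 0.8  # V, Nangate 15nm library

def characterize_idm_star(chain, find_matching_input_threshold):
    """chain: gates ordered from first to last; returns per-gate
    (V_th_in, V_th_out) pairs with consistent thresholds."""
    thresholds = {}
    v_out = VDD / 2              # fixed choice for the last inverter
    for gate in reversed(chain):
        v_in = find_matching_input_threshold(gate, v_out)
        thresholds[gate] = (v_in, v_out)
        v_out = v_in             # propagate to the next gate up the chain
    return thresholds
\end{verbatim}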
Alternatively, we also separately characterized every gate for $V_{th}^{out*}=V_{DD}/2$ and determined the matching $V_{th}^{in*}$, which we will refer to as IDM+. Note carefully that the discretization thresholds of connected gate out- and inputs differ for IDM+, such that an error is introduced at every interconnecting edge. Since the signals are very steep near $V_{DD}/2$, the introduced error is typically rather small. This is even more pronounced due to the fact that the deviation of the input thresholds is usually smaller than that of the output thresholds due to the natural amplification of a gate. Note that this was verified by our simulations of the standard inverter chain. However, although the misprediction is small, it is introduced for each transition at every gate. While this might be negligible for small circuits like our chain, the error quickly accumulates for larger devices, leading to deviations even for very broad pulses. Thus, IDM+\ can be expected to deliver worse results than pure/inertial delay while being a computationally much more expensive approach. Indeed, for the gates used in our standard inverter chain, we recognized a clear bias towards $V_{th}^{in*}<V_{DD}/2$ for $V_{th}^{out*}=V_{DD}/2$. Results for these delay functions are shown in \cref{fig:results_area_comb}. Finally, characterizing gates for \gls{cidm} was simply executed for $V_{th}^{out}=V_{th}^{in}=V_{DD}/2$. The results for stimulating the standard inverter chain, with 2,500 normally distributed pulses of average duration $\mu$ and standard deviation $\sigma$, obtained by the Involution Tool for IDM*, IDM+, \gls{cidm} and the default inertial delay model, are shown in \cref{fig:results_area_comb} (top). The accuracy of the model predictions is presented relative to our digitized SPICE\ simulation, which gets subtracted from the trace obtained with the respective digital delay model. Summing up the area of the difference (without considering the sign), we obtain a metric that can be used to compare the similarity of two traces. Since the absolute values of the area are not very expressive, we normalize the results and use the inertial delay model as baseline. For short pulses, IDM*, IDM+\ and \gls{cidm} perform similarly. We conjecture that this is a consequence of the narrow range for $V_{th}^{out*}$ and $V_{th}^{in*}$ ($[0.39156, 0.4]\ \si{\V}$), and therefore the induced error due to non-perfect matchings in IDM+\ is negligible. For broader pulses, we observe a reduced accuracy of IDM*\ and IDM+, which is primarily an artifact of the imperfect approximation of the real delay function by the ones supported by the Involution Tool. We even observed settings where \gls{cidm} does not beat the inertial delay model, which can also be traced to this cause. For our custom inverter chain [\cref{fig:results_area_comb} (bottom)], \gls{cidm} outperforms, as expected, the other models considerably, whereas IDM+\ occasionally delivers poor results, even compared to inertial delays. This is a consequence of the non-matching threshold values and the accumulating error. IDM*\ achieves much better predictions, but still falls short compared to \gls{cidm}. For broader pulses, \gls{cidm} and the inertial delay model perform similarly, since they use the same maximum delay $\delta_\uparrow(\infty)$ and $\delta_\downarrow(\infty)$. The degradation of IDM*\ is once again a result of the imperfect delay function approximations.
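For reference, the normalized deviation-area metric underlying \cref{fig:results_area_comb} can be computed as in the following Python sketch; it assumes that all traces are 0/1 waveforms sampled on a common time grid, and the names are ours, i.e., this is not the actual implementation in the Involution Tool.
\begin{verbatim}
import numpy as np

def deviation_area(t, model_trace, spice_trace):
    # unsigned area between a digital prediction and the digitized
    # SPICE reference
    diff = np.abs(np.asarray(model_trace) - np.asarray(spice_trace))
    return np.trapz(diff, t)

def normalized_deviation(t, model_trace, inertial_trace, spice_trace):
    # normalize to the inertial delay model as baseline
    return (deviation_area(t, model_trace, spice_trace) /
            deviation_area(t, inertial_trace, spice_trace))
\end{verbatim}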
Finally, analog simulations in \cref{fig:experimental_results} revealed that an oscillation slightly below $V_{DD}/2$ at the input of a low-threshold inverter can still result in full range switches at the end of the chain. For the fast \gls{idm} characterization, such traces get removed. Note that, although the presented oscillation is visible in \gls{idm} when characterized from back to front, there are still infinitely many possibilities that cannot be detected. Even if these do not propagate further, it is important to know whether the circuit has stabilized or not. The digital simulation results are shown in \cref{fig:experimental_results}. While \gls{idm} removes the oscillation and thus does not indicate any transition at the output, \gls{cidm} correctly predicts the regeneration of the pulses. Note that we also added to the Involution Tool the capability to display the analog waveform $u_r$ that corresponds to the digital threshold crossings. This makes it possible for the user to investigate whether the pulses are ill-shaped and thus the circuit needs improvements. To summarize the results of our experiments, we highlight that the characterization procedure for \gls{idm} either requires high effort (IDM*) or may lead to modeling inaccuracies (IDM+). The \gls{cidm} clearly outperforms all other models w.r.t.\ modeling accuracy for our custom inverter chain, and is also the only model that can faithfully predict the ``de-cancellation'' of sub-threshold pulses. \begin{figure}[tb] \centering \includegraphics[width=0.98\columnwidth]{experimental_results.pdf} \caption{Recovering sub-threshold waveforms in an inverter chain using the \gls{cidm}.} \label{fig:experimental_results} \end{figure} \section{Conclusions} \label{sec:conclusion} We presented the \glsentryfull{cidm}, a generalization of the \glsentryfull{idm} that retains its faithful glitch-propagation properties. Its distinguishing properties are wider applicability, composability, easier characterization of the delay functions, and exposure of canceled pulse trains at interconnecting wires. The CIDM and our novel digital timing simulation algorithm have been developed on sound theoretical foundations, which allowed us to rigorously prove their properties. Analog and digital simulations for inverter chains were used to confirm our theoretical predictions. Despite this considerable step forward towards a faithful delay model, there is still some room for improvement, in particular, for accurately modeling the delay of multi-input gates. \input ms.bbl \end{document}
\section{Introduction}\label{sec:intro} Modified theories of gravity \cite{Nojiri:2010wj,Sotiriou:2008rp,Capozziello:2011et,Avelino:2016lpj,Lobo:2008sg} have recently received much attention, as an alternative to dark energy models \cite{Copeland:2006wr,Horndeski:1974wa,Deffayet:2011gz}, in order to explain the late-time cosmic acceleration \cite{Perlmutter:1998np,Riess:1998cb}. In fact, a popular theory extensively analyzed in the literature is $f(R)$ gravity, which generalizes the Hilbert-Einstein Lagrangian to an arbitrary function of the Ricci curvature scalar $R$. The only restriction imposed on the function $f$ is that it needs to be analytic, namely, it possesses a Taylor expansion about any point. Indeed, earlier interest in $f(R)$ gravity was motivated by inflationary scenarios \cite{Starobinsky:1980te} and it has been extremely successful in accounting for the accelerated expansion of the universe \cite{Capozziello:2002rd,Carroll:2003wy}, where the conditions to have viable cosmological models have also been derived (see \cite{Nojiri:2010wj,Sotiriou:2008rp,Capozziello:2011et} for details). It has also been shown that $f(R)$ gravity is strongly constrained by local observations \cite{Olmo:2006eh,Olmo:2005zr,Olmo:2005hc}, at the laboratory and Solar System scales, unless screening mechanisms are invoked \cite{Brax:2010kv,Brax:2012gr,Brax:2012bsa}. One may approach $f(R)$ gravity through several formalisms at a fundamental level \cite{Nojiri:2010wj,Sotiriou:2008rp,Capozziello:2011et}, namely, one may consider that the metric represents the fundamental field of the theory, and consequently obtain the gravitational field equations by varying the action with respect to the metric. However, one may also consider the so-called Palatini (metric-affine) formalism \cite{Olmo:2011uz}, where the theory possesses two fundamental fields, namely, the metric and the connection, and the action is varied with respect to both. Note that in general relativity, both metric and Palatini formalisms are equivalent, contrary to $f(R)$ gravity. This is transparent if one considers the scalar-tensor representation of $f(R)$ gravity, where the metric formalism corresponds to a Brans-Dicke type with a parameter $\omega_{\rm BD}=0$, while the Palatini formalism is equivalent to a Brans-Dicke theory with $\omega_{\rm BD}=-3/2$, so that both approaches yield different dynamics. However, a third approach exists, denoted hybrid metric-Palatini gravity \cite{Harko:2011nh}, that essentially consists of a hybrid combination of both metric and Palatini formalisms, which cures several of the problematic issues that arise in these approaches \cite{Nojiri:2010wj,Sotiriou:2008rp,Capozziello:2011et}. The linear version of hybrid metric-Palatini gravity consists of adding to the Hilbert-Einstein Lagrangian $R$ an $f({\cal R})$ term constructed {\it a la} Palatini, and it was shown that the theory can pass the Solar System observational constraints even if the scalar field is very light \cite{Harko:2011nh,Capozziello:2015lza,Harko:2018ayt,Harko:2020ibn,Capozziello:2013uya}. This implies the existence of a long-range scalar field, which is able to modify the cosmological \cite{Capozziello:2012ny,Carloni:2015bua} and galactic dynamics \cite{Capozziello:2012qt,Capozziello:2013yha}, but leaves the Solar System unaffected \cite{Capozziello:2013wq}. 
A plethora of applications exist in the literature, such as in cosmology \cite{Boehmer:2013oxa,Lima:2014aza,Lima:2015nma,Paliathanasis:2020fyp,Rosa:2021ish} and extra-dimensions \cite{Fu:2016szo,Rosa:2020uli}, stringlike configurations \cite{Harko:2020oxq,Bronnikov:2020zob}, black holes and wormholes \cite{Capozziello:2012hr,Bronnikov:2019ugl,Bronnikov:2020vgg,KordZangeneh:2020ixt,Chen:2020evr}, stellar configurations \cite{Danila:2016lqx} and test of binary pulsars \cite{Avdeev:2020jqo}, among other applications (we refer the reader to \cite{Harko:2018ayt,Harko:2020ibn} for more details). However, one may consider further generalizations of the linear hybrid metric-Palatini theory, by taking into account an $f(R,{\cal R})$ extension \cite{Tamanini:2013ltp,Koivisto:2013kwa}. Further applications have been considered to cosmological models \cite{Rosa:2017jld,Rosa:2019ejh,Luis:2021xay}, and compact objects \cite{Rosa:2018jwp,Rosa:2020uoi}. In fact, one can show that the generalized hybrid metric-Palatini theory of gravity admits a scalar-tensor representation in terms of two interacting scalar fields. In this context, it was shown that upon an appropriate choice of the interaction potential, one of the scalar fields behaves like dark energy, inducing a late-time accelerated expansion of the universe, while the other scalar field behaves like pressureless dark matter that, together with ordinary baryonic matter, dominates the intermediate phases of cosmic evolution. It has been argued that this unified description of dark energy and dark matter gives rise to viable cosmological solutions, which reproduces the main features of the evolution of the universe \cite{Sa:2020qfd,Sa:2020fvn,Sa:2021eft}. It is also interesting to note that recently a class of scalar-tensor theories has been proposed that includes non-metricity, so that it unifies the metric, Palatini and hybrid metric-Palatini gravitational actions with a non-minimal interaction \cite{Borowiec:2020lfx}. It is important to further investigate the nature of the additional scalar degrees of freedom contained in the generalized hybrid metric-Palatini gravity in the weak-field limit. In \cite{Bombacigno:2019did}, it was shown that performing an analysis at the lowest order of the parametrized post-Newtonian structure of the model, one scalar field can have long range interactions, mimicking in that way dark matter effects. In the context of gravitational waves propagation, it was shown that it is possible to have well-defined physical degrees of freedom, provided by suitable constraints on model parameters. In this work, we build on the latter work and pursue the analysis of the post-Newtonian corrections in the scalar-tensor representation of the generalized hybrid metric-Palatini gravity in the Einstein frame. Using an adequate redefinition of the scalar fields, we show that one of scalar degrees of freedom of the theory contributes to the enhancement of the gravitational attraction, while the other mediates a repulsive force. These results are consistent and weakly constrained by observations, although a model for which the scalar fields are short-ranged seems to be preferable. The work is outlined in the following manner: In Sec. \ref{sec:theory}, we present the action and field equations, and the scalar-tensor representation in the Jordan and Einstein frames, of the generalized hybrid metric-Palatini gravity. In Sec. 
\ref{sec:weak}, we consider in detail the weak field regime and analyze the perturbative field equations around a Minkowski background in the Jordan and Einstein frames, including a few particular cases of interest that must be considered separately. Finally, in Sec. \ref{sec:concl}, we discuss our results and conclude. \section{Generalized hybrid metric-Palatini gravity}\label{sec:theory} \subsection{Action and equations of motion} Consider the action $S$ of the generalized hybrid metric-Palatini gravity given by \begin{equation}\label{genaction} S=\frac{1}{2\kappa^2}\int_\Omega\sqrt{-g}f\left(R,\mathcal R\right)d^4x+\int_\Omega\sqrt{-g}\mathcal L_md^4x, \end{equation} where $\kappa^2=8\pi G$, $G$ is the gravitational constant, we use units in which the speed of light is $c=1$, $\Omega$ is the spacetime volume, $g$ is the determinant of the spacetime metric $g_{ab}$, where latin indexes $a,b$ run from $0$ to $3$, $R$ is the metric Ricci scalar, $\mathcal R=g^{ab}\mathcal{R}_{ab}$ is the Palatini Ricci scalar, where the Palatini Ricci tensor is written in terms of an independent connection $\hat{\Gamma}^c_{ab}$ as $\mathcal{R}_{ab}=\partial_c\hat\Gamma^c_{ab}-\partial_b\hat\Gamma^c_{ac}+\hat\Gamma^c_{cd}\hat\Gamma^d_{ab}-\hat\Gamma^c_{ad}\hat\Gamma^d_{cb}$, $\partial_a$ denotes a partial derivative with respect to the coordinate $x^a$, $f\left(R,\mathcal R\right)$ is a well-behaved function of $R$ and $\mathcal R$, and $\mathcal L_m$ is the matter Lagrangian minimally coupled to the metric $g_{ab}$. A variation of Eq. \eqref{genaction} with respect to the metric $g_{ab}$ yields the modified field equations \begin{eqnarray} \frac{\partial f}{\partial R}R_{ab}+\frac{\partial f}{\partial \mathcal{R}}\mathcal{R}_{ab}-\frac{1}{2}g_{ab}f\left(R,\cal{R}\right) \nonumber \\ -\left(\nabla_a\nabla_b-g_{ab}\Box\right)\frac{\partial f}{\partial R}=\kappa^2 T_{ab}\label{genfield}, \end{eqnarray} where $\nabla_a$ denotes a covariant derivative and $\Box=\nabla^a\nabla_a$ is the d'Alembert operator, both written in terms of the metric $g_{ab}$, and $T_{ab}$ is the stress-energy tensor defined in the usual manner as \begin{equation}\label{setensor} T_{ab}=-\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\mathcal L_m\right)}{\delta\left(g^{ab}\right)}. \end{equation} On the other hand, varying Eq. \eqref{genaction} with respect to the independent connection $\hat\Gamma^c_{ab}$, the relevant part of the connection equation can be written as \begin{equation}\label{eomgamma} \hat\nabla_c\left(\sqrt{-g}\frac{\partial f}{\partial\mathcal R}g^{ab}\right)=0, \end{equation} where $\hat\nabla_a$ is the covariant derivative written in terms of $\hat\Gamma^c_{ab}$. For a detailed account of the role of torsion in the derivation of the above equation, see \cite{Afonso:2017bxr}. From that result one finds that for bosonic fields, which is the case we are interested in here, torsion can be trivialized via a projective transformation. Standard algebraic manipulations then lead us to conclude that there exists a metric $\hat{g}_{ab}$ conformally related to $g_{ab}$ defined as \begin{equation}\label{defhab} \hat{g}_{ab}=\frac{\partial f}{\partial \mathcal R}g_{ab}, \end{equation} for which the connection $\hat\Gamma^c_{ab}$ is the Levi-Civita connection, i.e., we can write \begin{equation} \hat\Gamma^a_{bc}=\frac{1}{2}\hat{g}^{ad}\left(\partial_b \hat{g}_{dc}+\partial_c \hat{g}_{bd}-\partial_d \hat{g}_{bc}\right). 
\end{equation} \subsection{Scalar-tensor representation} In a wide variety of cases of interest, it is useful to express the action given in Eq. \eqref{genaction} in a dynamically equivalent scalar-tensor representation. This can be achieved via the addition of two auxiliary fields $\alpha$ and $\beta$ in the following form \begin{eqnarray}\label{auxaction} S&=&\frac{1}{2\kappa^2}\int_\Omega\sqrt{-g}\left[f\left(\alpha,\beta\right)+\frac{\partial f}{\partial\alpha}\left(R-\alpha\right)\right. \nonumber \\ &&\left.+ \frac{\partial f}{\partial\beta}\left(\mathcal{R}-\beta\right)\right]d^4x+\int_\Omega\sqrt{-g}\mathcal L_md^4x. \end{eqnarray} At this point one verifies that setting $\alpha=R$ and $\beta=\mathcal R$ recovers the original action \eqref{genaction}. Let us define two scalar fields $\varphi$ and $\psi$ by the following \begin{equation} \varphi=\frac{\partial f}{\partial \alpha}, \qquad \psi=\frac{\partial f}{\partial \beta} . \end{equation} With these definitions, the auxiliary action \eqref{auxaction} takes the form \begin{eqnarray}\label{almostaction} S&=&\frac{1}{2\kappa^2}\int_\Omega\sqrt{-g}\left[\varphi R +\psi\mathcal R-V\left(\varphi,\psi\right)\right]d^4x \nonumber \\ &&+\int_\Omega\sqrt{-g}\mathcal L_md^4x, \end{eqnarray} where the function $V\left(\varphi,\psi\right)$ assumes the role of the scalar fields interaction potential and is defined as \begin{equation} V\left(\varphi,\psi\right)=-f\left(\alpha,\beta\right)+\varphi\alpha+\psi\beta, \end{equation} and the auxiliary fields $\alpha$ and $\beta$ should be regarded as functions of $\varphi$ and $\psi$. Given the conformal relation between $\hat{g}_{ab}$ and $g_{ab}$ provided in Eq. \eqref{defhab}, which becomes $\hat{g}_{ab}=\psi g_{ab}$ according to the definitions above, one can show that $R$ and $\mathcal R$ are related via the expression \begin{equation} \mathcal R=R+\frac{3}{\psi^2}\partial^a\psi\partial_a\psi-\frac{3}{\psi}\Box\psi. \end{equation} This allows us to eliminate the dependence on $\mathcal R$ of the action given in Eq. \eqref{almostaction}, thus yielding \begin{eqnarray} S&=&\frac{1}{2\kappa^2}\int_\Omega\sqrt{-g}\left[\left(\varphi+\psi\right)R+\frac{3}{2\psi}\partial^a\psi\partial_a\psi \right. \nonumber\\ &&\left.-V\left(\varphi,\psi\right)\right]d^4x+\int_\Omega\sqrt{-g}\mathcal L_md^4x.\label{jordanaction1} \end{eqnarray} The action in Eq. \eqref{jordanaction1} has proven to be useful in numerous analyses. However, in this case we will perform an additional redefinition of the scalar fields for convenience. Consider the scalar fields $\phi$ and $\lambda$ defined as \begin{equation} \phi=\varphi+\psi,\qquad s\lambda^2=\psi, \end{equation} where $s=\pm 1$ represents the sign of $\psi$. With these definitions, Eq. \eqref{jordanaction1} becomes \begin{eqnarray}\label{jordanframe} S&=&\frac{1}{2\kappa^2}\int_\Omega\sqrt{-g}\left[\phi R+6s\partial^a\lambda\partial_a\lambda\right. \nonumber\\ &&\left.-\bar V\left(\phi,\lambda\right)\right]d^4x+\int_\Omega\sqrt{-g}\mathcal L_md^4x, \end{eqnarray} where $\bar V$ is a new potential written in terms of the scalar fields $\phi$ and $\lambda$. The action in Eq. \eqref{jordanframe} describes the scalar-tensor representation of the theory in the Jordan frame. The weak-field phenomenology of the theory in this frame has already been explored in \cite{Bombacigno:2019did}.
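As a simple illustration of this construction (an example we add here for concreteness only; it is not used in the remainder of the paper), consider the choice $f\left(R,\mathcal R\right)=R+\mathcal R^2/\left(2\mu^2\right)$, with $\mu$ a constant. In this case one finds $\varphi=\partial f/\partial\alpha=1$ and $\psi=\partial f/\partial\beta=\beta/\mu^2$, so that $\beta=\mu^2\psi$, the dependence on $\alpha$ cancels, and
\begin{equation}
V\left(\varphi,\psi\right)=-f\left(\alpha,\beta\right)+\varphi\alpha+\psi\beta=\frac{\mu^2\psi^2}{2},
\end{equation}
which, in terms of the variables of Eq. \eqref{jordanframe}, corresponds to $\bar V\left(\phi,\lambda\right)=\mu^2\left(\phi-1\right)^2/2=\mu^2\lambda^4/2$. We shall now perform a change of frame to the Einstein frame to carry out the analysis in those variables.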
\subsection{Equations in the Einstein frame} To switch from the Jordan frame to the Einstein frame, we perform a conformal transformation in the metric of the form $\tilde g_{ab}=\phi g_{ab}$. Consequently, the action in Eq. \eqref{jordanframe} takes the form \begin{eqnarray} S&=& \frac{1}{2\kappa^2}\int_\Omega\sqrt{-\tilde g}\left[\tilde R+\frac{6s}{\phi}\tilde \nabla_a\lambda\tilde\nabla^a\lambda-\frac{3}{2\phi^2}\tilde \nabla_a\phi\tilde\nabla^a\phi \right. \nonumber \\ &&\left. -\frac{\bar V\left(\phi,\lambda\right)}{\phi^2}\right]d^4x+\int_\Omega\sqrt{-g}\mathcal L_md^4x.\label{almosteinstein} \end{eqnarray} To finalize, we perform one further redefinition of the scalar fields as \begin{equation}\label{defscatilde} \tilde \phi = \sqrt{\frac{3}{2}}\frac{\log \phi}{\kappa},\quad\quad\tilde\lambda=\frac{\sqrt{6}}{\kappa}\lambda. \end{equation} These redefinitions allow us to write the action in the final form \begin{eqnarray}\label{finalaction} S&=&\int_\Omega\sqrt{-\tilde g}\left[\frac{\tilde R}{2\kappa^2}+\frac{s}{2}e^{-\sqrt{\frac{2}{3}}\kappa\tilde\phi}\tilde \nabla_a\tilde\lambda\tilde\nabla^a\tilde\lambda-\frac{1}{2}\tilde \nabla_a\tilde\phi\tilde\nabla^a\tilde\phi \right. \nonumber\\ &&\left. -\tilde W\left(\tilde\phi,\tilde\lambda\right)\right]d^4x+\int_\Omega\sqrt{-g}\mathcal L_md^4x, \end{eqnarray} with the new potential $\tilde W\left(\tilde\phi,\tilde\lambda\right)$ defined as \begin{equation}\label{defW} \tilde W\left(\tilde\phi,\tilde\lambda\right)=\frac{\bar V\left(\phi,\lambda\right)}{2\kappa^2}e^{-2\sqrt{\frac{2}{3}}\kappa\tilde\phi}, \end{equation} where $\phi$ and $\lambda$ can be written in terms of $\tilde\phi$ and $\tilde\lambda$ via the definitions in Eq. \eqref{defscatilde}. From this point onward, all variables defined in the Einstein frame will be labelled with a tilde. For consistency, we will also denote $\tilde V\left(\tilde\phi,\tilde\lambda\right)\equiv \bar V\left(\phi(\tilde\phi),\lambda(\tilde\lambda)\right)$. The action in Eq. \eqref{finalaction} depends on three independent variables, namely the metric $\tilde g_{ab}$ and the scalar fields $\tilde\phi$ and $\tilde\lambda$. Performing a variation of Eq. \eqref{finalaction} with respect to these variables yields, respectively, \begin{eqnarray}\label{eommetric} \tilde G_{ab} &+& \frac{1}{2}\tilde g_{ab}\left[e^{-2\sqrt{\frac{2}{3}}\kappa\tilde \phi}\tilde V\left(\tilde \phi,\tilde \lambda\right)\right.
\nonumber \\ &+&\left.\kappa^2\left(\partial_c\tilde \phi\partial^c\tilde \phi-e^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi}s\partial_c\tilde \lambda\partial^c\tilde \lambda\right)\right] -\kappa^2\partial_a\tilde \phi\partial_b\tilde \phi \nonumber\\ &+& \kappa^2se^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi}\partial_a\tilde \lambda\partial_b\tilde \lambda =\kappa^2e^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi}T_{ab}, \end{eqnarray} \begin{eqnarray}\label{eomtildephi} \Box\tilde \phi-\frac{1}{2\kappa^2}e^{-2\sqrt{\frac{2}{3}}\kappa\tilde \phi}\left(\tilde V_{\tilde\phi}-2\sqrt{\frac{2}{3}}\kappa\, \tilde V\left(\tilde \phi,\tilde \lambda\right)\right) \nonumber\\ -\frac{1}{\sqrt{6}}e^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi}s\kappa\partial_a\tilde \lambda\partial^a\tilde \lambda=\sqrt{\frac{2}{3}}\kappa T, \end{eqnarray} \begin{equation}\label{eomtildelambda} \Box\tilde \lambda-\sqrt{\frac{2}{3}}\kappa\partial_a\tilde \phi\partial^a\tilde \lambda+\frac{s}{2\kappa^2}e^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi}\ \tilde V_{\tilde \lambda}=0, \end{equation} where the subscripts $\tilde \phi$ and $\tilde \lambda$ denote partial derivatives with respect to these scalar fields, and $T=\tilde g^{ab}T_{ab}$ is the trace of the stress-energy tensor. From the above equations, it is worth noting that the scalar field $\tilde\phi$ is sourced by both $\tilde\lambda$ and the trace $T$ of the matter stress-energy tensor, whereas $\tilde\lambda$ only couples to itself and to $\tilde\phi$. According to this, $\tilde \lambda$ can be regarded as a kind of dark matter fluid, which gravitates but does not directly feel the presence of matter, in interaction with the dark energy field $\tilde \phi$. This structure of the field equations suggests potential applications of this type of model to scenarios with interacting dark sectors. \section{The weak field regime}\label{sec:weak} \subsection{Perturbative equations} Let us now analyze the effects of the scalar fields $\tilde \phi$ and $\tilde \lambda$ in a slightly curved space. To do so, we shall consider a system of local coordinates in which the metric can be written in terms of a Minkowskian spacetime $\tilde \eta_{ab}$ plus a small perturbation $\tilde h_{ab}$ \begin{equation}\label{gabpert} \tilde g_{ab}\approx\tilde \eta_{ab}+\tilde h_{ab}, \end{equation} with $|\tilde h_{ab}|\ll 1$. In the same way, the scalar fields will be written as \begin{equation}\label{fieldpert} \tilde \phi = \tilde \phi_0+\delta \tilde \phi, \qquad \tilde \lambda = \tilde \lambda_0+\delta\tilde \lambda, \end{equation} where $\tilde \phi_0$ and $\tilde \lambda_0$ represent the (approximately constant) background values and $\delta\tilde \phi$ and $\delta\tilde \lambda$ are local fluctuations of order $\mathcal O\left(\tilde h_{ab}\right)$. Note that these fluctuations should vanish outside the region where the metric is described by Eq. \eqref{gabpert}. More relevant, perhaps, is the fact that we have freedom to set $\tilde{\phi}_0$ to zero without loss of generality. This is so because we can choose the constant $\kappa^2$ in Eq. \eqref{jordanframe} such that $\phi_0=1$ at our cosmic reference time $t_0$, thus implying that $\tilde{\phi}_0=0$ according to Eq. \eqref{defscatilde}. For generality, however, we will keep this quantity arbitrary until it becomes convenient to fix its reference value. In the weak-field regime, derivatives of the background fields are negligible, as the evolution of the scalar fields is very slow due to the large difference between cosmological and solar system scales.
Consequently, curvature terms and first order derivatives of the background metric can be discarded. Time derivatives shall also be neglected because the motion of the sources is expected to be non-relativistic, and thus the D'Alembert operator $\Box$ effectively becomes the Laplacian operator $\nabla^2$. Furthermore, we shall assume that matter perturbations are described by a pressureless perfect fluid, i.e., we write the perturbed stress-energy tensor $\delta T_{ab}$ as \begin{equation} \delta T^{ab}=\rho u^au^b, \end{equation} where $\rho$ is the energy density and $u^a$ is the 4-velocity of the fluid elements. This implies that the only non-vanishing component of $\delta T_{ab}$ is $\delta T_{00}=\rho$, with the space components $\delta T_{ij}=0$ vanishing, where the indexes $i, j$ run from $1$ to $3$. Also, the trace becomes $\delta T=-\rho$. Fixing the gauge as \begin{equation} \partial_b\left(\tilde h_a^b-\frac{1}{2}\delta_a^b\tilde h\right)=0 , \end{equation} the resultant equations of motion for the perturbed metric $\tilde h_{ab}$ and the scalar field fluctuations $\delta\tilde \phi$ and $\delta \tilde \lambda$ become, \begin{eqnarray}\label{eqmetricpert} &&-\frac{\nabla^2 \tilde h_{ab}}{2}=\kappa^2e^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi_0}\left(\delta T_{ab}-\tilde \eta_{ab}\frac{\delta T}{2}\right) \nonumber \\ &&\qquad - \frac{1}{2}\tilde V e^{-2\sqrt{\frac{2}{3}}\kappa\tilde \phi_0}\left(\tilde h_{ab}-\tilde \eta_{ab}\frac{\tilde h}{2}\right) +\frac{ e^{-2\sqrt{\frac{2}{3}}\kappa\tilde \phi_0}}{6}\tilde \eta_{ab} \times \nonumber \\ && \qquad \qquad \times \left[\left(3\tilde V_{\tilde \phi}-2\sqrt{6}\kappa \tilde V\right)\delta \tilde \phi+3\tilde V_{\tilde \lambda} \delta \tilde \lambda\right], \end{eqnarray} \begin{equation}\label{eqphipert} \left(\nabla^2-m_\phi^2\right)\delta\tilde \phi=a_{\phi}\delta\tilde \lambda-\sqrt{\frac{2}{3}}\kappa\rho , \end{equation} \begin{equation}\label{eqlambdapert} \left(\nabla^2-m_\lambda^2\right)\delta\tilde \lambda=a_{\lambda}\delta\tilde \phi , \end{equation} where $m_\phi$ and $m_\lambda$ are the masses of the scalar fields $\delta\tilde \phi$ and $\delta\tilde \lambda$, respectively, and $a_{\phi}$ and $a_{\lambda}$ are the coupling constants between $\tilde \phi$ and $\tilde \lambda$. These quantities can be written in terms of the potential $\tilde W$ and its derivatives as \begin{equation} m_\phi^2 = \frac{1}{2\kappa^2}\left(e^{-2\sqrt{\frac{2}{3}}\kappa\tilde \phi}\tilde V\right)_{\tilde \phi\tilde \phi} \ , \qquad m_\lambda^2=\frac{1}{2\kappa^2}\tilde V_{\tilde \lambda\tilde \lambda}, \end{equation} \begin{equation}\label{eq:aphi} a_{\phi} = \frac{1}{2\kappa^2}\left(e^{-2\sqrt{\frac{2}{3}}\kappa\tilde \phi}\tilde V\right)_{\tilde \phi\tilde \lambda} \end{equation} \begin{equation} a_{\lambda} = -\frac{{s}}{2\kappa^2}\left(e^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi}\tilde V\right)_{\tilde \lambda\tilde \phi} \,, \end{equation} respectively. \subsection{Analysis of the general case} \subsubsection{Perturbation equations for the scalar fields} Equations \eqref{eqphipert} and \eqref{eqlambdapert} constitute a system of two coupled differential equations for $\delta\tilde \phi$ and $\delta\tilde \lambda$. To simplify the analysis, it is useful to perform a new change of variables in order to decouple this system, as was done in \cite{Bombacigno:2019did}. To do so, we write the system of Eqs.
\eqref{eqphipert} and \eqref{eqlambdapert} in the following matrix form \begin{equation}\label{mateq1} \left(I_{2\times 2}\nabla^2-A\right)\Phi=\mathcal T, \end{equation} where $I_{2\times 2}$ is the identity matrix in two dimensions and we define the matrix $A$ by \begin{equation}\label{defA} A=\begin{pmatrix}m_\lambda^2 & a_{\lambda} \\ a_{\phi} & m_\phi^2\end{pmatrix}, \end{equation} and the vectors $\Phi$ and $\mathcal T$ as \begin{equation} \Phi=\begin{pmatrix}\delta\tilde \lambda \\ \delta\tilde \phi\end{pmatrix}, \qquad \mathcal T=-\sqrt{\frac{2}{3}}\kappa\begin{pmatrix}\rho \\ 0\end{pmatrix}, \end{equation} respectively. A decoupled system of equations can be obtained via the diagonalization of Eq. \eqref{mateq1}. Let $P$ be the matrix of the eigenvectors of $A$ and $P^{-1}$ its inverse. These two matrices take the forms \begin{equation}\label{defP} P=\begin{pmatrix}p_{11} & p_{12} \\ p_{21} & p_{22}\end{pmatrix}=\begin{pmatrix}-\frac{M_+^2-m_\lambda^2}{a_\phi} & \frac{M_+^2-m_\phi^2}{a_\phi} \\ 1 & 1\end{pmatrix}, \end{equation} \begin{equation}\label{defPinv} P^{-1}=\begin{pmatrix}\bar p_{11} & \bar p_{12} \\ \bar p_{21} & \bar p_{22}\end{pmatrix}=\begin{pmatrix}-\frac{a_\phi}{M^2_0} & \frac{M_+^2-m_\phi^2}{M^2_0} \\ \frac{a_\phi}{M^2_0} & \frac{M_+^2-m_\lambda^2}{M^2_0}\end{pmatrix}, \end{equation} where we have defined the auxiliary constants $M_\pm^2$ and $M_0$ (with units of mass) as the combinations \begin{equation}\label{eq:mpm} M_\pm^2=\frac{1}{2}\left[m_\lambda^2+m_\phi^2 \pm M_0^2\right], \end{equation} \begin{equation}\label{eq:M0} M_0^2=\sqrt{4 a_{\lambda}a_{\phi}+\left(m_\lambda^2-m_\phi^2\right)^2}. \end{equation} With the forms of $P$ and $P^{-1}$ defined above, the matrix $A_D=P^{-1}AP$ is diagonal. Let us also define the new scalar field vector as $\Phi_D=P^{-1}\Phi$ and the new matter vector as $\mathcal T_D=P^{-1}\mathcal T$. As a result, Eq. \eqref{mateq1} becomes \begin{equation} \left(I_{2\times 2}\nabla^2-A_D\right)\Phi_D=\mathcal T_D. \end{equation} The decoupled version of the system of Eqs. \eqref{eqphipert} and \eqref{eqlambdapert} becomes then \begin{equation}\label{eqphidec} \left(\nabla^2-M_\phi^2\right)\delta\phi_D=-\bar p_{21}\sqrt{\frac{2}{3}}\kappa\rho, \end{equation} \begin{equation}\label{eqlambdadec} \left(\nabla^2-M_\lambda^2\right)\delta\lambda_D=-\bar p_{11}\sqrt{\frac{2}{3}}\kappa\rho, \end{equation} where $\delta\phi_D$ and $\delta\lambda_D$ are the decoupled scalar fields, and $M_\phi$ and $M_\lambda$ are their respective masses. The new scalar fields and masses can be written in terms of the old scalar fields $\delta\phi$ and $\delta\lambda$, as well as their masses $m_\phi$ and $m_\lambda$, and their coupling constants $a_{\phi}$ and $a_{\lambda}$, as well as the previously defined constants $M_\pm^2$ and $M^2_0$ as \begin{equation} \delta\phi_D=\frac{1}{M_0^2}\left[\left(M_+^2-m_\lambda^2\right)\delta\tilde \phi+a_{\phi}\delta\tilde \lambda\right], \end{equation} \begin{equation} \delta\lambda_D=\frac{1}{M_0^2}\left[\left(M_+^2-m_\phi^2\right)\delta\tilde \phi-a_{\phi}\delta\tilde \lambda\right], \end{equation} \begin{equation}\label{sfDmasses} M_\phi^2=M_-^2, \qquad M_\lambda^2=M_+^2. \end{equation} We are now able to solve Eqs. \eqref{eqphidec} and \eqref{eqlambdadec} with the usual Laplace transform methods, i.e., we write both $\delta\lambda_D$ and $\delta\phi_D$ in terms of their Laplace transforms $\delta\tilde\lambda_D$ and $\delta\tilde\phi_D$, respectively, insert these forms into Eqs. 
\eqref{eqphidec} and \eqref{eqlambdadec}, manipulate the results in the momentum space, and invert the Laplace transforms using a convolution. In the end, we arrive at the following solutions for $\delta\lambda_D$ and $\delta\phi_D$: \begin{equation}\label{soldlambdaD} \delta\lambda_D\left(x\right)=\frac{\kappa}{4\pi}\bar p_{11}\sqrt{\frac{2}{3}}\int\frac{\rho\left(x'\right)}{|x-x'|}e^{-M_\lambda|x-x'|}d^3x', \end{equation} \begin{equation}\label{soldphiD} \delta\phi_D\left(x\right)=\frac{\kappa}{4\pi}\bar p_{21}\sqrt{\frac{2}{3}}\int\frac{\rho\left(x'\right)}{|x-x'|}e^{-M_\phi|x-x'|}d^3x'. \end{equation} \subsubsection{Perturbation equations for the metric} Let us now turn to the metric equations given in Eq. \eqref{eqmetricpert}. The second term on the RHS is proportional to the potential $\tilde W$, which is assumed to be of the order of the cosmological constant. In the weak-field, slow-motion regime used for solar system tests, the contributions of the potential are thus negligible when compared to the local sources given by the stress-energy tensor of the fluid contribution. Thus, we shall neglect this term. Finally, the last term on the RHS depends on products between potential terms and perturbations in the scalar fields. In the Einstein frame, these terms are of the order of magnitude of those coming from scalar fields in the matter sector, which are also negligible when compared to the dominant fluid terms. Therefore, these terms shall also be discarded (which justifies the absence of any dependence on the sign $s$ on the right-hand side of the metric perturbation equations). Consequently, the two independent equations for the metric take the forms \begin{equation} \nabla^2\tilde h_{00}=-\kappa^2 e^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi_0}\rho, \end{equation} \begin{equation} \nabla^2\tilde h_{ij}=-\delta_{ij}\kappa^2 e^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi_0}\rho, \end{equation} where $\delta_{ij}$ is the Kronecker delta. The above equations can be integrated directly to yield the following solutions for $\tilde h_{00}$ and $\tilde h_{ij}$ \begin{equation}\label{solh00tilde} \tilde h_{00}=\frac{\kappa^2}{4\pi}e^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi_0}\int\frac{\rho\left(x'\right)}{|x-x'|}d^3x', \end{equation} \begin{equation}\label{solhijtilde} \tilde h_{ij}=\delta_{ij}\frac{\kappa^2}{4\pi}e^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi_0}\int\frac{\rho\left(x'\right)}{|x-x'|}d^3x'. \end{equation} From Eqs. \eqref{solh00tilde} and \eqref{solhijtilde} one can extract the PPN parameters of the theory in the Einstein frame. To do so, recall that we have considered a system of units where $\kappa^2=8\pi G$ and that the integral in these equations represents the Newtonian potential. This way, we can write \begin{eqnarray} \tilde h_{00} &=& 2\tilde G_{\text{EF}}U_N\left(x\right), \label{ppn1} \\ \tilde h_{ij} &=& 2\delta_{ij}\tilde G_{\text{EF}}\tilde \gamma U_N\left(x\right), \label{ppn2} \end{eqnarray} where $\tilde G_{\text{EF}}$ is the effective gravitational constant in the Einstein frame, $\tilde \gamma$ is a PPN parameter, $U_N\left(x\right)$ is the Newtonian potential written in terms of the distance to the source $x$. Thus, we verify that \begin{equation}\label{defgeff} \tilde \gamma=1,\qquad \tilde G_{\text{EF}}=Ge^{-\sqrt{\frac{2}{3}}\kappa\tilde \phi_0}=G/\phi_0. 
\end{equation} Consequently, we observe that in the Einstein frame, the parameter $\tilde \gamma$ is the same as in GR and that the effective gravitational constant $\tilde G_{\text{EF}}$ is a simple rescaling of the Newtonian constant $G$ depending on the background field $ \phi_0$. \subsubsection{Recovering the results in the Jordan frame}\label{sec:backframe} Let us now perform the inverse conformal transformation back to the Jordan frame in such a way that we can compare our results to the ones previously obtained in \cite{Bombacigno:2019did}. To do so, let us start by solving the integrals in Eqs. \eqref{soldlambdaD}, \eqref{soldphiD}, \eqref{solh00tilde} and \eqref{solhijtilde} far from a spherically symmetric source. The solutions take the forms \begin{equation} \delta\lambda_D\left(r\right)=\frac{\kappa}{4\pi}\bar p_{11}\sqrt{\frac{2}{3}}\frac{M_\odot}{r}e^{-M_\lambda r}, \end{equation} \begin{equation} \delta\phi_D\left(r\right)=\frac{\kappa}{4\pi}\bar p_{21}\sqrt{\frac{2}{3}}\frac{M_\odot}{r}e^{-M_\phi r}, \end{equation} \begin{equation} \tilde h_{00}\left(r\right)=\frac{2\tilde G_{\text{EF}}M_\odot}{r}, \end{equation} \begin{equation} \tilde h_{ij}\left(r\right)=\delta_{ij}\frac{2\tilde G_{\text{EF}}M_\odot}{r}, \end{equation} where $M_\odot$ is the mass of the source and $r$ is the radial distance from the source. Note that to perform the inverse conformal transformation, we only care about the scalar field $\tilde \phi$, as the scalar field $\tilde \lambda$ was not involved in the transformation. Thus, let us use $\Phi=P\Phi_D$ to recover $\delta\tilde \phi$ in terms of $\delta\lambda_D$ and $\delta\phi_D$ as $\delta\tilde \phi=p_{21}\delta\lambda_D+p_{22}\delta\phi_D$, or more explicitly \begin{equation} \delta\tilde\phi\left(r\right)=\sqrt{\frac{2}{3}}\frac{\kappa M_\odot}{4\pi r}\left(p_{21}\bar p_{11}e^{-M_\lambda r}+p_{22}\bar p_{21}e^{-M_\phi r}\right). \end{equation} To recover the solutions in the Jordan frame, we need to find the scalar field $\phi$ used for the conformal transformation. This field is related to the field $\tilde\phi$ as written in Eq. \eqref{defscatilde}. Inserting the relations $\tilde\phi=\tilde\phi_0+\delta\tilde\phi$ and $\phi=\phi_0+\delta\phi$ and keeping only the terms up to first order, we verify that \begin{equation}\label{phisrel} \delta\tilde\phi=\sqrt{\frac{3}{2}}\frac{\delta\phi}{\phi_0\kappa}, \end{equation} which allows us to write $\delta\phi$ in the form \begin{eqnarray} \label{eq:deltaphi} \delta\phi\left(r\right)&=&\frac{4 G M_\odot}{3r}\left(p_{21}\bar p_{11}e^{-M_\lambda r}+p_{22}\bar p_{21}e^{-M_\phi r}\right) \nonumber \\ &=& \frac{4 G M_\odot}{3r}\frac{a_\phi}{M^2_0}\left(e^{-M_\phi r}-e^{-M_\lambda r}\right) . \end{eqnarray} We are now in conditions to perform the inverse conformal transformation. At this point one should recall that, by a convenient choice of units, we had freedom to set $\phi_0=1$ at the reference cosmic time $t_0$, in such a way that the perturbation of $\tilde g_{ab}=\phi g_{ab}$ yields a consistent zeroth order Minkowskian limit in the coordinates chosen. Accordingly, using the expansions $\tilde g_{ab}=\tilde\eta_{ab}+\tilde h_{ab}$ and $g_{ab}=\eta_{ab}+ h_{ab}$ for the metrics, $\phi=1+\delta\phi$ for the scalar field, and keeping only the first order terms, we obtain \begin{equation}\label{habsrel} \tilde h_{ab}=h_{ab}+\eta_{ab}\delta\phi. 
\end{equation} This result allows us to write the solutions for the perturbations $h_{00}$ and $h_{ij}$ in the forms \begin{equation} h_{00}\left(r\right)=\frac{2GM_\odot}{r}\left[1+\frac{2}{3}\frac{a_\phi}{M^2_0}\left(e^{-M_\phi r}-e^{-M_\lambda r}\right)\right], \end{equation} \begin{equation} h_{ij}\left(r\right)=\delta_{ij}\frac{2GM_\odot}{r}\left[1-\frac{2}{3}\frac{a_\phi}{M^2_0}\left(e^{-M_\phi r}-e^{-M_\lambda r}\right)\right], \end{equation} where we have used Eqs. \eqref{defgeff} and \eqref{phisrel} to write $\tilde G_{\text{eff}}=G/\phi_0 $, from which we can extract the effective gravitational constant in the Jordan frame $G_{\text{eff}}$ and the $\gamma$ PPN parameter given by \begin{equation}\label{Geff1} G_{\text{eff}}=G\left[1+\frac{2}{3}\frac{a_\phi}{M^2_0}\left(e^{-M_\phi r}-e^{-M_\lambda r}\right)\right], \end{equation} \begin{equation}\label{gammaeff1} \gamma=\frac{3M^2_0-2a_\phi\left(e^{-M_\phi r}-e^{-M_\lambda r}\right)}{3M^2_0+2a_\phi \left(e^{-M_\phi r}-e^{-M_\lambda r}\right)} , \end{equation} respectively. At this stage, we confirm that the exponential dependences of $h_{00}$, $h_{ij}$ and $\delta\phi$ presented here are consistent with those found in \cite{Bombacigno:2019did} [see their equations (50-52) and (55-56)], though there is no transparent correspondence between parameters due to the various redefinitions and assumptions involved. In any case, assuming that the effective potentials are at a minimum, our definition of $M_\phi$ coincides with theirs, and our $M_\lambda$ corresponds to their $M_\xi$. From the above results one readily sees that the scalar degrees of freedom contribute in a mixed manner to $G_{\text{eff}}$, with a piece that enhances the gravitational attraction (proportional to $e^{-M_\phi r}$) and another that mediates a repulsive force (proportional to $e^{-M_\lambda r}$). It is tempting to argue that the existence of this repulsive force could have been guessed already from the action in Eq. \eqref{jordanaction1}, where the kinetic term associated to $\psi$ appears with a {\it positive} sign. Indeed, the transition to the representation in terms of $\lambda$ took care of this fact by specifying the possibility of splitting the domain of $\psi$ in two sectors with different signs, in such a way that for $s=+1$ the action in Eq. \eqref{jordanframe} can be seen as representing a ghost scalar $\lambda$ while for $s=-1$ it contributes with a positive kinetic energy. In this latter case, one should make sure that the combination $\phi=\varphi+\psi$ (with $\psi<0$) does not change sign in non-perturbative scenarios in order to avoid breakdowns in the evolution of initial data. However, it should also be noted that the sign $s$ enters in the expressions for $G_{\text{eff}}$ and $\gamma$ in a non-linear manner, via the definitions of $M_\phi^2$ and $M_\lambda^2$. The repulsive character of the $e^{-M_\lambda r}$ term, therefore, cannot be directly related to the sign of $s$ but rather to some nontrivial combination of the two dynamical scalar degrees of freedom present in the theory. Compatibility with observations requires that the radial dependence of $G_{\text{eff}}$ be negligible within the scales accessible to observations. This can be achieved in different ways. One of them is by making the amplitude $a_\phi/M^2_0$ sufficiently small. Another possibility would be to have very massive scalar modes, such that $M_\phi r$ and $M_\lambda r$ become much bigger than unity, leading to vanishing exponentials. 
Both possibilities would automatically recover the predictions of GR. A third possibility is to have very light fields, with $M_\phi r$ and $M_\lambda r$ approaching zero in the scales of interest. Assuming that these products are small and expanding the exponentials as $e^{-M r}\approx 1-Mr +O(M^2 r^2)$, we find that \begin{equation}\label{eq:gammaeffapp1} \gamma\approx 1-\frac{4a_\phi}{3M^2_0}\left(M_\lambda-M_\phi\right)L \ , \end{equation} with $L$ being a scale of the order of a few astronomical units. Given that current data set $|\gamma-1|<10^{-5}$, it follows that $|\frac{4a_\phi}{3M^2_0}\left(M_\lambda-M_\phi\right)L|<10^{-5}$, which sets a weak constraint on the model parameters. The limit in which $M_\phi^2$ becomes degenerate with $M_\lambda^2$ deserves some attention because it coincides with $M_0^2\to 0$. Taking the limit $M_0^2\to 0$ in Eqs. \eqref{Geff1} and \eqref{gammaeff1}, one finds that this limit is smooth, leading to \begin{equation}\label{Geff2} G_{\text{eff}}=G\phi_0\left[1+\frac{\sqrt{2}}{3}\frac{a_\phi}{M}re^{-\frac{M r}{\sqrt{2}}}\right], \end{equation} \begin{equation}\label{gammaeff2} \gamma=\frac{3M-\sqrt{2}a_\phi r e^{-\frac{M r}{\sqrt{2}}}}{3M+\sqrt{2}a_\phi r e^{-\frac{M r}{\sqrt{2}}}}. \end{equation} As we can see, in this limit the repulsive component in the effective Newton's constant disappears, and compatibility with experiments still requires a short range field or a small amplitude $a_\phi/M$, where we have defined $M^2\equiv m_\lambda^2+m_\phi^2$. An expansion similar to Eq. \eqref{eq:gammaeffapp1} then leads to \begin{equation}\label{eq:gammaeffapp2} \gamma\approx 1-\frac{2\sqrt{2}a_\phi}{3M}Le^{-\frac{M L}{\sqrt{2}}} \ , \end{equation} which also sets a weak constraint on the parameters. It should be noted that the particular case $a_\phi=0$ must be analyzed independently and cannot be guessed from these general formulas given above. We also find troubles when the matrix $A$ in Eq. \eqref{defA} becomes degenerate, which forces a reconsideration of the method used to solve the equations. These two particular cases will be studied next. \subsection{Non-diagonalizable matrix $A$ \label{sec:C}} The approach presented in the previous section can only be applied in the general case where the matrix A, given by Eq. \eqref{defA}, is diagonalizable. In fact, if one considers the particular case for which the determinant of the matrix $A$ vanishes, it follows that $P$ in Eq.\eqref{defP} becomes a matrix of rank 1 and ceases to be invertible. As a consequence, $A$ will only have one eigenvalue with algebraic degeneracy of $2$ and only one eigenvector, which confirms that in this case it is not diagonalizable anymore and a different method is necessary to solve the system of equations. The condition that the determinant of the matrix $A$ vanishes is equivalent to the relation $m_\phi^2 m_\lambda^2-a_\phi a_\lambda=0$ and forces a separate analysis of that particular case. Performing a Fourier transform of Eqs. \eqref{eqphipert} and \eqref{eqlambdapert} we find the following relation \begin{equation} \delta\hat{\tilde \phi}=\kappa\sqrt{\frac{2}{3}}\frac{(k^2+m_\lambda^2)}{k^2(k^2+M^2)}\hat\rho \ , \end{equation} where we used $M^2\equiv m_\phi^2+m_\lambda^2$ (already defined when considering the limit $M_0^2\to 0$), and a hat denotes a Fourier transform. 
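Before inverting this expression, it is convenient to make its pole structure explicit (an intermediate step we spell out here for convenience): the momentum-space factor splits into a massless and a massive contribution as
\begin{equation}
\frac{k^2+m_\lambda^2}{k^2\left(k^2+M^2\right)}=\frac{1}{M^2}\left(\frac{m_\lambda^2}{k^2}+\frac{m_\phi^2}{k^2+M^2}\right),
\end{equation}
so that the standard kernels $1/k^2\to 1/\left(4\pi r\right)$ and $1/\left(k^2+M^2\right)\to e^{-Mr}/\left(4\pi r\right)$ can be applied term by term.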
Considering that $\rho(\vec{x})$ represents a delta-like distribution, $\rho(\vec{x}')=M_\odot \delta^{(3)}(\vec{x}')$, to simplify the integrations, we find that \begin{equation} \delta\tilde\phi(r)=\kappa \sqrt{\frac{2}{3}}\frac{M_\odot}{4\pi M^2 r}\left(m_\lambda^2+m_\phi^2 e^{-M r}\right) \ . \end{equation} On the other hand, the expression for $\delta\hat {\tilde\lambda}$ becomes \begin{equation} \delta\hat{\tilde \lambda}=-\frac{a_\lambda \delta\hat{\tilde \phi}}{(k^2+m_\lambda^2)} \ , \end{equation} which after some algebraic manipulations yields \begin{equation} \delta\tilde\lambda(r)=-a_\lambda\sqrt{\frac{2}{3}}\frac{\kappa M_\odot}{4\pi M^2 r}\left(1-e^{-M r}\right) \ . \end{equation} Consequently, inverting the conformal transformation and using Eq. \eqref{habsrel}, the metric perturbations can be found to be \begin{eqnarray} h_{00}&=&\frac{2GM_\odot}{r}\left[1+\frac{2}{3M^2}\left(m_\lambda^2+m_\phi^2e^{-M r}\right)\right] \,, \\ h_{ij}&=&\frac{2GM_\odot}{r}\left[1-\frac{2}{3M^2}\left(m_\lambda^2+m_\phi^2e^{-M r}\right)\right]\delta_{ij} \ . \end{eqnarray} Using the definitions in Eqs. \eqref{ppn1} and \eqref{ppn2}, one can again extract both the effective gravitational constant $G_{\text{eff}}$ and the $\gamma$ PPN parameter, which in this case are given by \begin{eqnarray} G_{\text{eff}}&=&G\left[1+\frac{2}{3M^2}\left(m_\lambda^2+m_\phi^2e^{-M r}\right)\right] \,, \\ \gamma_{\text{eff}}&=&\frac{3M^2-2\left(m_\lambda^2+m_\phi^2e^{-M r}\right)}{3M^2+2\left(m_\lambda^2+m_\phi^2e^{-M r}\right)} \,, \end{eqnarray} respectively. The expression for $G_{\text{eff}}$ indicates that the repulsive degree of freedom mediated by a combination of the two scalar fields in the general case is no longer present when the determinant of $A$ vanishes. The net effect on $G_{\text{eff}}$ is a constant shift of its bare value plus a standard (attractive) Yukawa-type correction. In a sense, we could say that one of the resulting scalar degrees of freedom has infinite range (vanishing mass) while the other has a range $1/M$. This is consistent with the fact that for this choice of parameters $M_0^2$ becomes $M_0^2=M^2$ and leads to $M_+^2=M^2$ and $M_-^2=0$. Interestingly, the amplitude of these corrections no longer depends on $a_\phi$ but is entirely determined by the diagonal elements of the matrix $A$. There are several cases of interest in the resulting expression for $\gamma_{\rm eff}$. For short range fields, $Mr\gg 1$ on laboratory and solar system scales, the exponential term rapidly vanishes and we get \begin{equation} \gamma_{\text{eff}}\approx \frac{3M^2-2m_\lambda^2}{3M^2+2m_\lambda^2}=1-\frac{4m_\lambda^2}{5m_\lambda^2+3m_\phi^2} \ . \end{equation} In order to have compatibility with current observations, we must have $|\gamma-1|<10^{-5}$, which implies that $m_\phi^2\ge 10^5 m_\lambda^2$. In the opposite extreme, we have the case of long range fields, $0<M r\ll 1$ over astrophysical scales, which leads to \begin{equation} \gamma_{\text{eff}}\approx \frac{1}{5} \ , \end{equation} which is in clear conflict with observations. The case $m_\lambda^2=m_\phi^2$ leads to important simplifications, \begin{equation} \gamma_{\text{eff}}=\frac{2-e^{-\sqrt{2}m_\phi r}}{4+e^{-\sqrt{2}m_\phi r}} \ , \end{equation} but does not improve the viability of the theory, which remains in clear conflict with observations. 
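As a quick cross-check of the limiting behaviours quoted above (an illustrative aside rather than part of the original derivation), the expression for $\gamma_{\text{eff}}$ in this degenerate case can be evaluated symbolically. The following sketch assumes a standard \texttt{sympy} installation, and the symbol names are of our own choosing:
\begin{verbatim}
import sympy as sp

m_phi, m_lam, r = sp.symbols('m_phi m_lambda r', positive=True)
M2 = m_phi**2 + m_lam**2                      # M^2 = m_phi^2 + m_lambda^2
E = sp.exp(-sp.sqrt(M2)*r)                    # shorthand for e^{-M r}
gamma = (3*M2 - 2*(m_lam**2 + m_phi**2*E)) / (3*M2 + 2*(m_lam**2 + m_phi**2*E))

# Long-range regime (M r -> 0, i.e. E -> 1): should give 1/5.
print(sp.simplify(gamma.subs(r, 0)))

# Short-range regime (M r >> 1, i.e. E -> 0): should give
# (3*m_phi**2 + m_lambda**2)/(3*m_phi**2 + 5*m_lambda**2).
print(sp.simplify(gamma.subs(E, 0)))

# Equal masses m_lambda = m_phi: should reduce to (2 - E)/(4 + E),
# with E = exp(-sqrt(2)*m_phi*r).
print(sp.simplify(gamma.subs(m_lam, m_phi)))
\end{verbatim}
The three printed expressions correspond, respectively, to the long-range, short-range and equal-mass results discussed in this subsection.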
\subsection{Particular case $a_\phi=0$} The particular case discussed above led to a partial decoupling between the scalar degrees of freedom of the general case, in the sense that the effective Newton constant and PPN parameter $\gamma$ did not depend on the parameters $a_\phi$ and $a_\lambda$, which are responsible for the direct coupling between $\delta \tilde\phi$ and $\delta \tilde\lambda$ in Eqs. (\ref{eqphipert}) and (\ref{eqlambdapert}). A more obvious way to partially decouple these two degrees of freedom is by considering a situation with $a_\phi=0$, in such a way that the weak field dynamics of $\delta \tilde\phi$ becomes independent of $\delta \tilde\lambda$. This choice of $a_\phi$ constrains the effective potential to take the form \begin{equation}\label{specialpot} \tilde V(\tilde \phi,\tilde \lambda)=A(\tilde \phi)+B(\tilde \lambda)e^{2\sqrt{\frac{2}{3}}\kappa\tilde \phi} \ . \end{equation} Here we discuss this particular case in some detail. Proceeding similarly as above, in this case Eq. (\ref{eqphipert}) decouples from Eq. (\ref{eqlambdapert}) and leads to the Fourier relation \begin{equation} \delta\hat{\tilde \phi}=\sqrt{\frac{2}{3}}\frac{\kappa\hat\rho}{(k^2+m_\phi^2)} \ , \end{equation} which can be inverted to obtain \begin{equation} \delta\tilde\phi(r)=\sqrt{\frac{2}{3}}\frac{\kappa M_\odot}{4\pi r} e^{-m_\phi r} \ , \end{equation} where again we assumed $\rho(\vec{x}')=M_\odot \delta^{(3)}(\vec{x}')$. The Fourier modes corresponding to $\delta\tilde \lambda$ take the form \begin{equation} \delta\hat{\tilde \lambda}=-\frac{a_\lambda \delta\hat{\tilde \phi}}{(k^2+m_\lambda^2)} \ , \end{equation} and after some algebraic manipulations we find its position space representation as \begin{equation} \delta\tilde\lambda(r)=-a_\lambda\sqrt{\frac{2}{3}}\frac{\kappa M_\odot}{8\pi } e^{-m_\phi r} \ , \end{equation} which has no $1/r$ behavior and, therefore, is finite at $r\to 0$ and decays at a much slower pace than $\delta\tilde\phi$ as $r\to \infty$. Finally, proceeding as in previous sections, the metric perturbations become \begin{eqnarray} h_{00}&=&\frac{2GM_\odot}{r}\left(1+\frac{2}{3}e^{-m_\phi r}\right), \\ h_{ij}&=&\frac{2GM_\odot}{r}\left(1-\frac{2}{3}e^{-m_\phi r}\right)\delta_{ij} \ , \end{eqnarray} from which we extract \begin{eqnarray} G_{\text{eff}}&=&G\left(1+\frac{2}{3}e^{-m_\phi r}\right) , \\ \gamma_{\text{eff}}&=&\frac{3-2e^{-m_\phi r}}{3+2e^{-m_\phi r}} \ . \end{eqnarray} We readily see that, as expected, there is no trace of the scalar $\delta \tilde\lambda$ in these expressions, which has completely decoupled from the weak field limit. This case is also free from the repulsive Yukawa correction of the general case and also lacks any constant shift associated with a zero-mass mode. Obviously, only when $m_\phi r\gg 1$ will the theory pass the weak field observational tests. The situation is thus similar to what we found above in Sec. \ref{sec:C} but without any possibility of setting bounds on the parameter $m_\lambda^2$ that characterizes the second scalar field at this perturbation level.

\section{Conclusions}\label{sec:concl}

We have studied the weak field, slow motion limit of hybrid metric-Palatini $f(R,{\cal R})$ gravity working in the Einstein frame of the corresponding scalar-tensor representation of this family of gravity theories. We have seen that the resulting dynamics is described by a metric and two dynamical scalar degrees of freedom, with the scalars mixing in different ways to yield a variety of scenarios. 
The results found here are consistent with those obtained by other means in \cite{Bombacigno:2019did}, though we identify various particular cases of interest not explicitly addressed in that work. This is, in part, possible thanks to the simplifications that our notation allows in the transition from the first line of (\ref{eq:deltaphi}) to the second line. We have shown that, in the general case, the effective Newton constant is affected by both an attractive and a repulsive contribution, though the origin of the repulsive mode cannot be easily traced back to the negative sign with which one of the kinetic terms contributes to the total action. This is so because the only term that has a dependence on that sign, the constant $a_\lambda$, appears non-linearly in the effective parameters (via the quantity $M_0^2$ defined in Eq. \eqref{eq:M0}) and contributes in the same way to the amplitude of the Yukawa terms. Both the case of short range scalars and the case of a sufficiently small ratio $a_\phi/M^2_0$ are compatible with observations, while a scenario with long range fields cannot be ruled out, though it is harder to constrain. We mention that we restricted our analysis of the general case to those cases in which the parameter $M_0^2$ is positive or zero. A negative value for this quantity would lead to oscillatory terms in the effective metric instead of the standard Yukawa-type corrections. Since there is no evidence supporting that kind of behavior, we omitted their discussion for the sake of clarity. Furthermore, we pointed out the existence of two singular cases in the general discussion, namely, when $a_\phi=0$ and when $m_\phi^2 m_\lambda^2-a_\phi a_\lambda=0$, and analyzed them separately. In the latter case, we observed a partial decoupling of the scalar field $\delta\tilde\lambda$ from the weak field limit, whereas in the former this scalar is completely decoupled. We managed to establish some viability criteria for the $m_\phi^2 m_\lambda^2-a_\phi a_\lambda=0$ case, finding that one of the scalars must be much heavier than the other ($m_\phi^2>10^5 m_\lambda^2$) and hence short ranged. A similar requirement is needed in the $a_\phi=0$ configuration, though in this case there are no constraints on the mass $m_\lambda^2$. Note also that the decay of $\delta\tilde\lambda$ with the distance to the source is much slower than that of $\delta\tilde\phi$. Whether this may lead to relevant cosmological effects will be explored in more detail elsewhere.

\begin{acknowledgments} JLR was supported by the European Regional Development Fund and the programme Mobilitas Pluss (MOBJD647). FSNL acknowledges support from the Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT) Scientific Employment Stimulus contract with reference CEECINST/00032/2018, and thanks funding from the research grants No. UID/FIS/04434/2020, No. PTDC/FIS-OUT/29048/2017 and No. CERN/FIS-PAR/0037/2019. GJO is funded by the Spanish projects FIS2017-84440-C2-1-P (MINECO/FEDER, EU), PROMETEO/2020/079 (Generalitat Valenciana), i-COOPB20462 (CSIC), and the Edital 006/2018 PRONEX (FAPESQ-PB/CNPQ, Brazil, Grant 0015/2019). The authors thank F. Bombacigno for useful comments. \end{acknowledgments}
\section{Introduction} The theory of automatic and biautomatic groups was developed in the 1980s, and is explored in a book by D.~B.~A.~Epstein et al.\ \cite{epstein}. Roughly speaking, a group $G$ is biautomatic if it can be equipped with a regular set of normal forms, in such a way that paths starting and ending at neighbouring vertices in a Cayley graph of $G$ fellow-travel; see Section~\ref{ssec:biauto} for a precise definition. Biautomaticity implies various geometric and algorithmic properties: for instance, a biautomatic group is finitely presented, satisfies a quadratic isoperimetric inequality, has solvable conjugacy problem, and finitely generated abelian subgroups of biautomatic groups are undistorted. There has been substantial interest in biautomaticity of various classes of non-positively curved groups. In particular, word-hyperbolic groups \cite{cannon} and cubulated groups \cite{niblo-reeves}---that is, groups acting geometrically and cellularly on CAT(0) cube complexes---are biautomatic; more generally, Helly groups (introduced recently in \cite{ccgho}) are biautomatic. Artin groups of finite type \cite{charney92} and central extensions of word-hyperbolic groups \cite{neumann-reeves} are also biautomatic. For several decades, it has been an open question whether or not all CAT(0) groups---groups acting geometrically on CAT(0) spaces---are biautomatic. However, I.~J.~Leary and A.~Minasyan have recently \cite{leary-minasyan} constructed examples of CAT(0) groups that are not biautomatic. More precisely, the paper \cite{leary-minasyan} studies commensurating HNN-extensions of $\mathbb{Z}^n$ (called \emph{Leary--Minasyan groups} in this paper), defined for a matrix $A \in GL_n(\mathbb{Q})$ and a finite-index subgroup $L \leq \mathbb{Z}^n \cap A^{-1}(\mathbb{Z}^n)$ by the presentation \[ G(A,L) = \langle x_1,\ldots,x_n,t \mid x_ix_j = x_jx_i \text{ for } 1 \leq i < j \leq n, t\mathbf{x}^{\mathbf{v}}t^{-1} = \mathbf{x}^{A\mathbf{v}} \text{ for } \mathbf{v} \in L \rangle, \] where we write $\mathbf{x}^{\mathbf{w}} := x_1^{w_1} \cdots x_n^{w_n}$ for any $\mathbf{w} = (w_1,\ldots,w_n) \in \mathbb{Z}^n$. It was shown \cite[Theorem~1.1]{leary-minasyan} that such a group $G(A,L)$ is CAT(0) if and only if $A$ is conjugate in $GL_n(\mathbb{R})$ to an orthogonal matrix, and biautomatic if and only if $A$ has finite order. Thus, such groups provide first examples of CAT(0) groups that are not biautomatic. A special case of Leary--Minasyan groups for $n = 1$ are the \emph{Baumslag--Solitar groups}, defined for $p,q \in \mathbb{Z} \setminus \{0\}$ by the presentation $BS(p,q) = \langle x,t \mid tx^pt^{-1} = x^q \rangle$. It is well-known that $BS(p,q)$ is biautomatic if $|p| = |q|$ (because it is cubulated, for instance), and that $BS(p,q)$ cannot be embedded in a biautomatic group if $|p| \neq |q|$ \cite[Corollary~6.8]{gersten-short}. This motivated a question \cite[Question~12.2]{leary-minasyan}, which asks if a similar dichotomy is true for arbitrary Leary--Minasyan groups. We settle this question in the affirmative: that is, $G(A,L)$ is either biautomatic or not embeddable into a biautomatic group. \begin{thm} \label{thm:cor} Let $A \in GL_n(\mathbb{Q})$, and let $L$ be a finite-index subgroup of $\mathbb{Z}^n \cap A^{-1}(\mathbb{Z}^n)$. Then $G(A,L)$ is a subgroup of a biautomatic group if and only if $A$ has finite order. In particular, there exist CAT(0) groups that are not embeddable into biautomatic groups. 
\end{thm} Theorem~\ref{thm:cor} is a consequence of Theorem~\ref{thm:main} below, which is a statement about commensurators of abelian subgroups in biautomatic groups. Given a group $G$ and a subgroup $H \leq G$, we define the \emph{commensurator} $\Comm_G(H)$ of $H$ in $G$ as the set of elements $g \in G$ for which both $H \cap gHg^{-1}$ and $H \cap g^{-1}Hg$ have finite index in $H$; it is easily seen to be a subgroup of $G$. A related concept is that of \emph{abstract commensurator} $\Comm(H)$ of a group $H$, whose elements are equivalence classes of isomorphisms between finite-index subgroups of $H$, forming a group under composition (see Section~\ref{ssec:comm} for a precise definition). For any $H \leq G$, there is a canonical map $\Comm_G(H) \to \Comm(H)$ which sends $g \in \Comm_G(H)$ to the equivalence class of the isomorphism $\varphi\colon H \cap g^{-1}Hg \to H \cap gHg^{-1}$ defined as $\varphi(h) = ghg^{-1}$. \begin{thm} \label{thm:main} Let $G$ be a biautomatic group and let $H \leq G$ be a finitely generated abelian subgroup. Then the image of $\Comm_G(H)$ in $\Comm(H)$ is finite. In particular, there exists a finite-index subgroup $\Comm_G^0(H) \unlhd \Comm_G(H)$ such that every element of $\Comm_G^0(H)$ centralises some finite-index subgroup of $H$. \end{thm} Theorem~\ref{thm:main} can be seen as a generalisation of \cite[Theorem~1.2]{leary-minasyan}---the only difference between these two results is that $H$ is assumed to be finitely generated in the former and $\mathcal{M}$-quasiconvex (for some biautomatic structure $(Y,\mathcal{M})$ for $G$) in the latter; see Section~\ref{ssec:qconv} for a definition of $\mathcal{M}$-quasiconvex subsets. Indeed, $\mathcal{M}$-quasiconvexity implies finite generation for any subgroup of $G$, and so we may deduce \cite[Theorem~1.2]{leary-minasyan} from Theorem~\ref{thm:main}. Note that Theorem~\ref{thm:cor} can be seen as an immediate corollary of Theorem~\ref{thm:main}. Indeed, if $G$ is biautomatic and $\widehat{G} \leq G$ is a subgroup isomorphic to a Leary--Minasyan group $G(A,L)$ (with an isomorphism sending $\widehat{t} \in \widehat{G}$ to $t \in G(A,L)$ and a subgroup $H < \widehat{G}$ to the subgroup $\langle x_1,\ldots,x_n \rangle < G(A,L)$), then we have $\widehat{t} \in \Comm_{\widehat{G}}(H) \leq \Comm_G(H)$. It then follows from Theorem~\ref{thm:main} that $t^k \in \Comm_G^0(H)$ for some $k \in \mathbb{Z}_{\geq 1}$, implying that $A^k = I_n$, and so that $A$ has order $\leq k$. Conversely, if $A$ has finite order then $G(A,L)$ is itself biautomatic by \cite[Theorem~1.1]{leary-minasyan}, and so it is a subgroup of a biautomatic group. In fact, the class of CAT(0) groups not embeddable into biautomatic groups is more general than merely the groups $G(A,L)$. In particular, if $X$ is a proper CAT(0) space with no Euclidean factors, and if $K \leq \operatorname{Isom}(X)$ is a closed subgroup acting minimally and cocompactly on $X$, then one can show that a lattice $G$ in $\operatorname{Isom}(\mathbb{E}^n) \times K$ is not a subgroup of a biautomatic group unless $G$ has discrete image under the projection $\operatorname{Isom}(\mathbb{E}^n) \times K \to \operatorname{Isom}(\mathbb{E}^n)$. Indeed, it has been pointed out by S.~Hughes \cite[Theorem~7.7 and its proof]{hughes} that in this case there exists an element $t \in G$ that commensurates a subgroup $H \cong \mathbb{Z}^n$ of $G$, but $t^k$ does not centralise any finite index subgroup of $H$ (for any $k \geq 1$). 
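As a purely illustrative aside (it plays no role in the arguments below), the finite-order condition on $A$ appearing in Theorem~\ref{thm:cor} is easy to test in practice: a matrix $A \in GL_n(\mathbb{Q})$ has finite order if and only if it is diagonalisable over $\mathbb{C}$ and every irreducible factor of its characteristic polynomial is a cyclotomic polynomial. A minimal \texttt{sympy} sketch of such a test (the function name and the sample matrices are our own choices, not taken from \cite{leary-minasyan}) could read:
\begin{verbatim}
import sympy as sp

def has_finite_order(A):
    """Test whether a rational square matrix A has finite order in GL_n(Q).

    A has finite order iff it is diagonalisable over C and every
    irreducible factor of its characteristic polynomial is cyclotomic
    (equivalently, all eigenvalues are roots of unity and A is
    diagonalisable).
    """
    x = sp.Symbol('x')
    factors = sp.factor_list(A.charpoly(x).as_expr())[1]
    cyclotomic = all(sp.Poly(f, x).is_cyclotomic for f, _ in factors)
    return cyclotomic and A.is_diagonalizable()

print(has_finite_order(sp.Matrix([[0, -1], [1, 0]])))  # rotation by pi/2: True
print(has_finite_order(sp.Matrix([[2]])))              # BS(1,2) case: False
# An orthogonal but infinite-order rational rotation (the CAT(0) yet
# non-biautomatic regime, per Theorem 1.1 of Leary--Minasyan cited above):
print(has_finite_order(sp.Rational(1, 5)*sp.Matrix([[3, -4], [4, 3]])))  # False
\end{verbatim}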
The class of lattices in $\operatorname{Isom}(\mathbb{E}^n) \times K$, with $K$ as above, contains all CAT(0) Leary--Minasyan groups, and is studied in more detail by S.~Hughes in \cite{hughes}. Our proof of Theorem~\ref{thm:main} relies on a triangulation of the sphere $\mathbb{S}^{N-1}$ associated to a biautomatic structure $(X,\mathcal{L})$ for $\mathbb{Z}^N$, described by W.~D.~Neumann and M.~Shapiro in \cite{neumann-shapiro}; see Remark~\ref{rmk:neumann-shapiro} for more details. We equip such a triangulation with an additional structure: namely, we define a \emph{polyhedral function} $f\colon \mathbb{R}^N \to \mathbb{R}$ (see Section~\ref{ssec:poly} for a definition) whose restriction to each polyhedral cone $\{ \beta \mathbf{v} \mid \mathbf{v} \in \Delta, \beta \in [0,\infty) \}$, where $\Delta \subseteq \mathbb{S}^{N-1}$ is a simplex in this triangulation, is linear and homogeneous. This function is chosen in such a way that it roughly approximates lengths of words in $\mathcal{L}$ representing an element of $\mathbb{Z}^N$. The key point of this construction is that, for a group $G$ with a biautomatic structure $(Y,\mathcal{M})$, it allows us to deal not only with an $\mathcal{M}$-quasiconvex abelian subgroup $\widehat{H} \leq G$, but also with any subgroup of such an $\widehat{H}$. The structure of the paper is as follows. In Section~\ref{sec:prelim}, we introduce definitions and main results on commensurators, biautomatic groups and polyhedral functions. In Section~\ref{sec:geompoly}, we prove several results on polyhedral functions, and in Section~\ref{sec:biauto-poly} we associate a polyhedral function to a biautomatic structure for $\mathbb{Z}^N$. In Section~\ref{sec:proof}, we use these results to prove Theorem~\ref{thm:main}. \begin{ack} I would like to thank Ian Leary and Ashot Minasyan for their paper \cite{leary-minasyan} that inspired this research, and Sam Hughes for his useful comments. \end{ack} \section{Preliminaries} \label{sec:prelim} Throughout this paper, we denote by $\langle {-}, {-} \rangle$ the standard inner product on $\mathbb{R}^n$, and by $\| {-} \|$ the standard $\ell_2$-norm on $\mathbb{R}^n$, so that $\| \mathbf{v} \|^2 = \langle \mathbf{v},\mathbf{v} \rangle$ for all $\mathbf{v} \in \mathbb{R}^n$. We also write $I_n \in GL_n(\mathbb{R})$ for the $n \times n$ identity matrix. We denote by $[G : H]$ the index of a subgroup $H$ in a group $G$. We write $Z(G)$ for the centre of a group $G$, and $C_G(S)$ for the centraliser of a subset $S \subseteq G$. \subsection{Commensurators} \label{ssec:comm} Given a group $G$ and a subgroup $H \leq G$, the \emph{commensurator} of $H$ in $G$ is \[ \Comm_G(H) := \{ g \in G \mid [H : H \cap gHg^{-1}] < \infty \text{ and } [H : H \cap g^{-1}Hg] < \infty \}. \] It is easy to check that $\Comm_G(H)$ is a subgroup of $G$ containing $H$. A related notion is that of an abstract commensurator of a group $H$. In order to define it, let $\mathcal{C}_H$ be the set of all isomorphisms $\varphi\colon A \to B$, where $A$ and $B$ are finite-index subgroups of $H$. We say $\varphi,\psi \in \mathcal{C}_H$ are \emph{equivalent}, denoted $\varphi \sim_H \psi$ if there exists a finite-index subgroup $A$ of $H$ contained in the domains of both $\varphi$ and $\psi$ such that $\varphi(h) = \psi(h)$ for any $h \in A$; we denote by $[\varphi]$ or $[\varphi]_H$ the equivalence class of $\varphi$ in $\mathcal{C}_H$. We then define the \emph{abstract commensurator} of $H$ as \[ \Comm(H) := \mathcal{C}_H / {\sim_H}. 
\] Given two isomorphisms $\varphi\colon A \to B$ and $\varphi'\colon A' \to B'$ between finite-index subgroups $A,A',B,B' \leq H$, we may define the product $[\varphi][\varphi']$ of the classes $[\varphi],[\varphi'] \in \Comm(H)$ to be the equivalence class of the map $\psi\colon (\varphi')^{-1}(A \cap B') \to \varphi(A \cap B')$ defined by $\psi(h) = \varphi(\varphi'(h))$. It is easy to verify that this makes $\Comm(H)$ into a group. Now given an element $g \in \Comm_G(H)$ for groups $H \leq G$, we have $\varphi_g \in \mathcal{C}_H$, where $\varphi_g\colon H \cap g^{-1}Hg \to H \cap gHg^{-1}$ is the map defined by $\varphi_g(h) = ghg^{-1}$. Thus we have a canonical map $\Phi\colon \Comm_G(H) \to \Comm(H)$ sending $g \mapsto [\varphi_g]$, and this map can be easily checked to be a group homomorphism. It follows from the definitions that $g \in \ker(\Phi)$ precisely when $\varphi_g$ coincides with identity on some finite-index subgroup of $H \cap g^{-1}Hg$, which happens if and only if $g$ centralises a finite-index subgroup of $H$. We will be interested in commensurators of finitely generated free abelian groups. In that case, it is easy to see that $\Comm(\mathbb{Z}^n)$ is isomorphic to $GL_n(\mathbb{Q})$. Indeed, given a matrix $A \in GL_n(\mathbb{Q})$, the intersection $L := \mathbb{Z}^n \cap A^{-1}(\mathbb{Z}^n)$ will have finite index in $\mathbb{Z}^n$, and we may define a map $\psi_A\colon L \to AL$ sending $\mathbf{v} \mapsto A\mathbf{v}$. This gives a map $GL_n(\mathbb{Q}) \to \Comm(\mathbb{Z}^n)$ sending $A \mapsto [\psi_A]$, which can be checked to be a group isomorphism. We will also need to relate (abstract) commensurators of two groups one of which is a finite-index subgroup of another, and so we will use the following result. It is well-known, but we give a proof here for completeness. \begin{lem} \label{lem:commfi} Let $G$ be a group, and let $H,H' \leq G$ be subgroups such that $H' \leq H$ and $[H:H'] < \infty$. Then $\Comm_G(H) = \Comm_G(H')$, and there exists an isomorphism $\Psi\colon \Comm(H') \to \Comm(H)$ such that $\Phi = \Psi \circ \Phi'$, where $\Phi\colon \Comm_G(H) \to \Comm(H)$ and $\Phi'\colon \Comm_G(H') \to \Comm(H')$ are the canonical maps. \end{lem} \begin{proof} Let $g \in G$, and denote by $K_\pm$ and $K_\pm'$ the groups $g^{\pm 1} H g^{\mp 1}$ and $g^{\pm 1} H' g^{\mp 1}$, respectively; note that $[K_\pm : K_\pm'] = [H : H'] < \infty$. If we have $[H' : H' \cap K_\pm'] < \infty$, then \[ [H : H \cap K_\pm] \leq [H : H' \cap K_\pm'] = [H:H'] [H' : H' \cap K_\pm'] < \infty. \] Thus $\Comm_G(H') \subseteq \Comm_G(H)$. On the other hand, if $[H : H \cap K_\pm] < \infty$ then \begin{align*} [H' : H' \cap K_\pm'] &\leq [H : H' \cap K_\pm'] \\ &= [H : H \cap K_\pm] [H \cap K_\pm : H' \cap K_\pm] [H' \cap K_\pm : H' \cap K_\pm'] \\ &\leq [H : H \cap K_\pm] [H : H'] [K_\pm : K_\pm'] < \infty. \end{align*} Thus $\Comm_G(H) \subseteq \Comm_G(H')$, and so $\Comm_G(H) = \Comm_G(H')$. Now any isomorphism $\varphi'\colon A' \to B'$ between finite-index subgroups of $H'$ is also an isomorphism between finite-index subgroups of $H$, and so represents a class $[\varphi']_H \in \Comm(H)$. Moreover, two such isomorphisms agree on a finite-index subgroup of $H'$ if and only if they agree on a finite-index subgroup of $H$. It follows that the map $\Psi\colon \Comm(H') \to \Comm(H)$, defined by $\Psi\left( [\varphi']_{H'} \right) = [\varphi']_H$, is well-defined and injective; furthermore, this map is easily seen to be a group isomorphism. 
Finally, given any isomorphism $\varphi\colon A \to B$ between finite-index subgroups of $H$ we may consider the map \[ \varphi' = \varphi|_{A \cap H' \cap \varphi^{-1}(B \cap H')}\colon A \cap H' \cap \varphi^{-1}(B \cap H') \to B \cap H' \cap \varphi(A \cap H'), \] so that $[\varphi']_{H'} \in \Comm(H')$ and $[\varphi']_H = [\varphi]_H \in \Comm(H)$; thus $\Psi$ is surjective. By construction, both $\Phi$ and $\Psi \circ \Phi'$ send an element $g \in \Comm_G(H) = \Comm_G(H')$ to the map represented by $\varphi_g\colon A \to gAg^{-1}, h \mapsto ghg^{-1}$ for some finite-index subgroup $A$ of $H$, implying that $\Phi = \Psi \circ \Phi'$, as required. \end{proof} \subsection{Biautomatic groups} \label{ssec:biauto} Here we briefly introduce biautomatic groups and their main properties which we will be using in this paper. We refer the interested reader to \cite{epstein} for a more comprehensive account. We fix a group $G$ with a finite generating set $Y$ of $G$; we view $Y$ as an abstract alphabet together with a fixed injective map $Y \hookrightarrow G$. We will always assume that $Y$ is \emph{symmetric} (i.e.\ the image $S$ of $Y$ in $G$ satisfies $S = S^{-1}$) and \emph{contains the identity} (i.e.\ contains an element $1 \in Y$ mapping to the identity $1_G \in G$). We denote by $Y^*$ the free monoid on $Y$, i.e.\ the set of all words with letters in $Y$, forming a monoid under concatenation. Given a word in $U \in Y^*$, we denote by $|U| \in \mathbb{Z}_{\geq 0}$ its length; furthermore, for any $t \in \mathbb{Z}_{\geq 0}$ we set $U(t)$ to be the prefix of $U$ of length $t$ if $t \leq |U|$, and we set $U(t) = U$ for $t > |U|$. We have a monoid homomorphism $Y^* \to G$ induced by the inclusion $Y \hookrightarrow G$; we denote by $\overline{U} \in G$ the image of $U \in Y^*$ under this map, and we say that $U$ \emph{represents} $\overline{U}$. Given an element $g \in G$ we also denote by $|g|_Y$ the distance between vertices $1$ and $g$ in the Cayley graph $\Cay(G,Y)$; that is, $|g|_Y$ is equal to the length of the shortest word in $Y^*$ representing $g$, and so for any $U \in Y^*$ we have $\left| \overline{U} \right|_Y \leq |U|$. A notion that appears in the theory of biautomatic groups is that of a \emph{finite state automaton} (\emph{FSA}). Roughly, a (deterministic) FSA $\mathfrak{A}$ over $Y$ is a finite directed multigraph $\Gamma$ with an assignment of a \emph{starting state} $v_0 \in V(\Gamma)$ and \emph{accept states} $\mathcal{A} \subseteq V(\Gamma)$, and with edges labelled by elements of $Y$ in such a way that for each vertex $v \in V(\Gamma)$ and each $y \in Y$, there exists at most one edge starting at $v$ and labelled by $y$; see \cite[Definition~1.2.1]{epstein} for a detailed definition. Given such an $\mathfrak{A}$, we say a subset $\mathcal{M} \subseteq Y^*$ is \emph{recognised} by $\mathfrak{A}$ if $\mathcal{M}$ is the set of words that label directed paths in $\Gamma$ starting at $v_0$ and ending in $\mathcal{A}$. A subset $\mathcal{M} \subseteq Y^*$ is said to be a \emph{regular language} if it is recognised by some FSA over $Y$. This allows us to define biautomatic groups, as follows. \begin{defn} \label{defn:biauto} Let $G$ be a group, let $Y$ be a finite symmetric generating set of $G$ containing the identity, and let $\mathcal{M} \subseteq Y^*$. 
We say that $(Y,\mathcal{M})$ is a \emph{biautomatic structure} for $G$ if \begin{enumerate} \item $\mathcal{M}$ is a regular language; and \item \label{it:biauto-ft} $\mathcal{M}$ satisfies the (\emph{two-sided}) \emph{fellow traveller property}: that is, there exists a constant $\lambda \geq 0$ such that if $U,V \in \mathcal{M}$ and $x,y \in Y$ are such that $\overline{U} = \overline{xVy}$, then $\left| \overline{U(t)}^{-1} \overline{x} \overline{V(t)} \right|_Y \leq \lambda$ for all $t \in \mathbb{Z}_{\geq 0}$. \end{enumerate} We say a biautomatic structure $(Y,\mathcal{M})$ for $G$ is \emph{finite-to-one} if for each $g \in G$ there exist only finitely many $U \in \mathcal{M}$ such that $\overline{U} = g$. We say $G$ is \emph{biautomatic} if it has a biautomatic structure. \end{defn} Condition~\ref{it:biauto-ft} in Definition~\ref{defn:biauto} has a more geometric description. In particular, if $U,V \in \mathcal{M}$ are such that $\overline{U} = \overline{xVy}$ for some $x,y \in Y$, then there exist paths $P_U$ and $P_V$ in $\Cay(G,Y)$ starting and ending distance $\leq 1$ away and labelled by $U$ and $V$, respectively. For any $t \in \mathbb{Z}_{\geq 0}$, we set $P_U(t)$ to be the initial subpath of $P_U$ of length $t$ if $t \leq |U|$, and we set $P_U(t) = P_U$ if $t > |U|$; we define $P_V(t)$ similarly. Condition~\ref{it:biauto-ft} then says that the endpoints of $P_U(t)$ and $P_V(t)$ are distance $\leq \lambda$ apart for any $t$. It is known that every biautomatic group admits a finite-to-one biautomatic structure: see \cite[Theorem~2.5.1]{epstein}. For such a biautomatic structure, we have the following observation that will be crucial in our arguments. \begin{lem}[see {\cite[Lemma~2.3.9]{epstein}}] \label{lem:simlengths} Let $(Y,\mathcal{M})$ be a finite-to-one biautomatic structure on a group $G$. Then there exists a constant $\kappa \geq 0$ such that if $U,V \in \mathcal{M}$ and $x,y \in Y$ are such that $\overline{U} = \overline{xVy}$, then $|U|-\kappa \leq |V| \leq |U|+\kappa$. \qed \end{lem} \subsection{Quasiconvex subsets and subgroups} \label{ssec:qconv} An important notion in biautomatic groups is that of quasiconvexity, defined as follows. We refer the interested reader to the paper \cite{gersten-short} by S.~M.~Gersten and H.~B.~Short for more details. \begin{defn} \label{defn:qconv} Let $(Y,\mathcal{M})$ be a biautomatic structure for a group $G$. We say a subset $S \subseteq G$ is \emph{$\mathcal{M}$-quasiconvex} if there exists a constant $\nu \geq 0$ such that any path in $\Cay(G,Y)$ that starts and ends at vertices in $S$ and is labelled by a word in $\mathcal{M}$ belongs to the $\nu$-neighbourhood of $S$. \end{defn} An important consequence of quasiconvexity is that if an $\mathcal{M}$-quasiconvex subset is a subgroup, then it is itself biautomatic. More precisely, we have the following result. \begin{thm}[see {\cite[Theorem~3.1 and its proof]{gersten-short}}] \label{thm:qconv-biauto} Let $(Y,\mathcal{M})$ be a biautomatic structure on a group $G$, and let $H \leq G$ be an $\mathcal{M}$-quasiconvex subgroup. Then there exists a biautomatic structure $(X,\mathcal{L})$ for $H$ and a constant $\mu \geq 0$ such that for any $V \in \mathcal{M}$ with $\overline{V} \in H$, there exists $U \in \mathcal{L}$ with $\overline{U} = \overline{V}$, $|U| = |V|$ and $\left| \overline{U(t)}^{-1} \overline{V(t)} \right|_Y \leq \mu$ for all $t \in \mathbb{Z}_{\geq 0}$. Moreover, if $(Y,\mathcal{M})$ is finite-to-one then so is $(X,\mathcal{L})$. 
\qed \end{thm} We refer to the biautomatic structure $(X,\mathcal{L})$ given by Theorem~\ref{thm:qconv-biauto} as the biautomatic structure \emph{associated} to $(Y,\mathcal{M})$. It follows from Theorem~\ref{thm:qconv-biauto} that the quasiconvexity relation between groups equipped with biautomatic structures is transitive in the sense of Lemma~\ref{lem:qconv-trans} below. This result is straightforward, but we give a proof for completeness. \begin{lem} \label{lem:qconv-trans} Let $G$ be a group with a biautomatic structure $(Y,\mathcal{M})$, let $H \leq G$ be an $\mathcal{M}$-quasiconvex subgroup with the associated biautomatic structure $(X,\mathcal{L})$, and let $K \leq H$ be an $\mathcal{L}$-quasiconvex subgroup. Then $K$ is $\mathcal{M}$-quasiconvex in $G$. \end{lem} \begin{proof} It follows by Theorem~\ref{thm:qconv-biauto} that there exists $\mu \geq 0$ such that for any $V \in \mathcal{M}$ with $\overline{V} \in H$, there exists $U \in \mathcal{L}$ with $\overline{U} = \overline{V}$ and $\left| \overline{V(t)}^{-1} \overline{U(t)} \right|_Y \leq \mu$ for all $t \in \mathbb{Z}_{\geq 0}$. Now if in addition we have $\overline{V} \in K$, then $\overline{U} \in K$ and so, by Definition~\ref{defn:qconv}, for each $t \in \mathbb{Z}_{\geq 0}$ there exists $k_t \in K$ such that $\left| \overline{U(t)}^{-1} k_t \right|_X \leq \nu$, where $\nu \geq 0$ is some universal constant. If we set $\delta := \max \{ |\overline{x}|_Y \mid x \in X \}$, we then have \[ \left| \overline{V(t)}^{-1} k_t \right|_Y \leq \left| \overline{V(t)}^{-1} \overline{U(t)} \right|_Y + \left| \overline{U(t)}^{-1} k_t \right|_Y \leq \mu + \delta \nu, \] for all $t$, and so any path in $\Cay(G,Y)$ represented by $V$ whose endpoints belong to $K$ is in the $(\mu+\delta\nu)$-neighbourhood of $K$. Thus $K$ is $\mathcal{M}$-quasiconvex in $G$, as required. \end{proof} One of the main sources of $\mathcal{M}$-quasiconvex subgroups in a biautomatic group $G$ are centralisers of finite subsets, as per the following result. \begin{prop}[{\cite[Proposition~4.3]{gersten-short}}] \label{prop:centr-qconv} Let $(Y,\mathcal{M})$ be a biautomatic structure on a group $G$, and let $S \subseteq G$ be a finite subset. Then $C_G(S)$ is $\mathcal{M}$-quasiconvex. \qed \end{prop} \subsection{Polyhedral functions} \label{ssec:poly} Finally, we introduce polyhedral functions, which we will use to approximate lengths of words in $\mathcal{L}$, where $(X,\mathcal{L})$ is a biautomatic structure for $\mathbb{Z}^n$. \begin{defn} \label{defn:polyhedral} Given a finite subset $Z = \{ \mathbf{z}_1,\ldots,\mathbf{z}_\alpha\} \subseteq \mathbb{R}^n$, a \emph{polyhedral cone over $Z$} is the set $C(Z) = \left\{ \sum_{j=1}^\alpha \mu_j \mathbf{z}_j \,\middle|\, \mu_1,\ldots,\mu_\alpha \geq 0 \right\} \subseteq \mathbb{R}^n$. Given a polyhedral cone $C = C(Z)$ and $\mathbf{y} \in \mathbb{R}^n$ such that $\langle \mathbf{z},\mathbf{y} \rangle > 0$ for all $\mathbf{z} \in Z$ (equivalently, $\langle \mathbf{z},\mathbf{y} \rangle > 0$ for all $\mathbf{z} \in C \setminus \{\mathbf{0}\}$), we define a function $f_{C,\mathbf{y}}\colon \mathbb{R}^n \to \mathbb{R}$ by \[ f_{C,y}(\mathbf{v}) = \begin{cases} \langle \mathbf{v},\mathbf{y} \rangle & \text{if } \mathbf{v} \in C, \\ 0 & \text{otherwise}. 
\end{cases} \] We say $f\colon \mathbb{R}^n \to \mathbb{R}$ is a \emph{polyhedral function} if there exists a finite collection of functions $f_{C_1,\mathbf{y}_1},\ldots,f_{C_m,\mathbf{y}_m}\colon \mathbb{R}^n \to \mathbb{R}$ as above such that \begin{enumerate} \item \label{it:polyhedral-cover} $\mathbb{R}^n = \bigcup_{j=1}^m C_j$; \item \label{it:polyhedral-compat} if $\mathbf{v} \in C_j \cap C_k$ then $\langle \mathbf{v},\mathbf{y}_j \rangle = \langle \mathbf{v},\mathbf{y}_k \rangle$; and \item \label{it:polyhedral-defn} $f(\mathbf{v}) = \max \{ f_{C_j,\mathbf{y}_j}(\mathbf{v}) \mid 1 \leq j \leq m \}$ for all $\mathbf{v} \in \mathbb{R}^n$. \end{enumerate} \end{defn} Note that if $f$ is a polyhedral function then $f$ is continuous and positively homogeneous: that is, $f(\mu \mathbf{v}) = \mu f(\mathbf{v})$ for all $\mathbf{v} \in \mathbb{R}^n$ and $\mu \geq 0$. Such functions have been studied before, for instance, in \cite{melzer}: in their notation, a polyhedral function is precisely a function that belongs to $\mathscr{P}_+(E_n)$ and is positive (that is, $f(\mathbf{v}) \geq 0$ for all $\mathbf{v}$, with equality if and only if $\mathbf{v} = \mathbf{0}$). In this paper we will use the following well-known alternative characterisation of polyhedral cones. Here, a \emph{linear halfspace} is a subset of $\mathbb{R}^n$ of the form $\{ \mathbf{v} \in \mathbb{R}^n \mid \langle \mathbf{v},\mathbf{w} \rangle \geq 0 \}$ for some $\mathbf{w} \in \mathbb{R}^n \setminus \{\mathbf{0}\}$. \begin{thm}[J.~Farkas, H.~Minkowski and H.~Weyl; see {\cite[Theorems~6~\&~7]{kaibel}}] \label{thm:coneequiv} Let $C \subseteq \mathbb{R}^n$ be a subset. Then the following are equivalent: \begin{enumerate} \item $C$ is a polyhedral cone over some finite subset $Z \subseteq \mathbb{R}^n$; \item there exist linear halfspaces $K_1,\ldots,K_\beta \subseteq \mathbb{R}^n$ such that $C = \bigcap_{j=1}^\beta K_j$. \qed \end{enumerate} \end{thm} \begin{ex} \label{ex:intro} An example of a polyhedral function is depicted in Figure~\ref{fig:poly}, where the dotted line denotes $f^{-1}(c)$ for some constant $c > 0$. Here we set $C_j = C\left( \{ \mathbf{z}_{j,1},\mathbf{z}_{j,2} \} \right)$ for $1 \leq j \leq 6$, where \begin{align*} \mathbf{z}_{1,1} &= \textstyle\left( \frac{1}{4},\frac{1}{2} \right), & \mathbf{z}_{1,2} &= \textstyle\left( 0,\frac{1}{2} \right), & \mathbf{y}_1 &= (0,2), \\ \mathbf{z}_{2,1} = \mathbf{z}_{3,1} &= \textstyle\left( \frac{1}{4},\frac{1}{2} \right), & \mathbf{z}_{2,2} = \mathbf{z}_{3,2} &= \textstyle\left( \frac{1}{4},0 \right), & \mathbf{y}_2 = \mathbf{y}_3 &= (4,0), \\ \mathbf{z}_{4,1} &= \textstyle\left( \frac{1}{4},0 \right), & \mathbf{z}_{4,2} &= \textstyle\left( 0,-\frac{1}{2} \right), & \mathbf{y}_4 &= (4,-2), \\ \mathbf{z}_{5,1} &= \textstyle\left( -\frac{1}{2},0 \right), & \mathbf{z}_{5,2} &= \textstyle\left( 0,-\frac{1}{2} \right), & \mathbf{y}_5 &= (-2,-2), \\ \mathbf{z}_{6,1} &= \textstyle\left( 0,\frac{1}{2} \right), & \mathbf{z}_{6,2} &= \textstyle\left( -\frac{1}{2},0 \right), & \mathbf{y}_6 &= (-2,2). \end{align*} It is easy to check that the conditions \ref{it:polyhedral-cover} and \ref{it:polyhedral-compat} in Definition~\ref{defn:polyhedral} are satisfied. 
\end{ex} \begin{figure}[ht] \begin{tikzpicture} \newcommand{5.3}{5.3} \fill [violet!20] (0,5.3) -- (0,0) -- (5.3/2,5.3); \fill [orange!50!red!20] (5.3/2,5.3) -- (0,0) -- (5.3,0) -- (5.3,5.3); \fill [yellow!30] (0,0) rectangle (5.3,-5.3); \fill [green!25] (0,0) rectangle (-5.3,-5.3); \fill [blue!50!green!20] (0,0) rectangle (-5.3,5.3); \draw [violet,->,ultra thick] (0,4.75) node [below right] {$C_1$} (0,0) -- (0,2) node [below right] {$\mathbf{y}_1$}; \draw [orange!50!red,->,ultra thick] (4.75,4.75) node [below left] {$C_2=C_3$} (0,0) -- (4,0) node [above left] {$\mathbf{y}_2=\mathbf{y}_3$}; \draw [yellow!60!black,->,ultra thick] (4.75,-4.75) node [above left] {$C_4$} (0,0) -- (4,-2) node [below left] {$\mathbf{y}_4$}; \draw [green!70!black,->,ultra thick] (-4.75,-4.75) node [above right] {$C_5$} (0,0) -- (-2,-2) node [left] {$\mathbf{y}_5$}; \draw [blue!50!green!70!black,->,ultra thick] (-4.75,4.75) node [below right] {$C_6$} (0,0) -- (-2,2) node [above] {$\mathbf{y}_6$}; \draw [dotted,ultra thick] (0,3) -- (1.5,3) -- (1.5,0) -- (0,-3) -- (-3,0) -- (0,3) -- cycle; \draw [black,very thin] (0,0) -- (0,5.3) (1,0) -- (1,5.3) (2,2) -- (2,5.3) (3,4) -- (3,5.3) (0,0) -- (5.3,0) (1,1) -- (5.3,1) (1,2) -- (5.3,2) (2,3) -- (5.3,3) (2,4) -- (5.3,4) (3,5) -- (5.3,5) (0,0) -- (0,-5.3) (1,0) -- (1,-5.3) (2,0) -- (2,-5.3) (3,0) -- (3,-5.3) (4,0) -- (4,-5.3) (5,0) -- (5,-5.3) (-1,0) -- (-1,-5.3) (-2,0) -- (-2,-5.3) (-3,0) -- (-3,-5.3) (-4,0) -- (-4,-5.3) (-5,0) -- (-5,-5.3) (0,0) -- (-5.3,0) (0,1) -- (-5.3,1) (0,2) -- (-5.3,2) (0,3) -- (-5.3,3) (0,4) -- (-5.3,4) (0,5) -- (-5.3,5); \fill [white,path fading=fade bottom] (-5.3-0.001,5.3+0.001) rectangle (5.3+0.001,4.7); \fill [white,path fading=fade top] (-5.3-0.001,-5.3-0.001) rectangle (5.3+0.001,-4.7); \fill [white,path fading=fade right] (-5.3-0.001,-5.3-0.001) rectangle (-4.7,5.3+0.001); \fill [white,path fading=fade left] (5.3+0.001,-5.3-0.001) rectangle (4.7,5.3+0.001); \end{tikzpicture} \caption{A representation of a polyhedral function $f\colon \mathbb{R}^2 \to \mathbb{R}$. See Examples \ref{ex:intro}, \ref{ex:section} and \ref{ex:biauto} for details.% } \label{fig:poly} \end{figure} \section{Geometry of polyhedral functions} \label{sec:geompoly} Our first result on polyhedral functions says that the restriction of a polyhedral function to a linear subspace is also polyhedral. \begin{lem} \label{lem:polyhedralsubsp} Let $\theta\colon \mathbb{R}^n \to \mathbb{R}^N$ be a linear isometric embedding, and let $f\colon \mathbb{R}^N \to \mathbb{R}$ be a polyhedral function. Then $f \circ \theta\colon \mathbb{R}^n \to \mathbb{R}$ is a polyhedral function. \end{lem} \begin{proof} Since any linear isometric embedding can be expressed as a composite of linear isometric embeddings $\theta'\colon \mathbb{R}^{n'} \to \mathbb{R}^{n'+1}$, it is enough to consider the case $N = n+1$. In particular, under this assumption there exists $\mathbf{u} \in \mathbb{R}^{n+1}$ with $\langle \mathbf{u},\mathbf{u} \rangle = 1$ such that $\theta(\mathbb{R}^n) = \{ \mathbf{v} \in \mathbb{R}^{n+1} \mid \langle \mathbf{v},\mathbf{u} \rangle = 0 \}$. We first show that if $C \subseteq \mathbb{R}^{n+1}$ is a polyhedral cone then so is $\theta^{-1}(C) \subseteq \mathbb{R}^n$. 
Indeed, by Theorem~\ref{thm:coneequiv}, in that case we have $C = \bigcap_{j=1}^\beta K_j$ for some linear halfspaces $K_1,\ldots,K_\beta \subseteq \mathbb{R}^{n+1}$; for $1 \leq j \leq \beta$, let $\mathbf{w}_j \in \mathbb{R}^{n+1} \setminus \{\mathbf{0}\}$ be such that $K_j = \{ \mathbf{v} \in \mathbb{R}^{n+1} \mid \langle \mathbf{v},\mathbf{w}_j \rangle \geq 0 \}$. We then have $\mathbf{w}_j - \langle \mathbf{u},\mathbf{w}_j \rangle \mathbf{u} \in \theta(\mathbb{R}^n)$ and so we may set $\mathbf{w}_j' := \theta^{-1}(\mathbf{w}_j - \langle \mathbf{u},\mathbf{w}_j \rangle \mathbf{u})$; moreover, we set $K_j' := \{ \mathbf{v}' \in \mathbb{R}^n \mid \langle \mathbf{v}',\mathbf{w}_j' \rangle \geq 0 \}$, so that either $K_j' = \mathbb{R}^n$ or $K_j' \subseteq \mathbb{R}^n$ is a linear halfspace. Given any $\mathbf{v}' \in \mathbb{R}^n$ we have $\langle \theta(\mathbf{v}'),\mathbf{u} \rangle = 0$, and so \begin{align*} \mathbf{v}' \in K_j' &\iff \langle \mathbf{v}',\mathbf{w}_j' \rangle \geq 0 \iff \big\langle \theta(\mathbf{v}'), \mathbf{w}_j-\langle \mathbf{u},\mathbf{w}_j\rangle \mathbf{u} \big\rangle \geq 0 \iff \langle \theta(\mathbf{v}'), \mathbf{w}_j \rangle \geq 0 \\ &\iff \theta(\mathbf{v}') \in K_j, \end{align*} implying that $K_j' = \theta^{-1}(K_j)$. It follows that $\theta^{-1}(C) = \bigcap_{j=1}^\beta K_j'$ and so (by Theorem~\ref{thm:coneequiv}) $\theta^{-1}(C) \subseteq \mathbb{R}^n$ is a polyhedral cone, as claimed. Now since $f\colon \mathbb{R}^{n+1} \to \mathbb{R}$ is polyhedral, there exist functions $f_{C_1,\mathbf{y}_1},\ldots,f_{C_m,\mathbf{y}_m}\colon \mathbb{R}^{n+1} \to \mathbb{R}$ as in Definition~\ref{defn:polyhedral}. For $1 \leq j \leq m$, we set $\mathbf{y}_j' := \theta^{-1}(\mathbf{y}_j - \langle \mathbf{u},\mathbf{y}_j \rangle \mathbf{u})$ and $C_j' := \theta^{-1}(C_j)$. Given any $\mathbf{z}' \in C_j'$ we have $\theta(\mathbf{z}') \in C_j$ and $\langle \theta(\mathbf{z}'),\mathbf{u} \rangle = 0$, and so \[ \langle \mathbf{z}',\mathbf{y}_j' \rangle = \big\langle \theta(\mathbf{z}'),\mathbf{y}_j-\langle \mathbf{u},\mathbf{y}_j \rangle \mathbf{u} \big\rangle = \langle \theta(\mathbf{z}'),\mathbf{y}_j \rangle > 0, \] implying that we may define a function $f_{C_j',\mathbf{y}_j'}\colon \mathbb{R}^n \to \mathbb{R}$ as in Definition~\ref{defn:polyhedral}. We claim that $f \circ \theta$ may be constructed from the functions $f_{C_j',\mathbf{y}_j'}$ as in Definition~\ref{defn:polyhedral}. Note first that we have \[ \mathbb{R}^n = \theta^{-1}(\mathbb{R}^{n+1}) = \theta^{-1}\Bigg( \bigcup_{j=1}^m C_j \Bigg) = \bigcup_{j=1}^m \theta^{-1}(C_j) = \bigcup_{j=1}^m C_j', \] showing condition~\ref{it:polyhedral-cover}. Moreover, if $\mathbf{v}' \in C_j' \cap C_k'$ then we have $\theta(\mathbf{v}') \in C_j \cap C_k$ and so \begin{align*} \langle \mathbf{v}',\mathbf{y}_j' \rangle &= \big\langle \theta(\mathbf{v}'),\mathbf{y}_j-\langle \mathbf{u},\mathbf{y}_j \rangle \mathbf{u} \big\rangle = \langle \theta(\mathbf{v}'),\mathbf{y}_j \rangle \\ &= \langle \theta(\mathbf{v}'),\mathbf{y}_k \rangle = \big\langle \theta(\mathbf{v}'),\mathbf{y}_k-\langle \mathbf{u},\mathbf{y}_k \rangle \mathbf{u} \big\rangle = \langle \mathbf{v}',\mathbf{y}_k' \rangle, \end{align*} showing condition~\ref{it:polyhedral-compat}. 
Finally, for any $\mathbf{v}' \in \mathbb{R}^n$ we have $f_{C_j,\mathbf{y}_j} \circ \theta(\mathbf{v}') = 0 = f_{C_j',\mathbf{y}_j'}(\mathbf{v}')$ if $\mathbf{v}' \notin C_j'$, and \[ f_{C_j,\mathbf{y}_j} \circ \theta(\mathbf{v}') = \langle \theta(\mathbf{v}'),\mathbf{y}_j \rangle = \big\langle \theta(\mathbf{v}'), \mathbf{y}_j-\langle \mathbf{u},\mathbf{y}_j \rangle \mathbf{u} \big\rangle = \langle \mathbf{v}',\mathbf{y}_j' \rangle = f_{C_j',\mathbf{y}_j'}(\mathbf{v}') \] if $\mathbf{v}' \in C_j'$, implying that $f_{C_j,\mathbf{y}_j} \circ \theta = f_{C_j',\mathbf{y}_j'}$. Thus $f \circ \theta(\mathbf{v}') = \max \{ f_{C_j',\mathbf{y}_j'}(\mathbf{v}') \mid 1 \leq j \leq m \}$ for all $\mathbf{v}' \in \mathbb{R}^n$, showing condition~\ref{it:polyhedral-defn}. It follows that $f \circ \theta$ is polyhedral, as required. \end{proof} We now turn our attention to group actions that preserve the values of a polyhedral function. In particular, in Proposition~\ref{prop:finite} we show that a polyhedral function cannot be $G$-invariant for any infinite subgroup $G \leq GL_n(\mathbb{R})$. In order to prove this, we will use the following result. \begin{lem} \label{lem:fixhplanes} Let $\{ \mathbf{y}_j \mid j \in \mathcal{I} \}$ be a spanning set of $\mathbb{R}^n$, and let $A \in GL_n(\mathbb{R})$ be such that $A(K_j) = K_j$, where $K_j := \{ \mathbf{v} \in \mathbb{R}^n \mid \langle \mathbf{v},\mathbf{y}_j \rangle = 1 \}$, for each $j \in \mathcal{I}$. Then $A = I_n$. \end{lem} \begin{proof} Since the $\mathbf{y}_j$ span $\mathbb{R}^n$, some subset $\{ \mathbf{y}_{j_1},\ldots,\mathbf{y}_{j_n} \}$ of the $\mathbf{y}_j$ form a basis for $\mathbb{R}^n$. Now for each $k \in \{ 1,\ldots,n \}$, there exists a unique $\mathbf{z}_k \in \mathbb{R}^n$ such that $\langle \mathbf{z}_k,\mathbf{y}_{j_k} \rangle = 1$ and $\langle \mathbf{z}_k,\mathbf{y}_{j_\ell} \rangle = 0$ for all $\ell \neq k$; moreover, it is easy to see that $\{ \mathbf{z}_1,\ldots,\mathbf{z}_n \}$ is a basis for $\mathbb{R}^n$---the basis \emph{dual} to $\{ \mathbf{y}_{j_1},\ldots,\mathbf{y}_{j_n} \}$. Now let $\mathbf{z} = \sum_{k=1}^n \mathbf{z}_k$. It is then easy to verify that $\bigcap_{\ell=1}^n K_{j_\ell} = \{\mathbf{z}\}$, and that $\bigcap_{1 \leq \ell \leq n, \ell \neq k} K_{j_\ell} = \{ \mathbf{z}+\lambda \mathbf{z}_k \mid \lambda \in \mathbb{R} \}$ for all $k$. As $A(K_{j_\ell}) = K_{j_\ell}$ for all $\ell$, it thus follows that $A\mathbf{z} = \mathbf{z}$ and $\{ A\mathbf{z}+\lambda A\mathbf{z}_k \mid \lambda \in \mathbb{R} \} = \{ \mathbf{z}+\lambda \mathbf{z}_k \mid \lambda \in \mathbb{R} \}$ for all $k$. This implies that for each $k$, there exists $\lambda_k \in \mathbb{R} \setminus \{0\}$ such that $A\mathbf{z}_k = \lambda_k \mathbf{z}_k$. But then we have \[ \sum_{k=1}^n \mathbf{z}_k = \mathbf{z} = A\mathbf{z} = \sum_{k=1}^n A\mathbf{z}_k = \sum_{k=1}^n \lambda_k \mathbf{z}_k. \] As the $\mathbf{z}_k$ are linearly independent, it follows that $\lambda_k = 1$, and so $A\mathbf{z}_k = \mathbf{z}_k$, for all $k$. As the $\mathbf{z}_k$ span $\mathbb{R}^n$, it thus follows that $A = I_n$, as required. \end{proof} \begin{prop} \label{prop:finite} Let $f\colon \mathbb{R}^n \to \mathbb{R}$ be a polyhedral function, and let $G \leq GL_n(\mathbb{R})$ be a subgroup. If $f(\mathbf{v}) = f(A\mathbf{v})$ for all $\mathbf{v} \in \mathbb{R}^n$ and all $A \in G$, then $G$ is finite. \end{prop} \begin{proof} We use the notation of Definition~\ref{defn:polyhedral}. 
In what follows, an \emph{affine hyperplane} is a subset of $\mathbb{R}^n$ of the form $\{ \mathbf{v} \in \mathbb{R}^n \mid \langle \mathbf{v},\mathbf{y} \rangle = c \}$ for some $\mathbf{y} \in \mathbb{R}^n \setminus \{\mathbf{0}\}$ and $c \in \mathbb{R}$. Let $\mathcal{I} \subseteq \{1,\ldots,m\}$ be the set of all $j$ such that $C_j \subseteq \mathbb{R}^n$ has non-empty interior (if $C_j = C(Z_j)$, this is equivalent to saying that $Z_j$ spans $\mathbb{R}^n$). Then, by condition~\ref{it:polyhedral-cover} in Definition~\ref{defn:polyhedral}, we have $\mathbb{R}^n \setminus \bigcup_{j \in \mathcal{I}} C_j \subseteq \bigcup_{j \notin \mathcal{I}} C_j$, where the left hand-side is open and the right hand side is a finite union of convex subsets with empty interior; this implies that the left hand side is actually empty, and so $\mathbb{R}^n = \bigcup_{j \in \mathcal{I}} C_j$. Therefore, conditions \ref{it:polyhedral-compat} and \ref{it:polyhedral-defn} imply that $f^{-1}(1) \subseteq \bigcup_{j \in \mathcal{I}} K_j$, where we set $K_j := \{ \mathbf{v} \in \mathbb{R}^n \mid \langle \mathbf{v},\mathbf{y}_j \rangle = 1 \}$. Moreover, for each $j \in \mathcal{I}$, the set $K_j \cap f^{-1}(1)$ has non-empty interior in $K_j$. The converse is also true: if $K$ is an affine hyperplane not equal to any $K_j$ for $j \in \mathcal{I}$, then we have $K \cap f^{-1}(1) \subseteq K \cap \bigcup_{j \in \mathcal{I}} K_j = \bigcup_{j \in \mathcal{I}} (K \cap K_j)$, and the latter is a finite union of $(n-2)$-dimensional affine subspaces which therefore must have empty interior in $K$. Thus, the set $\mathcal{K} := \{ K_j \mid j \in \mathcal{I} \}$ of affine hyperplanes consists of precisely those $K$ for which $K \cap f^{-1}(1)$ has non-empty interior in $K$. Now the group $G$ has a canonical action on the set of all affine hyperplanes in $\mathbb{R}^n$. Moreover, if $A \in G$ then by the assumptions $A\left( f^{-1}(1) \right) = f^{-1}(1)$, implying that if $K \cap f^{-1}(1)$ has non-empty interior in an affine hyperplane $K$, then $A(K) \cap f^{-1}(1)$ has non-empty interior in $A(K)$. Thus the set $\mathcal{K}$ is $G$-invariant, and so we have homomorphism $\Phi\colon G \to \Sym(\mathcal{K})$. As $\mathcal{K}$ is finite, it is then enough to show that $\Phi$ is injective. We first claim that the set $\{ \mathbf{y}_j \mid j \in \mathcal{I} \}$ spans $\mathbb{R}^n$. Suppose for contradiction that the set $\{ \mathbf{y}_j \mid j \in \mathcal{I} \}$ spans a proper subspace of $\mathbb{R}^n$. Then there exists $\mathbf{u} \in \mathbb{R}^n \setminus \{\mathbf{0}\}$ such that $\langle \mathbf{u},\mathbf{y}_j \rangle = 0$ for all $j \in \mathcal{I}$. But, as shown above, we have $\mathbb{R}^n = \bigcup_{j \in \mathcal{I}} C_j$, and so in that case $\mathbf{u} \in C_k$ for some $k \in \mathcal{I}$. This is impossible, as we have $\langle \mathbf{z},\mathbf{y}_k \rangle > 0$ for all $\mathbf{z} \in C_k \setminus \{\mathbf{0}\}$. Thus indeed $\{ \mathbf{y}_j \mid j \in \mathcal{I} \}$ spans $\mathbb{R}^n$, as claimed. Now let $A \in \ker(\Phi)$, so that $A(K_j) = K_j$ for all $j \in \mathcal{I}$. It then follows from Lemma~\ref{lem:fixhplanes} that $A = I_n$. Therefore, $\Phi\colon G \to \Sym(\mathcal{K})$ is injective and so $G$ is finite, as required. 
\end{proof} \begin{ex} \label{ex:section} Let $f$ be the polyhedral function depicted in Figure~\ref{fig:poly}, and $\theta\colon \mathbb{R} \to \mathbb{R}^2$ be an isometric embedding whose image is the diagonal $\{ (v,v) \mid v \in \mathbb{R} \} \subset \mathbb{R}^2$. We may then check that $f\circ \theta(v) = 2\sqrt{2} \left| v \right|$ for all $v \in \mathbb{R}$, and so $f \circ \theta$ is polyhedral, as per Lemma~\ref{lem:polyhedralsubsp}: in the notation of Definition~\ref{defn:polyhedral} we could take \begin{align*} C_1 &:= C(\{1\}) = [0,\infty), & \mathbf{y}_1 &:= 2\sqrt{2}, \\ C_2 &:= C(\{-1\}) = (-\infty,0], & \mathbf{y}_2 &:= -2\sqrt{2} \end{align*} to define $f \circ \theta$. A straightforward calculation shows that for any non-identity matrix $A \in GL_2(\mathbb{R})$ there exists $\mathbf{v} \in \mathbb{R}^2$ such that $f(A\mathbf{v}) \neq f(\mathbf{v})$, and so if $G \leq GL_2(\mathbb{R})$ is such that $f(\mathbf{v}) = f(A\mathbf{v})$ for all $\mathbf{v} \in \mathbb{R}^2$ and $A \in G$, then $G$ must be trivial. On the other hand, $f \circ \theta(-v) = f \circ \theta(v)$ for all $v \in \mathbb{R}$. Nevertheless, if $G \leq GL_1(\mathbb{R})$ is a subgroup such that $f \circ \theta(v) = f \circ \theta(Av)$ for all $v \in \mathbb{R}$ and $A \in G$, then we must have $G \leq \left\{ \begin{pmatrix}1\end{pmatrix},\begin{pmatrix}-1\end{pmatrix} \right\}$, and so $G$ is still finite, as per Proposition~\ref{prop:finite}. \end{ex} \section{A polyhedral function associated to a biautomatic structure on \texorpdfstring{$\mathbb{Z}^n$}{Z{\textasciicircum}n}} \label{sec:biauto-poly} In this section, we associate to any biautomatic structure $(X,\mathcal{L})$ on $\mathbb{Z}^N$ a polyhedral function. Our aim is to do this in such a way that given an element $\mathbf{v} \in \mathbb{Z}^N$ represented by a word $U \in \mathcal{L}$, the length $|U|$ of $U$ can be roughly approximated by $f(\mathbf{v})$: see Proposition~\ref{prop:biauto-poly}. We first need the following auxiliary result. \begin{lem} \label{lem:indep} Let $(X,\mathcal{L})$ be a finite-to-one biautomatic structure on $\mathbb{Z}^N$, and suppose that there exist $U_0,V_1,U_1,\ldots,U_\alpha,V_\alpha \in X^*$ such that $U_0 \cdot V_1^* \cdot U_1 \cdot {} \cdots {} \cdot V_\alpha^* \cdot U_\alpha \subseteq \mathcal{L}$. Then the set $\{ \mathbf{z}_1,\ldots,\mathbf{z}_\alpha \} \subseteq \mathbb{Q}^N$, where $\mathbf{z}_j = \overline{V_j} / |V_j|$, is linearly independent. [For the avoidance of doubt, we do allow having $\mathbf{z}_j = \mathbf{z}_{j'}$ for $j \neq j'$---that is, we claim that the $\mathbf{z}_j$ become linearly independent after deleting repetitions.] \end{lem} \begin{proof} Suppose for contradiction that the set $\{ \mathbf{z}_1,\ldots,\mathbf{z}_\alpha \}$ is not independent. Then there exist $\mu_1,\ldots,\mu_\alpha \in \mathbb{R}$, not all zero, such that if $\mathbf{z}_j = \mathbf{z}_{j'}$ for some $1 \leq j < j' \leq \alpha$ then $\mu_{j'} = 0$, and such that \[ \sum_{j=1}^\alpha \mu_j \mathbf{z}_j = 0. \] Since $\mathbf{z}_j \in \mathbb{Q}^N$ we can also choose $\mu_j \in \mathbb{Q}$ for all $j$. Without loss of generality, assume also that $\mu_j > 0$ for some $j$, and (by rescaling the $\mu_j$ if necessary) that $\frac{\mu_j}{|V_j|} \in \mathbb{Z}$ for all $j$. We now consider two cases---depending on whether or not $\mu_j < 0$ for some $j$---obtaining a contradiction in each. 
Suppose first that $\mu_j \geq 0$ for all $j$. It then follows that for each $\beta \in \mathbb{Z}_{\geq 0}$, the word \[ U_0 V_1^{\beta\mu_1/|V_1|} U_1 \cdots V_\alpha^{\beta\mu_\alpha/|V_\alpha|} U_\alpha \in \mathcal{L} \] represents the element $\sum_{j=0}^\alpha \overline{U_j} \in \mathbb{Z}^N$. As $\mu_j > 0$ for some $j$, this gives infinitely many words in $\mathcal{L}$ representing a single element of $\mathbb{Z}^N$, contradicting the fact that $(X,\mathcal{L})$ is finite-to-one. Suppose now that, on the contrary, $\mu_j < 0$ for some $j$. Let $\mathcal{I}_+ = \{ j \mid \mu_j > 0 \}$ and $\mathcal{I}_- = \{ j \mid \mu_j < 0 \}$; by the assumptions, both $\mathcal{I}_+$ and $\mathcal{I}_-$ are non-empty. Then for each $\beta \in \mathbb{Z}_{\geq 0}$, the words \begin{align*} W_\beta^+ &:= U_0 V_1^{\beta\mu_1^+/|V_1|} U_1 \cdots V_\alpha^{\beta\mu_\alpha^+/|V_\alpha|} U_\alpha \in \mathcal{L} \shortintertext{and} W_\beta^- &:= U_0 V_1^{\beta\mu_1^-/|V_1|} U_1 \cdots V_\alpha^{\beta\mu_\alpha^-/|V_\alpha|} U_\alpha \in \mathcal{L}, \end{align*} where, for $\varepsilon\in\{\pm\}$, $\mu_j^\varepsilon = |\mu_j|$ if $j \in \mathcal{I}_\varepsilon$ and $\mu_j^\varepsilon = 0$ otherwise, represent the same element of $\mathbb{Z}^N$. This means that $W_\beta^+$ and $W_\beta^-$ satisfy the fellow traveller property (see Definition~\ref{defn:biauto}) for some constant $\lambda \geq 0$ independent of $\beta$. Now let $j_+ = \min \mathcal{I}_+$ and $j_- = \min \mathcal{I}_-$. Then the prefixes of $W_\beta^+$ and $W_\beta^-$ of length $t = t(\beta) = \beta \mu + \sum_{j=0}^\alpha |U_j|$, where $\mu = \min \{\mu_{j_+},-\mu_{j_-}\}$, are \begin{align*} W_\beta^+(t) &:= U_0 \cdots U_{j_+-1} V_{j_+}^{\left\lfloor \beta\mu/|V_{j_+}| \right\rfloor} Y_\beta^+ \shortintertext{and} W_\beta^-(t) &:= U_0 \cdots U_{j_--1} V_{j_-}^{\left\lfloor \beta\mu/|V_{j_-}| \right\rfloor} Y_\beta^- \end{align*} respectively, where $Y_\beta^+$ and $Y_\beta^-$ are some words of length $\leq |V_{j_+}| + |V_{j_-}| + \sum_{j=0}^\alpha |U_j|$. But this means that $\overline{W_\beta^+(t)} - \overline{W_\beta^-(t)}$ is a bounded distance away from the point $\frac{\beta\mu}{|V_{j_+}|}\overline{V_{j_+}} - \frac{\beta\mu}{|V_{j_-}|}\overline{V_{j_-}} = \beta\mu(\mathbf{z}_{j_+}-\mathbf{z}_{j_-}) \in \mathbb{Q}^N$ with respect to any fixed norm on $\mathbb{Q}^N$. As by assumptions $\mu > 0$ and $\mathbf{z}_{j_+} \neq \mathbf{z}_{j_-}$, it follows that $\overline{W_\beta^+(t)} - \overline{W_\beta^-(t)}$ is arbitrarily far from the origin for large $\beta$, contradicting the fellow traveller property. \end{proof} \begin{prop} \label{prop:biauto-poly} Let $(X,\mathcal{L})$ be a finite-to-one biautomatic structure on $\mathbb{Z}^N$. Then there exist a polyhedral function $f\colon \mathbb{R}^N \to \mathbb{R}$ and a constant $\xi \geq 0$ such that \[ f(\overline{U}) - \xi \leq \left| U \right| \leq f(\overline{U}) + \xi \] for all $U \in \mathcal{L}$. \end{prop} \begin{proof} As $(X,\mathcal{L})$ is finite-to-one and as $\mathbb{Z}^N$ has polynomial growth, it follows that $\mathcal{L}$ has polynomial growth as well---that is, there exists a polynomial $g(x)$ such that for each $n \geq 0$ there are at most $g(n)$ words $U \in \mathcal{L}$ with $|U| \leq n$. 
It follows from \cite[Proposition~1.3.8]{epstein} that $\mathcal{L}$ is \emph{simply starred}: that is, there exist integers $\alpha_1,\ldots,\alpha_m \in \mathbb{Z}_{\geq 0}$ and words $U_{j,0},V_{j,1},U_{j,1},\ldots,V_{j,\alpha_j},U_{j,\alpha_j} \in X^*$ (for each $j \in \{1,\ldots,m\}$) such that \begin{equation} \label{eq:langunion} \mathcal{L} = \bigcup_{j=1}^m U_{j,0} \cdot (V_{j,1})^* \cdot U_{j,1} \cdot {} \cdots {} \cdot (V_{j,\alpha_j})^* \cdot U_{j,\alpha_j}. \end{equation} By Lemma~\ref{lem:indep}, for each $j$ the set $\{ \mathbf{z}_{j,1},\ldots,\mathbf{z}_{j,\alpha_j} \} \subseteq \mathbb{R}^N$, where $\mathbf{z}_{j,k} = \frac{\overline{V_{j,k}}}{|V_{j,k}|}$, is linearly independent. Thus, if $H_{j,k} \subseteq \mathbb{R}^N$ is the affine hyperplane $\{ \mathbf{v} \in \mathbb{R}^N \mid \langle \mathbf{z}_{j,k},\mathbf{v} \rangle = 1 \}$, then for $1 \leq j \leq m$ the intersection $\bigcap_{k=1}^{\alpha_j} H_{j,k}$ is non-empty, and so contains a point $\mathbf{y}_j \in \bigcap_{k=1}^{\alpha_j} H_{j,k}$. For each $j \in \{ 1,\ldots,m \}$, let $C_j \subseteq \mathbb{R}^N$ be the polyhedral cone over $\{ \mathbf{z}_{j,1},\ldots,\mathbf{z}_{j,\alpha_j} \}$; notice that we have $\langle \mathbf{z}_{j,k}, \mathbf{y}_j \rangle = 1 > 0$ for all $k$ by construction, and so we may define a function $f_{C_j,\mathbf{y}_j}\colon \mathbb{R}^N \to \mathbb{R}$ as in Definition~\ref{defn:polyhedral}. We then define $f\colon \mathbb{R}^N \to \mathbb{R}$ by setting \[ f(\mathbf{v}) := \max \{ f_{C_j,\mathbf{y}_j}(\mathbf{v}) \mid 1 \leq j \leq m \}. \] We claim the following. \begin{lem} \label{lem:polyhedral} $f$ is a polyhedral function; in particular, in the above notation, conditions \ref{it:polyhedral-cover}, \ref{it:polyhedral-compat} and \ref{it:polyhedral-defn} in Definition~\ref{defn:polyhedral} are satisfied. \end{lem} We postpone the proof of Lemma~\ref{lem:polyhedral} until later, and finish the proof of Proposition~\ref{prop:biauto-poly} first. We now define a few constants, as follows. We set $\delta := \max \left\{ \sum_{k=0}^{\alpha_j} |U_{j,k}| \,\middle|\, 1 \leq j \leq m \right\}$, $\zeta := \max \{ \|\mathbf{y}_j\| \mid 1 \leq j \leq m \}$, and $\eta := \max \{ \| \overline{x} \| \mid x \in X \}$. We then set \[ \xi := \zeta\eta\delta + \delta. \] Notice that if $\mathbf{v},\mathbf{v}' \in C_j$ for some $j$ then condition~\ref{it:polyhedral-compat} in Definition~\ref{defn:polyhedral} implies that $f(\mathbf{v}) = \langle \mathbf{v},\mathbf{y}_j \rangle$ and $f(\mathbf{v}') = \langle \mathbf{v}',\mathbf{y}_j \rangle$, and therefore \[ |f(\mathbf{v})-f(\mathbf{v}')| = |\langle \mathbf{v}-\mathbf{v}',\mathbf{y}_j \rangle| \leq \left\| \mathbf{v}-\mathbf{v}' \right\| \left\| \mathbf{y}_j \right\| \leq \zeta \left\| \mathbf{v}-\mathbf{v}' \right\|; \] this implies, by considering values of $f$ at some intermediate points on the geodesic connecting $\mathbf{v}$ and $\mathbf{v}'$, that in fact $|f(\mathbf{v})-f(\mathbf{v}')| \leq \zeta \left\| \mathbf{v}-\mathbf{v}' \right\|$ for \emph{any} $\mathbf{v},\mathbf{v}' \in \mathbb{R}^N$. In particular, it follows that if $\mathbf{v},\mathbf{v}' \in \mathbb{Z}^N$ then $|f(\mathbf{v})-f(\mathbf{v}')| \leq \zeta\eta|\mathbf{v}-\mathbf{v}'|_X$. Now let $U \in \mathcal{L}$, so that by \eqref{eq:langunion} we have $U = U_{j,0} V_{j,1}^{\beta_1} U_{j,1} \cdots V_{j,\alpha_j}^{\beta_{\alpha_j}} U_{j,\alpha_j}$ for some $j$ and some $\beta_1,\ldots,\beta_{\alpha_j} \in \mathbb{Z}_{\geq 0}$.
We set $\mathbf{v} := \sum_{k=1}^{\alpha_j} \beta_k \overline{V_{j,k}}$, so that we have $\overline{U}-\mathbf{v} = \sum_{k=0}^{\alpha_j} \overline{U_{j,k}}$, implying that \[ \left| f(\overline{U}) - f(\mathbf{v}) \right| \leq \zeta\eta \left| \sum_{k=0}^{\alpha_j} \overline{U_{j,k}} \right|_X \leq \zeta\eta\sum_{k=0}^{\alpha_j} |U_{j,k}| \leq \zeta\eta\delta. \] On the other hand, we have $\mathbf{v} = \sum_{k=1}^{\alpha_j} \beta_k|V_{j,k}| \mathbf{z}_{j,k}$, and we can compute that \[ f_{C_j,\mathbf{y}_j}(\mathbf{v}) = \langle \mathbf{v},\mathbf{y}_j \rangle = \sum_{k=1}^{\alpha_j} \beta_k|V_{j,k}| \langle \mathbf{z}_{j,k},\mathbf{y}_j \rangle = \sum_{k=1}^{\alpha_j} \beta_k|V_{j,k}|. \] Moreover, it follows from the condition~\ref{it:polyhedral-compat} in Definition~\ref{defn:polyhedral} that we have $f(\mathbf{v}) = f_{C_j,\mathbf{y}_j}(\mathbf{v})$, and therefore \[ |U| = \sum_{k=0}^{\alpha_j} |U_{j,k}| + \sum_{k=1}^{\alpha_j} \beta_k |V_{j,k}| = \sum_{k=0}^{\alpha_j} |U_{j,k}| + f(\mathbf{v}). \] We thus have \[ \left| |U| - f(\overline{U}) \right| \leq \big| |U| - f(\mathbf{v}) \big| + \left| f(\mathbf{v}) - f(\overline{U}) \right| \leq \left| \sum_{k=0}^{\alpha_j} |U_{j,k}| \right| + \zeta\eta\delta \leq \delta + \zeta\eta\delta = \xi, \] as required. \end{proof} We now prove Lemma~\ref{lem:polyhedral} that was stated in the proof of Proposition~\ref{prop:biauto-poly}. \begin{proof}[Proof of Lemma~\ref{lem:polyhedral}] The condition~\ref{it:polyhedral-defn} follows from the construction---so we only need to check \ref{it:polyhedral-cover} and \ref{it:polyhedral-compat}. In what follows, we set $\delta := \max \left\{ \sum_{k=0}^{\alpha_j} |U_{j,k}| \,\middle|\, 1 \leq j \leq m \right\}$. As $(X,\mathcal{L})$ is a finite-to-one biautomatic structure, by Lemma~\ref{lem:simlengths} there exists $\kappa \geq 0$ such that if $U,V \in \mathcal{L}$ are such that $\left| \overline{U}-\overline{V} \right|_X \leq 1$ then $|U|-\kappa \leq |V| \leq |U|+\kappa$. In order to show \ref{it:polyhedral-cover}, suppose for contradiction that $D := \mathbb{R}^N \setminus \bigcup_{j=1}^m C_j$ is non-empty. Thus, as $D \subseteq \mathbb{R}^N$ is open and $\mathbb{Q}^N \subseteq \mathbb{R}^N$ is dense, we may pick $\mathbf{v} \in D \cap \mathbb{Q}^N$. Since $D$ is invariant under multiplication by $\mu$ for any $\mu > 0$, we may furthermore assume that $\mathbf{v} \in \mathbb{Z}^N$, and that if $\mathbf{w} \in \mathbb{Z}^N$ but $\mathbf{w} \notin D$ then $|\mathbf{v}-\mathbf{w}|_X > \delta$. Now let $U \in \mathcal{L}$ be a word with $\overline{U} = \mathbf{v}$. By \eqref{eq:langunion}, we have $U = U_{j,0} V_{j,1}^{\beta_1} U_{j,1} \cdots V_{j,\alpha_j}^{\beta_{\alpha_j}} U_{j,\alpha_j}$ for some $j$ and some $\beta_1,\ldots,\beta_{\alpha_j} \in \mathbb{Z}_{\geq 0}$. However, we then have $\mathbf{w} := \sum_{k=1}^{\alpha_j} \beta_k \overline{V_{j,k}} = \sum_{k=1}^{\alpha_j} \beta_k |V_{j,k}| \mathbf{z}_{j,k} \in C_j$, but $|\mathbf{v}-\mathbf{w}|_X \leq \sum_{k=0}^{\alpha_j} |U_{j,k}| \leq \delta$, contradicting the choice of $\mathbf{v}$. Thus $\{ C_j \mid 1 \leq j \leq m \}$ must cover $\mathbb{R}^N$, which shows \ref{it:polyhedral-cover}. In order to show \ref{it:polyhedral-compat}, let $j,k \in \{ 1,\ldots,m \}$. Since $\mathbf{z}_{j,\ell} \in \mathbb{Q}^N$ and $\mathbf{z}_{k,\ell} \in \mathbb{Q}^N$ for all $\ell$, we may express $C_j \cap C_k$ as the set of solutions of a system of linear inequalities with rational coefficients (see Theorem~\ref{thm:coneequiv}). 
In particular, it follows that any non-empty open subset of $C_j \cap C_k$ contains a point in $\mathbb{Q}^N$, and so $C_j \cap C_k$ is the closure of $C_j \cap C_k \cap \mathbb{Q}^N$ in $\mathbb{R}^N$. As the functions $\langle {-}, \mathbf{y}_j \rangle$ and $\langle {-}, \mathbf{y}_k \rangle$ are continuous, it is thus enough to verify \ref{it:polyhedral-compat} when $\mathbf{v} \in \mathbb{Q}^N$. Thus, let $\mathbf{v} \in C_j \cap C_k \cap \mathbb{Q}^N$. Since $C_j$ and $C_k$ are invariant under multiplication by any $\mu > 0$, and since the functions $\langle {-}, \mathbf{y}_j \rangle$ and $\langle {-}, \mathbf{y}_k \rangle$ are linear, we may furthermore assume (after multiplying $\mathbf{v}$ by a positive integer if necessary) that $\mathbf{v} = \sum_{\ell=1}^{\alpha_j} \mu_{j,\ell} \mathbf{z}_{j,\ell} = \sum_{\ell=1}^{\alpha_k} \mu_{k,\ell} \mathbf{z}_{k,\ell}$ with $\mu_{j,\ell}/|V_{j,\ell}| \in \mathbb{Z}_{\geq 0}$ and $\mu_{k,\ell}/|V_{k,\ell}| \in \mathbb{Z}_{\geq 0}$ for all $\ell$. For any $\beta \in \mathbb{Z}_{\geq 0}$, we define the words \begin{align*} W_{\beta,j} &= U_{j,0} V_{j,1}^{\beta\mu_{j,1}/|V_{j,1}|} U_{j,1} \cdots V_{j,\alpha_j}^{\beta\mu_{j,\alpha_j}/|V_{j,\alpha_j}|} U_{j,\alpha_j} \in \mathcal{L} \shortintertext{and} W_{\beta,k} &= U_{k,0} V_{k,1}^{\beta\mu_{k,1}/|V_{k,1}|} U_{k,1} \cdots V_{k,\alpha_k}^{\beta\mu_{k,\alpha_k}/|V_{k,\alpha_k}|} U_{k,\alpha_k} \in \mathcal{L}. \end{align*} We then have \[ \overline{W_{\beta,j}} = \sum_{\ell=0}^{\alpha_j} \overline{U_{j,\ell}} + \sum_{\ell=1}^{\alpha_j} \frac{\beta\mu_{j,\ell}}{|V_{j,\ell}|} \overline{V_{j,\ell}} = \sum_{\ell=0}^{\alpha_j} \overline{U_{j,\ell}} + \beta \sum_{\ell=1}^{\alpha_j} \mu_{j,\ell} \mathbf{z}_{j,\ell} = \sum_{\ell=0}^{\alpha_j} \overline{U_{j,\ell}} + \beta \mathbf{v}, \] and similarly $\overline{W_{\beta,k}} = \sum_{\ell=0}^{\alpha_k} \overline{U_{k,\ell}} + \beta \mathbf{v}$. It follows that \[ \left| \overline{W_{\beta,j}} - \overline{W_{\beta,k}} \right|_X \leq \sum_{\ell=0}^{\alpha_j} |U_{j,\ell}| + \sum_{\ell=0}^{\alpha_k} |U_{k,\ell}| \leq 2\delta, \] and so $\Big| |W_{\beta,j}| - |W_{\beta,k}| \Big| \leq 2\delta\kappa$. On the other hand, we may compute that $|W_{\beta,j}| = \sum_{\ell=0}^{\alpha_j} |U_{j,\ell}| + \beta \sum_{\ell=1}^{\alpha_j} \mu_{j,\ell}$ and $|W_{\beta,k}| = \sum_{\ell=0}^{\alpha_k} |U_{k,\ell}| + \beta \sum_{\ell=1}^{\alpha_k} \mu_{k,\ell}$, implying that \[ \beta \left| \sum_{\ell=1}^{\alpha_k} \mu_{k,\ell} - \sum_{\ell=1}^{\alpha_j} \mu_{j,\ell} \right| \leq \sum_{\ell=0}^{\alpha_j} |U_{j,\ell}| + \sum_{\ell=0}^{\alpha_k} |U_{k,\ell}| + \Big| |W_{\beta,j}| - |W_{\beta,k}| \Big| \leq 2\delta+2\delta\kappa; \] as $\beta$ can be chosen to be arbitrarily large, it follows that $\sum_{\ell=1}^{\alpha_j} \mu_{j,\ell} = \sum_{\ell=1}^{\alpha_k} \mu_{k,\ell}$. Finally, we get \[ \langle \mathbf{v},\mathbf{y}_j \rangle = \sum_{\ell=1}^{\alpha_j} \mu_{j,\ell} \langle \mathbf{z}_{j,\ell},\mathbf{y}_j \rangle = \sum_{\ell=1}^{\alpha_j} \mu_{j,\ell} = \sum_{\ell=1}^{\alpha_k} \mu_{k,\ell} = \sum_{\ell=1}^{\alpha_k} \mu_{k,\ell} \langle \mathbf{z}_{k,\ell},\mathbf{y}_k \rangle = \langle \mathbf{v},\mathbf{y}_k \rangle. \] This proves \ref{it:polyhedral-compat}. \end{proof} \begin{rmk} \label{rmk:neumann-shapiro} Our construction is related to the Neumann--Shapiro triangulation of $\mathbb{S}^{N-1}$ associated to a biautomatic structure on $\mathbb{Z}^N$ \cite{neumann-shapiro}. 
Namely, let $C_1,\ldots,C_m \subseteq \mathbb{R}^N$ be the polyhedral cones constructed in the proof of Proposition~\ref{prop:biauto-poly}. Some subset of these cones---which we get after discarding cones contained in either a proper subspace of $\mathbb{R}^N$ or another cone---is precisely the set of $(N-1)$-simplices in the relevant Neumann--Shapiro triangulation of $\mathbb{S}^{N-1}$. We may furthermore order vertices in each of these polyhedral cones, with the ordering induced by the order $\mathbf{z}_{j,1} \prec \cdots \prec \mathbf{z}_{j,\alpha_j}$ on $C_j$, thus recovering the complete structure of the triangulations exhibited in \cite{neumann-shapiro}. However, we amend this triangulation by constructing a polyhedral function, associating to $(X,\mathcal{L})$ a geometric rather than combinatorial structure. This allows easier treatment of arbitrary subgroups of $\mathbb{Z}^N$. In particular, even though a triangulation of $\mathbb{S}^{N-1}$ does not necessarily induce a triangulation on an arbitrary equatorial subsphere $\mathbb{S}^{n-1} \subset \mathbb{S}^{N-1}$ for $n < N$, composing a polyhedral function with an arbitrary isometric linear inclusion $\mathbb{R}^n \hookrightarrow \mathbb{R}^N$ still yields a polyhedral function by Lemma~\ref{lem:polyhedralsubsp}. \end{rmk} \begin{ex} \label{ex:biauto} Let $X = \{ \varepsilon,x,y,x^{-1},y^{-1} \}$ be the generating set of $\mathbb{Z}^2$ such that $x$, $y$ and $\varepsilon$ map to $(1,0)$, $(0,1)$ and $(0,0)$, respectively. Define the language $\mathcal{L}$ as follows: \begin{align*} \mathcal{L} := \left(\varepsilon xy^2\right)^* \left(\varepsilon y\right)^* &\cup \left(\varepsilon xy^2\right)^* \left(\varepsilon^3 x\right)^* \cup \left(\varepsilon xy^2\right)^* y^{-1} \left(\varepsilon^3 x\right)^* \\ &{} \cup \left(\varepsilon^3 x\right)^* \left(\varepsilon y^{-1}\right)^* \cup \left(\varepsilon x^{-1}\right)^* \left(\varepsilon y^{-1}\right)^* \cup \left(\varepsilon y\right)^* \left(\varepsilon x^{-1}\right)^*. \end{align*} One may then check that $(X,\mathcal{L})$ is indeed a finite-to-one biautomatic structure for $\mathbb{Z}^2$. Moreover, in this case the polyhedral function constructed in the proof of Proposition~\ref{prop:biauto-poly} is precisely the function $f$ depicted in Figure~\ref{fig:poly}, and one may check that the values of $\mathbf{y}_j$ and $\mathbf{z}_{j,k}$ indicated in Example~\ref{ex:intro} are consistent with the notation used in the proof of Proposition~\ref{prop:biauto-poly} for the language $\mathcal{L}$. The thin black lines in Figure~\ref{fig:poly} represent the paths starting at $(0,0)$ and labelled by words in $\mathcal{L}$. Proposition~\ref{prop:biauto-poly} then implies that given any $n \in \mathbb{Z}_{\geq 0}$, if $S_n \subset \mathbb{Z}^2$ is the set of elements represented by words in $\mathcal{L}$ of length $n$, then $S_n$ is bounded distance away from an appropriate rescaling of the dotted line in Figure~\ref{fig:poly}, for some bound that is independent of~$n$. \end{ex} \section{The proof of Theorem~\ref{thm:main}} \label{sec:proof} Finally, we use Lemma~\ref{lem:polyhedralsubsp} and Propositions~\ref{prop:finite}~\&~\ref{prop:biauto-poly} to prove Theorem~\ref{thm:main}. The idea of our proof is to embed $H$ (or a finite-index torsion-free subgroup of $H$) into an $\mathcal{M}$-quasiconvex free abelian subgroup $\widehat{H} \leq G$, where $(Y,\mathcal{M})$ is a biautomatic structure of $G$. 
We then associate a polyhedral function to $\widehat{H}$, as per Proposition~\ref{prop:biauto-poly}, and Lemma~\ref{lem:polyhedralsubsp} allows us to restrict this function to a polyhedral function associated to $H$. The following result then implies that the latter polyhedral function is (in a sense) $K$-invariant, where $K$ is the image of $\Comm_G(H)$ in $\Comm(H)$, and consequently $K$ is finite by Proposition~\ref{prop:finite}. \begin{prop} \label{prop:polyhedralaction} Let $G$ be a group with a finite-to-one biautomatic structure $(Y,\mathcal{M})$, let $\widehat{H} \leq G$ be a free abelian $\mathcal{M}$-quasiconvex subgroup of rank $N \geq 0$, and let $\varphi\colon \widehat{H} \to \mathbb{R}^N$ be an embedding with $\varphi(\widehat{H}) = \mathbb{Z}^N$. Then there exists a polyhedral function $f\colon \mathbb{R}^N \to \mathbb{R}$ such that $f \circ \varphi(h) = f \circ \varphi(ghg^{-1})$ for all $g \in G$ and all $h \in \widehat{H} \cap g^{-1}\widehat{H}g$. \end{prop} \begin{proof} As $(Y,\mathcal{M})$ is a finite-to-one biautomatic structure on $G$, by Lemma~\ref{lem:simlengths} there exists a constant $\kappa \geq 0$ such that if $U,V \in \mathcal{M}$ are such that $\overline{U} = \overline{y^{-1}Vy}$ for some $y \in Y$, then $\big| |U|-|V| \big| \leq \kappa$. Since $\widehat{H}$ is $\mathcal{M}$-quasiconvex, by Theorem~\ref{thm:qconv-biauto} there exists a finite-to-one biautomatic structure $(X,\mathcal{L})$ on $\widehat{H}$ such that for any $V \in \mathcal{M}$ with $\overline{V} \in \widehat{H}$, there exists $U \in \mathcal{L}$ with $\overline{U} = \overline{V}$ and $|U| = |V|$. By identifying the subgroup $\widehat{H}$ with $\mathbb{Z}^N$ via $\varphi$, let $f\colon \mathbb{R}^N \to \mathbb{R}$ and $\xi \geq 0$ be the polyhedral function and the constant given by Proposition~\ref{prop:biauto-poly}. Now let $g \in G$ and let $h \in \widehat{H} \cap g^{-1}\widehat{H}g$, so that $h^\beta,gh^\beta g^{-1} \in \widehat{H}$ for all $\beta \in \mathbb{Z}_{\geq 0}$; without loss of generality, assume that $g \neq 1$. For each $\beta \in \mathbb{Z}_{\geq 0}$, let $U_\beta,V_\beta \in \mathcal{L}$ and $U_\beta',V_\beta' \in \mathcal{M}$ be such that $\overline{U_\beta} = \overline{U_\beta'} = h^\beta$, $\overline{V_\beta} = \overline{V_\beta'} = gh^\beta g^{-1}$, $|U_\beta| = |U_\beta'|$ and $|V_\beta| = |V_\beta'|$. It then follows from Proposition~\ref{prop:biauto-poly} that $\big| f \circ \varphi(h^\beta) - |U_\beta| \big| \leq \xi$ and $\big| f \circ \varphi(gh^\beta g^{-1}) - |V_\beta| \big| \leq \xi$. Moreover, by the choice of the constant $\kappa \geq 0$ above, we have $\big| |U_\beta|-|V_\beta| \big| = \big| |U_\beta'|-|V_\beta'| \big| \leq \kappa|g|_Y$. Finally, it follows from Definition~\ref{defn:polyhedral} that $f(\beta \mathbf{x}) = \beta f(\mathbf{x})$ for all $\beta \in \mathbb{Z}_{\geq 0}$ and $\mathbf{x} \in \mathbb{R}^N$, and therefore \begin{align*} &\beta \left| f \circ \varphi(h) - f \circ \varphi(ghg^{-1}) \right| = \left| f \circ \varphi(h^\beta) - f \circ \varphi(gh^\beta g^{-1}) \right| \\ &\qquad\qquad{} \leq \left| f \circ \varphi(h^\beta) - |U_\beta| \right| + \big| |U_\beta|-|V_\beta| \big| + \left| |V_\beta| - f \circ \varphi(gh^\beta g^{-1}) \right| \\ &\qquad\qquad{} \leq 2\xi + \kappa|g|_Y. \end{align*} As $\beta \in \mathbb{Z}_{\geq 0}$ was arbitrary, it follows that $f \circ \varphi(h) = f \circ \varphi(ghg^{-1})$, as required.
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main}] Since $H$ is finitely generated abelian, it has a torsion-free subgroup $H'$ of finite index. By Lemma~\ref{lem:commfi}, it is enough to show that the image of $\Comm_G(H')$ in $\Comm(H')$ is finite. Therefore, we will assume (without loss of generality) that $H$ is torsion-free. Now let $(Y,\mathcal{M})$ be a biautomatic structure for $G$. Since $H$ is finitely generated, its centraliser $C_G(H)$ is $\mathcal{M}$-quasiconvex by Proposition~\ref{prop:centr-qconv}; let $(Y',\mathcal{M}')$ be the associated biautomatic structure for $C_G(H)$. In particular, $C_G(H)$ is finitely generated and so its centre, $Z(C_G(H))$, is $\mathcal{M}'$-quasiconvex in $C_G(H)$ (again by Proposition~\ref{prop:centr-qconv}), and so $\mathcal{M}$-quasiconvex in $G$ (by Lemma~\ref{lem:qconv-trans}). Let $(Y'',\mathcal{M}'')$ be the associated biautomatic structure for $Z(C_G(H))$. Now $Z(C_G(H))$ is a finitely generated abelian group containing $H$, and so $Z(C_G(H))/H$ is a finitely generated abelian group. Thus $Z(C_G(H))/H$ has a finite-index torsion-free subgroup, say $\widehat{H}/H$. Its preimage $\widehat{H}$ is a finite-index abelian subgroup of $Z(C_G(H))$ containing $H$, and as $H$ is torsion-free so is $\widehat{H}$. Moreover, as $\widehat{H}$ has finite index in $Z(C_G(H))$, it is $\mathcal{M}''$-quasiconvex, and so by Lemma~\ref{lem:qconv-trans} it is $\mathcal{M}$-quasiconvex in $G$. Thus, $\widehat{H} \leq G$ is an $\mathcal{M}$-quasiconvex free abelian subgroup of finite rank ($N$, say) containing $H$. Now let $\widehat{\varphi}\colon \widehat{H} \to \mathbb{R}^N$ be an embedding such that $\widehat{\varphi}(\widehat{H}) = \mathbb{Z}^N$. By identifying the subspace of $\mathbb{R}^N$ spanned by $\widehat{\varphi}(H)$ with $\mathbb{R}^n$ via a linear isometry, we see that $\widehat{\varphi}|_H = \theta \circ \varphi$, where $\theta\colon \mathbb{R}^n \to \mathbb{R}^N$ is a linear isometric embedding, and $\varphi\colon H \to \mathbb{R}^n$ is an embedding as a lattice. Given an element $g \in \Comm_G(H)$, we may define a matrix $A_g \in GL_n(\mathbb{R})$ such that $\varphi(ghg^{-1}) = A_g\varphi(h)$ for all $h \in H \cap g^{-1}Hg$; such a matrix is unique since $H \cap g^{-1}Hg$ has finite index in $H$ and so $\varphi(H \cap g^{-1}Hg)$ is a lattice in $\mathbb{R}^n$. This defines a map \begin{align*} \Theta\colon \Comm_G(H) &\to GL_n(\mathbb{R}), \\ g &\mapsto A_g, \end{align*} which is easily seen to be a homomorphism---in fact, the map $\Theta$ is just a composite $\Comm_G(H) \to \Comm(H) \cong GL(V) \hookrightarrow GL_n(\mathbb{R})$, where $V = \varphi(H) \otimes \mathbb{Q} < \mathbb{R}^n$ is an $n$-dimensional $\mathbb{Q}$-vector subspace. Now by Proposition~\ref{prop:polyhedralaction}, there exists a polyhedral function $\widehat{f}\colon \mathbb{R}^N \to \mathbb{R}$ such that $\widehat{f} \circ \widehat{\varphi}(h) = \widehat{f} \circ \widehat{\varphi}(ghg^{-1})$ for all $g \in G$ and all $h \in \widehat{H} \cap g^{-1}\widehat{H}g$. By Lemma~\ref{lem:polyhedralsubsp}, the function $f = \widehat{f} \circ \theta\colon \mathbb{R}^n \to \mathbb{R}$ is also polyhedral. Now fix $g \in \Comm_G(H)$. 
Then, for any $h \in H \cap g^{-1}Hg$ we have \begin{align*} f(A_g \varphi(h)) &= f \circ \varphi(ghg^{-1}) = \widehat{f} \circ \theta \circ \varphi(ghg^{-1}) = \widehat{f} \circ \widehat{\varphi}(ghg^{-1}) \\ &= \widehat{f} \circ \widehat{\varphi}(h) = \widehat{f} \circ \theta \circ \varphi(h) = f(\varphi(h)), \end{align*} and so $f(A_g\mathbf{v}) = f(\mathbf{v})$ for all $\mathbf{v} \in \varphi(H \cap g^{-1}Hg)$. As $f$ is polyhedral, we have $f(\beta \mathbf{v}) = \beta f(\mathbf{v})$ for all $\beta \in [0,\infty)$ and all $\mathbf{v} \in \mathbb{R}^n$, implying that $f(A_g\mathbf{v}) = f(\mathbf{v})$ for all $\mathbf{v} \in K$, where $K := \{ \beta \mathbf{w} \mid \beta \in [0,\infty), \mathbf{w} \in \varphi(H \cap g^{-1}Hg) \}$. As $\varphi(H \cap g^{-1}Hg)$ is a lattice in $\mathbb{R}^n$, the subset $K \subseteq \mathbb{R}^n$ is dense; and as $f$ is polyhedral, it is continuous, implying that $f(A_g\mathbf{v}) = f(\mathbf{v})$ for all $\mathbf{v} \in \mathbb{R}^n$. Thus, we have $f(A\mathbf{v}) = f(\mathbf{v})$ for all $A \in \Theta(\Comm_G(H))$ and all $\mathbf{v} \in \mathbb{R}^n$. It follows from Proposition~\ref{prop:finite} that $\Theta(\Comm_G(H))$ is finite. Since $\Theta$ factors as a composite $\Comm_G(H) \to \Comm(H) \hookrightarrow GL_n(\mathbb{R})$, it follows that $\Comm_G(H)$ has finite image in $\Comm(H)$, as required. The `in particular' part of the Theorem follows directly from the definition of the map $\Comm_G(H) \to \Comm(H)$: indeed, $g \in \Comm_G(H)$ is in the kernel of this map if and only if it centralises a finite-index subgroup of $H$. \end{proof} \bibliographystyle{amsalpha}
\section{Introduction} \label{sect:intro} Owing to the simplicity of their implementation and their powerful error detection capability, Cyclic Redundancy Checks (CRCs) \cite{crc1961} are widely adopted in applications in storage, networking, and communications. CRCs are sometimes used not only to detect errors, but also to assist in error correction, for example in early stopping of Turbo decoding iterations \cite{4656995}, or in joint decoding of CRCs and convolutional codes \cite{4686263}. In list decoding architectures, a CRC is commonly used for selection from a list of candidates. This technique has been applied to Turbo codes \cite{list_turbo} and Polar codes (CA-Polar) \cite{Arikan09,KK-CA-Polar,TV-list}. The latter led to the success of CRC-aided successive cancellation list (CA-SCL, or SCL for short) decoding algorithms. The better performance at shorter code lengths of CA-Polar codes, as compared to other common Shannon-limit-approaching codes such as Turbo codes and LDPC codes \cite{ldpc97}, has led to their inclusion in 5G-NR control communications, where latency is of particular concern \cite{3gpp38212}. The error correction capability enabled by CRCs can bring significant benefits to existing systems, but it has heretofore seen only limited use. For example, the Automatic Repeat Request (ARQ) scheme used in networking can be upgraded to Hybrid ARQ through the use of the CRC's error correction capability. Similar benefits can be found in IoT or personal area networks (PANs). Previous research on the use of CRCs for error correction focused only on fixing one or two erroneous bits \cite{crc_fec_2006, crc2020}. Soft detection techniques such as Belief Propagation (BP) and Linear Programming (LP) \cite{crcLLR} have also been considered for decoding CRCs, but their performance is severely limited when used for short and high-rate codes. No previous approach leads to a broadly applicable decoding method capable of extracting the maximum error correction performance from CRCs. Many emerging applications require low latency communication of small packets of data. Examples include machine communications in IoT systems \cite{ieee_latency, iot2019, wHP_fec}, telemetry, tracking, and command in satellite communications \cite{ttc_2005, fec_ttc_2014}, all control channels in mobile communication systems, and Ultra-Reliable Low Latency Communications (URLLC) as proposed in 5G-NR \cite{urllc,sybis2016channel,shirvanimoghaddam2018short}. Most codes and decoders were designed to function well for large blocks of data, but provide underwhelming performance for short codes. As a result, some conventional codes have received renewed attention \cite{short_fec, bch_m2m, fec_wireless}, including Reed-Solomon codes \cite{ReedSolo}, BCH codes \cite{journals/iandc/BoseR60a} and Random Linear Codes (RLCs). In this paper, we show that CRCs stand out as a viable class of error correcting codes. The error correction capability of CRCs is enabled by the recently introduced Guessing Random Additive Noise Decoding (GRAND) algorithm \cite{Duffy18,kM:grand}, which can decode any short, high-rate block code. GRAND's universality stems from its effort to identify the effect of the noise, from which the code-word is deduced. Originally introduced as a hard detection decoder \cite{Duffy18,kM:grand,grand-mo}, GRAND has since been extended with a series of soft detection variants, SRGRAND \cite{Duffy19a,Ken:5G}, ORBGRAND \cite{Duffy20} and SGRAND \cite{Solomon20}, that make distinct quantization assumptions.
Both hard and soft detection versions of GRAND are ML decoders. The simplicity of GRAND's operation has resulted in the proposal of efficient circuit implementations \cite{abbas2020high}. Using GRAND, and standard decoders where available, we evaluate CRCs as short, high-rate error correction codes in comparison with BCH codes, RLCs and CA-Polar codes. Using the best published CRC generator polynomials \cite{koopmanWeb}, we find that CRCs perform as well as BCH codes at their available settings, but CRCs can operate over a wider range of code lengths and rates. Featuring even more flexible code-book settings, RLCs offer security potential through code-book re-randomisation, but at very high code rates RLC performance degrades in comparison to select CRCs. Using GRAND to compare the performance of Polar, CA-Polar and CRC codes, we find that the CRC is not aiding so much as dominating the decoding performance potential. Furthermore, the celebrated CA-SCL algorithm underperforms for short packets because the CRC bits are only used for error detection. The rest of the paper is organized as follows. Section \ref{sect:grand} introduces GRAND hard and soft detection variants. Section \ref{sect:code} reviews CRCs and the other comparison candidates. Section \ref{sect:sim_comp} provides a simulated performance evaluation of the channel codes involved. Section \ref{sect:conclusion} summarizes the paper's findings. \section{Guessing Random Additive Noise Decoding} \label{sect:grand} \subsection{GRAND-SOS for hard detection} \label{sub:grand} Consider a transmitted binary code-word $X^n\in {\mathcal{C}}$ drawn from an arbitrary rate $R$ code-book ${\mathcal{C}}$, i.e. a set of $2^{nR}=2^K$ strings in $\{0,1\}^n$, where $(n,K)$ are the core parameters of the code-book. Assume that independent post-hard-detection channel noise, $N^n$, which also takes values in $\{0,1\}^n$, additively alters $X^n$ between transmission and reception. The resulting sequence is $Y^n = X^n \oplus N^n$, where $\oplus$ represents addition modulo 2. From $Y^n$, GRAND attempts to determine $X^n$ indirectly by identifying $N^n$: it sequentially takes putative noise sequences, $z^n$, which we sometimes term patterns, subtracts them from the received signal and queries whether what remains, $Y^n\oplus z^n$, is in the code-book $\mathcal{C}$. If transmitted code-words are all equally likely and the $z^n$ are queried in order from most likely to least likely based on the true channel statistics, the first instance where a code-book element is found is an optimally accurate maximum likelihood decoding in the hard detection setting \cite{kM:grand}. For the hard detection binary symmetric channels (BSCs) considered in this paper, the order of testing noise patterns $z^{n,i}$, $i=1,2,\ldots$, follows their Hamming weight from low to high. For patterns with identical Hamming weights, we choose a sweeping order of generation, in which a grouped pattern sweeps from one end of the sequence to the other. A new grouped pattern is obtained by permuting the bits in the group. When the permutations are exhausted, an extra non-flipped bit is included and the permutation process restarts. This pattern generator gives the decoder some advantages with burst errors (specifically treated in \cite{grand-mo}), and the resulting algorithm is named GRAND with Simple Order Sweeping (GRAND-SOS).
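To make the query procedure concrete, the following is a minimal, illustrative Python sketch of hard detection GRAND. It is our own simplification rather than the implementation of \cite{kM:grand} or \cite{abbas2020high}: error patterns are enumerated by increasing Hamming weight (within a weight class, in plain lexicographic rather than sweeping order), and the first pattern whose removal produces a code-book member is returned. The membership oracle \texttt{is\_codeword} can be any parity check, \eg the CRC divisibility test discussed in Section~\ref{sub:crc}.
\begin{verbatim}
import itertools
import numpy as np

def grand_hard_decode(y, is_codeword, max_weight):
    """Hard detection GRAND sketch.

    y           : received hard-decision bits (numpy array of 0/1)
    is_codeword : code-book membership oracle, e.g. a CRC or syndrome check
    max_weight  : abandonment threshold on the error-pattern Hamming weight
    Returns (decoded_word, queries) or (None, queries) on abandonment.
    """
    n = len(y)
    queries = 0
    for w in range(max_weight + 1):              # Hamming weight, low to high
        for flips in itertools.combinations(range(n), w):
            z = np.zeros(n, dtype=int)
            z[list(flips)] = 1                   # putative noise pattern
            candidate = (y + z) % 2              # remove the putative noise
            queries += 1
            if is_codeword(candidate):           # first hit is the decision
                return candidate, queries
    return None, queries                         # abandon: report decoding failure
\end{verbatim}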
It is established in \cite{kM:grand} that one need not query all possible noise patterns for the decoder to be capacity-achieving; instead, one can determine a threshold for the number of code-book queries, termed the abandonment threshold, at which a failure to decode can be reported without unduly impacting error correction performance. Standard decoders of linear block codes can decode up to $t=\left \lfloor (d-1)/2 \right \rfloor$ errors, where $d$ is the minimum distance of the code-book \cite{lin2004error}. Using this limit as the abandonment threshold, GRAND is guaranteed to achieve at least equivalent performance to existing code-book dedicated decoders. \subsection{ORBGRAND for soft detection} \label{sub:orb} Ordered Reliability Bits GRAND (ORBGRAND) is a soft detection approximate ML decoder that features a significant complexity advantage over the full ML Soft GRAND (SGRAND) decoder \cite{Solomon20}, and still provides better BLER performance than state-of-the-art soft decoders \cite{Duffy20} for short, high-rate codes. Soft detection versions of GRAND require reliability information for each received code-word bit. With binary phase-shift keying (BPSK) modulation and additive white Gaussian noise (AWGN) channels, the reliability is simply the absolute value of the received signal, from which the probability of a test pattern can be computed and used to set its testing order. Since an efficient implementation of this simple idea is not computationally straightforward, ORBGRAND develops an efficient method to generate an approximately optimal order that is universal. Assume the reliability order index of the $k$-th received bit is $r_k^n \in \{1,2,\ldots,n\}$. The vector $r^n$ records the reliability orders of all received bits in $Y^n$. A reliability-weighted Hamming weight, called the \textit{Logistic Weight} of $z^n$, is defined as \begin{equation} \label{eq:lw} w_L(z^n) = \sum_{k=1}^{n}r_k^nz_k^n \end{equation} This metric is used to determine the checking order of testing noise sequences $z^{n,i}$, $i=1,2,\ldots$, i.e. $w_L(z^{n,i}) \leq w_L(z^{n,j})$ for $i<j$. Patterns with identical Logistic Weights can be arbitrarily ordered. The performance of soft decoders is no longer restricted by the minimum distance of the code-book. In most cases ORBGRAND with a moderate complexity can provide better performance than state-of-the-art soft decoders. \section{Overview of Channel Codes} \label{sect:code} \subsection{CRC codes} \label{sub:crc} As a type of cyclic code \cite{prange1957cyclic}, a CRC operates in terms of polynomial computation over the Galois field of two elements, GF(2). The CRC generator polynomial can be expressed as $g(x)=\sum_{k=0}^{n-K}g_kx^k$, where $n-K$ is the number of CRC bits to be appended to a message sequence, expressed as the polynomial $m(x) = \sum_{i=0}^{K-1}m_ix^i$, with $n$ being the code-word length. The CRC code-word is constructed as \begin{equation} \label{eq:crc} c(x) = x^{n-K}m(x) + \text{remainder}\left ( \frac{x^{n-K}m(x)}{g(x)} \right ) \end{equation} With the remainder of the polynomial division appended to the shifted message polynomial, the resulting code-word polynomial $c(x)$ is divisible by $g(x)$. This property is used in parity checks to detect errors, and provides an efficient implementation for checking code-book membership in GRAND.
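As an illustration of Equation~(\ref{eq:crc}) and of the membership check it enables, the following minimal Python sketch (our own, bit-level illustration rather than the optimised shift-register implementations used in practice) performs systematic CRC encoding by polynomial long division over GF(2), and tests divisibility by $g(x)$. The generator polynomial is given as a full bit list, most significant coefficient first; note that published hexadecimal polynomial notations such as \cite{koopmanWeb} typically omit the lowest-order term.
\begin{verbatim}
def gf2_remainder(dividend, divisor):
    """Remainder of polynomial division over GF(2); bit lists, MSB first."""
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i]:
            for j, d in enumerate(divisor):
                rem[i + j] ^= d
    return rem[-(len(divisor) - 1):]              # the n-K remainder bits

def crc_encode(msg_bits, gen_poly):
    """Systematic CRC code-word: message followed by the CRC remainder bits."""
    shifted = list(msg_bits) + [0] * (len(gen_poly) - 1)   # x^{n-K} m(x)
    return list(msg_bits) + gf2_remainder(shifted, gen_poly)

def crc_check(word_bits, gen_poly):
    """Code-book membership used by GRAND: c(x) is a code-word iff g(x) divides it."""
    return not any(gf2_remainder(word_bits, gen_poly))

# Toy usage with g(x) = x^3 + x + 1, i.e. gen_poly = [1, 0, 1, 1]:
#   crc_check(crc_encode([1, 0, 1, 1, 0], [1, 0, 1, 1]), [1, 0, 1, 1])  ->  True
\end{verbatim}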
While Equation (\ref{eq:crc}) presents the systematic form of CRC codes, they can also be constructed, like other cyclic codes (such as BCH codes), as \begin{equation} \label{eq:crc2} c(x) = m(x)g(x) \end{equation} Equation (\ref{eq:crc2}) generates the same code-book as (\ref{eq:crc}), but with lower computational complexity. To maximize the number of detectable errors, the CRC generator polynomial is chosen to maximize the minimum Hamming distance of its code-book. The topic has been studied extensively and optimal generator polynomials have been published for a wide set of combinations of CRC bit numbers and message lengths \cite{koopman04,koopman06,koopmanWeb}. Unlike channel codes designed for error correction, the design of CRC codes need not consider decoder implementation, suggesting potentially better decoding performance than conventional Forward Error Correction (FEC) codes, provided that a decoder, such as GRAND, is available. \subsection{Other short code candidates} \label{sub:other} Also being cyclic codes, BCH codes can be constructed with Equation (\ref{eq:crc2}). In this sense, BCH codes can be viewed as a special case of CRC codes. For any positive integers $m\geq3$ and $t<2^{m-1}$, there exists a binary BCH code with code-word length $n=2^m-1$, number of parity bits $n-K \leq mt$ and minimum distance $d \geq 2t+1$. A list of available parameters $n$, $K$ and $t$ can be found in \cite{lin2004error}. The standard decoder for BCH codes is the Berlekamp-Massey (B-M) decoder \cite{journals/iandc/BoseR60a}, which is also used for the widely adopted Reed-Solomon (RS) codes, a special type of BCH code with operations extended to higher-order Galois fields. It should be noted that the original proposal of GRAND has no limitation on the Galois field order \cite{kM:grand}. Here we focus on binary codes, but the methodology can be easily extended to RS codes over higher-order Galois fields. RLCs are linear block codes that have long been used for theoretical investigation, but have heretofore not been considered practical in terms of decodability. With GRAND, RLCs are no more challenging to decode than any other linear code. Their generator matrices are obtained by appending a randomly generated parity component to the identity matrix, from which the parity check matrix can be computed accordingly. In addition to their flexibility, the ease of code-book construction grants RLCs security features through real-time code-book updates. CA-Polar codes are concatenated CRC and Polar codes. Although they approach the Shannon reliable rate limit \cite{Arikan09}, Polar codes do not give satisfactory decoding performance for practical code-word lengths, even with the successive cancellation (SC) algorithm. A CRC is introduced as an outer code to assist decoding, and the CA-SCL decoding algorithm was invented as the key to the success of the resulting CA-Polar codes \cite{KK-CA-Polar,TV-list}. In the decoding process, a list of candidate code-words is produced and CRC checks are performed to make the selection with confidence. There is, to our knowledge, no standard hard detection decoder for Polar or CA-Polar codes apart from GRAND decoders. For code-book membership checking in GRAND, parity check matrices can be easily derived from generator matrices, which, for linear block codes, can be obtained with a simple method that feeds each row of the $K$-by-$K$ identity matrix to the encoder and collects the output vectors.
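The procedure just described can be sketched in a few lines of Python (our own illustration, assuming GF(2) arithmetic and a \emph{systematic} encoder that appends parity bits to the message, as the CRC encoder sketched above does; for a non-systematic encoder such as a plain Polar encoder, the generator matrix would first be brought to systematic form by row reduction over GF(2)).
\begin{verbatim}
import numpy as np

def generator_from_encoder(encode, K):
    """Build G by feeding each row of the K-by-K identity matrix to the encoder."""
    return np.array([encode(row) for row in np.eye(K, dtype=int)]) % 2

def parity_check_from_systematic(G):
    """For systematic G = [I | P], a valid parity-check matrix is H = [P^T | I]."""
    K, n = G.shape
    P = G[:, K:]
    return np.hstack([P.T, np.eye(n - K, dtype=int)])

def is_codeword(word, H):
    """Membership test used by GRAND: the syndrome H w^T must be zero."""
    return not np.any(H.dot(word) % 2)
\end{verbatim}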
\section{Simulated Performance Evaluation} \label{sect:sim_comp} For our simulations, we map the energy per transmitted information bit to the stationary bit flip probability of the channel via the usual AWGN BPSK formula: $ p= \text{Q}\left(\sqrt{2RE_b/N_0}\right), $ where $R$ is the code rate. Table \ref{tb:bch_crc} lists the code-book settings considered for CRC evaluation, along with published preferred generator polynomials. Note that the zero-order term 1 is omitted in the least significant bit \cite{koopmanWeb}. \begin{table}[!ht] \centering \begin{tabular}{||c c | c | c c||} \hline N & K & BCH $t$ & CRC poly. & $d$ \\ [0.5ex] \hline 127 & 120 & 1 & 0x65 & 3 \\ 127 & 113 & 2 & 0x212d & 5 \\ 127 & 106 & 3 & 0x12faa5 & 7 \\ 128 & 99 & N/A & 0x13a46755 & 8 \\ \hline 63 & 57 & 1 & 0x33 & 3 \\ 63 & 51 & 2 & 0xbae & 5 \\ 63 & 45 & 3 & 0x25f6a & 7 \\ 64 & 51 & N/A & 0x12e6 & 4 \\ \hline \end{tabular} \caption{List of code-book settings, along with the correctable error number $t$ for BCH codes and the minimum distance $d$ for CRC codes.} \label{tb:bch_crc} \end{table} \subsection{CRC Codes vs BCH Codes} \label{sub:crc_bch} For the code-book settings in Table \ref{tb:bch_crc} that are available for BCH codes, we measure the performance of the following code/decoder combinations: \begin{itemize} \item BCH codes with Berlekamp-Massey (B-M) decoders \item BCH codes with GRAND-SOS decoder and $AB>t$ \item CRC codes with GRAND-SOS decoder and $AB>t$ \item CRC codes with GRAND-SOS decoder and $AB>4$ \end{itemize} $AB>t$ indicates abandonment when the number of errors in a testing pattern is larger than the code-book decoding capability, $t$, and $AB>4$ is a loose condition for all available BCH codes listed in Table \ref{tb:bch_crc}, with which we show the influence of the minimum distance of a code-book on its decoding performance. Simulation results for a code-word length of $127$ are presented in Fig. \ref{fig:bch_crc_127}. \begin{figure}[htbp] \centerline{\includegraphics[width=0.44\textwidth]{images/bch_crc_127}} \caption{Performance evaluation of code/decoder combinations with code-word length of $127$ for BCH and CRC codes.} \label{fig:bch_crc_127} \end{figure} For each code rate, all four code/decoder combinations have performance curves that are close to each other, leading to the following conclusions. For BCH codes, the GRAND-SOS algorithm has the same decoding capability as the standard Berlekamp-Massey decoder. With identical code-book settings, CRC codes have performance matching that of BCH codes. Loosening the abandonment condition grants little improvement to the decoding performance of GRAND, so the use of abandonment to curtail complexity remains attractive. These observations are further confirmed with simulations for a code-word length of $63$, as shown in Fig. \ref{fig:bch_crc_63}. \begin{figure}[htbp] \centerline{\includegraphics[width=0.44\textwidth]{images/bch_crc_63}} \caption{Performance evaluation of code/decoder combinations with code-word length of $63$ for BCH and CRC codes.} \label{fig:bch_crc_63} \end{figure} Combined evaluation of the curves in Fig. \ref{fig:bch_crc_127} and Fig. \ref{fig:bch_crc_63} indicates that the performance of short codes is sensitive to code-word lengths and rates; therefore, the flexibility of the CRC renders it attractive vis-\`a-vis BCH codes in low-latency scenarios. \subsection{CRC Codes vs RLCs} \label{sub:crc_rlc} To explore fully the decoding capability of RLC codes, we update the generator matrix for every code-word transmission to achieve random selection of code-words.
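A minimal sketch of this re-randomisation (our own illustration, not the simulation code used for the reported results) is given below: a fresh systematic RLC generator matrix is drawn for every transmission, and the corresponding parity-check matrix is used for code-book membership queries in GRAND. A helper mapping $E_b/N_0$ to the BSC bit flip probability $p=\text{Q}(\sqrt{2RE_b/N_0})$ used throughout this section is also included (here via SciPy's normal survival function).
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def random_systematic_rlc(n, K, rng):
    """Fresh RLC: G = [I | P] with P uniformly random, re-drawn per code-word."""
    P = rng.integers(0, 2, size=(K, n - K))
    G = np.hstack([np.eye(K, dtype=int), P])
    H = np.hstack([P.T, np.eye(n - K, dtype=int)])
    return G, H

def bsc_flip_probability(ebn0_db, rate):
    """p = Q(sqrt(2 R Eb/N0)) for BPSK over AWGN followed by hard detection."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return norm.sf(np.sqrt(2.0 * rate * ebn0))    # Q(x) = 1 - Phi(x)

# Usage sketch: rng = np.random.default_rng(); G, H = random_systematic_rlc(128, 99, rng)
\end{verbatim}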
For a fair comparison, we apply the same abandonment threshold for CRC and RLC codes. In Fig. \ref{fig:rlc_crc}, the RLC(128, 99) curve overlaps the CRC(128, 99) curve. As the code rate increases, however, the RLC performance degrades in comparison to the CRC code, owing to the shrinking code-word space, which makes it more probable for an RLC to pick code-words that are close to each other. The generator polynomial of a CRC, however, is the product of a selection procedure \cite{koopman04,koopman06} and thus results in a code-book with a stable minimum distance. This suggests that to extract near-CRC code performance from very high rate RLCs, extra structure might be needed. \begin{figure}[htbp] \centerline{\includegraphics[width=0.44\textwidth]{images/rlc_crc}} \caption{Hard detection performance of RLC and CRC codes with selected code-word lengths and rates.} \label{fig:rlc_crc} \end{figure} \subsection{Comparison with Polar and CA-Polar Codes} \label{sub:crc_polar} In 5G mobile systems, an 11-bit CRC (CRC11) is specified for uplink CA-Polar codes, and a 24-bit CRC (CRC24) for downlink. We evaluate the performance of Polar codes with and without aid from CRC11 and CRC24, and compare them to the CRC codes in Table \ref{tb:bch_crc}, for a code-book with $(n,K)$ set to (128, 99). Hard detection ML performance when decoded with GRAND-SOS is presented in Fig. \ref{fig:polar_crc}. A generous abandonment threshold $AB>5$ is chosen to avoid any influence of computational complexity on code performance. Let us first assess code performance under a hard detection scenario. The Polar(128, 99) code without the CRC shows significantly worse performance than the three other codes. This is reasonable considering that Polar codes are closely related to Reed-Muller codes \cite{journals/tc/Muller54}, which are known to be inferior to cyclic codes (such as BCH). The introduction of CRC codes, either CRC11 or CRC24, sufficiently breaks the undesirable structure in the Polar code, bringing its performance close to that achieved by CRC29. In this sense, the ``aid'' from the CRC is so effective that it may be preferable just to use the CRC as an error-correcting code, considering that significant encoding complexity can be saved thanks to the simple implementation of CRC encoders. \begin{figure}[htbp] \centerline{\includegraphics[width=0.44\textwidth]{images/polar_crc}} \caption{Hard detection performance of Polar(128,99), CA-Polar(128,99) with CRC11, CA-Polar(128,99) with CRC24 and CRC(128,99) (i.e. CRC29). All decoded with GRAND-SOS and $AB>5$.} \label{fig:polar_crc} \end{figure} Having considered performance under hard detection, let us consider soft detection performance, which is of particular importance given that the prevailing decoders for Polar codes are soft decoders, especially when a non-trivial decoding gain is expected \cite{lin2004error}. The extra gain comes from correcting errors beyond the hard detection decoding capability $t$. Therefore we use the number of code-book checks as the abandonment condition in this case, with $AB>5 \times 10^6$ being a generous threshold that ensures optimal decoding performance. A gain of about 1.5 dB from soft decoding at BLER$=10^{-3}$ can be observed by comparing Fig. \ref{fig:polar_crc} and Fig. \ref{fig:polar_soft}. Of more interest is that, under the same ORBGRAND decoding power, the four candidate codes follow the same performance ranking as in their hard detection behavior, i.e. the CRC assists the Polar code to approach the best performance, which is achieved by the CRC code itself.
\begin{figure}[htbp] \centerline{\includegraphics[width=0.44\textwidth]{images/polar_soft}} \caption{Soft detection performance of code-book setting of (128, 99) for Polar, CRC11 aided Polar, CRC24 aided Polar, and CRC29 codes; SC for Polar, CA-SCL with list size of 16 for CA-Polar and ORBGRAND with $AB>5 \times 10^6$ for all codes; SC and CA-SCL are performed with \cite{Cassagne2019a}.} \label{fig:polar_soft} \end{figure} Evaluating the performance of the SC and CA-SCL soft decoding algorithms in Fig. \ref{fig:polar_soft}, a surprising phenomenon can be observed. While SC decoding of the Polar code and CA-SCL of the CRC11 aided Polar code provide performance comparable to ORBGRAND, the performance of CA-SCL with the CRC24 aided Polar code dramatically degrades to a level even worse than the hard detection performance of the same code in Fig. \ref{fig:polar_crc}. This reveals a previously unnoticed issue with the CA-SCL algorithm. As a list decoding algorithm, CA-SCL produces a set of candidate code-words in its decoding procedure and uses CRC checks to make the candidate selection. More CRC bits provide better error detection capability and consequently higher confidence in the candidate checks. However, the redundancy introduced to the code-book by the CRC bits is ignored for error correction, leaving only the remaining Polar bits to provide the redundancy required for error correcting list decoding. For a given code-book parameter set $(n,K)$, if the number of CRC bits is excessive relative to the limited number of redundant Polar bits, then the decoder may experience a limited list candidate space and, consequently, performance loss, in spite of the enhanced selection confidence from the extra CRC bits. As the key contributor to the error correcting quality of CA-Polar codes, the CA-SCL algorithm has been applied mostly to long codes, for which this shortcoming, which to our knowledge has gone unnoticed so far, is negligible; for the short codes considered here, however, it is far from negligible. \begin{figure}[htbp] \centerline{\includegraphics[width=0.44\textwidth]{images/polar_soft_64}} \caption{Soft detection performance of code-book setting of (64, 51) for Polar, CRC11 aided Polar, and CRC13 codes; SC for Polar, CA-SCL with list size of 16 for CA-Polar and ORBGRAND with $AB>5 \times 10^6$ for all codes; SC and CA-SCL are performed with \cite{Cassagne2019a}.} \label{fig:polar_soft_64} \end{figure} In Fig. \ref{fig:polar_soft}, the CA-Polar(128, 99) code with CRC11 does not experience this performance loss, owing to its 18 redundant Polar bits, which are still sufficient to provide enough list candidate space. Unfortunately, this success is not guaranteed with shorter code-word lengths or with higher coding rates. Fig. \ref{fig:polar_soft_64} displays the same shortcoming of the CA-SCL algorithm with CRC11, but for CA-Polar(64,51), while ORBGRAND remains robustly successful. The superior performance of CRC codes when compared to Polar codes, CRC aided or not, their ultra-low encoding complexity, and the robust decoding performance of ORBGRAND as compared to the CA-SCL algorithm all suggest that a CRC code with ORBGRAND as its decoder provides a better short code solution than Polar or CA-Polar codes. \section{Summary} \label{sect:conclusion} Despite being the most widely used error detection solution in storage and communication, CRC codes have not been considered seriously as error correcting codes owing to the absence of an appropriate decoder.
Instead, their main contribution has been in assisting state-of-the-art list decoders, with CA-Polar codes as the most successful recent example. With the invention of GRAND, the possibility of using CRCs as error correction codes can be realized. Here we use hard and soft detection variants, GRAND-SOS and ORBGRAND, to evaluate the performance of CRC codes in comparison with other candidate short, high-rate codes, including BCH codes, RLCs and CA-Polar codes. As short codes, CRCs not only provide at least equivalent decoding performance to existing solutions, but are superior in various aspects. Compared to BCH codes, CRC codes have no restriction on code lengths and rates. For very high rate codes, RLCs are outperformed by CRC codes. Polar codes are surpassed by CRC codes in both hard and soft detection BLER performance. Aided by a CRC, a CA-Polar code can provide matching performance, but the CRC alone still wins with its ultra-low encoding complexity. In addition, the CA-SCL decoder, as the key to the practical adoption of CA-Polar codes, suffers significant performance loss when CRC bits dominate the code's redundancy. The reason is that CA-SCL uses CRC checks for candidate selection in list decoding, but does not avail of the CRC bits for error correction. ORBGRAND, on the other hand, is robust in all scenarios. Therefore, a CRC with GRAND as its decoder is a good solution for low-latency applications requiring short codes. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:Introduction} Recently, deep convolutional neural networks (CNNs) \cite{AlexNet,VGGNet, ResNet} have made remarkable progress in image classification. Then, \cite{RCNN, VisualizingNet} found that networks trained for image classification can be transferred well to initialize object detection training. As time went on, numerous downstream vision tasks adopted this `pre-training and fine-tuning' paradigm. However, for downstream tasks, pre-training is time consuming, because it is widely believed that pre-training needs massive datasets like ImageNet \cite{ImageNet} to obtain good or 'universal' visual representations. Few works explored training detectors from scratch until He \etal \cite{RethinkImageNet} showed that, on the COCO \cite{COCO} dataset, it is possible to train a detector of comparable performance from scratch without ImageNet pre-training, and also revealed that ImageNet pre-training speeds up convergence but cannot improve final performance for the detection task. However, training from scratch still needs a relatively long training schedule, \eg it takes a 6x (72 epochs) schedule to fully catch up with ImageNet pre-training. \begin{figure} \centering \includegraphics[width=0.48\linewidth]{figs/process.png} \includegraphics[width=0.48\linewidth]{figs/efficiency.png} \begin{picture}(0,0) \put(-340,-8){(a) Fine-tuning performance} \put(-152,-8){(b) Overall time-AP comparison} \end{picture} \caption{Training time and performance comparison on the COCO dataset. '1x', '2x', '6x' schedules are 12, 24 and 73 epochs respectively. 'P1x' means pre-training with a 1x schedule, \ie 12 epochs. All models are trained on an 8*2080Ti machine.} \label{process_comp} \end{figure} In this paper, we explore how to set up a fast scratch training pipeline for object detection. To tackle this task, one important issue is the \tb{batch size setting} during training. For accuracy considerations, existing detection training strategies always re-scale images to a larger size, \eg (1333, 800), before feeding the network. Under this resolution, detectors are always trained with a small batch size (typically 2) due to the limited memory, which severely degrades network performance. The previous solution \cite{RethinkImageNet} is to use group normalization (GN) \cite{GN} or synchronized batch normalization (SyncBN) \cite{MegDet,PANet} to replace BN \cite{BN}. On the other hand, GN increases model complexity and memory consumption, while SyncBN increases training time due to cross-GPU communication. So, can we just use BN with a large batch size for object detection? Returning to the ImageNet pre-training pipeline, ImageNet pre-trained models are usually fed with images resized to (224, 224), and this pre-training indeed speeds up convergence for training detectors. More importantly, with this low resolution setting, the typical ImageNet training batch size is much larger than that of the detection task. We therefore follow the ImageNet pre-training setting, using low resolution images to pre-train the detector, and then fine-tune with the larger resizing strategy. Reviewing the existing ImageNet pre-training and scratch training pipelines, Montage pre-training \cite{Montage} already demonstrated that following 'pre-training and fine-tuning' on the target detection dataset is a feasible way to accelerate the pre-training process. However, it only utilizes classification labels while leaving bounding box and mask labels unused. For efficiency considerations, fully utilizing these 'strong' labels may be better for pre-training.
On the other hand, their method is quite complicated: \eg it needs to crop instances and assemble multiple cropped instances into one image, and it also needs a specifically designed pre-training head (ERF Head) to train the classification model on the synthesized dataset. In this paper, the direct pre-training pipeline does not introduce any new specific module. Through the detailed experiments and analysis in this paper, we find that this idea has substantial advantages over existing methods. Direct pre-training is a coarse-to-fine approach: it first trains the model on small-size images resized from the target detection dataset, purely to obtain a better fine-tuning initialization. Experiments show that it is effective and efficient. Specifically, direct pre-training has the following advantages compared with existing strategies: \tb{1.} Compared to the purely scratch solution, it is much simpler, without introducing any new network modules (\eg GN or SyncBN), yet with higher accuracy and faster training and inference; see Fig \ref{process_comp} for details. \tb{2.} Compared to the ImageNet/Montage pre-training solutions, it can totally abandon the heavy pre-training process on ImageNet, which largely accelerates scratch detection training. This is illustrated in Fig \ref{instruction}. \tb{3.} It is also very memory efficient; \eg in this paper, we can apply direct pre-training on a server with 8 2080Ti GPUs (11G memory per GPU), while previous works always need large-memory GPUs \cite{MegDet} or TPUs \cite{DropBlock}. We also provide a detailed empirical study of the performance influence of batch size and resolution settings. \section{Related Works} \subsection{ImageNet Pre-training} The ImageNet-1k \cite{ImageNet} challenge is a large scale visual recognition task. It was convolutional neural networks (CNNs) that first brought huge performance gains to computer vision, starting with AlexNet \cite{AlexNet} trained on ImageNet. Subsequently developed networks \cite{NetInNet, VGGNet, ResNet, ResNeXt, DenseNet} continuously achieved state-of-the-art performance with stronger feature representations. Besides, ImageNet pre-training was developed to utilize networks trained on ImageNet as 'backbone initialization' for downstream target vision tasks, \eg, detection \cite{RCNN}, segmentation \cite{FCN}, and pose-estimation \cite{Hourglass}; these tasks indeed benefit a lot from pre-training. Recently, a batch of transformer based backbones \cite{ViT,DeiT,Swin} has shown huge potential to replace CNNs, and these models are also pre-trained on ImageNet-1k or the even larger ImageNet-22k dataset. \subsection{Object Detection} R-CNN \cite{RCNN} first introduced CNNs to the detection task. Then Fast R-CNN \cite{FastRCNN} used CNN-extracted features as input to the localization and classification sub-tasks. Faster R-CNN \cite{FasterRCNN} proposed the Region Proposal Network (RPN) to select potential RoIs for further optimization. There is also another category of detectors, called one-stage detectors, such as YOLO \cite{YOLO} and SSD \cite{SSD}, which mainly focus on real-time detection. Since Mask R-CNN \cite{MaskRCNN} simply extended \cite{FasterRCNN} with a mask head for instance segmentation, object detection and instance segmentation have developed ever closer together. Subsequently, several methods \cite{PANet, HTC, MSRCNN} were developed following the pipeline of Mask R-CNN, with refinements of structural details to improve various aspects such as multi-scale capability, alleviation of mismatch, and the balance of data.
\subsection{Training from Scratch} Fine-tuning from ImageNet classification was already applied to R-CNN \cite{RCNN} and OverFeat \cite{OverFeat} very early on. Following these results, most modern object detectors and many other computer vision algorithms employ this paradigm. DSOD \cite{DSOD} first reported a specially designed one-stage object detector trained from scratch. \cite{RethinkImageNet} reported that training from scratch with a longer training schedule can be competitive with the ImageNet pre-trained counterparts. \cite{PreTrainAnalysis} analyzed the difference between pre-training on an object detection dataset and ImageNet pre-training. Montage pre-training \cite{Montage} is the first to synthesize images from the target detection dataset for an efficient pre-training paradigm designed for object detection. \cite{RethinkPreAndSelf} take a further step, pointing out that more detection data diminishes the value of ImageNet pre-training. \section{Methodology} Our aim is to set up a fast training pipeline for object detection. To this end, we follow the ImageNet pre-training paradigm and separate the training process into two phases: a pre-training phase and a fine-tuning phase. Differently, for pre-training we directly utilize the target detection dataset instead of a classification dataset, \eg ImageNet-1k. See Fig \ref{instruction} for a comparison of the different pre-training paradigms. We first formulate the problem in \S \ref{formulation}, and discuss design details in \S \ref{resize}, \S \ref{parameter} and \S \ref{norm}. For more details of the experimental settings, please read \S \ref{experiments}. \subsection{Problem Formulation}\label{formulation} Direct pre-training can be viewed as a trade-off between batch size and input image resolution during pre-training. Specifically, we want to systematically investigate the influence on performance when different input image sizes and batch size settings are applied. Besides, we need to take memory consumption into account in practice. Following \cite{EfficientNet}, a network can be defined as a function $Y = F(X)$, where $F$ is the network operation, $Y$ is the output tensor and $X$ is the input tensor, usually with tensor shape $\left\langle N, H, W, C\right\rangle$ during training, where $N$ is the batch size, $H$ and $W$ are the image spatial dimensions and $C$ is the channel dimension. For a typical RGB image, $C=3$ is a constant value. We set a fixed number of steps $K$ for pre-training; in this paper, we follow previous work and regard a '1x schedule' as 12 epochs (88k steps). We can formulate this problem as: \begin{equation}\label{problem} \begin{array}{ll} \max_{r, b} & \operatorname{Accuracy}(\mathcal{F}(r, b)) \\ \text{ s.t. } & Y = \mathcal{F}(r, b) = \mathcal{F}(X_{\left\langle b \cdot N, r \cdot H, r \cdot W, C\right\rangle}) \\ & \operatorname{Memory}(\mathcal{F}(r, b)) \leq \text{target\_memory} \end{array} \end{equation} where $r$, $b$ are coefficients for scaling the input tensor during pre-training. It is obvious that both a larger batch size and a larger resolution will increase the memory consumption and training time. Let $\hat{N} = b \cdot N, \hat{H} = r \cdot H, \hat{W} = r \cdot W $. In practice, to fully utilize memory, we can manually set different $\hat{N}, \hat{H}, \hat{W}$ to make the total memory consumption close to the maximum GPU memory. \subsection{Resizing Strategy}\label{resize} Direct pre-training is so simple that it only changes a small part of the data pre-processing during pre-training; during fine-tuning, everything is kept the same as in typical data pre-processing.
Specifically, it only changes the 'resize' step of data pre-processing, in which images are typically resized to (1333, 800). We consider two existing resizing strategies: \tb{1.} ImageNet style: resize the input image to a fixed size, \eg (256, 256), then randomly crop it to (224, 224). DropBlock \cite{DropBlock} also employed this strategy. \tb{2.} Stitcher \cite{Stitcher} style: first apply the typical (1333, 800) resizing; assuming the image is resized to (H, W), then reduce the image size by a divide factor $n$ to $(H/n, W/n)$. Both strategies reduce the input image size, which leaves more memory for a larger batch size. ImageNet-style resizing, thanks to the random crop, brings a performance gain similar to typical multi-scale training, which Stitcher style cannot provide. We also experimentally compared the two resizing strategies: with $batchsize=8$, Mask R-CNN with R50-FPN and all other factors kept the same, ImageNet style obtains 40.0 mAP \vs 39.4 mAP for Stitcher. Another disadvantage of Stitcher style is that, due to random mini-batch composition, memory consumption may be unstable during training, \eg when a large image is batched with a small one. For ImageNet style, all images are finally cropped to a fixed size, \eg (224, 224), so memory consumption is relatively stable, which helps to fully utilize GPU memory; Stitcher style may sometimes cause memory overflow during training. Based on these considerations, we adopt ImageNet style as our resizing strategy. More precisely, the direct pre-training resizing strategy is as follows: select a base image shape (L, L), randomly select a factor $\alpha \in (0.8, 1.2)$, resize the image to $(\alpha \cdot L, \alpha \cdot L)$, and finally randomly crop or pad it to the fixed image shape (L, L); a short illustrative sketch of this procedure is given below. \subsection{Hyper-parameter Setting}\label{parameter} It is not easy to solve the optimization problem in equation \ref{problem} precisely. In practice, we can use grid search to roughly find a better hyper-parameter setting. We therefore run a batch of grid-search experiments to observe the influence of batch size and input image resolution as an indirect solution. We set the input image to a square shape, \eg (224, 224); in the experiments we set the image side length ($\hat{H}$ or $\hat{W}$) to [224, 320, 448, 640] and the batch size ($\hat{N}$) to [2, 4, 8, 16] respectively; see the results in Fig \ref{batch_reso}. We can observe that \tb{either increasing the resolution or the batch size improves the final performance, and the gain saturates as these factors gradually increase}. From another perspective, when these factors increase, memory consumption and training time also increase. Based on these observations and table \ref{batch}, we set the direct pre-training baseline to $batchsize=8, resolution=448$, in other words $\hat{N}=8, \hat{H}=448, \hat{W}=448$, in this paper. \begin{figure} \centering \includegraphics[width=\linewidth]{figs/br.png} \caption{Batch and resolution setting performance comparison; all experiments are sufficiently pre-trained with a 3x schedule (36 epochs, 261k iterations) and fine-tuned with a 1x schedule.} \label{batch_reso} \end{figure} From table \ref{batch}, it is worth noticing that even in the extreme situation of $batchsize=2, resolution=448$, direct pre-training still reaches a final performance of 39.3 mAP, which already outperforms the ImageNet fine-tuning 2x counterpart with 39.2 mAP. For comparison, scratch training with BN in a 4x schedule only reaches 36.1 mAP. 
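For concreteness, the following is a minimal Python sketch of the ImageNet-style resizing used for direct pre-training. The function name and the use of OpenCV/NumPy are our own illustration and are not taken from the MMDetection implementation; they merely mirror the resize--random-crop/pad procedure described in \S \ref{resize}.
\begin{verbatim}
import numpy as np
import cv2

def direct_pretrain_resize(img, L=448, scale_range=(0.8, 1.2)):
    """Sketch of ImageNet-style resizing for direct pre-training:
    1. sample alpha in (0.8, 1.2);
    2. resize the image to (alpha*L, alpha*L);
    3. randomly crop or zero-pad back to the fixed shape (L, L)."""
    alpha = np.random.uniform(*scale_range)
    side = int(round(alpha * L))
    img = cv2.resize(img, (side, side))                     # square resize
    if side >= L:                                           # random crop
        y = np.random.randint(0, side - L + 1)
        x = np.random.randint(0, side - L + 1)
        return img[y:y + L, x:x + L]
    out = np.zeros((L, L, img.shape[2]), dtype=img.dtype)   # random pad
    y = np.random.randint(0, L - side + 1)
    x = np.random.randint(0, L - side + 1)
    out[y:y + side, x:x + side] = img
    return out
\end{verbatim}
Because every pre-training sample ends up with the same (L, L) shape, the per-iteration memory footprint is constant, which is exactly the property that makes this style preferable to Stitcher-style resizing in our setting.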
The analysis in \S \ref{resize} above indicates that \tb{the random crop operation brings a considerable gain to the network}. This technique is widely used in ImageNet pre-training, which may provide another perspective on why ImageNet pre-training with very low-resolution images (224, 224) can still help tasks that use much larger resized inputs, such as object detection at (1333, 800). \begin{table} \centering \setlength{\tabcolsep}{3mm}{ \begin{tabular}{c|c|c|c|c|cc} \toprule Method & Batch & Input & Schedule & Cost (days) & $AP^{bbox}$ & $AP^{mask}$ \\ \hline \multirow{3}*{Direct} & 2 & \multirow{3}*{(448, 448)} & \multirow{3}*{P3x+1x} & \tb{0.84} & 39.3 & 35.1 \\ ~ & 4 & ~ & ~ & 1.37 & 40.6 & 36.3 \\ ~ & 8 & ~ & ~ & 1.76 & \tb{41.5} & \tb{37.0} \\ \hline \multirow{3}*{Direct} & \multirow{3}*{4} & (320, 320) & \multirow{3}*{P3x+1x} & \tb{1.11} & 39.9 & 35.7 \\ ~ & ~ & (448, 448) & ~ & 1.37 & 40.6 & 36.3 \\ ~ & ~ & (640, 640) & ~ & 1.76 & \tb{41.1} & \tb{36.8} \\ \hline ImageNet & \multirow{3}*{2} & (1333, 800) & 1x & - & 38.2 & 34.7 \\ Scratch(BN) & ~ & (1333, 800) & 4x & 2.16 & 36.1 & 32.4 \\ Direct & ~ & (448, 448) & P3x+1x & \tb{0.84} & \tb{39.3} & \tb{35.1} \\ \bottomrule \end{tabular}} \caption{Batch size and resolution setting performance comparison. 'P3x+1x' means pre-training with a 3x schedule and fine-tuning with a 1x schedule.} \label{batch} \end{table} \subsection{Normalization}\label{norm} In this section, we discuss normalization during pre-training and fine-tuning separately. For pre-training, we use standard BN \cite{BN}, and due to the different batch size settings, we follow the \tb{linear scaling rule} \cite{LargeSGD} to set the learning rate: we use the typical ImageNet fine-tuning learning rate 0.02 as the base learning rate for $batchsize=2$, and when the batch size is multiplied by $k$, we multiply the learning rate by $k$. Following the batch normalization definition \cite{BN}, BN can be formulated as: \begin{equation} y=\frac{x-E[x]}{\sqrt{\operatorname{Var}[x]+\epsilon}} * \gamma+\beta \label{BN} \end{equation} Here $x$ denotes the input features and $y$ the output features, and $\epsilon$ is a small value that avoids division by zero. The BN operation computes the batch statistics, the mean $E[x]$ and the variance $Var[x]$, while $\gamma$ and $\beta$ are learnable parameters. During fine-tuning, we observed that the normalization setting can significantly influence the final performance. The existing solution is to use the pre-trained batch statistics as fixed parameters \cite{ResNet}, while still leaving the learnable affine parameters $\gamma$, $\beta$ active for updating during fine-tuning; we call this the 'affine' style of BN. Completely fixing all BN parameters during fine-tuning is called the 'Fixed' style, and synchronized batch normalization is the 'SyncBN' style. We systematically investigate the performance influence of these BN strategies; see table \ref{affine} for details. We find that completely fixed BN is the best, even better than the SyncBN style. Besides, it also accelerates the training process by roughly 0.06\,s/iteration. The typically used 'affine' style BN significantly degrades performance (from 40.4 mAP to 37.8 mAP). On the contrary, if we use fixed BN parameters for ImageNet fine-tuning, performance degrades only slightly compared to the affine solution. 
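To make the three BN styles concrete, the following PyTorch-style sketch shows how the 'Fixed' and 'affine' variants and the linear scaling rule could be realized. The helper names are our own illustration (in practice this behaviour is configured through the training framework, \eg MMDetection's config system), so this is a sketch rather than the exact implementation.
\begin{verbatim}
import torch.nn as nn

def set_bn_fixed(model):
    """'Fixed' style: freeze the running statistics AND the affine
    parameters gamma/beta of every BatchNorm layer.  Call this after
    model.train() so the eval flag on BN layers is not reset."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.eval()                              # use pre-trained mean/var
            if m.weight is not None:
                m.weight.requires_grad_(False)    # gamma
            if m.bias is not None:
                m.bias.requires_grad_(False)      # beta

def set_bn_affine(model):
    """'Affine' style: statistics are frozen, but gamma/beta remain
    trainable (the usual ImageNet fine-tuning convention)."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.eval()

# Linear scaling rule for pre-training: base LR 0.02 at batch size 2,
# scaled proportionally with the batch size (e.g. 0.08 at batch size 8).
def scaled_lr(batch_size, base_lr=0.02, base_batch=2):
    return base_lr * batch_size / base_batch
\end{verbatim}
The 'Fixed' behaviour corresponds to the best-performing rows of table \ref{affine}.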
\begin{table} \centering \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{c|c|ccc|c|cc} \toprule Pre-training Model & Schedule & Affine & SyncBN & Fixed & Time(s/iter) & $AP^{bbox}$ & $AP^{mask}$ \\ \hline \multirow{2}*{ImageNet} & \multirow{2}*{2x} & $\surd$ & & & 0.479 & \tb{39.2} & \tb{35.4} \\ ~ & ~ & & & $\surd$ & \tb{0.424} & 38.7 & 35.1 \\ \hline \multirow{3}*{Direct(P1x)} & \multirow{3}*{2x} & $\surd$ & & & 0.479 & 37.8 & 33.9 \\ ~ & ~ & & $\surd$ & & 0.483 & 37.9 & 34.0 \\ ~ & ~ & & & $\surd$ & \tb{0.424} & \tb{40.4} & \tb{36.1} \\ \bottomrule \end{tabular}} \caption{Influence of fine-tuning BN strategies. 'Time' is the training time cost per iteration during fine-tuning.} \label{affine} \end{table} \begin{figure*} \centering \includegraphics[width=\linewidth]{figs/direct_pretrain.png} \begin{picture}(0,0) \put(-185,0){a) ImageNet Pre-training} \put(-65,0){b) Montage Pre-training \cite{Montage}} \put(60,0){c) Direct Detection Pre-training} \end{picture} \caption{Illustration of different pre-training paradigms. '2fc' means two fully connected layers. 'Backbone' represents the feature-extraction network, 'Neck' means FPN \etc, and 'Heads' stands for the different RoI heads or dense heads.} \label{instruction} \end{figure*} \section{Experiments} \subsection{Experimental Settings}\label{experiments} \tb{Dataset.} In this paper, unless otherwise specified, we evaluate our approach on the MS-COCO \cite{COCO} dataset. We train the models on the COCO \textsl{train2017} split and evaluate on the COCO \textsl{val2017} split. We report bbox Average Precision (AP) for object detection and mask AP for instance segmentation. \tb{Implementation.} We implement our algorithm based on MMDetection \cite{MMDetection}. In the experiments, our models are trained on a server with 8 RTX 2080Ti GPUs. Our baseline is the widely adopted Mask R-CNN \cite{MaskRCNN} with FPN \cite{FPN} and a ResNet-50 backbone \cite{ResNet}. Following the scratch-training setup of \cite{RethinkImageNet}, there is one small difference from the standard Mask R-CNN: the bbox head is a \textsl{4conv1fc} head instead of a \textsl{2fc} head. Our pre-trained ImageNet ResNet-50 backbone originally comes from torchvision; these models are provided in MMDetection's MODELZOO, and we use the 'pytorch' style in the backbone setting. During fine-tuning, unless otherwise specified, we directly load the whole weights of the pre-trained Mask R-CNN for initialization, \ie we load the backbone, neck (FPN), RPN and heads from the pre-trained model; a short sketch of this whole-model initialization is given below. \tb{Scheduling.} Following the MMDetection setting, scheduling is based on epochs, with 1x corresponding to 12 epochs. The pre-training schedule uses a fixed learning rate that is not reduced, while fine-tuning follows the typical setting and reduces the learning rate. \tb{Hyper-parameter.} All other hyper-parameters follow those in MMDetection \cite{MMDetection}. Specifically, during pre-training we set the learning rate to 0.08 for the baseline with a batch size of 8. With regard to fine-tuning, all settings are the same as for the MMDetection R50-FPN baseline. 
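The following is a minimal PyTorch-style sketch of this whole-model initialization step. The helper name and the checkpoint key handling are our own illustration (MMDetection exposes the same behaviour through its own checkpoint-loading utilities); the point is simply that, unlike ImageNet initialization, the direct pre-training checkpoint also provides weights for the neck, RPN and heads.
\begin{verbatim}
import torch

def load_direct_pretrain(model, ckpt_path):
    """Initialise the fine-tuning detector from a direct pre-training
    checkpoint: backbone, FPN, RPN and all heads are loaded, not just
    the backbone as with ImageNet initialisation."""
    ckpt = torch.load(ckpt_path, map_location='cpu')
    state = ckpt.get('state_dict', ckpt)        # unwrap if wrapped
    missing, unexpected = model.load_state_dict(state, strict=False)
    # With identical pre-training/fine-tuning architectures both lists
    # should be (nearly) empty.
    print('missing keys:', missing)
    print('unexpected keys:', unexpected)
    return model
\end{verbatim}
Table \ref{gain} later quantifies how much each of these loaded parts contributes to the final performance.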
\subsection{Main Results} \begin{table*} \centering \setlength{\tabcolsep}{2.5mm}{ \begin{tabular}{c|c|cc|cc} \toprule Method & Schedule & Pre-cost(days) & Total Cost(days) & $AP^{bbox}$ & $AP^{mask}$ \\ \hline ImageNet\cite{Montage} & 1x & 6.80 & - & 37.3 & - \\ Montage\cite{Montage} & 1x & 1.73{\fo(-74.5\%)} & - & 37.5{\fo(+0.2)} & - \\ \hline ImageNet & 1x & 9.05 & 9.54 & 38.2 & 34.7 \\ Montage$\ast$ & 1x & 2.26{\fo(-74.5\%)} & 2.75{\fo(-71.2\%)} & 38.4{\fo(+0.2)} & - \\ Direct(P1x) & 1x & \tb{0.81{\fo(-90.9\%)}} & \tb{1.26{\fo(-86.7\%)}} & 40.0{\fo(+1.8)} & 35.8{\fo(+1.1)} \\ Direct(P2x) & 1x & 1.62{\fo(-92.0\%)} & 2.07{\fo(-87.7\%)} & \tb{41.0{\fo(+2.8)}} & \tb{36.6{\fo(+1.9)}} \\ \hline ImageNet & 2x & 9.05 & 10.03 & 39.2 & 35.4 \\ Scratch(SyncBN) & 6x & - & 3.84{\fo(-56.8\%)} & 39.6{\fo(+0.4)} & 35.5{\fo(+0.1)} \\ Scratch(GN) & 6x & - & 2.35{\fo(-71.7\%)} & 41.1{\fo(+1.9)} & 36.5{\fo(+1.1)} \\ Direct(P3x) & 1x & \tb{1.07{\fo(-88.2\%)}} & \tb{1.52{\fo(-84.1\%)}} & 41.5{\fo(+2.3)} & 37.0{\fo(+1.6)} \\ Direct(P4x) & 1x & 1.44{\fo(-84.1\%)} & 1.89{\fo(-80.1\%)} & \tb{41.7{\fo(+2.5)}} & \tb{37.3{\fo(+1.9)}} \\ \bottomrule \end{tabular}} \caption{Main Mask R-CNN results on the COCO dataset. P2x, P3x and P4x mean pre-training for 24, 36 and 48 epochs respectively. For better comparison, Montage$\ast$ is the 'normalized' setting derived from the reported values. Note: the Montage time cost is measured on V100 GPUs, while ours comes from 2080Ti, whose speed is about 0.79x that of a V100.} \label{total_res} \end{table*} In this section, we compare our models with the typical 1x and 2x settings. Results are shown in table \ref{total_res} and in Fig \ref{total_res}. For Montage pre-training, we use its originally reported results \cite{Montage}; for a fair comparison, we 'normalize' its performance and time cost by adding them to our ImageNet baseline. Following \cite{Montage}, all reported time costs only account for GPU running time and ignore data pre-processing time; in fact, direct pre-training also has a faster data pre-processing time, see \S \ref{speed} for details. We then calculate the time cost from the consumption reported in their papers to compare with ImageNet pre-training. We can see that our models are significantly faster and more accurate than the ImageNet ones. The Direct(P1x) model is already 1.8 mAP higher than the typical ImageNet 1x model and, furthermore, it outperforms the ImageNet 2x model. Comparing its pre-training phase with the ImageNet one, it is accelerated by more than 11x (-90.9\% GPU days). For longer schedules, \eg Scratch(GN)-6x and Scratch(SyncBN)-6x, we can see from Fig \ref{total_res} (b) that direct pre-training still has higher efficiency. It is worth noting that Scratch(SyncBN) has the same network architecture but a different training strategy; even with a sufficiently trained 6x schedule, it still has a large performance gap compared to direct pre-training with a 4x schedule (39.6 mAP \vs 41.7 mAP). \subsection{Schedule Setting Trade-off} We investigate the influence of the schedule setting on pre-training and fine-tuning respectively in this part. We observe that increasing the number of pre-training iterations provides better pre-trained models, and the performance upper bound is mainly determined by pre-training rather than fine-tuning. We fix the fine-tuning schedule to 1x and vary the pre-training schedule to measure the performance influence. Results are shown in table \ref{trade}; they show that a 3x schedule for pre-training is sufficient. We then investigate fine-tuning schedule settings in Fig \ref{finetune}. 
We can observe that direct pre-training over-fits easily: the longer the pre-training schedule, the more easily fine-tuning over-fits. Fig \ref{finetune} shows that, when pre-training with a 3x schedule, the best fine-tuning schedule lies between 9 and 15 epochs. Summarizing the above observations, we suggest that \tb{pre-training with a 3x schedule and fine-tuning with a 1x schedule is sufficient for direct pre-training}. \begin{figure} \begin{floatrow}[2] \tablebox{\caption{Performance comparison of different schedule settings.}\label{trade}}{ \setlength{\tabcolsep}{1mm}{ \begin{tabular}{c|c|cc} \toprule Method & Schedule & $AP^{bbox}$ & $AP^{mask}$ \\ \hline \multirow{2}*{Direct(P1x)} & 1x & 40.0 & 35.8 \\ ~ & 2x & \tb{40.4} & \tb{36.1} \\ \hline \multirow{2}*{Direct(P2x)} & 1x & \tb{41.0} & \tb{36.6} \\ ~ & 2x & 40.8 & 36.4 \\ \hline \multirow{2}*{Direct(P3x)} & 1x & \tb{41.5} & \tb{37.0} \\ ~ & 2x & 41.0 & 36.6 \\ \hline Direct(P4x) & 1x & \tb{41.7} & \tb{37.3} \\ \bottomrule \end{tabular}}} \figurebox{\caption{Fine-tuning performance trade-off on the COCO dataset.}\label{finetune}}{ \includegraphics[width=0.7\linewidth]{figs/tradeoff.png}} \end{floatrow} \end{figure} \subsection{Gain Analysis} Unlike typical ImageNet pre-training, direct pre-training includes additional model parts (neck, RPN and RoI heads) during pre-training rather than just the backbone. In this section, we therefore quantify the performance influence of the different model parts trained with direct pre-training, to find out whether the additional parts help training. Results are shown in table \ref{gain}; they confirm that these parts indeed improve detection performance. Among them, the bounding box head helps the most, while the RPN helps the least and even slightly degrades performance (-0.1 mAP). \begin{table} \centering \setlength{\tabcolsep}{5mm}{ \begin{tabular}{c|cc} \toprule Load Parameters & $AP^{bbox}$ & $AP^{mask}$ \\ \hline ImageNet & 38.2 & 34.7 \\ \hline Backbone & 39.0 & 34.7 \\ + FPN & 39.4{\fo(+0.4)} & 35.2{\fo(+0.5)} \\ + RPN & 39.3{\fo(-0.1)} & 35.1{\fo(-0.1)} \\ + (Bbox Head) & \tb{41.7{\fo(+2.4)}} & 36.8\tb{\fo(+1.7)} \\ + (Mask Head) & \tb{41.7}{\fo(+0.0)} & \tb{37.3}{\fo(+0.5)} \\ \bottomrule \end{tabular}} \caption{Gain analysis over different model parts from direct pre-training. } \label{gain} \end{table} \subsection{Multi-scale Training} The crop operation during pre-training can be viewed as a multi-scale training trick. We therefore need to answer the question: if we use another pre-training strategy, can it catch up with direct pre-training when typical multi-scale training is adopted in a longer fine-tuning schedule? We use Direct(P1x) as the baseline and compare it with ImageNet and Scratch(SyncBN) counterparts trained with multi-scale training. We keep the fine-tuning steps the same and apply the multi-scale policy that randomly resizes the short image edge to [640, 672, 704, 736, 768, 800]; the results are shown in table \ref{multi_scale}. 
We can see that direct pre-training still has an advantage over its counterparts; even direct pre-training with a 2x schedule and without multi-scale training outperforms ImageNet with multi-scale training (40.4 mAP \vs 40.2 mAP). \begin{table} \centering \setlength{\tabcolsep}{3mm}{ \begin{tabular}{c|c|c|cc} \toprule Method & Schedule & Multi-Scale & $AP^{bbox}$ & $AP^{mask}$ \\ \hline ImageNet & 2x & & 39.2 & 35.4 \\ Direct(P1x) & 1x & & 40.0 & 35.8 \\ Direct(P1x) & 2x & & \tb{40.4} & \tb{36.1} \\ \hline ImageNet & 2x & $\surd$ & 40.2 & 36.1 \\ Scratch(SyncBN) & 3x & $\surd$ & 36.0 & 32.8 \\ Direct(P1x) & 2x & $\surd$ & \tb{41.0} & \tb{36.7} \\ \bottomrule \end{tabular}} \caption{Multi-scale training performance comparison. } \label{multi_scale} \end{table} \subsection{Train and Inference Time Comparison}\label{speed} Our method is faster than the typical ImageNet and Scratch(GN) counterparts during fine-tuning. During inference, the speed of our method is exactly the same as that of the ImageNet fine-tuned models. Compared with Scratch(GN) training, without the introduction of GN our model is faster during both fine-tuning and inference. Results are shown in table \ref{time}. We obtain the training time and the data pre-processing time by averaging over all training steps of the P3x model during fine-tuning on 8 GPUs, and the inference speed is measured by running inference on COCO val2017 on a 1080Ti GPU. \begin{table} \centering \setlength{\tabcolsep}{2mm}{ \begin{tabular}{c|c|c|c} \toprule Method & Time (s/iter) & Inference (fps) & Data Time (s) \\ \hline ImageNet & 0.478 & \tb{8.6} & - \\ Scratch(GN) & 0.578 & 7.6 & 0.029 \\ Scratch(SyncBN) & 0.483 & \tb{8.6} & 0.029 \\ Direct & \tb{0.424} & \tb{8.6} & \tb{0.014} \\ \bottomrule \end{tabular}} \caption{Time comparison for different models. 'Time' is the training time cost per iteration during fine-tuning, and 'Data Time' is the data pre-processing time.} \label{time} \end{table} \subsection{More Validations} In this section we validate the effectiveness of direct pre-training on more detectors. To this end, we select typical one-stage, two-stage and multi-stage detectors; unlike our Mask R-CNN baseline, these detectors are trained without mask labels. As the results in table \ref{more} show, all these detectors trained with direct pre-training outperform their ImageNet counterparts. Furthermore, we also validate it on the recently proposed transformer-based backbone Swin Transformer \cite{Swin}: we use the Swin-T backbone and do not fix the layer normalization (LN) layers in the backbone, \ie we keep them activated. 
The results, shown in table \ref{more}, prove that \tb{direct pre-training also works for transformer-based backbones.} \begin{table} \centering \setlength{\tabcolsep}{2mm}{ \begin{tabular}{c|c|cccccc} \toprule Method & Pre-training & AP & $AP_{50}$ & $AP_{75}$ & $AP_s$ & $AP_m$ & $AP_l$ \\ \hline \multirow{2}*{RetinaNet} & ImageNet & 36.5 & 55.4 & 39.1 & 20.4 & \tb{40.3} & 48.1 \\ ~ & Direct(P1x) & \tb{37.1} & \tb{55.7} & \tb{39.6} & \tb{22.1} & 40.0 & \tb{48.6} \\ \hline \multirow{2}*{Faster RCNN} & ImageNet & 37.4 & 58.1 & 40.4 & 21.2 & 41.0 & 48.1 \\ ~ & Direct(P1x) & \tb{39.3} & \tb{58.7} & \tb{43.0} & \tb{24.4} & \tb{42.1} & \tb{49.8} \\ \hline \multirow{2}*{Cascade RCNN} & ImageNet & 40.3 & 58.6 & 44.0 & 22.5 & 43.8 & 52.9 \\ ~ & Direct(P1x) & \tb{41.5} & \tb{59.4} & \tb{45.1} & \tb{25.3} & \tb{44.2} & \tb{53.4} \\ \hline \multirow{2}*{Mask RCNN w/ Swin} & ImageNet & 43.8 & 65.3 & 48.1 & 26.8 & 47.4 & \tb{58.0} \\ ~ & Direct(P1x) & \tb{45.0} & \tb{65.6} & \tb{49.6} & \tb{30.2} & \tb{48.2} & 57.4 \\ \bottomrule \end{tabular}} \caption{Direct pre-training performance comparison for different models.} \label{more} \end{table} \subsection{Direct Pre-training with Less Data}\label{lessdata} In this section we validate our method's effectiveness in situations with less data. We validate on 1/10 of the COCO dataset (11.8k images) and on the PASCAL VOC \cite{VOC} dataset. On 1/10 COCO, we train for roughly 60k and 120k steps for direct pre-training, \vs 60k fine-tuning steps with ImageNet pre-training. Hyper-parameters follow \cite{RethinkImageNet}, \eg the base learning rate is set to 0.04 and the decay factor is 0.02. Differently, we do not use multi-scale training, for consistency with the settings above. For PASCAL VOC we train a Faster R-CNN on \textsl{train2007+train2012} and evaluate on \textsl{val2007}. Results are shown in table \ref{less}. From these results, on the 1/10 COCO dataset direct pre-training still outperforms its ImageNet pre-training counterpart, but on PASCAL VOC direct pre-training cannot catch up with ImageNet pre-training. This shows that \tb{in extremely data-scarce situations such as the PASCAL VOC dataset, direct pre-training still cannot replace ImageNet pre-training.} \begin{table} \centering \setlength{\tabcolsep}{4mm}{ \begin{tabular}{c|c|c|c|c} \toprule Dataset & Model & Method & Iterations & $AP^{bbox}$ \\ \hline \multirow{2}*{1/10 COCO} & \multirow{2}*{Mask R-CNN} & ImageNet & 60k & 22.1 \\ ~ & ~ & Direct & P60k+30k & \tb{23.2} \\ \hline \multirow{2}*{VOC} & \multirow{2}*{Faster RCNN} & ImageNet & 12.4k & \tb{79.47} \\ ~ & ~ & Direct & P93k+12.4k & 72.01 \\ \bottomrule \end{tabular}} \caption{Performance of direct pre-training with less data.} \label{less} \end{table} \section{Conclusion} In this paper, we explored how to set up a fast and efficient scratch-training pipeline for object detection. Extensive experiments proved that our proposed direct pre-training significantly improves the efficiency of training from scratch. With more and more massive-scale detection datasets such as Open Images \cite{OpenImages} and Objects365 \cite{Objects365} recently introduced, we need to develop more efficient training strategies to tackle these cases. We hope our work will help the community move forward. 
\section{Acknowledgements} This work was supported by the National Natural Science Foundation of China (Grant No.62088101, 61673341), the Project of State Key Laboratory of Industrial Control Technology, Zhejiang University, China (No.ICT2021A10), the Fundamental Research Funds for the Central Universities (No.2021FZZX003-01-06), National Key R\&D Program of China (2016YFD0200701-3), Double First Class University Plan (CN). \medskip {\small \bibliographystyle{ieeetr}
\section{Introduction} Throughout this paper, let $p$ be an odd prime, $\mathbb{Z}_{p^k}$ be the ring of integers modulo $p^k$, $\mathbb{F}_{p}^{n}$ be the vector space of the $n$-tuples over $\mathbb{F}_{p}$, $\mathbb{F}_{p^n}$ be the finite field with $p^n$ elements and $V_{n}$ be an $n$-dimensional vector space over $\mathbb{F}_{p}$ where $k, n$ are positive integers. Boolean bent functions introduced by Rothaus \cite{Rothaus} play an important role in cryptography, coding theory, sequences and combinatorics. In 1985, Kumar \emph{et al}. \cite{Kumar} generalized Boolean bent functions to bent functions over finite fields of odd characteristic. Due to the importance of bent functions, they have been studied extensively. There is an exhaustive survey \cite{Carlet} and a book \cite{Mesnager1} for bent functions and generalized bent functions. Recently, the notion of Boolean generalized bent functions has been generalized to generalized bent functions from $V_{n}$ to $\mathbb{Z}_{p^k}$ \cite{Mesnager2}. A function $f$ from $V_{n}$ to $\mathbb{Z}_{p^k}$ is called a generalized $p$-ary function, or simply $p$-ary function when $k=1$. The Walsh transform of a generalized $p$-ary function $f: V_{n} \rightarrow \mathbb{Z}_{p^k}$ is the complex valued function on $V_{n}$ defined as \begin{equation}\label{1} W_{f}(a)=\sum_{x\in V_{n}}\zeta_{p^k}^{f(x)}\zeta_{p}^{-\langle a, x \rangle}, a \in V_{n} \end{equation} where $\zeta_{p^k}=e^{\frac{2\pi \sqrt{-1}}{p^k}}$ is the complex primitive $p^k$-th root of unity and $\langle a, x \rangle$ denotes a (nondegenerate) inner product in $V_{n}$. And $f$ can be recovered by the inverse transform \begin{equation}\label{2} \zeta_{p^k}^{f(x)}=\frac{1}{p^n}\sum_{a\in V_{n}}W_{f}(a)\zeta_{p}^{\langle a, x\rangle}, x \in V_{n}. \end{equation} The classical representations of $V_{n}$ are $\mathbb{F}_{p}^{n}$ with $\langle a, x \rangle= a \cdot x$ and $\mathbb{F}_{p^n}$ with $\langle a, x \rangle =Tr_{1}^{n}(ax)$, where $a\cdot x$ is the usual dot product over $\mathbb{F}_{p}^{n}$, $Tr_{1}^{n}(\cdot)$ is the absolute trace function. When $V_{n}=V_{n_{1}}\times \dots \times V_{n_{s}} (n=\sum_{i=1}^{s}n_{i}$), let $\langle a, b\rangle =\sum_{i=1}^{s}\langle a_{i}, b_{i}\rangle $ where $a=(a_{1}, \dots, a_{s}), b=(b_{1}, \dots, b_{s})\in V_{n}$. The generalized $p$-ary function $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ is called a generalized $p$-ary bent function, or simply $p$-ary bent function when $k=1$ if $|W_{f}(a)|=p^{\frac{n}{2}}$ for any $a\in V_{n}$. In \cite{Mesnager2}, the authors have shown that the Walsh transform of a generalized $p$-ary bent function $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ satisfies that for any $a \in V_{n}$, \begin{equation}\label{3} W_{f}(a)=\left\{\begin{split} \pm p^{\frac{n}{2}}\zeta_{p^k}^{f^{*}(a)} & \ \ if \ p \equiv 1 \ (mod \ 4) \ or \ n \ is \ even,\\ \pm \sqrt{-1} p^{\frac{n}{2}}\zeta_{p^k}^{f^{*}(a)} & \ \ if \ p \equiv 3 \ (mod \ 4) \ and \ n \ is \ odd \end{split} \right. \end{equation} where $f^{*}$ is a function from $V_{n}$ to $\mathbb{Z}_{p^k}$ and is called the dual of $f$. We call $f$ self-dual if $f^{*}=f$. In the sequel, if $f^{*}$ is also a generalized bent function, let $f^{**}$ denote the dual of $f^{*}$. 
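As a small computational illustration of the Walsh transform (1) (our own addition, not part of the formal development), the generalized bentness condition $|W_{f}(a)|=p^{\frac{n}{2}}$ can be checked numerically for small parameters. The Python sketch below uses floating-point roots of unity, so it is only a numerical check; exact verification requires exact cyclotomic arithmetic.
\begin{verbatim}
import itertools
import numpy as np

def walsh_transform(f, p, k, n):
    """Walsh transform (1) of f : F_p^n -> Z_{p^k}, with <a,x> the dot
    product; f is a dict mapping tuples in {0,...,p-1}^n to Z_{p^k}."""
    zeta_pk = np.exp(2j * np.pi / p**k)
    zeta_p  = np.exp(2j * np.pi / p)
    pts = list(itertools.product(range(p), repeat=n))
    return {a: sum(zeta_pk**f[x] * zeta_p**(-(np.dot(a, x) % p))
                   for x in pts)
            for a in pts}

def is_generalized_bent(f, p, k, n, tol=1e-8):
    """Check |W_f(a)| = p^(n/2) for all a."""
    W = walsh_transform(f, p, k, n)
    return all(abs(abs(v) - p**(n / 2)) < tol for v in W.values())

# Example: p=3, k=2, n=1 and f(x) = 3*x^2 mod 9, i.e. p^{k-1} times the
# quadratic bent function x^2; is_generalized_bent returns True.
f = {(x,): (3 * x * x) % 9 for x in range(3)}
print(is_generalized_bent(f, p=3, k=2, n=1))
\end{verbatim}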
If $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ is a generalized bent function with dual $f^{*}$, define \begin{equation}\label{4} \mu_{f}(a)=p^{-\frac{n}{2}}\zeta_{p^k}^{-f^{*}(a)}W_{f}(a), a \in V_{n} \end{equation} and \begin{equation}\label{5} \varepsilon_{f}(a)=\xi^{-1}\mu_{f}(a), a \in V_{n} \end{equation} where $\xi=1$ if $p\equiv 1 \ (mod\ 4)$ or $n$ is even and $\xi=\sqrt{-1}$ if $p\equiv 3 \ (mod\ 4)$ and $n$ is odd. By (3), it is easy to see that $\varepsilon_{f}$ is a function from $V_{n}$ to $\{\pm 1\}$. If $\mu_{f}$ is a constant function, then $f$ is called weakly regular, otherwise $f$ is called non-weakly regular. In particular, if $\mu_{f}(a)=1, a \in V_{n}$, $f$ is called regular. There are some works on the dual of $p$-ary bent functions \cite{Cesmelioglu2}, \cite{Cesmelioglu4}, \cite{Cesmelioglu5}, \cite{Ozbudak}. It is known that weakly regular generalized bent functions always appear in pairs since the dual of a weakly regular generalized bent function is also a weakly regular generalized bent function \cite{Mesnager2}. For non-weakly regular bent functions, the dual can be bent or not bent. In \cite{Cesmelioglu4}, \c{C}e\c{s}melio\u{g}lu \emph{et al.} analysed the first known construction of non-weakly regular bent functions \cite{Cesmelioglu1} and showed that this construction yields bent functions for which the dual is bent if using some proper weakly regular plateaued functions or bent functions whose dual is also bent as building blocks. In \cite{Cesmelioglu5}, \c{C}e\c{s}melio\u{g}lu \emph{et al.} provided the first explicit construction of non-weakly regular bent functions for which the dual is not bent. As a consequence, the dual of non-weakly regular generalized bent functions can be generalized bent or not generalized bent. For instance, if $f: V_{n}\rightarrow \mathbb{F}_{p}$ is a non-weakly regular bent function whose dual is (not) bent, then obviously $p^{k-1}f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ is a non-weakly regular generalized bent function whose dual is (not) generalized bent. In \cite{Cesmelioglu4}, the authors also analysed the self-duality of $p$-ary bent functions from $V_{n}$ to $\mathbb{F}_{p}$. They characterized quadratic self-dual bent functions. For $p\equiv 1 \ (mod \ 4)$, they proposed a secondary construction of self-dual bent functions, which can be used to construct non-quadratic self-dual bent functions by using (non)-quadratic self-dual bent functions as building blocks. However, for $p\equiv 3 \ (mod \ 4)$ and $n$ is even, they only illustrated that there exist ternary non-quadratic self-dual bent functions by considering special ternary bent functions. For $p \equiv 3 \ (mod \ 4)$ and $n$ is odd, they showed that there is no weakly regular self-dual bent function and they pointed out that there exist non-weakly regular self-dual bent functions. Indeed, they pointed out that the non-weakly regular bent function $g_{3}(x)=Tr_{1}^{3}(x^{22}+x^{8})$ from $\mathbb{F}_{3^3}$ to $\mathbb{F}_{3}$ is self-dual. But in fact, $g_{3}$ is not self-dual. In this paper, we will prove that there is no self-dual generalized bent function $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ if $p\equiv 3 \ (mod \ 4)$ and $n$ is odd. For weakly regular bent functions $f$ with dual $f^{*}$, it is known that $f^{**}(x)=f(-x)$ holds where $f^{**}$ denotes the dual of $f^{*}$. 
Recently, for non-weakly regular bent functions $f$ whose dual $f^{*}$ is also bent, \"{O}zbudak and Pelen \cite{Ozbudak} showed that $f^{**}(x)=f(-x)$ also holds by studying the value distributions of the dual of non-weakly regular bent functions. In this paper, we study the dual of generalized bent functions from $V_{n}$ to $\mathbb{Z}_{p^k}$. By generalizing the construction of Corollary 2 of \cite{Cesmelioglu5}, we obtain an explicit construction of generalized bent functions for which the dual can be generalized bent or not generalized bent. Note that in \cite{Cesmelioglu5}, a sufficient condition is given to construct non-weakly regular bent functions whose dual is not bent, however, a sufficient condition for the dual of the constructed non-weakly regular bent function to be bent is not given. We will give a sufficient condition to illustrate that the construction of Corollary 2 of \cite{Cesmelioglu5} can also be used to construct non-weakly regular bent functions whose dual is bent. We will show that the generalized indirect sum construction method given in \cite{Wang} can provide a secondary construction of generalized bent functions for which the dual can be generalized bent or not generalized bent. For generalized bent functions $f$ whose dual $f^{*}$ is also generalized bent, different from the proof method in \cite{Ozbudak}, we prove $f^{**}(x)=f(-x)$ by using the knowledge on ideal decomposition in cyclotomic field, which seems more concise. For any non-weakly regular generalized bent function $f$ which satisfies that $f(x)=f(-x)$ and the dual $f^{*}$ is generalized bent, we give a property and as a consequence, we prove that there is no self-dual generalized bent function $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ if $p\equiv 3 \ (mod \ 4)$ and $n$ is odd. For $p \equiv 1 \ (mod \ 4)$ or $p\equiv 3 \ (mod \ 4)$ and $n$ is even, we give a secondary construction of self-dual generalized bent functions. In the end, by the decomposition of generalized bent functions, we characterize the relations between the generalized bentness of the dual of generalized bent functions and the bentness of the dual of bent functions, as well as the self-duality relations between generalized bent functions and bent functions. The rest of the paper is organized as follows. In Section II, we present some needed results on generalized bent functions and number fields. In Section III, we give constructions of generalized bent functions whose dual can be generalized bent or not generalized bent (Theorems 1 and 2). In Section IV, for generalized bent functions $f$ whose dual $f^{*}$ is also generalized bent, by using the knowledge on ideal decomposition in cyclotomic field, we prove that $f^{**}(x)=f(-x)$ holds (Theorem 3). In Section V, we prove that there is no self-dual generalized bent function $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ if $p\equiv 3 \ (mod \ 4)$ and $n$ is odd (Theorem 4). In Section VI, we give a secondary construction of self-dual generalized bent functions $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ if $p\equiv 1 \ (mod \ 4)$ or $n$ is even (Theorem 5). In Section VII, we characterize the relations between the generalized bentness of the dual of generalized bent functions and the bentness of the dual of bent functions, as well as the self-duality relations between generalized bent functions and bent functions (Theorem 6). In Section VIII, we make a conclusion. 
\section{Preliminaries} \label{sec:1} In this section, we present some needed results on generalized bent functions and number fields. \subsection{Some Results on Generalized Bent Functions} Let $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ be a generalized bent function, then $W_{f}(a)=\mu_{f}(a)p^{\frac{n}{2}}\zeta_{p^k}^{f^{*}(a)}$ for any $a\in V_{n}$ where $\mu_{f}(a)=\xi \varepsilon_{f}(a)$, $\xi=1$ if $p\equiv 1 \ (mod\ 4)$ or $n$ is even and $\xi=\sqrt{-1}$ if $p\equiv 3 \ (mod\ 4)$ and $n$ is odd, $\varepsilon_{f}: V_{n} \rightarrow \{\pm 1\}$, $f^{*}$ is the dual of $f$. By the inverse transform (2), we have \begin{equation*} \begin{split} \zeta_{p^k}^{f(x)}&=\frac{1}{p^n}\sum_{a \in V_{n}}W_{f}(a)\zeta_{p}^{\langle a, x\rangle} \\ &=\frac{1}{p^n}\sum_{a \in V_{n}}\xi\varepsilon_{f}(a)p^{\frac{n}{2}}\zeta_{p^k}^{f^{*}(a)}\zeta_{p}^{\langle a, x\rangle}\\ &=\xi p^{-\frac{n}{2}}\sum_{a \in V_{n}}\varepsilon_{f}(a)\zeta_{p^k}^{f^{*}(a)}\zeta_{p}^{\langle a, x\rangle}. \end{split} \end{equation*} Hence for any $x \in V_{n}$, \begin{equation}\label{6} \xi\sum_{a \in V_{n}}\varepsilon_{f}(a)\zeta_{p^k}^{f^{*}(a)}\zeta_{p}^{\langle a, x\rangle}=p^{\frac{n}{2}}\zeta_{p^k}^{f(x)}. \end{equation} In this paper, let $\eta$ be the multiplicative quadratic character of $\mathbb{F}_{p^n}$, that is, \begin{equation*} \eta(x)=\left\{\begin{split} 1 & \ \ if \ x \in \mathbb{F}_{p^n}^{*} \ is \ a \ square \ element, \\ -1 & \ \ if \ x \in \mathbb{F}_{p^n}^{*} \ is \ a \ non\raisebox{0mm}{-}square \ element. \end{split}\right. \end{equation*} Let $\alpha \in \mathbb{F}_{p^n}^{*}$. Then the function $f: \mathbb{F}_{p^n}\rightarrow \mathbb{F}_{p}$ defined by $f(x)=Tr_{1}^{n}(\alpha x^{2})$ is a weakly regular bent function and $W_{f}(a)=(-1)^{n-1}\epsilon^{n}\eta(\alpha)p^{\frac{n}{2}}\zeta_{p}^{Tr_{1}^{n}(-\frac{a^2}{4\alpha})}$ for any $a \in \mathbb{F}_{p^n}$ where $\epsilon=1$ if $p\equiv 1 \ (mod \ 4)$ and $\epsilon=\sqrt{-1}$ if $p\equiv 3 \ (mod \ 4)$ (see \cite{Helleseth}). In \cite{Mesnager2}, the authors characterized the relations between generalized bent functions and bent functions by the decomposition of generalized bent functions. The following lemma follows from Theorem 16 and its proof of \cite{Mesnager2}. \begin{lemma} [\cite{Mesnager2}] Let $k\geq 2$ be an integer. Let $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ with the decomposition $f(x)=\sum_{i=0}^{k-1}f_{i}(x)p^{k-1-i}=f_{0}(x)p^{k-1}+\bar{f}_{1}(x)$ where $f_{i}$ is a function from $V_{n}$ to $\mathbb{F}_{p}$ for any $0\leq i \leq k-1$ and $\bar{f}_{1}(x)=\sum_{i=1}^{k-1}f_{i}(x)p^{k-1-i}$ is a function from $V_{n}$ to $\mathbb{Z}_{p^{k-1}}$. Then $f$ is a generalized bent function if and only if $g_{f, F}\triangleq f_{0}+F(f_{1}, \dots, f_{k-1})$ is a bent function for any function $F: \mathbb{F}_{p}^{k-1}\rightarrow \mathbb{F}_{p}$. Furthermore, if $f$ is a generalized bent function, then $f^{*}(x)=f_{0}^{*}(x)p^{k-1}+\lambda(x)$ and $g_{f, F}^{*}(x)=f_{0}^{*}(x)+F(\lambda_{1}(x), \dots, \lambda_{k-1}(x))$ where $\lambda(x)=\sum_{i=1}^{k-1}\lambda_{i}(x)p^{k-1-i}, \lambda_{i}: V_{n}\rightarrow \mathbb{F}_{p}$ and $\lambda: V_{n}\rightarrow \mathbb{Z}_{p^{k-1}}$ satisfies that for any $a \in V_{n}$, $\sum_{x \in V_{n}: \bar{f}_{1}(x)=\lambda(a)}\zeta_{p}^{f_{0}(x)-\langle a, x\rangle}$ $=W_{f_{0}}(a)$ and $\sum_{x \in V_{n}:\bar{f}_{1}(x)=v}\zeta_{p}^{f_{0}(x)-\langle a, x\rangle}=0$ for any $v \in \mathbb{Z}_{p^{k-1}}$ with $v \neq \lambda(a)$. 
\end{lemma} \subsection{Some Results on Number Fields} In this subsection, we give some results on number fields. For more results on number fields, we refer to \cite{Feng}. Suppose $L/K$ is a number field extension. Let $\mathcal{O}_{L}$ and $\mathcal{O}_{K}$ be the ring of integers in $L$ and $K$ respectively. Any nonzero ideal $I$ of $\mathcal{O}_{L}$ can be uniquely (up to the order) expressed as \begin{equation*} I=\mathfrak{B}_{1}^{e_{1}}\dots \mathfrak{B}_{g}^{e_{g}} \end{equation*} where $\mathfrak{B}_{1}, \dots, \mathfrak{B}_{g}$ are distinct (nonzero) prime ideals of $\mathcal{O}_{L}$ and $e_{i}\geq 1 \ (1\leq i \leq g)$. And when $I=\mathfrak{p}\mathcal{O}_{L}$ where $\mathfrak{p}$ is a nonzero prime ideal of $\mathcal{O}_{K}$, $\mathfrak{p}=\mathfrak{B}_{i}\cap \mathcal{O}_{K}$ for any $1\leq i \leq g$. In this paper, we consider cyclotomic field $L=\mathbb{Q}(\zeta_{p^k})$. Then $\mathcal{O}_{L}=\mathbb{Z}[\zeta_{p^k}]$ and $\{\zeta_{p}^{i}\zeta_{p^k}^{j}: 0\leq i \leq p-2, 0\leq j \leq p^{k-1}-1\}$ is an integral basis of $\mathcal{O}_{L}$. For prime ideal $p\mathbb{Z}$ of $\mathbb{Z}$, $(p\mathbb{Z})\mathcal{O}_{L}=p\mathcal{O}_{L}$ and \begin{equation*} p\mathcal{O}_{L}=((1-\zeta_{p^k})\mathcal{O}_{L})^{\varphi(p^k)} \end{equation*} where $(1-\zeta_{p^k})\mathcal{O}_{L}$ is a prime ideal of $\mathcal{O}_{L}$ and $\varphi$ is the Euler function. For any integer $j\in \mathbb{Z}$, let $v_{p}(j)\in \mathbb{Z}$ denote the $p$-adic valuation of $j$, that is, $p^{v_{p}(j)}\mid j$ and $p^{v_{p}(j)+1}\nmid j$. \section{Constructions of generalized bent functions whose dual is (not) generalized bent} In this section, we give constructions of generalized bent functions whose dual can be generalized bent or not generalized bent. The following theorem gives an explicit construction of generalized bent functions for which the dual can be generalized bent or not generalized bent, which is a generalization of Corollary 2 of \cite{Cesmelioglu5}. Note that in Corollary 2 of \cite{Cesmelioglu5}, a sufficient condition is given to construct non-weakly regular bent functions whose dual is not bent, however, a sufficient condition for the dual of the constructed non-weakly regular bent function to be bent is not given. In the following theorem, we also give a sufficient condition to illustrate that the construction of \cite{Cesmelioglu5} can also be used to construct non-weakly regular bent functions whose dual is bent. \begin{theorem} Let $m\geq 2$ be an integer. Let $\alpha, \beta \in \mathbb{F}_{p^m}^{*}$ satisfy $1+i \alpha +j \beta \neq 0$ for any $i, j \in \mathbb{F}_{p}$. Let $F(x, y_{1}, y_{2})=p^{k-1}(Tr_{1}^{m}(x^{2})+(y_{1}+Tr_{1}^{m}(\alpha x^{2}))(y_{2}+Tr_{1}^{m}(\beta x^{2})))+g(y_{2}+Tr_{1}^{m}(\beta x^{2})), (x, y_{1}, y_{2})\in \mathbb{F}_{p^m} \times \mathbb{F}_{p} \times \mathbb{F}_{p}$, where $g$ is an arbitrary function from $\mathbb{F}_{p}$ to $\mathbb{Z}_{p^{k}}$. Then 1) $F$ is a generalized bent function and it is weakly regular if and only if $\eta(1+i \alpha +j \beta)=1$ for any $i, j \in \mathbb{F}_{p}$. 2) The dual $F^{*}(x, y_{1}, y_{2})=p^{k-1}(Tr_{1}^{m}(-\frac{x^{2}}{4(1+y_{1}\alpha+y_{2}\beta)})-y_{1}y_{2})+g(y_{1})$ and $F^{*}$ is a generalized bent function if and only if \begin{equation}\label{7} |\sum_{y_{1}, y_{2} \in \mathbb{F}_{p}}\eta(1+y_{1} \alpha +y_{2} \beta)\zeta_{p^k}^{g(y_{1})}\zeta_{p}^{-y_{1}y_{2}+b_{1}y_{1}+b_{2}y_{2}}|=p \ \ for \ any \ b_{1}, b_{2} \in \mathbb{F}_{p}. 
\end{equation} In particular, if $\eta(1+i \alpha +j \beta)=\eta(1+i\alpha)$ for any $i, j \in \mathbb{F}_{p}$ and there exist $i_{1}, i_{2}\in \mathbb{F}_{p}$ such that $\eta(1+i_{1}\alpha)\neq \eta(1+i_{2}\alpha)$, then $F^{*}$ is a non-weakly regular generalized bent function. \end{theorem} \begin{proof} 1) For any $(a, b_{1}, b_{2})\in \mathbb{F}_{p^m}\times \mathbb{F}_{p} \times \mathbb{F}_{p}$, we have \begin{equation*} \begin{split} &W_{F}(a, b_{1}, b_{2})\\ & =\sum_{x \in \mathbb{F}_{p^m}, y_{1}, y_{2} \in \mathbb{F}_{p}}\zeta_{p^k}^{F(x, y_{1}, y_{2})}\zeta_{p}^{-Tr_{1}^{m}(ax)-b_{1}y_{1}-b_{2}y_{2}}\\ & =\sum_{x \in \mathbb{F}_{p^m}}\zeta_{p}^{Tr_{1}^{m}(x^{2})-Tr_{1}^{m}(ax)}\sum_{y_{1}, y_{2}\in \mathbb{F}_{p}}\zeta_{p^k}^{g(y_{2}+Tr_{1}^{m}(\beta x^{2}))}\zeta_{p}^{(y_{1}+Tr_{1}^{m}(\alpha x^{2}))(y_{2}+Tr_{1}^{m}(\beta x^{2}))-b_{1}y_{1}-b_{2}y_{2}} \\ & =\sum_{x \in \mathbb{F}_{p^m}}\zeta_{p}^{Tr_{1}^{m}(\gamma_{b}x^{2}-ax)}\sum_{z_{1}, z_{2}\in \mathbb{F}_{p}}\zeta_{p}^{z_{1}z_{2}-b_{1}z_{1}-b_{2}z_{2}}\zeta_{p^k}^{g(z_{2})} \end{split} \end{equation*} \ \ \ \ \ \ \ \ \ $=W_{Tr_{1}^{m}(\gamma_{b}x^{2})}(a)W_{p^{k-1}z_{1}z_{2}+g(z_{2})}(b_{1}, b_{2})$ \\where $\gamma_{b}=1+b_{1}\alpha +b_{2}\beta$ and in the third equality we use the change of variables $z_{1}=y_{1}+Tr_{1}^{m}(\alpha x^{2})$, $z_{2}=y_{2}+Tr_{1}^{m}(\beta x^{2})$. Since $\gamma_{b}\neq 0$, then $Tr_{1}^{m}(\gamma_{b}x^{2})$ is a bent function and $W_{Tr_{1}^{m}(\gamma_{b}x^{2})}(a)=(-1)^{m-1}\epsilon^{m}\eta(\gamma_{b})p^{\frac{m}{2}}\zeta_{p}^{Tr_{1}^{m}(-\frac{a^{2}}{4\gamma_{b}})}$ where $\epsilon=1$ if $p\equiv 1\ (mod \ 4)$ and $\epsilon=\sqrt{-1}$ if $p\equiv 3 \ (mod \ 4)$. Since $p^{k-1}z_{1}z_{2}+g(z_{2})$ is a generalized Maiorana-McFarland bent function, then $W_{p^{k-1}z_{1}z_{2}+g(z_{2})}(b_{1}, b_{2})=p \zeta_{p^k}^{-p^{k-1}b_{1}b_{2}+g(b_{1})}$. Hence $F$ is generalized bent and \begin{equation}\label{8} W_{F}(a, b_{1}, b_{2})=\mu_{F}(a, b_{1}, b_{2})p^{\frac{m+2}{2}}\zeta_{p^k}^{F^{*}(a, b_{1}, b_{2})} \end{equation} where $\mu_{F}(a, b_{1}, b_{2})=(-1)^{m-1}\epsilon^{m}\eta(\gamma_{b})$ and its dual $F^{*}(a, b_{1}, b_{2})=p^{k-1}(Tr_{1}^{m}(-\frac{a^{2}}{4\gamma_{b}})-b_{1}b_{2})+g(b_{1})$. From (8), it is obviously that $F$ is weakly regular if and only if $\eta(\gamma_{b})=\eta(1+b_{1}\alpha+b_{2} \beta), b_{1}, b_{2} \in \mathbb{F}_{p}$ are the same. By $\eta(1)=1$, we have $F$ is weakly regular if and only if $\eta(\gamma_{b})=1$ for any $b_{1}, b_{2} \in \mathbb{F}_{p}$. 
2) By $F^{*}(x, y_{1}, y_{2})=p^{k-1}(Tr_{1}^{m}(-\frac{x^{2}}{4\gamma_{y}})-y_{1}y_{2})+g(y_{1})$ where $\gamma_{y}=1+y_{1}\alpha+y_{2}\beta$, for any $(a, b_{1}, b_{2})\in \mathbb{F}_{p^m} \times \mathbb{F}_{p} \times \mathbb{F}_{p}$ we have \begin{equation*} \begin{split} & W_{F^{*}}(a, b_{1}, b_{2})\\ & =\sum_{y_{1}, y_{2}\in \mathbb{F}_{p}}\zeta_{p}^{-y_{1}y_{2}-b_{1}y_{1}-b_{2}y_{2}}\zeta_{p^k}^{g(y_{1})}\sum_{x \in \mathbb{F}_{p^m}}\zeta_{p}^{Tr_{1}^{m}(-\frac{x^{2}}{4\gamma_{y}}-ax)}\\ & =\sum_{y_{1}, y_{2}\in \mathbb{F}_{p}}\zeta_{p}^{-y_{1}y_{2}-b_{1}y_{1}-b_{2}y_{2}}\zeta_{p^k}^{g(y_{1})}(-1)^{m-1}\epsilon^{m}p^{\frac{m}{2}}\eta(-\frac{1}{4\gamma_{y}}) \zeta_{p}^{Tr_{1}^{m}(\gamma_{y}a^{2})}\\ & = \lambda\sum_{y_{1}, y_{2}\in \mathbb{F}_{p}}\eta(\gamma_{y})\zeta_{p}^{-y_{1}y_{2}-(b_{1}-Tr_{1}^{m}(a^{2}\alpha))y_{1}-(b_{2}-Tr_{1}^{m}(a^{2}\beta))y_{2}} \zeta_{p^k}^{g(y_{1})} \end{split} \end{equation*} where $\lambda=(-1)^{m-1}\epsilon^{m}\eta(-1)p^{\frac{m}{2}}\zeta_{p}^{Tr_{1}^{m}(a^{2})}$, hence it is easy to see that $F^{*}$ is generalized bent if and only if (7) holds. In particular, if $\eta(1+i \alpha +j \beta)=\eta(1+i\alpha)$ for any $i, j \in \mathbb{F}_{p}$, then for any $b_{1}, b_{2} \in \mathbb{F}_{p}$, \begin{equation*} \begin{split} &\sum_{y_{1}, y_{2}\in \mathbb{F}_{p}}\eta(1+y_{1}\alpha+y_{2}\beta)\zeta_{p}^{-y_{1}y_{2}+b_{1}y_{1}+b_{2}y_{2}}\zeta_{p^k}^{g(y_{1})} \\ &=\sum_{y_{1} \in \mathbb{F}_{p}}\eta(1+y_{1}\alpha)\zeta_{p}^{b_{1}y_{1}}\zeta_{p^k}^{g(y_{1})}\sum_{y_{2}\in \mathbb{F}_{p}}\zeta_{p}^{(b_{2}-y_{1})y_{2}}\\ &=\eta(1+b_{2}\alpha) p \zeta_{p}^{b_{1}b_{2}}\zeta_{p^k}^{g(b_{2})}, \end{split} \end{equation*} thus (7) holds and $F^{*}$ is generalized bent. Furthermore, if there exist $i_{1}, i_{2}\in \mathbb{F}_{p}$ such that $\eta(1+i_{1}\alpha)\neq \eta(1+i_{2}\alpha)$, it is easy to see that $F^{*}$ is non-weakly regular. \end{proof} \begin{remark} When $k=1$, $g=0$ and $1, \alpha , \beta\in \mathbb{F}_{p^m}$ are linearly independent over $\mathbb{F}_{p}$, the above construction reduces to Corollary 2 of \cite{Cesmelioglu5}, which is the first explicit construction of bent functions whose dual is not bent. As for general $\alpha, \beta\in \mathbb{F}_{p^m}^{*}$ with $1+i \alpha +j \beta \neq 0, i, j \in \mathbb{F}_{p}$, the condition (7) does not seem to hold in general, one can easily obtain non-weakly regular generalized bent functions whose dual is not generalized bent. But we emphasize that if $\alpha, \beta \in \mathbb{F}_{p^m}^{*}$ with $1+i \alpha +j \beta \neq 0, i, j \in \mathbb{F}_{p}$ satisfy that $\eta(1+i \alpha+j \beta)=\eta(1+i \alpha)$ for any $i, j \in \mathbb{F}_{p}$ and $\eta (1+i\alpha), i\in \mathbb{F}_{p}$ are not all the same, then the function constructed by the above construction is a non-weakly regular generalized bent function whose dual is also generalized bent. \end{remark} We give two examples of non-weakly regular generalized bent functions by using Theorem 1 for which the dual of the first example is not generalized bent and the dual of the second example is generalized bent. \begin{example} Let $p=5$, $m=2$, $k=3$. Let $z$ be the primitive element of $\mathbb{F}_{5^2}$ with $z^{2}+4z+2=0$. Let $\alpha=\beta=z$, $g: \mathbb{F}_{5}\rightarrow \mathbb{Z}_{5^3}$ be defined as $g(x)=x^{3}$. Then one can verify that the function $F$ constructed by Theorem 1 is a non-weakly regular generalized bent function and its dual $F^{*}$ is not generalized bent. \end{example} \begin{example} Let $p=3$, $m=5$, $k=2$. 
Let $z$ be the primitive element of $\mathbb{F}_{3^5}$ with $z^{5}+2z+1=0$. Let $\alpha=z^{10}, \beta=z^{47}$, $g: \mathbb{F}_{3}\rightarrow \mathbb{Z}_{3^2}$ be defined as $g(x)=x$. Then one can verify that $\eta(1+j\beta)=1, \eta(1+\alpha+j\beta)=\eta(1+2\alpha+j\beta)=-1$ for any $j \in \mathbb{F}_{3}$, hence the function $F$ constructed by Theorem 1 is a non-weakly regular generalized bent function and its dual $F^{*}$ is generalized bent. \end{example} Now we show that the generalized indirect sum construction method given in \cite{Wang} can provide a secondary construction of generalized bent functions for which the dual can be generalized bent or not generalized bent. For the sake of completeness, we give the proof of the following lemma, which can be obtained by Theorem 5 of \cite{Wang}. \begin{lemma}[\cite{Wang}] Let $f_{i} (i \in \mathbb{F}_{p}^{t}) : V_{r}\rightarrow \mathbb{Z}_{p^k}$ be generalized bent functions. Let $g_{s} (0\leq s\leq t) : V_{m}\rightarrow \mathbb{F}_{p}$ be bent functions which satisfy that for any $j=(j_{1}, \dots, j_{t})\in \mathbb{F}_{p}^{t}$, $G_{j}\triangleq (1-j_{1}-\dots-j_{t})g_{0}+j_{1}g_{1}+\dots+j_{t}g_{t}$ is a bent function and $G_{j}^{*}=(1-j_{1}-\dots-j_{t})g_{0}^{*}+j_{1}g_{1}^{*}+\dots+j_{t}g_{t}^{*}$ and $\mu_{G_{j}}(y)=u, y \in V_{m}$ where $\mu_{G_{j}}$ is defined by (4) and $u$ is a constant independent of $j$. Let $g$ be an arbitrary function from $\mathbb{F}_{p}^{t}$ to $\mathbb{Z}_{p^k}$. Then $F(x,y)=f_{(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y))}(x)+p^{k-1}g_{0}(y)+g(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y)), (x, y) \in V_{r}\times V_{m}$ is a generalized bent function. \end{lemma} \begin{proof} For any $(a, b) \in V_{r}\times V_{m}$, we have \begin{equation*} \begin{split} &W_{F}(a, b)\\ &=\sum_{x \in V_{r}, y \in V_{m}}\zeta_{p^k}^{f_{(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y))}(x)+g(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y))}\zeta_{p}^{g_{0}(y)-\langle a, x\rangle -\langle b, y \rangle}\\ & =\sum_{i_{1}, \dots, i_{t}\in \mathbb{F}_{p}}\sum_{y: g_{0}(y)-g_{j}(y)=i_{j}, 1\leq j\leq t}\sum_{x \in V_{r}}\zeta_{p^k}^{f_{(i_{1}, \dots, i_{t})}(x)+g(i_{1}, \dots, i_{t})}\zeta_{p}^{g_{0}(y)-\langle a, x \rangle-\langle b, y\rangle}\\ & =p^{-t}\sum_{i_{1}, \dots, i_{t} \in \mathbb{F}_{p}}\zeta_{p^k}^{g(i_{1}, \dots, i_{t})}W_{f_{(i_{1}, \dots, i_{t})}}(a)\sum_{y \in V_{m}}\zeta_{p}^{g_{0}(y)-\langle b, y\rangle}\sum_{j_{1}\in \mathbb{F}_{p}}\zeta_{p}^{(i_{1}-(g_{0}-g_{1})(y))j_{1}}\\ & \ \ \ \ \ \ \ \dots \sum_{j_{t}\in \mathbb{F}_{p}}\zeta_{p}^{(i_{t}-(g_{0}-g_{t})(y))j_{t}}\\ & =p^{-t}\sum_{i_{1}, \dots, i_{t} \in \mathbb{F}_{p}}\zeta_{p^k}^{g(i_{1}, \dots, i_{t})}W_{f_{(i_{1}, \dots, i_{t})}}(a)\sum_{j_{1}, \dots, j_{t}\in \mathbb{F}_{p}}\zeta_{p}^{i_{1}j_{1}+\dots+i_{t}j_{t}}W_{G_{(j_{1}, \dots, j_{t})}}(b). 
\end{split} \end{equation*} As for any $j_{1}, \dots, j_{t}\in \mathbb{F}_{p}$, $G_{(j_{1}, \dots, j_{t})}\triangleq (1-j_{1}-\dots-j_{t})g_{0}+j_{1}g_{1}+\dots+j_{t}g_{t}$ is a bent function and $G_{(j_{1}, \dots, j_{t})}^{*}=(1-j_{1}-\dots-j_{t})g_{0}^{*}+j_{1}g_{1}^{*}+\dots+j_{t}g_{t}^{*}$ and $\mu_{G_{(j_{1}, \dots, j_{t})}}(y)=u, y \in V_{m}$ where $u$ is a constant independent of $(j_{1}, \dots, j_{t})$, we have \begin{equation}\label{9} \begin{split} &W_{F}(a, b)\\ & =u p^{\frac{m}{2}}p^{-t}\sum_{i_{1}, \dots, i_{t} \in \mathbb{F}_{p}}\zeta_{p^k}^{g(i_{1}, \dots, i_{t})}W_{f_{(i_{1}, \dots, i_{t})}}(a)\sum_{j_{1}, \dots, j_{t}\in \mathbb{F}_{p}}\zeta_{p}^{i_{1}j_{1}+\dots+i_{t}j_{t}+(1-j_{1}-\dots-j_{t})g_{0}^{*}(b)+j_{1}g_{1}^{*}(b)+\dots+j_{t}g_{t}^{*}(b)}\\ & =u p^{\frac{m}{2}}p^{-t}\zeta_{p}^{g_{0}^{*}(b)}\sum_{i_{1}, \dots, i_{t} \in \mathbb{F}_{p}}\zeta_{p^k}^{g(i_{1}, \dots, i_{t})}W_{f_{(i_{1}, \dots, i_{t})}}(a)\sum_{j_{1}\in \mathbb{F}_{p}}\zeta_{p}^{(g_{1}^{*}(b)-g_{0}^{*}(b)+i_{1})j_{1}} \dots \sum_{j_{t}\in \mathbb{F}_{p}}\zeta_{p}^{(g_{t}^{*}(b)-g_{0}^{*}(b)+i_{t})j_{t}}\\ & =u p^{\frac{m}{2}}\zeta_{p}^{g_{0}^{*}(b)}\zeta_{p^k}^{g(g_{0}^{*}(b)-g_{1}^{*}(b), \dots, g_{0}^{*}(b)-g_{t}^{*}(b))}W_{f_{(g_{0}^{*}(b)-g_{1}^{*}(b), \dots, g_{0}^{*}(b)-g_{t}^{*}(b))}}(a). \end{split} \end{equation} Hence, by (9), it is easy to see that $F: V_{r}\times V_{m}\rightarrow \mathbb{Z}_{p^k}$ is a generalized bent function if $f_{i}, i\in \mathbb{F}_{p}^{t}$ are generalized bent functions from $V_{r}$ to $\mathbb{Z}_{p^k}$. \end{proof} Based on Lemma 2, we give the following theorem. \begin{theorem} With the same notation as Lemma 2. The dual of the generalized bent function constructed by Lemma 2 is a generalized bent function if and only if for any $y \in V_{m}$, the dual of $f_{(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y))}$ is a generalized bent function. \end{theorem} \begin{proof} By (9), we have that the dual of the generalized bent function $F$ constructed by Lemma 2 is $F^{*}(x, y)=f^{*}_{(g_{0}^{*}(y)-g_{1}^{*}(y), \dots, g_{0}^{*}(y)-g_{t}^{*}(y))}(x)+p^{k-1}g_{0}^{*}(y)+g(g_{0}^{*}(y)-g_{1}^{*}(y), \dots, g_{0}^{*}(y)-g_{t}^{*}(y))$. Note that for a weakly regular bent function $h$ with dual $h^{*}$, $h^{*}$ is also a weakly regular bent function and $h^{**}(x)=h(-x), \mu_{h^{*}}=\mu_{h}^{-1}$. Since for any $j=(j_{1}, \dots, j_{t})\in \mathbb{F}_{p}^{t}$, $G_{j}=(1-j_{1}-\dots-j_{t})g_{0}+j_{1}g_{1}+\dots+j_{t}g_{t}$ is a bent function and $G_{j}^{*}=(1-j_{1}-\dots-j_{t})g_{0}^{*}+j_{1}g_{1}^{*}+\dots+j_{t}g_{t}^{*}$ and $\mu_{G_{j}}(y)=u, y \in V_{m}$ where $u$ is a constant independent of $j$, we have $G_{j}^{*}$ is a bent function and $G_{j}^{**}(y)=G_{j}(-y)=(1-j_{1}-\dots-j_{t})g_{0}(-y)+j_{1}g_{1}(-y)+\dots+j_{t}g_{t}(-y)=(1-j_{1}-\dots-j_{t})g_{0}^{**}(y)+j_{1}g_{1}^{**}(y)+\dots+j_{t}g_{t}^{**}(y)$ and $\mu_{G_{j}^{*}}(y)=u^{-1}, y \in V_{m}$, that is, $g_{s}^{*} (0\leq s \leq t)$ also satisfy the condition of Lemma 2. 
Since $F^{*}$ has the same form as $F$ and $g_{s}^{*} (0\leq s \leq t)$ satisfy the condition of Lemma 2, by (9), for any $(a, b)\in V_{r} \times V_{m}$ we have \begin{equation*} \begin{split} & W_{F^{*}}(a, b) \\ &=u^{-1}p^{\frac{m}{2}}\zeta_{p}^{g_{0}^{**}(b)}\zeta_{p^k}^{g(g_{0}^{**}(b)-g_{1}^{**}(b), \dots, g_{0}^{**}(b)-g_{t}^{**}(b))}W_{f_{(g_{0}^{**}(b)-g_{1}^{**}(b), \dots, g_{0}^{**}(b)-g_{t}^{**}(b))}^{*}}(a)\\ &=u^{-1}p^{\frac{m}{2}}\zeta_{p}^{g_{0}(-b)}\zeta_{p^k}^{g(g_{0}(-b)-g_{1}(-b), \dots, g_{0}(-b)-g_{t}(-b))}W_{f_{(g_{0}(-b)-g_{1}(-b), \dots, g_{0}(-b)-g_{t}(-b))}^{*}}(a). \end{split} \end{equation*} If for any $y \in V_{m}$, $f_{(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y))}^{*}$ is generalized bent, then obviously $F^{*}$ is generalized bent. Suppose $F^{*}$ is generalized bent. If there exists $y \in V_{m}$ such that $f_{(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y))}^{*}$ is not generalized bent, let $a \in V_{r}$ with $|W_{f_{(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y))}^{*}}(a)|\neq p^{\frac{r}{2}}$ and $b=-y$, then $|W_{F^{*}}(a, b)|\neq p^{\frac{r+m}{2}}$ and $F^{*}$ is not generalized bent, which is a contradiction. Hence, $F^{*}$ is a generalized bent function if and only if for any $y \in V_{m}$, the dual of $f_{(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y))}$ is a generalized bent function. \end{proof} When $k=1, t=1, m=2$ and $g=0$, $g_{0}(y)=y_{1}y_{2}, g_{1}(y)=y_{1}y_{2}-y_{2}, y=(y_{1}, y_{2}) \in \mathbb{F}_{p}\times \mathbb{F}_{p}$, Theorem 2 reduces to Theorem 3 of \cite{Cesmelioglu5}. The corresponding function of Theorem 3 of \cite{Cesmelioglu5} is in the Generalized Maiorana-McFarland bent functions class (see \cite{Cesmelioglu3}). In \cite{Wang}, the authors showed that the canonical way to construct Generalized Maiorana-McFarland bent functions can be obtained by the generalized indirect sum construction method. In \cite{Wang}, the authors also showed that $p$-ary $PS_{ap}$ bent functions \begin{equation*} g_{s}(y)=Tr_{1}^{m}(\alpha_{s}G(y_{1}y_{2}^{p^{m}-2})), y=(y_{1}, y_{2})\in \mathbb{F}_{p^m} \times \mathbb{F}_{p^m}, 0\leq s\leq t \end{equation*} satisfy the condition of Lemma 2 where $m\geq t+1$, $\alpha_{0}, \dots, \alpha_{t}\in \mathbb{F}_{p^m}$ are linearly independent over $\mathbb{F}_{p}$ and $G$ is a permutation over $\mathbb{F}_{p^m}$ with $G(0)=0$. Since the above $g_{s}(0\leq s \leq t)$ satisfy the condition of Lemma 2 and $\{(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y)), y=(y_{1}, y_{2}) \in \mathbb{F}_{p^m} \times \mathbb{F}_{p^m}\}=\mathbb{F}_{p}^{t}$, by Theorem 2 we have the following corollary. \begin{corollary} Let $f_{i} (i \in \mathbb{F}_{p}^{t}) : V_{r}\rightarrow \mathbb{Z}_{p^k}$ be generalized bent functions. Let $g_{s} (0\leq s\leq t) : \mathbb{F}_{p^m} \times \mathbb{F}_{p^m} \rightarrow \mathbb{F}_{p}$ be defined as $g_{s}(y)=Tr_{1}^{m}(\alpha_{s}G(y_{1}y_{2}^{p^{m}-2})), y=(y_{1}, y_{2})\in \mathbb{F}_{p^m} \times \mathbb{F}_{p^m}$ where $m\geq t+1$, $\alpha_{0}, \dots, \alpha_{t}\in \mathbb{F}_{p^m}$ are linearly independent over $\mathbb{F}_{p}$ and $G$ is a permutation over $\mathbb{F}_{p^m}$ with $G(0)=0$. Let $g$ be an arbitrary function from $\mathbb{F}_{p}^{t}$ to $\mathbb{Z}_{p^k}$. 
Then the dual of the generalized bent function $F: V_{r} \times \mathbb{F}_{p^m} \times \mathbb{F}_{p^m}\rightarrow \mathbb{Z}_{p^k}$ defined as $F(x,y)=f_{(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y))}(x)+p^{k-1}g_{0}(y)+g(g_{0}(y)-g_{1}(y), \dots, g_{0}(y)-g_{t}(y)), (x, y)=(x, y_{1}, y_{2}) \in V_{r}\times \mathbb{F}_{p^m} \times \mathbb{F}_{p^m}$ is generalized bent if and only if for any $i \in \mathbb{F}_{p}^{t}$, the dual of $f_{i}$ is generalized bent. \end{corollary} \section{A property of generalized bent functions whose dual is generalized bent} In this section, let $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ be a generalized bent function whose dual $f^{*}$ is generalized bent. By using the knowledge on ideal decomposition in cyclotomic field, we will prove $f^{**}(x)=f(-x)$. For the case of bent functions $f: V_{n}\rightarrow \mathbb{F}_{p}$ whose dual $f^{*}$ is bent, \"{O}zbudak and Pelen \cite{Ozbudak} have shown that $f^{**}(x)=f(-x)$ holds by studying the value distributions of the dual of non-weakly regular bent functions. Compared with the proof method in \cite{Ozbudak}, our proof method seems more concise. Before proof, we need a lemma. \begin{lemma} Let $L=\mathbb{Q}(\zeta_{p^k}), \mathcal{O}_{L}=\mathbb{Z}[\zeta_{p^k}]$. Then for any $1\leq j \leq p^{k}-1$, $(1+\zeta_{p^k}^{j})\mathcal{O}_{L}=\mathcal{O}_{L}$ and $(1-\zeta_{p^k}^{j})\mathcal{O}_{L}=((1-\zeta_{p^k})\mathcal{O}_{L})^{p^s}$ where $s=v_{p}(j)$. \end{lemma} \begin{proof} For any $1 \leq j \leq p^k-1$, by \begin{equation*} \frac{1}{1+\zeta_{p^k}^{j}}=\frac{1-\zeta_{p^k}^{j}}{1-\zeta_{p^k}^{2j}}=\frac{1-\zeta_{p^k}^{2lj}}{1-\zeta_{p^k}^{2j}}=\sum_{i=0}^{l-1}\zeta_{p^k}^{2ij} \in \mathcal{O}_{L} \end{equation*} where $l\in \mathbb{Z}_{p^k}$ satisfies $2l\equiv 1 (mod \ p^k)$, we have that $1+\zeta_{p^k}^{j}$ is a unit of $\mathcal{O}_{L}$, that is, $(1+\zeta_{p^k}^{j})\mathcal{O}_{L}=\mathcal{O}_{L}$. For any $1\leq j \leq p^k-1$, let $s=v_{p}(j)$. Then $0\leq s<k$ and $gcd(j, p^{k})=p^{s}$. Let $M=\mathbb{Q}(\zeta_{p^{k-s}})$, then $\mathcal{O}_{M}=\mathbb{Z}[\zeta_{p^{k-s}}]$ and $M$ is a subfield of $L$ since $\zeta_{p^{k-s}}=\zeta_{p^k}^{p^s}$. By \begin{equation*} \frac{1-\zeta_{p^{k-s}}^{\frac{j}{p^{s}}}}{1-\zeta_{p^{k-s}}}=\sum_{i=0}^{\frac{j}{p^s}-1}\zeta_{p^{k-s}}^{i} \in \mathcal{O}_{M}\subseteq \mathcal{O}_{L} \end{equation*} and \begin{equation*} \frac{1-\zeta_{p^{k-s}}}{1-\zeta_{p^{k-s}}^{\frac{j}{p^{s}}}}=\frac{1-\zeta_{p^{k-s}}^{\frac{j}{p^s}t}}{1-\zeta_{p^{k-s}}^{\frac{j}{p^{s}}}}=\sum_{i=0}^{t-1}\zeta_{p^{k-s}}^{i\frac{j}{p^s}}\in \mathcal{O}_{M}\subseteq \mathcal{O}_{L} \end{equation*} where $t\in \mathbb{Z}_{p^{k-s}}$ satisfies $\frac{j}{p^{s}}t\equiv 1 (mod \ p^{k-s})$, we have that $\frac{1-\zeta_{p^{k-s}}^{\frac{j}{p^{s}}}}{1-\zeta_{p^{k-s}}}$ is a unit of $\mathcal{O}_{L}$. Note that $t$ exists since $gcd(\frac{j}{p^{s}}, p^{k-s})=1$. Hence $(1-\zeta_{p^k}^{j})\mathcal{O}_{L}=(1-\zeta_{p^{k-s}}^{\frac{j}{p^s}})\mathcal{O}_{L}=(1-\zeta_{p^{k-s}})\mathcal{O}_{L}$. By $p\mathcal{O}_{M}=((1-\zeta_{p^{k-s}})\mathcal{O}_{M})^{\varphi(p^{k-s})}$ and $(p\mathcal{O}_{M})\mathcal{O}_{L}=p\mathcal{O}_{L}=((1-\zeta_{p^k})\mathcal{O}_{L})^{\varphi(p^k)}$, we have $((1-\zeta_{p^{k-s}})\mathcal{O}_{L})^{\varphi(p^{k-s})}=((1-\zeta_{p^k})\mathcal{O}_{L})^{\varphi(p^k)}$. By the uniqueness of the decomposition of $(1-\zeta_{p^{k-s}})\mathcal{O}_{L}$, we have $(1-\zeta_{p^{k-s}})\mathcal{O}_{L}=((1-\zeta_{p^k})\mathcal{O}_{L})^{\frac{\varphi(p^k)}{\varphi(p^{k-s})}}=((1-\zeta_{p^k})\mathcal{O}_{L})^{p^{s}}$. 
Hence, $(1-\zeta_{p^k}^{j})\mathcal{O}_{L}=((1-\zeta_{p^k})\mathcal{O}_{L})^{p^s}$. \end{proof} Now we prove $f^{**}(x)=f(-x)$ by using Lemma 3. In the subsequent of this paper, $L=\mathbb{Q}(\zeta_{p^k}), \mathcal{O}_{L}=\mathbb{Z}[\zeta_{p^k}]$ unless otherwise stated. \begin{theorem} Let $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ be a generalized bent function whose dual $f^{*}$ is also a generalized bent function. Then $f^{**}(x)=f(-x), x \in V_{n}$, where $f^{**}$ is the dual of $f^{*}$. \end{theorem} \begin{proof} Consider the left-hand side of Equation (6), \begin{equation*} \begin{split} &\xi \sum_{x \in V_{n}}\varepsilon_{f}(x)\zeta_{p^k}^{f^{*}(x)}\zeta_{p}^{\langle a, x\rangle}\\ & = \xi \sum_{x\in V_{n}:\ \varepsilon_{f}(x)=1}\zeta_{p^k}^{f^{*}(x)}\zeta_{p}^{\langle a, x\rangle}-\xi \sum_{x \in V_{n}: \ \varepsilon_{f}(x)=-1}\zeta_{p^k}^{f^{*}(x)}\zeta_{p}^{\langle a, x\rangle} \\ & = 2\xi \sum_{x\in V_{n}: \ \varepsilon_{f}(x)=1}\zeta_{p^k}^{f^{*}(x)}\zeta_{p}^{\langle a, x\rangle}-\xi \sum_{x \in V_{n}}\zeta_{p^k}^{f^{*}(x)}\zeta_{p}^{\langle a, x\rangle}\\ & =2\xi \sum_{x\in V_{n}: \ \varepsilon_{f}(x)=1}\zeta_{p^k}^{f^{*}(x)}\zeta_{p}^{\langle a, x\rangle}-\xi W_{f^{*}}(-a), \end{split} \end{equation*} where $\xi=1$ if $p \equiv 1 \ (mod \ 4)$ or $n$ is even and $\xi =\sqrt{-1}$ if $p \equiv 3 \ (mod \ 4)$ and $n$ is odd, $\varepsilon_{f}: V_{n}\rightarrow \{\pm 1\}$ is defined by (5). Since $f^{*}$ is a generalized bent function, we have $W_{f^{*}}(-a)=\xi \varepsilon_{f^{*}}(-a)p^{\frac{n}{2}}\zeta_{p^k}^{f^{**}(-a)}$ where $\varepsilon_{f^{*}}: V_{n}\rightarrow \{\pm 1\}$. Thus, for any $a \in V_{n}$ we have \begin{equation}\label{10} \xi \sum_{x \in V_{n}}\varepsilon_{f}(x)\zeta_{p^k}^{f^{*}(x)}\zeta_{p}^{\langle a, x\rangle} =2\xi \sum_{x\in V_{n}: \ \varepsilon_{f}(x)=1}\zeta_{p^k}^{f^{*}(x)}\zeta_{p}^{\langle a, x\rangle}-\xi^{2}\varepsilon_{f^{*}}(-a)p^{\frac{n}{2}}\zeta_{p^{k}}^{f^{**}(-a)}. \end{equation} Let $X_{a}=\xi \sum_{x\in V_{n}: \ \varepsilon_{f}(x)=1}\zeta_{p^k}^{f^{*}(x)}\zeta_{p}^{\langle a, x\rangle}$. By (6) and (10), we have \begin{equation}\label{11} 2X_{a}=p^{\frac{n}{2}}\zeta_{p^k}^{f(a)}(1+\xi^{2}\varepsilon_{f^{*}}(-a)\zeta_{p^k}^{f^{**}(-a)-f(a)}), a \in V_{n}. \end{equation} Suppose there exists $a \in V_{n}$ such that $f^{**}(-a)\neq f(a)$, that is, $\Delta_{a}\triangleq f^{**}(-a)-f(a)\neq 0$. 1) When $n$ is even, then $\xi=1$. Note that in this case, $X_{a} \in \mathcal{O}_{L}$. By (11) and Lemma 3, we have \begin{equation*} (2\mathcal{O}_{L})(X_{a}\mathcal{O}_{L})=\left\{\begin{split} & ((1-\zeta_{p^k})\mathcal{O}_{L})^{\varphi(p^k)\frac{n}{2}} \ \ \ \ if \ \varepsilon_{f^{*}}(-a)=1,\\ & ((1-\zeta_{p^k})\mathcal{O}_{L})^{\varphi(p^k)\frac{n}{2}+p^{v_{p}(\Delta_{a})}} \ if \ \varepsilon_{f^{*}}(-a)=-1. \end{split}\right. \end{equation*} Since $\frac{1}{2} \notin \mathcal{O}_{L}$, we have $2\mathcal{O}_{L}\neq \mathcal{O}_{L}$. Indeed, if $\frac{1}{2} \in \mathcal{O}_{L}$, then by $\{\zeta_{p^k}^{0}, \dots, $ $\zeta_{p^k}^{\varphi(p^k)-1}\}$ is an integer basis of $\mathcal{O}_{L}$, $\frac{1}{2}=\sum_{i=0}^{\varphi(p^k)-1}a_{i}\zeta_{p^k}^{i}$ where $a_{i}\in \mathbb{Z}$, that is, $\sum_{i=0}^{\varphi(p^k)-1}2a_{i}\zeta_{p^k}^{i}=\zeta_{p^k}^{0}$. This equation deduces $a_{0}=\frac{1}{2}, a_{i}=0 (1\leq i \leq \varphi(p^k)-1)$, which contradicts $a_{0}\in \mathbb{Z}$. 
Then by the uniqueness of the decomposition of $2\mathcal{O}_{L}$, we have \begin{equation}\label{12} 2\mathcal{O}_{L}=((1-\zeta_{p^k})\mathcal{O}_{L})^{t} \end{equation} for some positive integer $t$. From (12), we have $2\mathbb{Z}=\mathbb{Z}\cap (1-\zeta_{p^k})\mathcal{O}_{L}$. And from $p\mathcal{O}_{L}=((1-\zeta_{p^k})\mathcal{O}_{L})^{\varphi(p^k)}$, we have $p\mathbb{Z}=\mathbb{Z}\cap (1-\zeta_{p^k})\mathcal{O}_{L}$. Hence, $2\mathbb{Z}=p\mathbb{Z}$, which is a contradiction since $p$ is an odd prime. So in this case, $f^{**}(-a)=f(a)$ for any $a \in V_{n}$. 2) When $n$ is odd, by multiplying both sides of (11) by $\sqrt{p}$, we obtain \begin{equation}\label{13} 2X_{a}\sqrt{p}=p^{\frac{n+1}{2}}\zeta_{p^k}^{f(a)}(1+\xi^{2}\varepsilon_{f^{*}}(-a)\zeta_{p^k}^{f^{**}(-a)-f(a)}). \end{equation} Recall that when $n$ is odd, $\xi=1$ if $p\equiv 1 \ (mod \ 4)$ and $\xi=\sqrt{-1}$ if $p\equiv 3 \ (mod \ 4)$. Note that $\xi^{2}\in \{\pm1\}$ and $X_{a}\sqrt{p}\in \mathcal{O}_{L}$ since $\xi \sqrt{p}=\sum_{i \in \mathbb{F}_{p}^{*}}\eta(i)\zeta_{p}^{i} \in \mathbb{Z}[\zeta_{p}]$ by a well known result on Gauss sums (see \cite{Lidl}). By (13) and Lemma 3, we have \begin{equation*} (2\mathcal{O}_{L})((X_{a}\sqrt{p})\mathcal{O}_{L})=\left\{\begin{split} & ((1-\zeta_{p^k})\mathcal{O}_{L})^{\varphi(p^k)\frac{n+1}{2}} \ \ \ \ if \ \xi^{2}\varepsilon_{f^{*}}(-a)=1,\\ & ((1-\zeta_{p^k})\mathcal{O}_{L})^{\varphi(p^k)\frac{n+1}{2}+p^{v_{p}(\Delta_{a})}} \ if \ \xi^{2}\varepsilon_{f^{*}}(-a)=-1. \end{split}\right. \end{equation*} Since $2\mathcal{O}_{L}\neq \mathcal{O}_{L}$, by the uniqueness of the decomposition of $2\mathcal{O}_{L}$, we have $2\mathcal{O}_{L}=((1-\zeta_{p^k})\mathcal{O}_{L})^{t}$ for some positive integer $t$. Then with the same argument as 1), we have $2\mathbb{Z}=p\mathbb{Z}$, which is a contradiction. So in this case, $f^{**}(-a)=f(a)$ for any $a \in V_{n}$. \end{proof} \section{Nonexistence of self-dual generalized bent function if $p\equiv 3 \ (mod \ 4)$ and $n$ is odd} In this section, we will show that there is no self-dual generalized bent function $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ if $p\equiv 3 \ (mod \ 4)$ and $n$ is odd. Note that in \cite{Cesmelioglu4}, for $p \equiv 3 \ (mod \ 4)$ and $n$ is odd, the authors showed that there is no weakly regular self-dual bent function and they pointed out that there exist non-weakly regular self-dual bent functions. Indeed, they pointed out that the non-weakly regular bent function $g_{3}(x)=Tr_{1}^{3}(x^{22}+x^{8})$ from $\mathbb{F}_{3^3}$ to $\mathbb{F}_{3}$ is self-dual. But in fact, it is easy to verify that $g_{3}$ is not self-dual by using MAGMA and there is no self-dual bent function from $V_{n}$ to $\mathbb{F}_{p}$ if $p\equiv 3 \ (mod \ 4)$ and $n$ is odd according to the theorem of this section. Let $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ be a generalized bent function satisfying $f(x)=f(-x), x \in V_{n}$. For any $a \in V_{n}$, we have \begin{equation*} \begin{split} W_{f}(a) & =\sum_{x \in V_{n}}\zeta_{p^k}^{f(x)}\zeta_{p}^{-\langle a, x\rangle} \\ & =\sum_{x \in V_{n}}\zeta_{p^k}^{f(-x)}\zeta_{p}^{\langle a, x\rangle} \\ & =\sum_{x \in V_{n}}\zeta_{p^k}^{f(x)}\zeta_{p}^{\langle a, x\rangle}\\ &=W_{f}(-a), \end{split} \end{equation*} where in the second equation we use the change of variable $x\mapsto -x$ and in the third equation we use $f(x)=f(-x)$. 
By $W_{f}(a)=W_{f}(-a)$ and $W_{f}(a)=\xi \varepsilon_{f}(a)p^{\frac{n}{2}}\zeta_{p^k}^{f^{*}(a)}$ where $\xi=1$ if $p\equiv 1\ (mod \ 4)$ or $n$ is even and $\xi=\sqrt{-1}$ if $p\equiv 3 \ (mod \ 4)$ and $n$ is odd, $\varepsilon_{f}: V_{n}\rightarrow \{\pm 1\}$, we have \begin{equation}\label{14} \varepsilon_{f}(a)=\varepsilon_{f}(-a), f^{*}(a)=f^{*}(-a). \end{equation} Note that for any generalized bent function $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ satisfying $f(x)=f(-x)$, Equation (14) holds. For any $a\in V_{n}$, let \begin{equation*} S_{a}=\sum_{x\in V_{n}: \ \varepsilon_{f}(x)=1}\zeta_{p^k}^{f^{*}(x)}\zeta_{p}^{\langle a, x\rangle}, \ T_{a}=\sum_{x\in V_{n}: \ \varepsilon_{f}(x)=-1}\zeta_{p^k}^{f^{*}(x)}\zeta_{p}^{\langle a, x\rangle}. \end{equation*} Suppose the dual $f^{*}$ is generalized bent. Then by the definitions of $S_{a}$ and $T_{a}$, we have $S_{a}+T_{a}=W_{f^{*}}(-a)=\xi \varepsilon_{f^{*}}(-a)p^{\frac{n}{2}}\zeta_{p^k}^{f^{**}(-a)}$ where $\varepsilon_{f^{*}}: V_{n}\rightarrow \{\pm 1\}$. Since $f^{*}(x)=f^{*}(-x)$, we have $\varepsilon_{f^{*}}(-a)=\varepsilon_{f^{*}}(a)$. By Theorem 3, we have $f^{**}(-a)=f(a)$. Hence, we obtain \begin{equation}\label{15} S_{a}+T_{a}=\xi \varepsilon_{f^{*}}(a)p^{\frac{n}{2}}\zeta_{p^k}^{f(a)}. \end{equation} By (6), we have \begin{equation}\label{16} S_{a}-T_{a}=\xi^{-1}p^{\frac{n}{2}}\zeta_{p^k}^{f(a)}. \end{equation} By (15) and (16), we have \begin{equation}\label{17} S_{a}=p^{\frac{n}{2}}\zeta_{p^k}^{f(a)}\frac{\xi \varepsilon_{f^{*}}(a)+\xi^{-1}}{2}, \ T_{a}=p^{\frac{n}{2}}\zeta_{p^k}^{f(a)}\frac{\xi \varepsilon_{f^{*}}(a)-\xi^{-1}}{2}. \end{equation} Based on the above analysis and (17), we obtain the following proposition. For weakly regular generalized bent functions, the following proposition is obvious. But for non-weakly regular generalized bent functions, the following proposition is not obvious. \begin{proposition} Let $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ be a generalized bent function which satisfies that $f(x)=f(-x), x \in V_{n}$ and the dual $f^{*}$ is a generalized bent function. Let $\varepsilon_{f}, \varepsilon_{f^{*}}: V_{n}\rightarrow \{\pm 1\}$ be defined by (5). Then 1) If $p \equiv 1 \ (mod \ 4)$ or $n$ is even, then $\varepsilon_{f^{*}}(0)=\varepsilon_{f}(0)$; 2) If $p \equiv 3 \ (mod \ 4)$ and $n$ is odd, then $\varepsilon_{f^{*}}(0)=-\varepsilon_{f}(0)$. \end{proposition} \begin{proof} Let $g_{0}: V_{n}\rightarrow \mathbb{F}_{p}$ and $\bar{g_{1}}: V_{n}\rightarrow \mathbb{Z}_{p^{k-1}}$ satisfy $f^{*}=g_{0}p^{k-1}+\bar{g_{1}}$. Note that when $k=1$, $\mathbb{Z}_{p^{k-1}}=\{0\}$ and $g_{0}=f^{*}$, $\bar{g_{1}}=0$. 1) If $p \equiv 1 \ (mod \ 4)$ or $n$ is even, then $\xi=1$. By (17), we have $T_{a}=0$ if $\varepsilon_{f^{*}}(a)=1$ and $S_{a}=0$ if $\varepsilon_{f^{*}}(a)=-1$. Suppose $\varepsilon_{f^{*}}(0)=-\varepsilon_{f}(0)$. Without loss of generality, assume $\varepsilon_{f}(0)=1$. Then $\varepsilon_{f^{*}}(0)=-1$ and $S_{0}=0$. By the definition of $S_{0}$, we have $\sum_{x\in V_{n}: \ \varepsilon_{f}(x)=1}\zeta_{p^k}^{f^{*}(x)}=0$. For any $i_{0} \in \mathbb{F}_{p}, \bar{i_{1}} \in \mathbb{Z}_{p^{k-1}}$, let \begin{equation}\label{18} A_{i_{0}, \bar{i_{1}}}=\{x \in V_{n}: \ \varepsilon_{f}(x)=1 \ and \ g_{0}(x)=i_{0}, \bar{g_{1}}(x)= \bar{i_{1}}\} \end{equation} and $N_{i_{0}, \bar{i_{1}}}$ denote the size of $A_{i_{0}, \bar{i_{1}}}$. Then we have \begin{equation*} \sum_{i_{0}\in \mathbb{F}_{p}}\sum_{\bar{i_{1}} \in \mathbb{Z}_{p^{k-1}}}N_{i_{0}, \bar{i_{1}}}\zeta_{p}^{i_{0}}\zeta_{p^k}^{\bar{i_{1}}}=0. 
\end{equation*} By $\sum_{i=0}^{p-1}\zeta_{p}^{i}=0$ and the above equation we obtain \begin{equation*} \sum_{i_{0}=0}^{p-2}\sum_{\bar{i_{1}} \in \mathbb{Z}_{p^{k-1}}}(N_{i_{0}, \bar{i_{1}}}-N_{p-1, \bar{i_{1}}})\zeta_{p}^{i_{0}}\zeta_{p^k}^{\bar{i_{1}}}=0, \end{equation*} which deduces $N_{i_{0}, \bar{i_{1}}}=N_{p-1, \bar{i_{1}}}$ for any $i_{0}\in \mathbb{F}_{p}, \bar{i_{1}} \in \mathbb{Z}_{p^{k-1}}$ since $\{\zeta_{p}^{i}\zeta_{p^k}^{j}: 0\leq i \leq p-2, 0\leq j \leq p^{k-1}-1\}$ is an integral basis of $\mathbb{Z}[\zeta_{p^k}]$. In particular, $N_{g_{0}(0), \bar{g_{1}}(0)}=N_{i_{0}, \bar{g_{1}}(0)}$ for any $i_{0} \in \mathbb{F}_{p}$. By (14), we have $f^{*}(x)=f^{*}(-x)$ and $\varepsilon_{f}(x)=\varepsilon_{f}(-x)$. From $f^{*}(x)=f^{*}(-x)$, we have $g_{0}(x)=g_{0}(-x)$ and $\bar{g_{1}}(x)=\bar{g_{1}}(-x)$. And by $\varepsilon_{f}(x)=\varepsilon_{f}(-x)$, we have $x\in A_{i_{0}, \bar{i_{1}}}$ if and only if $-x\in A_{i_{0}, \bar{i_{1}}}$. Note that $0 \in A_{g_{0}(0), \bar{g_{1}}(0)}$ since $\varepsilon_{f}(0)=1$. Hence, $N_{g_{0}(0),\bar{g_{1}}(0)}$ is odd and $N_{i_{0}, \bar{i_{1}}}$ is even if $(i_{0}, \bar{i_{1}})\neq (g_{0}(0),\bar{g_{1}}(0))$, which contradicts $N_{g_{0}(0), \bar{g_{1}}(0)}=N_{i_{0}, \bar{g_{1}}(0)}, i_{0} \in \mathbb{F}_{p}$. Hence $\varepsilon_{f^{*}}(0)=\varepsilon_{f}(0)$. 2) If $p \equiv 3 \ (mod \ 4)$ and $n$ is odd, then $\xi=\sqrt{-1}$. By (17), we have $S_{a}=0$ if $\varepsilon_{f^{*}}(a)=1$ and $T_{a}=0$ if $\varepsilon_{f^{*}}(a)=-1$. Suppose $\varepsilon_{f^{*}}(0)=\varepsilon_{f}(0)$. Without loss of generality, assume $\varepsilon_{f}(0)=1$. Then $\varepsilon_{f^{*}}(0)=1$ and $S_{0}=0$. By the definition of $S_{0}$, we have $\sum_{x\in V_{n}: \ \varepsilon_{f}(x)=1}\zeta_{p^k}^{f^{*}(x)}=0$. For any $i_{0} \in \mathbb{F}_{p}, \bar{i_{1}} \in \mathbb{Z}_{p^{k-1}}$, let $A_{i_{0}, \bar{i_{1}}}$ be defined by (18) and $N_{i_{0}, \bar{i_{1}}}$ denote the size of $A_{i_{0}, \bar{i_{1}}}$. Then with the same argument as 1), we have that $N_{g_{0}(0), \bar{g_{1}}(0)}=N_{i_{0}, \bar{g_{1}}(0)}$ for any $i_{0}\in \mathbb{F}_{p}$ and $N_{g_{0}(0), \bar{g_{1}}(0)}$ is odd, $N_{i_{0}, \bar{i_{1}}}$ is even if $(i_{0}, \bar{i_{1}})\neq (g_{0}(0), \bar{g_{1}}(0))$, which is a contradiction. Hence, $\varepsilon_{f^{*}}(0)=-\varepsilon_{f}(0)$. \end{proof} By using Proposition 1, we obtain the following theorem. \begin{theorem} There is no self-dual generalized bent function $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ if $p\equiv 3 \ (mod \ 4)$ and $n$ is odd. \end{theorem} \begin{proof} Let $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ be a self-dual generalized bent function. Since $f$ is a generalized bent function whose dual is generalized bent, we have $f^{**}(x)=f(-x)$ by Theorem 3. And since $f$ is self-dual, that is, $f^{*}=f$, we have $f(x)=f(-x)$. By $f^{*}=f$ and Proposition 1, we have $\varepsilon_{f}(0)=-\varepsilon_{f}(0)$ if $p \equiv 3 \ (mod \ 4)$ and $n$ is odd. Hence $\varepsilon_{f}(0)=0$, which contradicts $\varepsilon_{f}(0) \in \{\pm 1\}$. Therefore, there is no self-dual generalized bent function $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ if $p\equiv 3 \ (mod \ 4)$ and $n$ is odd. \end{proof} \section{A secondary construction of self-dual generalized bent functions if $p \equiv 1 \ (mod \ 4)$ or $n$ is even} In this section, for $p\equiv 1 \ (mod \ 4)$ or $n$ is even, we give a secondary construction of self-dual generalized bent functions $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$. First, we give a lemma. 
\begin{lemma} Let $m$ be a positive integer and $m$ be even if $p\equiv 3 \ (mod \ 4)$. Let $a \in \mathbb{F}_{p^m}^{*}$ be an arbitrary element. Let $\alpha, \beta \in \{\pm z^{\frac{p^m-1}{4}}\}$ where $z$ is a primitive element of $\mathbb{F}_{p^m}$. Let $f_{i} (i \in \mathbb{F}_{p}): V_{r}\rightarrow \mathbb{Z}_{p^k}$ be generalized bent functions. Let $g_{0}: \mathbb{F}_{p^m}\times \mathbb{F}_{p^m}\rightarrow \mathbb{F}_{p}$ be defined as $g_{0}(y_{1}, y_{2})=Tr_{1}^{m}(\frac{\beta}{2}(y_{1}^{2}+y_{2}^{2})), (y_{1}, y_{2})\in \mathbb{F}_{p^m}\times \mathbb{F}_{p^m}$. Let $h: \mathbb{F}_{p^m}\rightarrow \mathbb{F}_{p}$ and $g: \mathbb{F}_{p}\rightarrow \mathbb{Z}_{p^k}$ be arbitrary functions. Then the function $F(x, y_{1}, y_{2})=f_{h(a\alpha y_{1}+a y_{2})}(x)+p^{k-1}g_{0}(y_{1}, y_{2})+g(h(a\alpha y_{1}+a y_{2}))$ is a generalized bent function from $V_{r} \times \mathbb{F}_{p^m}\times \mathbb{F}_{p^m}$ to $\mathbb{Z}_{p^k}$ and its dual $F^{*}(x, y_{1}, y_{2})=f_{h(-\beta(a\alpha y_{1}+a y_{2}))}^{*}(x)+p^{k-1}g_{0}(y_{1}, y_{2})+g(h(-\beta(a\alpha y_{1}+a y_{2})))$. \end{lemma} \begin{proof} First note that $4 \mid (p^{m}-1)$. Indeed, if $p\equiv 1 \ (mod \ 4)$, then $p^m\equiv 1 \ (mod \ 4)$ and if $p\equiv 3 \ (mod \ 4)$ and $m$ is even, then $p^{m}\equiv (-1)^{m}\equiv 1 \ (mod \ 4)$, that is, $4 \mid (p^{m}-1)$. By Lemma 2 and (9), we only need to prove that $g_{0}$ and $g_{1}$ defined by $g_{1}(y_{1}, y_{2})=g_{0}(y_{1}, y_{2})-h(a\alpha y_{1}+a y_{2}), (y_{1}, y_{2}) \in \mathbb{F}_{p^m} \times \mathbb{F}_{p^m}$ satisfy the condition of Lemma 2 and $g_{0}^{*}(y_{1}, y_{2})=g_{0}(y_{1}, y_{2})$, $g_{1}^{*}(y_{1}, y_{2})=g_{0}(y_{1}, y_{2})-h(-\beta(a\alpha y_{1}+ay_{2})), (y_{1}, y_{2})\in \mathbb{F}_{p^m}\times \mathbb{F}_{p^m}$. For any $(b_{1}, b_{2})\in \mathbb{F}_{p^m} \times \mathbb{F}_{p^m}$, we have \begin{equation}\label{19} \begin{split} & W_{g_{1}}(b_{1}, b_{2}) \\ & =\sum_{y_{1}, y_{2} \in \mathbb{F}_{p^m}}\zeta_{p}^{Tr_{1}^{m}(\frac{\beta}{2}(y_{1}^{2}+y_{2}^{2}))-h(a\alpha y_{1}+a y_{2})-Tr_{1}^{m}(b_{1}y_{1}+b_{2}y_{2})}\\ & =\sum_{z_{1}, z_{2} \in \mathbb{F}_{p^m}}\zeta_{p}^{Tr_{1}^{m}(-\frac{\alpha\beta}{2a^{2}}z_{1}z_{2})-h(z_{2})-Tr_{1}^{m}(\frac{b_{1}-\alpha b_{2}}{2a}z_{1}+\frac{-\alpha b_{1}+b_{2}}{2a}z_{2})}\\ & =W_{Tr_{1}^{m}(-\frac{\alpha\beta}{2a^{2}}z_{1}z_{2})-h(z_{2})}(\frac{b_{1}-\alpha b_{2}}{2a}, \frac{-\alpha b_{1}+b_{2}}{2a}) \end{split} \end{equation} where in the second equation we use the change of variables $z_{1}=ay_{1}+a\alpha y_{2}$, $z_{2}=a\alpha y_{1}+ay_{2}$ and $\alpha^{2}=-1$. Since $Tr_{1}^{m}(-\frac{\alpha\beta}{2a^{2}}z_{1}z_{2})-h(z_{2})$ is an Maiorana-McFarland bent function, we have \begin{equation}\label{20} \begin{split} & W_{Tr_{1}^{m}(-\frac{\alpha\beta}{2a^{2}}z_{1}z_{2})-h(z_{2})}(\frac{b_{1}-\alpha b_{2}}{2a}, \frac{-\alpha b_{1}+b_{2}}{2a})\\ & =p^{m}\zeta_{p}^{Tr_{1}^{m}(\frac{2a^{2}}{\alpha\beta}\cdot\frac{b_{1}-\alpha b_{2}}{2a}\cdot\frac{-\alpha b_{1}+b_{2}}{2a})-h(-\frac{2a^{2}}{\alpha\beta}\cdot\frac{b_{1}-\alpha b_{2}}{2a})}\\ & =p^{m}\zeta_{p}^{Tr_{1}^{m}(\frac{\beta}{2}(b_{1}^{2}+b_{2}^{2}))-h(-\beta(a\alpha b_{1}+ab_{2}))} \end{split} \end{equation} where in the last equation we use $\alpha^{2}=\beta^{2}=-1$. Note that $h$ is arbitrary. Hence, by (19) and (20) we have that $g_{0}, g_{1}$ are regular bent functions and $g_{0}^{*}(y_{1}, y_{2})=g_{0}(y_{1}, y_{2})$, $g_{1}^{*}(y_{1}, y_{2})=g_{0}(y_{1}, y_{2})-h(-\beta(a\alpha y_{1}+ay_{2})), (y_{1}, y_{2})\in \mathbb{F}_{p^m}\times \mathbb{F}_{p^m}$. 
By (19) and (20), $(1-i)g_{0}(y_{1}, y_{2})+ig_{1}(y_{1}, y_{2})=g_{0}(y_{1}, y_{2})-ih(a\alpha y_{1}+ay_{2})$ is a regular bent function and $((1-i)g_{0}+ig_{1})^{*}(y_{1}, y_{2})=g_{0}(y_{1}, y_{2})-ih(-\beta(a\alpha y_{1}+ay_{2}))$, $(1-i)g_{0}^{*}(y_{1}, y_{2})+ig_{1}^{*}(y_{1}, y_{2})=(1-i)g_{0}(y_{1}, y_{2})+ig_{0}(y_{1}, y_{2})-ih(-\beta(a\alpha y_{1}+ay_{2}))=g_{0}(y_{1}, y_{2})-ih(-\beta(a\alpha y_{1}+ay_{2}))$ for any $i \in \mathbb{F}_{p}$, that is, $g_{0}, g_{1}$ satisfy the condition of Lemma 2. \end{proof} The following Theorem gives a secondary construction of self-dual generalized bent functions. \begin{theorem} With the same notation as Lemma 4. The function $F$ constructed by Lemma 4 is a self-dual generalized bent function if any one of the following conditions is satisfied: 1) $p\equiv 1 \ (mod \ 4)$, $f_{i} (i \in \mathbb{F}_{p})$ are self-dual generalized bent functions satisfying $f_{i}=f_{j}$ if $i=j \beta^{a}$ for some $0\leq a \leq 3$, $h: \mathbb{F}_{p^m} \rightarrow \mathbb{F}_{p}$ is defined as $h(x)=Tr_{1}^{m}(x)$, $g: \mathbb{F}_{p}\rightarrow \mathbb{Z}_{p^k}$ is an arbitrary function satisfying $g(y)=g(y')$ if $y=y'\beta^{b}$ for some $0\leq b \leq 3$. 2) $p\equiv 1 \ (mod \ 4)$ or $m$ is even, $f_{i} (i \in \mathbb{F}_{p})$ are self-dual generalized bent functions satisfying $f_{i}=f_{-i}$, $h: \mathbb{F}_{p^m}\rightarrow \mathbb{F}_{p}$ is defined as $h(x)=Tr_{1}^{m}(x^{2})$, $g: \mathbb{F}_{p}\rightarrow \mathbb{Z}_{p^k}$ is an arbitrary function satisfying $g(y)=g(-y)$. 3) $p\equiv 1 \ (mod \ 4)$ or $m$ is even, $f_{i} (i \in \mathbb{F}_{p})$ are self-dual generalized bent functions, $h: \mathbb{F}_{p^m}\rightarrow \mathbb{F}_{p}$ is defined as $h(x)=Tr_{1}^{m}(x^{4})$, $g: \mathbb{F}_{p}\rightarrow \mathbb{Z}_{p^k}$ is an arbitrary function. \end{theorem} \begin{proof} 1) If $p\equiv 1 \ (mod \ 4)$, that is, $4 \mid (p-1)$, we have $\beta^{p-1}=(\pm z^{\frac{p^m-1}{4}})^{p-1}=(z^{\frac{p-1}{4}})^{p^m-1}=1$, that is, $\beta \in \mathbb{F}_{p}$. When $h(x)=Tr_{1}^{m}(x)$, for any $(y_{1}, y_{2})\in \mathbb{F}_{p^m}\times \mathbb{F}_{p^m}$, $h(-\beta(a\alpha y_{1}+ay_{2}))=Tr_{1}^{m}(-\beta(a\alpha y_{1}+ay_{2}))=-\beta Tr_{1}^{m}(a\alpha y_{1}+ay_{2})=-\beta h(a\alpha y_{1}+ay_{2})$. Since $f_{i} (i \in \mathbb{F}_{p})$ are self-dual generalized bent functions and $f_{i}=f_{j}$ if $i=j \beta^{a}$ for some $0\leq a \leq 3$, $\beta^{2}=-1$, we have $f_{h(-\beta(a\alpha y_{1}+ay_{2}))}^{*}=f_{h(-\beta(a\alpha y_{1}+ay_{2}))}=f_{-\beta h(a\alpha y_{1}+ay_{2})}=f_{h(a\alpha y_{1}+ay_{2})}$. Since $g(y)=g(y')$ if $y'=y \beta^{b}$ for some $0 \leq b \leq 3$ and $\beta^{2}=-1$, we have $g(h(-\beta(a\alpha y_{1}+ay_{2})))=g(-\beta h(a\alpha y_{1}+ay_{2}))=g(h(a\alpha y_{1}+ay_{2}))$. Hence, it is easy to see that the generalized bent function $F$ constructed by Lemma 4 satisfies $F=F^{*}$, that is, $F$ is a self-dual generalized bent function. 2) When $h(x)=Tr_{1}^{m}(x^{2})$, for any $(y_{1}, y_{2})\in \mathbb{F}_{p^m}\times \mathbb{F}_{p^m}$, since $\beta^{2}=-1$, we have $h(-\beta(a\alpha y_{1}+ay_{2}))=Tr_{1}^{m}((-\beta(a\alpha y_{1}+ay_{2}))^{2})=Tr_{1}^{m}(-(a\alpha y_{1}+ay_{2})^{2})=-Tr_{1}^{m}((a\alpha y_{1}+ay_{2})^{2})=-h(a\alpha y_{1}+ay_{2})$. Since $f_{i} (i \in \mathbb{F}_{p})$ are self-dual generalized bent functions and $f_{i}=f_{-i}$, we have $f_{h(-\beta(a\alpha y_{1}+ay_{2}))}^{*}=f_{h(-\beta(a\alpha y_{1}+ay_{2}))}=f_{-h(a\alpha y_{1}+ay_{2})}=f_{h(a\alpha y_{1}+ay_{2})}$. 
Since $g(y)=g(-y)$, we have $g(h(-\beta(a\alpha y_{1}+ay_{2})))=g(-h(a\alpha y_{1}+ay_{2}))=g(h(a\alpha y_{1}+ay_{2}))$. Hence, it is easy to see that the generalized bent function $F$ constructed by Lemma 4 satisfies $F=F^{*}$, that is, $F$ is a self-dual generalized bent function. 3) When $h(x)=Tr_{1}^{m}(x^{4})$, for any $(y_{1}, y_{2})\in \mathbb{F}_{p^m}\times \mathbb{F}_{p^m}$, since $\beta^{4}=1$, we have $h(-\beta(a\alpha y_{1}+ay_{2}))=Tr_{1}^{m}((-\beta(a\alpha y_{1}+ay_{2}))^{4})=Tr_{1}^{m}((a\alpha y_{1}+ay_{2})^{4})=h(a\alpha y_{1}+ay_{2})$. Since $f_{i} (i \in \mathbb{F}_{p})$ are self-dual generalized bent functions, we have $f_{h(-\beta(a\alpha y_{1}+ay_{2}))}^{*}=f_{h(-\beta(a\alpha y_{1}+ay_{2}))}=f_{h(a\alpha y_{1}+ay_{2})}$. For an arbitrary function $g: \mathbb{F}_{p}\rightarrow \mathbb{Z}_{p^k}$, $g(h(-\beta(a\alpha y_{1}+ay_{2})))=g(h(a\alpha y_{1}+ay_{2}))$. Hence, it is easy to see that the generalized bent function $F$ constructed by Lemma 4 satisfies $F=F^{*}$, that is, $F$ is a self-dual generalized bent function. \end{proof} \begin{remark} One can verify that Theorem 3 of \cite{Cesmelioglu4} is a special case of the above case 1) with $k=1, m=1$ and $g=0$. In \cite{Cesmelioglu4}, for $p\equiv 3 \ (mod \ 4)$ and $n$ is even, the authors only illustrated that there exist ternary non-quadratic self-dual bent functions from $V_{n}$ to $\mathbb{F}_{p}$ by considering special ternary bent functions. Theorem 5 can be used to construct non-quadratic self-dual bent functions from $V_{n}$ to $\mathbb{F}_{p}$ for $p\equiv 3 \ (mod \ 4)$ and even integer $n\geq 6$ by using (non)-quadratic self-dual bent functions as building blocks. \end{remark} We give two examples by using Theorem 5. \begin{example} Let $p=5, k=2, r=1, m=1$. Let $\alpha=\beta=2$ and $a=1$. Let $f_{i}(x)=5x^{2}, x \in \mathbb{F}_{5}, i=0, 2, 3, 4$ and $f_{1}(x)=20x^{2}, x \in \mathbb{F}_{5}$. Then $f_{i} (i \in \mathbb{F}_{5})$ are self-dual generalized bent functions. Let $h(x)=Tr_{1}^{m}(x^{4})=x^{4}, x \in \mathbb{F}_{5}$. Let $g: \mathbb{F}_{5}\rightarrow \mathbb{Z}_{5^{2}}$ be defined as $g(y)=2y^{2}, y \in \mathbb{F}_{5}$. Then the generalized bent function constructed by Lemma 4 is $F(x, y_{1}, y_{2})=f_{(2y_{1}+y_{2})^{4}}(x)+5(y_{1}^{2}+y_{2}^{2})+2((2y_{1}+y_{2})^{4} mod \ 5)^{2}, (x, y_{1}, y_{2}) \in \mathbb{F}_{5}^{3}$ and it is a self-dual generalized bent function according to 3) of Theorem 5. \end{example} \begin{example} Let $p=7, k=1, r=m=2$. Let $z$ be the primitive element of $\mathbb{F}_{7^2}$ with $z^{2}+6z+3=0$. Let $\alpha=\beta=z^{12}$ and $a=z$. Let $f_{0}(x)=Tr_{1}^{2}(4z^{12}x^{2})$, $f_{1}(x)=f_{6}(x)=Tr_{1}^{2}(3z^{12}x^{2})+1$, $f_{2}(x)=f_{5}(x)=Tr_{1}^{2}(3z^{12}x^{2})+2$, $f_{3}(x)=f_{4}(x)=Tr_{1}^{2}(3z^{12}x^{2})+3$, $x \in \mathbb{F}_{7^2}$. Then $f_{i} (i \in \mathbb{F}_{7})$ are quadratic self-dual bent functions. Let $h(x)=Tr_{1}^{2}(x^{2}), x \in \mathbb{F}_{7^2}$. Let $g=0$. Then the bent function constructed by Lemma 4 is $F(x, y_{1}, y_{2})=f_{Tr_{1}^{2}((z^{13}y_{1}+zy_{2})^{2})}(x)+Tr_{1}^{2}(4z^{12}(y_{1}^{2}+y_{2}^{2})), (x, y_{1}, y_{2})\in \mathbb{F}_{7^2} \times \mathbb{F}_{7^2} \times \mathbb{F}_{7^2}$. It is a self-dual bent function according to 2) of Theorem 5 and it is easy to verify that it is non-quadratic. 
\end{example} \section{Relations between the dual of generalized bent functions and the dual of bent functions} In this section, we characterize the relations between the generalized bentness of the dual of generalized bent functions and the bentness of the dual of bent functions, as well as the self-duality relations between generalized bent functions and bent functions. The main result is the following theorem: \begin{theorem} Let $k\geq 2$ be an integer. Let $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ be a generalized bent function with the decomposition $f(x)=\sum_{i=0}^{k-1}f_{i}(x)p^{k-1-i}=f_{0}(x)p^{k-1}+\bar{f}_{1}(x)$ where $f_{i}$ is a function from $V_{n}$ to $\mathbb{F}_{p}$ for any $0\leq i \leq k-1$ and $\bar{f}_{1}(x)=\sum_{i=1}^{k-1}f_{i}(x)p^{k-1-i}$ is a function from $V_{n}$ to $\mathbb{Z}_{p^{k-1}}$. For any function $F: \mathbb{F}_{p}^{k-1}\rightarrow \mathbb{F}_{p}$, define $g_{f, F}=f_{0}+F(f_{1}, \dots, f_{k-1})$. Then 1) $f^{*}$ is generalized bent if and only if for any function $F: \mathbb{F}_{p}^{k-1}\rightarrow \mathbb{F}_{p}$, $g_{f, F}^{*}$ is bent. 2) $f$ is self-dual generalized bent if and only if for any function $F: \mathbb{F}_{p}^{k-1}\rightarrow \mathbb{F}_{p}$, $g_{f, F}$ is self-dual bent. \end{theorem} \begin{proof} 1) First, by Lemma 1, $g_{f, F}$ is bent for any $F: \mathbb{F}_{p}^{k-1}\rightarrow \mathbb{F}_{p}$. And by Lemma 1, we have $f^{*}(x)=f_{0}^{*}(x)p^{k-1}+\lambda(x)$ and $g_{f, F}^{*}(x)=f_{0}^{*}(x)+F(\lambda_{1}(x), \dots, \lambda_{k-1}(x))$ where $\lambda=\sum_{i=1}^{k-1}\lambda_{i}p^{k-1-i}, \lambda_{i}: V_{n}\rightarrow \mathbb{F}_{p}$ and $\lambda: V_{n}\rightarrow \mathbb{Z}_{p^{k-1}}$ satisfies that for any $a \in V_{n}$, $\sum_{x \in V_{n}: \bar{f}_{1}(x)=\lambda(a)}\zeta_{p}^{f_{0}(x)-\langle a, x\rangle}=W_{f_{0}}(a)$ and $\sum_{x \in V_{n}:\bar{f}_{1}(x)=v}\zeta_{p}^{f_{0}(x)-\langle a, x\rangle}=0$ for any $v \neq \lambda(a)$. Hence, by Lemma 1, $f^{*}$ is generalized bent if and only if for any function $F: \mathbb{F}_{p}^{k-1}\rightarrow \mathbb{F}_{p}$, $g_{f, F}^{*}$ is bent. 2) Suppose $f$ is self-dual generalized bent, that is, $f=f^{*}$. By $f=\sum_{i=0}^{k-1}$ $f_{i}p^{k-1-i}$ and $f^{*}=f_{0}^{*}p^{k-1}+\sum_{i=1}^{k-1}\lambda_{i}p^{k-1-i}$, we have $f_{0}=f_{0}^{*}, f_{i}=\lambda_{i}, 1\leq i \leq k-1$. As $g_{f, F}^{*}=f_{0}^{*}+F(\lambda_{1}, \dots, \lambda_{k-1})$, $g_{f, F}=f_{0}+F(f_{1}, \dots, f_{k-1})=g_{f, F}^{*}$, that is, $g_{f, F}$ is self-dual bent for any function $F: \mathbb{F}_{p}^{k-1}\rightarrow \mathbb{F}_{p}$. Suppose for any function $F: \mathbb{F}_{p}^{k-1}\rightarrow \mathbb{F}_{p}$, $g_{f, F}$ is self-dual bent, that is, $g_{f, F}=g_{f, F}^{*}$ for any function $F: \mathbb{F}_{p}^{k-1}\rightarrow \mathbb{F}_{p}$. Let $F=0$, we obtain $f_{0}=f_{0}^{*}$. Let $F=F_{i}, 1\leq i \leq k-1$ where $F_{i}(x_{1}, \dots, x_{k-1})=x_{i}$, we obtain $f_{i}=\lambda_{i}, 1\leq i \leq k-1$. Hence, $f^{*}=f$, that is, $f$ is self-dual generalized bent. \end{proof} By Theorem 6, if $f_{0}^{*}$ is not bent (resp., $f_{0}$ is not self-dual bent), then obviously $f^{*}$ is not generalized bent (resp., $f$ is not self-dual generalized bent). But the inverses are not true. We illustrate this with the following two examples. \begin{example} Let $z$ be the primitive element of $\mathbb{F}_{3^5}$ with $z^{5}+2z+1=0$. 
Let $f: \mathbb{F}_{3^5}\times \mathbb{F}_{3} \times \mathbb{F}_{3} \rightarrow \mathbb{Z}_{3^2}$ be defined as $f=3f_{0}+f_{1}$ where $f_{i}: \mathbb{F}_{3^5}\times \mathbb{F}_{3}\times \mathbb{F}_{3}\rightarrow \mathbb{F}_{3}, i=0,1$, $f_{0}(x, y_{1}, y_{2})=Tr_{1}^{5}(x^2)+(y_{1}+Tr_{1}^{5}(z^{47}x^{2}))(y_{2}+Tr_{1}^{5}(z^{10}x^{2}))$ and $f_{1}(x, y_{1}, y_{2})=y_{2}+Tr_{1}^{5}(z^{10} x^2)$. Then $f$ is a generalized bent function constructed by Theorem 1. One can verify that $f^{*}$ is not generalized bent, but $f_{0}^{*}$ is bent. \end{example} \begin{example} Let $f: \mathbb{F}_{5}^{2}\rightarrow \mathbb{Z}_{5^2}$ be defined as $f=5f_{0}+f_{1}$ where $f_{0}(x_{1}, x_{2})=x_{1}^{2}+x_{2}^{2}, (x_{1}, x_{2})\in \mathbb{F}_{5}^{2}$ and $f_{1}(x_{1}, x_{2})=2x_{1}+x_{2}$. Then $f$ is a generalized bent function and $f^{*}=5g_{0}+g_{1}$ where $g_{0}(x_{1}, x_{2})=x_{1}^{2}+x_{2}^{2}, (x_{1}, x_{2})\in \mathbb{F}_{5}^{2}$ and $g_{1}(x_{1}, x_{2})=x_{1}+3x_{2}, (x_{1}, x_{2})\in \mathbb{F}_{5}^{2}$. Hence, $f$ is not self-dual generalized bent, but $f_{0}$ is self-dual bent. \end{example} \section{Conclusion} In this paper, we study the dual of generalized bent functions $f: V_{n}\rightarrow \mathbb{Z}_{p^k}$ where $V_{n}$ is an $n$-dimensional vector space over $\mathbb{F}_{p}$ and $p$ is an odd prime, $k$ is a positive integer. We give an explicit construction of generalized bent functions whose dual can be generalized bent or not generalized bent. We show that the generalized indirect sum construction method given in \cite{Wang} can provide a secondary construction of generalized bent functions for which the dual can be generalized bent or not generalized bent. For generalized bent functions $f$ whose dual $f^{*}$ is generalized bent, by ideal decomposition in cyclotomic field, we prove $f^{**}(x)=f(-x)$. For generalized bent functions $f$ which satisfy that $f(x)=f(-x)$ and its dual $f^{*}$ is generalized bent, we give a property and as a consequence, we prove that there is no self-dual generalized bent function from $V_{n}$ to $\mathbb{Z}_{p^k}$ if $p\equiv 3 \ (mod \ 4)$ and $n$ is odd. For other cases, we give a secondary construction of self-dual generalized bent functions. In the end, we characterize the relations between the generalized bentness of the dual of generalized bent functions and the bentness of the dual of bent functions, as well as the self-duality relations between generalized bent functions and bent functions.
\section{Details of Type-aware Graph Attention}
\label{appendix:TGA}
In this section, we thoroughly describe the computation procedures of type-aware graph attention (TGA). Similar to the main body, we overload notation for simplicity and denote the input node and edge features as $h_i$ and $h_{ij}$, and the embedded node and edge features as $h'_i$ and $h'_{ij}$, respectively. The proposed TGA performs graph embedding with the following three phases: (1) type-aware edge update, (2) type-aware message aggregation, and (3) type-aware node update.
\textbf{Type-aware edge update. } The edge update scheme is designed to reflect the complex type relationships among the entities while updating edge features. First, the \textit{context} embedding $c_{ij}$ of edge $e_{ij}$ is computed using the source node type $k_j$ such that:
\begin{align} \label{eqn:appendix-gnn-edge-context} \begin{split} c_{ij} &= \text{MLP}_{etype}(k_j) \\ \end{split} \end{align}
where $\text{MLP}_{etype}$ is the edge type encoder. The source node types are embedded into the context embedding $c_{ij}$ using $\text{MLP}_{etype}$. Next, the type-aware edge encoding $u_{ij}$ is computed using the Multiplicative Interaction (MI) layer \citep{jayakumar2019multiplicative} as follows:
\begin{align} \label{eqn:appendix-gnn-edge-mi} \begin{split} u_{ij} &= \text{MI}_{edge}([h_i, h_j, h_{ij}], c_{ij})\\ \end{split} \end{align}
where $\text{MI}_{edge}$ is the edge MI layer. We utilize the MI layer, which dynamically generates its parameters depending on the context $c_{ij}$ and produces the type-aware edge encoding $u_{ij}$, to effectively model the complex type relationships among the nodes. The type-aware edge encoding $u_{ij}$ can be seen as a dynamic edge feature which varies depending on the source node type. Then, the updated edge embedding $h'_{ij}$ and its attention logit $z_{ij}$ are obtained as:
\begin{align} h'_{ij} &= \text{MLP}_{edge}(u_{ij}) \label{eqn:appendix-gnn-edge-update}\\ z_{ij} &= \text{MLP}_{attn}(u_{ij}) \label{eqn:appendix-gnn-edge-attn} \end{align}
where $\text{MLP}_{edge}$ and $\text{MLP}_{attn}$ are the edge updater and the logit function, respectively. The edge updater and the logit function produce the updated edge embedding and the attention logit from the type-aware edge encoding. The computation steps of Equations \ref{eqn:appendix-gnn-edge-context}, \ref{eqn:appendix-gnn-edge-mi}, and \ref{eqn:appendix-gnn-edge-update} are defined as $\text{TGA}_{\mathbb{E}}$. Similarly, the computation steps of Equations \ref{eqn:appendix-gnn-edge-context}, \ref{eqn:appendix-gnn-edge-mi}, and \ref{eqn:appendix-gnn-edge-attn} are defined as $\text{TGA}_{\mathbb{A}}$.
\textbf{Type-aware message aggregation. } We first define the type-$k$ neighborhood of $v_{i}$ as $\mathcal{N}_k(i) = \{v_l | k_l = k, \forall v_l \in \mathcal{N}(i)\}$, where $\mathcal{N}(i)$ is the in-neighborhood set of $v_{i}$. The proposed type-aware message aggregation procedure computes the attention score $\alpha_{ij}$ for the edge $e_{ij}$, which starts from $v_{j}$ and heads to $v_{i}$, such that:
\begin{align} \label{eqn:tga-attention} \begin{split} \alpha_{ij} &= \frac{\text{exp}(z_{ij})}{\sum_{l \in \mathcal{N}_{k_j}(i)}\text{exp}(z_{il})} \\ \end{split} \end{align}
Intuitively speaking, the proposed attention scheme normalizes the attention logits of incoming edges over the types. Therefore, the attention scores sum up to 1 over each type-$k$ neighborhood.
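For concreteness, the following is a minimal sketch of the per-type normalization in Equation \ref{eqn:tga-attention}, written in plain Python/NumPy. The function name \texttt{type\_aware\_softmax} and the toy logits/types are illustrative assumptions, not taken from the reference implementation.
\begin{verbatim}
# Minimal sketch of the type-aware attention normalization: the logits of
# the incoming edges of one node are softmax-normalized separately within
# each source-node type. All names and numbers are illustrative only.
import numpy as np

def type_aware_softmax(logits, src_types):
    """logits:    (E,) attention logits z_ij of one node's incoming edges.
       src_types: (E,) integer type k_j of each edge's source node.
       Returns:   (E,) scores that sum to 1 within each type neighborhood."""
    scores = np.empty_like(logits, dtype=float)
    for k in np.unique(src_types):
        mask = (src_types == k)
        z = logits[mask] - logits[mask].max()   # subtract max for stability
        e = np.exp(z)
        scores[mask] = e / e.sum()              # sums to 1 over type-k edges
    return scores

# Toy example: four incoming edges, two from agent-type sources (type 0)
# and two from task-type sources (type 1).
logits = np.array([0.3, -0.1, 1.2, 0.4])
src_types = np.array([0, 0, 1, 1])
alpha = type_aware_softmax(logits, src_types)
print(alpha)
print(alpha[src_types == 0].sum(), alpha[src_types == 1].sum())  # 1.0 1.0
\end{verbatim}
The design choice illustrated here is that the softmax is taken within each type-$k$ neighborhood separately, so that no entity type dominates the aggregated message simply by being more numerous.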
Next, the type-$k$ neighborhood message $m_{i}^{k}$ for node $v_{i}$ is computed as:
\begin{align} \label{eqn:tga-intra-aggr} \begin{split} m_i^k = \sum_{j \in \mathcal{N}_k(i)}\alpha_{ij} h'_{ij} \end{split} \end{align}
In this aggregation step, the incoming messages of node $i$ are aggregated per type. All incoming type neighborhood messages are concatenated to produce the (inter-type) aggregated message $m_i$ for $v_{i}$, such that:
\begin{align} \label{eqn:tga-inter-aggr} \begin{split} m_i &= \text{concat}(\{m_i^k | k \in \displaystyle {\mathbb{K}}\}) \\ \end{split} \end{align}
\textbf{Type-aware node update. } Similar to the edge update phase, the context embedding $c_i$ is computed first for each node $v_{i}$:
\begin{align} \label{eqn:appendix-node-mi} \begin{split} c_{i} &= \text{MLP}_{ntype}(k_i) \end{split} \end{align}
where $\text{MLP}_{ntype}$ is the node type encoder. Then, the updated hidden node embedding $h'_{i}$ is computed as below:
\begin{align} \label{eqn:appendix-node-update} \begin{split} h'_{i} &= \text{MLP}_{node}(h_i, u_i)\\ \end{split} \end{align}
where $u_{i}=\text{MI}_{node}(m_i, c_i)$ is the type-aware node embedding that is produced by the $\text{MI}_{node}$ layer using the aggregated message $m_i$ and the context embedding $c_i$. The computation steps of Equations \ref{eqn:appendix-node-mi} and \ref{eqn:appendix-node-update} are defined as $\text{TGA}_{\mathbb{V}}$.
\section{Extended discussion of the reward normalization scheme}
In this section, we further discuss the effect of the proposed reward normalization scheme and its variants on the performance of ScheduleNet. The proposed reward normalization (i.e. normalized makespan) $m(\pi, \pi_{b})$ is given as follows:
\begin{align} \label{appendix:eqn-reward-norm} \begin{split} m(\pi, \pi_{b}) = \frac{M(\pi)-M(\pi_b)}{M(\pi_{b})}\ \end{split} \end{align}
where $\pi_b$ is the baseline policy.
\textbf{Effect of the denominator. } $m(\pi, \pi_{b})$ measures the relative scheduling supremacy of $\pi$ over $\pi_{b}$. A similar reward normalization scheme, but without the division by $M(\pi_b)$, is employed to solve single-agent scheduling problems \cite{kool2018attention}. We empirically found that the division leads to much more stable learning when the scale of the makespan changes (e.g. when the area of the map changes from the unit square to different geometries).
\textbf{Effect of the baseline selection. } A proper selection of $\pi_b$ is essential to ensure stable and asymptotically better learning of ScheduleNet. Intuitively speaking, choosing too strong a baseline (i.e. a policy with smaller makespans, such as OR-Tools) can make the entire learning process unstable since the normalized reward tends to have large values. On the other hand, employing too weak a baseline can lead to virtually no learning since $m(\pi, \pi_b)$ becomes nearly zero. We select $\pi_b$ as $\text{Greedy}(\pi)$. This baseline selection has several advantages over selecting a fixed/pre-existing scheduling policy: (1) the entire learning process becomes independent of existing scheduling methods; thus, ScheduleNet is applicable even when a cheap and well-performing $\pi_b$ for the target mSP does not exist. (2) $\text{Greedy}(\pi)$ serves as an appropriate $\pi_b$ (neither too strong nor too weak) during policy learning. We experimentally confirmed that the baseline selection $\text{Greedy}(\pi)$ results in a better scheduling policy, similar to the findings of \cite{kool2018attention, silver2016mastering}.
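To make the effect of the denominator concrete, a minimal Python sketch of Equation \ref{appendix:eqn-reward-norm} is given below. The helper name \texttt{normalized\_makespan} and the numerical makespans are hypothetical and only illustrate how dividing by the baseline makespan keeps the reward scale comparable across instances of very different sizes.
\begin{verbatim}
# Minimal sketch of the normalized makespan m(pi, pi_b) used as the
# terminal reward signal. All numbers below are made up for illustration.

def normalized_makespan(makespan_pi, makespan_baseline):
    """Relative scheduling supremacy of pi over the greedy baseline pi_b.

    Negative values mean the sampled policy beat the baseline; dividing by
    the baseline makespan keeps the scale comparable across instances.
    """
    return (makespan_pi - makespan_baseline) / makespan_baseline

# Two instances with very different makespan scales:
print(normalized_makespan(10.5, 10.0))     #  0.05 (5% worse than greedy)
print(normalized_makespan(950.0, 1000.0))  # -0.05 (5% better than greedy)
# Without the division, the second instance would dominate the gradient
# simply because its absolute makespans are two orders of magnitude larger.
\end{verbatim}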
\section{Experiments}
\subsection{mTSP experiments}
\subsubsection{semi-MDP formulation}
\label{appendix:mtsp-mdp-formulation}
The formulated mTSP semi-MDP is event-based. Here we discuss further details about the event-based transitions of the mTSP MDP. Whenever all agents are assigned to cities, the environment advances in time until one of the workers arrives at its city (i.e. completes the task). The arrival of a worker at its city triggers an event, while the other assigned salesmen are still on the way to their assigned cities. We assume that each worker moves towards the assigned city with unit speed in the 2D Euclidean space, i.e. the distance travelled by each worker equals the time passed between two consecutive MDP events. It is noteworthy that multiple events can happen at the same time, typically at time stamp $t=0$. If the MDP has multiple available workers at the same time, we repeatedly choose an arbitrary idle agent and assign it to one of the idle tasks until no agent is idle, while updating the event index $\tau$. These random selections do not alter the resulting solutions since we do not differentiate between agents (i.e. the agents are homogeneous).
\subsubsection{Agent-task graph formulation}
\label{appendix:mtsp-graph-formulation}
In this section, we present the list of all possible node types in ${\mathcal{G}}_\tau$: (1) assigned-agent, (2) unassigned-agent, (3) assigned-task, (4) unassigned-task, and (5) depot. Here, we do not include already visited cities (i.e. inactive tasks) in the graph. Thus, the set of active agents/tasks is defined by the union of assigned and unassigned agents/tasks. Our empirical experiments showed no performance difference between the graph with inactive nodes included and the graph with active-only nodes. All nodes in ${\mathcal{G}}_\tau$ are fully connected.
\subsubsection{Training details}
\label{appendix:mtsp-training}
\paragraph{Network parameters.} {\fontfamily{lmtt}\selectfont ScheduleNet} is composed of two TGA layers. Each TGA layer (\textit{raw-to-hidden} and \textit{hidden-to-hidden}) has the same MLP parameters, as shown in Table \ref{table-tga-parameters}. The node and edge encoder input dimensions for the first \textit{raw-to-hidden} TGA layer are 4 and 7, respectively. The output node and edge dimensions of the first TGA layer are both 32, which are used as the input dimensions for the \textit{hidden-to-hidden} TGA layer. The embedded node and edge features are used to calculate the action embeddings via $\text{MLP}_{actor}$ with parameters described in Table \ref{table-tga-parameters}.
\begin{table*}[t] \centering \small\addtolength{\tabcolsep}{-1pt} \renewcommand{\arraystretch}{1.3} \caption{\textbf{ScheduleNet TGA Layer parameters.}} \vskip 0.075in \begin{tabular}{>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1.5cm}} \toprule Encoder name & MLP hidden dimensions & Hidden activation & Output activation \\ \midrule $\text{MLP}_{etype}$ & {[}32{]} & ReLU & Identity \\ $\text{MLP}_{edge}$ & {[}32, 32{]} & ReLU & Identity \\ $\text{MLP}_{attn}$ & {[}32, 32{]} & ReLU & Identity \\ $\text{MLP}_{ntype}$ & {[}32{]} & ReLU & Identity \\ $\text{MLP}_{node}$ & {[}32, 32{]} & ReLU & Identity \\ $\text{MLP}_{actor}$ & {[}256, 128{]} & ReLU & Identity\\ \bottomrule \end{tabular} \label{table-tga-parameters} \end{table*}
\paragraph{Training.} In this section, we present the pseudocode for training ScheduleNet.
We smooth the evaluation policy $\pi_\theta$ with the Polyak average as studied in \cite{izmailov2018averaging} for further stabilization of the training process.
\begin{algorithm}[H] \DontPrintSemicolon \caption{ScheduleNet Training} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \Input {Training policy $\pi_\theta$} \Output {Smoothed policy $\pi_\phi$} Initialize the smoothed policy with parameters $\phi \leftarrow \theta$. \\ \For{$\text{update step}$} {Generate a random mTSP instance $I$ \\ $\pi_b \leftarrow \text{Greedy}(\pi_\theta)$ \\ \For{$\text{number of episodes}$} {Construct mTSP MDP from the instance $I$ \\ Collect samples with $\pi_\theta$ and $\pi_b$ from the mTSP MDP. } \For{$\text{inner updates K}$} { $\theta \leftarrow \theta + \alpha \nabla_\theta \mathcal{L}(\theta)$ } $\phi \leftarrow \beta \phi + (1-\beta) \theta$ } \end{algorithm}
We set the MDP discounting factor $\gamma$ to 0.9, the Polyak smoothing coefficient $\beta$ to 0.1, and the clipping parameter $\epsilon$ to 0.2.
\subsubsection{Details of Random mTSP experiments}
\label{appendix:mtsp-random-results}
\paragraph{OR-Tools implementation.} The Google \texttt{OR-Tools} routing module \cite{ortools} is a set of highly optimized and practical meta-heuristics for solving various routing problems (e.g. mTSP, mVRP). It first finds an initial feasible solution and then iteratively improves the solution with local search heuristics (e.g. Greedy Descent, Simulated Annealing, Tabu Search) until a certain termination condition is satisfied. We obtain the scheduling results of the baseline OR-Tools algorithm with the official mTSP implementation provided by Google. We further tune the hyperparameters of \texttt{OR-Tools} to achieve better scheduling results on large mTSP instances ($n >200$ and $m >20$) by applying different initial solution search and solution improvement schemes. However, such tuning results in virtually no improvement in scheduling performance.
\paragraph{Two-phase heuristics implementations.} The 2-phase heuristics for mTSP are an extension of well-known TSP heuristics to the $m>1$ case. First, we perform $K$-means clustering (where $K=m$) of the cities in the mTSP instance by utilizing \texttt{scikit-learn} \cite{scikit-learn}. Next, we apply the TSP insertion heuristics (Nearest Insertion, Farthest Insertion, Random Insertion, and Nearest Neighbour Insertion) to each cluster of cities. It should be noted that the performance of the 2-phase heuristics is highly dependent on the spatial distribution of the cities on the map. Thus, the 2-phase heuristics perform particularly well on uniformly distributed random instances, where $K$-means clustering can obtain clusters with approximately the same number of cities per cluster.
\subsubsection{Qualitative analysis}
\begin{figure}[t] \begin{center} \includegraphics[width={\linewidth}]{images/mtsp-random-histogram.png} \end{center} \caption{\textbf{Histogram of makespans on random ($N\times m$) mTSP datasets.} The x-axis shows the makespans. The y-axis shows the density of makespans.} \label{fig:appendix-mtsp-random-histogram} \end{figure}
\paragraph{The histogram of scheduling performances on random mTSP datasets.} ScheduleNet shows higher makespans than OR-Tools on average on the small ($N<200$) random mTSP datasets as discussed in Section \ref{section:mtsp-experiments}.
However, we observe that, even on the small mTSP problems, ScheduleNet can outperform OR-tools as visualized in Figure \ref{fig:appendix-mtsp-random-histogram}. \begin{figure}[t] \begin{center} \includegraphics[width={0.5\linewidth}]{images/mtsp-random-knn-xl.png} \end{center} \caption{\textbf{Scheduling performances on large random ($N\times m$) mTSP datasets.} The y-axis shows the makespans.} \label{fig:appendix-mtsp-random-knn-xl} \end{figure} \paragraph{Extended results of sparse graph experiments.} We provide the scheduling performance of ScheduleNet and baseline algorithms on random mTSPs of size ($500 \times 50$) and ($750 \times 70$). See Figure \ref{fig:appendix-mtsp-random-knn-xl} for the results. \subsection{JSP experiments} Job-shop scheduling problem (JSP) is a mSP that can be applied in various industries including the operation of semi-conductor chip fabrication facility and railroad system. The objective of JSP is to find the sequence of machine (agent) allocations to finish the jobs (a sequence of operations; tasks) as soon as possible. JSP can be seen as an extension of mTSP with two additional constraints: (1) precedence constraint that models a scenario where an operation of a job becomes processable only after all of its preceding operations are done; (2) agent-sharing (disjunctive) constraint that confines the machine to process only one operation at a time. Due to these additional constraints, JSP is considered to be a more difficult problem when it is solved via mathematical optimization techniques. A common representation of JSP is the disjunctive graph representation. As shown in Figures \ref{fig:JSP-disjunctive-repr}, \ref{fig:JSP-precedence-const} and \ref{fig:JSP-disjunctive-const}, JSP contains the set of jobs, machines, precedence constraints, and disjunctive constraints as its entities. In the following subsections, we provide the details of the proposed MDP formulation of JSP, training details of ScheduleNet, and experiment results. \begin{figure}[t] \begin{minipage}{0.55\textwidth} \centering \includegraphics[width=.99\linewidth]{images/JSP.png} \caption{\textbf{Disjunctive graph representation of JSP}} \label{fig:JSP-disjunctive-repr} \end{minipage} \hfill \begin{minipage}{0.39\textwidth} \centering \vskip 0.175in \includegraphics[width=.99\linewidth]{images/JSP-precedence-constraint.png} \caption{\textbf{Precedence constraint}} \label{fig:JSP-precedence-const} \vskip 0.150in \centering \includegraphics[width=.8\linewidth]{images/JSP-disjunctive-constraint.png} \caption{\textbf{Agent-sharing constraint}} \label{fig:JSP-disjunctive-const} \end{minipage} \end{figure} \subsubsection{semi-MDP formulation} \label{appendix:jsp-mdp-formulation} The semi-MDP formulation of JSP is similar to that of mTSP. The specific definitions of the state and action for JSP are as follows: \textbf{State.} We define $s_{\tau}=(\{s_{\tau}^{i}\}_{i=1}^{N+m}, s_{\tau}^{\text{env}})$ which is composed of two types of states: entity state $s_{\tau}^{i}$ and environment state $s_{\tau}^{\text{env}}$. \vspace{-0.35cm} \begin{itemize}[leftmargin=*] \setlength\itemsep{-0.1em} \item $s_{\tau}^{i} = (p_{\tau}^{i}, \bm{1}_{\tau}^{\text{processable}},\bm{1}_{\tau}^{\text{assigned}}$, $\bm{1}_{\tau}^{\text{accessible}}$, $\bm{1}_{\tau}^{\text{waiting}})$ is the state of the $i$-th entity. $p_{\tau}^{i}$ is the processing time of the $i$-th entity at the $\tau$-th event. $\bm{1}_{\tau}^{\text{processable}}$ indicates whether the $i$-th task is processable by the target agent or not. 
Similar to mTSP, $\bm{1}_{\tau}^{\text{assigned}}$ indicates whether an agent/task is assigned. \item $s_{\tau}^{\text{env}}$ contains the current time of the environment, the sequence of tasks completed by each agent (machine), and the precedence constraints of tasks within each job. \end{itemize} \textbf{Action.} We define the action space at the $\tau$-th event as a set of oeprations that is both processable and currently available. Additionally, we define the \textit{waiting} action as a reservation of the target agent (i.e. the unique idle machine) until the next event. Having \textit{waiting} as an action allows the adaptive scheduler (e.g. ScheduleNet) to achieve the optimal scheduling solution (and also makespan) from the JSP MDP, where the optimal solution contains waiting (idle) time intervals. \subsubsection{Agent-task graph formulation} \label{appendix:jsp-graph-formulation} ScheduleNet constructs the \textit{agent-task graph} ${\mathcal{G}}_\tau$ that reflects the complex relationships among the entities in $s_\tau$. ScheduleNet constructs a directed graph ${\mathcal{G}}_\tau = ({\mathbb{V}}, \mathbb{E})$ out of $s_{\tau}$, where ${\mathbb{V}}$ is the set of nodes and $\mathbb{E}$ is the set of edges. The nodes and edges and their associated features are defined as: \begin{itemize}[leftmargin=*] \setlength\itemsep{-0.1em} \item $v_i$ denotes the $i$-th node and represents either an agent or a task. $v_i$ contains the node feature $x_i=(s_{\tau}^i,k_i)$, where $s_{\tau}^i$ is the state of entity $i$, and $k_i$ is the type of $v_i$ (e.g. if the entity $i$ is a \textit{task} and its $\bm{1}_\tau^{processable}=1$, then the $k_i$ becomes a \textit{processable-task} type.) \item $e_{ij}$ denotes the edge between the source node $v_j$ and the destination node $v_i$. The edge feature $w_{ij}$ is a binary feature which indicates whether the destination node $v_i$ is processable by the source node $v_j$. \end{itemize} All possible node types in ${\mathcal{G}}_\tau$ are: (1) assigned-agent, (2) unassigned-agent, (3) assigned-task, (4) processable-task, and (5) unprocessable-task. We do not include completed tasks in the graph. Thus, the currently active tasks are the union of the assigned tasks, processable-tasks, and unprocessable-tasks. The full list of node features are as follows: \begin{itemize}[leftmargin=*] \setlength\itemsep{-0.1em} \item $\bm{1}_{\tau}^{\text{agent}}$ indicates whether the node is a agent or a task. \item $\bm{1}_{\tau}^{\text{target-agent}}$ indicates whether the node is a target-agent (unique idle agent that needs to be assigned). \item $\bm{1}_{\tau}^{\text{assigned}}$ indicates whether the agent/task is assigned. \item $\bm{1}_{\tau}^{\text{waiting}}$ indicates whether the node is an agent in \textit{waiting} state. \item $\bm{1}_{\tau}^{\text{processable}}$ indicates whether the node is a task that is \textit{processable} by the target-agent. \item $\bm{1}_{\tau}^{\text{accessible}}$ indicates whether the node is processable by the target-agent and is available. \item \textit{Task wait time} indicates the amount of time passed since the operation became \textit{accessible}. \item \textit{Task processing time} indicates the processing time of the operation. \item \textit{Time-to-complete} indicates the amount of time it will take to complete the task, i.e. the time-distance to the given task. \item \textit{Remain ops.} indicates the number of remaining operations to be completed for the job where the task belongs to. 
\item \textit{Job completion ratio} is the ratio of completed operations within the job to the total amount of operations in the job. \end{itemize} \textbf{JSP graph connectivity}. Figure \ref{fig:JSP-graph-repr} visualizes the proposed agent-task graph. From Figure \ref{fig:JSP-graph-repr}, each agent is fully connected to the set of processable tasks by that agent, and vice versa. Each task is fully connected to the other tasks (operations) that belong to the same job. Each agent is fully connected to the other agents. \begin{figure}[t] \begin{center} \includegraphics[width={0.70\linewidth}]{images/JSP-graph.png} \end{center} \caption{\textbf{JSP agent-task graph representation}} \label{fig:JSP-graph-repr} \end{figure} \subsubsection{Training details} The TGA layer hyperparameters are set according to Table \ref{table-tga-parameters}. We use same training hyperparameters as in Appendix \ref{appendix:mtsp-training}. The node and edge encoder input dimensions for the first \textit{raw-to-hidden} TGA layer is 12 and 4 respectively. The output node and edge dimensions of the first TGA layer is 32 and 32 respectively, which are used as the input dimensions for the \textit{hidden-to-hidden} TGA layer. \label{appendix:jsp-training} \subsubsection{Random JSP experiments} \paragraph{CP-SAT implementation.} CP-SAT is one of the state-of-the-art constraint programming solver that is implemented as a module of Google \texttt{OR-Tools} \cite{ortools}. We employ CP-SAT with a one hour time-limit as a baseline algorithm for solving JSP. Our implementation of CP-SAT is directly adopted from the official JSP solver implementation provided by Google, which is considered to be highly optimized to solve JSP. The official implementation can be found in \url{https://developers.google.com/optimization/scheduling/job_shop}. \paragraph{JSP heuristic implementations.} Priority dispatching rules (PDR) is one of the most common JSP solving heuristics. PDR computes the priority of the feasible operations (i.e. the set of operations whose precedent operation is done and, at the same time, the target machine is idle) by utilizing the dispatching rules. As the JSP heuristic baselines, we consider the following three dispatching rules: \begin{itemize}[leftmargin=*] \setlength\itemsep{-0.1em} \item Most Operation Remaining (MOR) sets the highest priority to the operation that has the most remaining operations to finish its corresponding job. \item First-in-first-out (FIFO) sets the highest priority to the operation that joins to the feasible operation set first. \item Shortest Processing Time (SPT) sets the highest priority to the operation that has the shortest processing time. \end{itemize} \subsubsection{Extended public benchmark JSP results} \label{appendix:jsp-benchmark-results} We provide the detailed JSP results for the following public datasets: TA \citep{taillard1993benchmarks} (Tables \ref{table-ta1} and \ref{table-ta2}), ORB \cite{applegate1991computational}, FT \cite{fisher1963probabilistic}, YN \cite{yamada1992genetic} (Table \ref{table-abz}), SWV \cite{storer1992new} (Table \ref{table-SWV}), and LA \cite{lawrence1984resouce} (Table \ref{table-LA}). 
\input{tables/jsp-TA-part1} \input{tables/jsp-TA-part2} \input{tables/jsp-ABZ-FT-ORB-YN} \input{tables/jsp-SWV} \input{tables/jsp-LA} \section{Details of the Ablation studies} \label{appendix:ablation-studies} \subsection{Details of ablation models} In this section, we explain the details of ScheduleNet variants {\fontfamily{lmtt}\selectfont GN-CR}, {\fontfamily{lmtt}\selectfont GN-PPO}. Both variants employ the attention GN blocks (layer) to embed ${\mathcal{G}}_\tau$. The attention GN block takes a set of node embeddings $\mathbb{V}$ and edge embeddings $\mathbb{E}$, and produces the updated node embeddings $\mathbb{V}'$ and edge embeddings $\mathbb{E}'$ by utilizing three trainable modules (edge function $f_e(\cdot)$, attention function $f_a(\cdot)$ and node function $f_n(\cdot)$) and one aggregation function $\rho(\cdot)$. The computation procedure of the attention GN block is given in Algorithm \ref{alg:gn_block}. \begin{algorithm}[h] \caption{Attention GN block} \label{alg:gn_block} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \SetKwInOut{Return}{return} \Input { set of edges features $\mathbb{E}$ \newline set of nodes features $\mathbb{V}$ \newline edge function $f_{e}(\cdot)$ \newline attention function $f_{a}(\cdot)$ \newline node function $f_{n}(\cdot)$ \newline edge aggregation function $\rho(\cdot)$ } $\mathbb{E}',\mathbb{V}' \leftarrow \{\}, \{\}$ \tcp*{Initialize empty sets} \For{$e_{ij}$ \textbf{in} $\mathbb{E}$}{ $e_{ij}' \leftarrow f_{e}(e_{ij}, v_{i}, v_{j})$ \tcp*{Update edge features} $z_{ij} \leftarrow f_{a}(e_{ij}, v_{i}, v_{j})$ \tcp*{Compute attention logits} $\mathbb{E}' \leftarrow \mathbb{E}' \cup \{e'_{ij}\}$ } \For{$v_i$ \textbf{in} $\mathbb{V}$}{ $w_{ji} \leftarrow \textit{softmax}(\{z_{ji}\}_{j \in \mathcal{N}(i)})$ \tcp*{Normalize attention logits} $m_{i} \leftarrow \rho(\{w_{ji} \times e_{ji}'\}_{j \in \mathcal{N}(i)})$ \tcp*{Aggregate incoming messages} $v_i' \leftarrow f_{n}(v_i, m_{i})$ \tcp*{Update node features} $\mathbb{V}' \leftarrow \mathbb{V}' \cup \{v'_{i}\}$ } \Return{Updated node features $\mathbb{V}'$ and edge $\mathbb{E}'$ features} \end{algorithm} \paragraph{Details of {\fontfamily{lmtt}\selectfont GN-CR}} \begin{itemize}[leftmargin=*] \setlength\itemsep{-0.1em} \item \textbf{Network architecture.} {\fontfamily{lmtt}\selectfont GN-CR} is composed of two attention GN layers. The first GN layer encodes the raw node/edge features to the hidden embedding. Each GN layer's encoders $f_e$, $f_a$, and $f_n$ has the same parameters as $\text{MLP}_{edge}$, $\text{MLP}_{attn}$, $\text{MLP}_{node}$ respectively, as shown in Table \ref{table-tga-parameters}. We use $\rho(\cdot)$ as the average operator. The second GN layer has the same architecture as the first GN layer but the input and output dimensions of $f_e$, $f_a$, and $f_n$ are 32, 32, and 32 respectively. All hidden actionvations are ReLU, and output activations are indentity function. \item \textbf{Action assignments.} Similar to {\fontfamily{lmtt}\selectfont ScheduleNet}, we perform raw feature encoding with the first GN layer and $H$-rounds of hidden embedding with the second GN layer. We use the same MLP architecture of {\fontfamily{lmtt}\selectfont ScheduleNet} to compute the assignment probabilities from the embeded graph. \item \textbf{Training.} Same as {\fontfamily{lmtt}\selectfont ScheduleNet}. 
\end{itemize} \paragraph{Details of {\fontfamily{lmtt}\selectfont GN-PPO}.} \begin{itemize}[leftmargin=*] \setlength\itemsep{-0.1em} \item \textbf{Network architecture.} The actor (policy) is the same as in {\fontfamily{lmtt}\selectfont GN-CR}. The critic (value function) utilizes the same GNN architecture to embed the input graph ${\mathcal{G}}_\tau$. On the embedded graph, we apply the average readout function to read out the embedded information. All other MLP parameters are the same as in {\fontfamily{lmtt}\selectfont GN-CR}. \item \textbf{Training.} We use proximal policy optimization (PPO) \cite{schulman2017proximal} to train {\fontfamily{lmtt}\selectfont GN-PPO} with the default PPO hyperparameters of the \texttt{stable-baselines} PPO2 implementation. The hyperparameters can be found at \url{https://stable-baselines.readthedocs.io/en/master/modules/ppo2.html}. \end{itemize} \begin{figure}[t] \begin{center} \includegraphics[width={0.30\linewidth}]{images/sampled_gaps.png} \end{center} \caption{\textbf{Makespan distribution}} \label{fig:sampled_gaps} \end{figure} \subsection{Extended discussion on the actor-critic approach} As explained in Section \ref{section:mtsp-experiments}, the model trained with PPO ({\fontfamily{lmtt}\selectfont GN-PPO}) results in lower scheduling performance compared to the models trained with Clip-REINFORCE ({\fontfamily{lmtt}\selectfont ScheduleNet} and {\fontfamily{lmtt}\selectfont GN-CR}). We hypothesize that this phenomenon is due to the high volatility and multi-modality of the critic's training target (the sampled makespan), as visualized in Figure \ref{fig:sampled_gaps}. This may cause inaccurate state-value predictions by the critic. The value prediction error can deteriorate the policy due to Bellman error propagation in the actor-critic setup, as discussed in \cite{fujimoto2018addressing}. \section{Limitations and future work} ScheduleNet is an end-to-end learned heuristic that constructs feasible solutions from ``scratch'' without relying on existing solvers and/or effective local search heuristics. ScheduleNet builds a complete solution sequentially by accounting for the current partial solution and the actions of other agents. This poses a challenge for cooperative decision-making with a sparse and episodic team reward. As we use only a single shared scheduling policy (ScheduleNet) for all agents, it can be effectively transferred to solve extremely large mSPs. We propose ScheduleNet not only to achieve the best performance in mathematically well-defined scheduling problems, but also to address the challenging research question: \textbf{\textit{“Learning to solve various large-scale multi-agent scheduling problems in a sequential, decentralized and cooperative manner”}}. The following are the limitations of the current study and future research directions to overcome them. \paragraph{Towards improving the performance of ScheduleNet.} RL approaches for routing problems can be categorized into: (1) \textit{improvement heuristics}, which learn to revise a complete solution iteratively to obtain a better one; and (2) \textit{construction heuristics}, which learn to construct a solution by sequentially assigning idle vehicles to unvisited cities until the full routing schedule (sequence) is constructed. Improvement heuristics can typically obtain better performance than construction heuristics, as they search for the best solution iteratively through a repetitive solution-revising process. 
However, improvement heuristics require more expensive computation than construction heuristics. Moreover, such solution-revising processes can become a computational bottleneck when the size of the multi-agent scheduling problem grows (i.e., many agents and tasks). In this regard, this study extends construction heuristics to a multi-agent setting to design a computationally efficient and scalable solver. While focusing on providing a general scheme to solve various mSPs, ScheduleNet does not always achieve the most competitive performance compared to algorithms that are specially designed for specific target problems. We can also employ the structure of existing scheduling methods, which eventually attain better solutions after a series of computations. For instance, learnable Monte-Carlo tree search (MCTS) \cite{guez2018learning} learns the tree-traversal heuristics and empirically shows better tree-search performance than UCB/UCT-based MCTS. We can also borrow the structure of existing scheduling algorithms (e.g., LKH3, Branch $\&$ Bound, $N$-opt) to construct improvement-guaranteed policy-learning schemes. \paragraph{Towards a more realistic mSP solver.} The proposed MDP formulation framework allows modeling more realistic/complex mSPs. Nowadays, consideration of random task demand and agent supply is necessary for real-life mSP applications (e.g., robot taxi service with autonomous vehicles). In this study, we aim only to solve classical mSPs, which are mathematically well studied and have established baseline heuristics, in order to confirm the potential of the proposed ScheduleNet framework. We expect that the current ScheduleNet framework can cope with such real-world scenarios by reformulating the state and reward definitions appropriately. \section{Introduction} \input{sections/01_intro} \section{Related Works} \input{sections/02_related_works} \section{Problem Formulation} \input{sections/03_problem_formulation} \section{ScheduleNet} \input{sections/04_schedulenet} \section{Training ScheduleNet} \input{sections/05_training} \section{Experiments} \input{sections/06_experiments} \section{Conclusion} \input{sections/07_conclusion} \newpage \subsection{Example: MDP formulation of mTSP} \label{sec:mtps-mdp-formulation} Let us consider the single-depot mTSP with two types of entities: $m$ salesmen (i.e. $m$ agents) and $N$ cities (i.e. $N$ tasks) to visit. All salesmen start their journey from the depot and come back to the depot after visiting all cities (each city can be visited by only one salesman). A solution to mTSP is considered to be \textit{complete} when all the cities have been visited and all salesmen have returned to the depot. The semi-MDP formulation for mTSP is similar to that of the general mSP. The specific definitions of the state and reward for mTSP are as follows: \textbf{State.} The state $s_{\tau}=(\{s_{\tau}^{i}\}_{i=1}^{N+m}, s_{\tau}^{\text{env}})$ is composed of two types of states: the entity state $s_{\tau}^{i}$ and the environment state $s_{\tau}^{\text{env}}$. \vspace{-0.35cm} \begin{itemize}[leftmargin=*] \setlength\itemsep{-0.1em} \item $s_{\tau}^{i} = (p_{\tau}^{i}, \bm{1}_{\tau}^{\text{active}},\bm{1}_{\tau}^{\text{assigned}})$ is the state of the $i$-th entity. $p_{\tau}^{i}$ is the position of the $i$-th entity at the $\tau$-th event. $\bm{1}_{\tau}^{\text{active}}$ indicates whether the $i$-th worker/task is active (the worker is working / the task is not yet visited) or not. Similarly, $\bm{1}_{\tau}^{\text{assigned}}$ indicates whether the worker/task is assigned or not. 
\item $s_{\tau}^{\text{env}}$ contains the current time of the environment, and the sequence of cities visited by each salesman. \end{itemize} \vspace{-0.35cm} \textbf{Reward.} We use the makespan of the mTSP as the reward, which is sparse and episodic. The reward $r({s_\tau}) = 0$ for all non-terminal events, and $r({s_\mathrm{T}}) = -t(\mathrm{T})$, where $\mathrm{T}$ is the index of the terminal state. \subsection{Constructing agent-task graph} \label{sec:graph-representation} ScheduleNet constructs the \textit{agent-task graph} ${\mathcal{G}}_\tau$ that reflects the complex relationships among the entities in $s_\tau$. Specifically, ScheduleNet constructs a directed complete graph ${\mathcal{G}}_\tau = ({\mathbb{V}}, \mathbb{E})$ out of $s_{\tau}$, where ${\mathbb{V}}$ is the set of nodes and $\mathbb{E}$ is the set of edges. The nodes and edges, and their associated features, are defined as: \begin{itemize}[leftmargin=*] \vspace{-0.35cm} \setlength\itemsep{-0.1em} \item $v_i$ denotes the $i$-th node, representing either an agent or a task. $v_i$ contains the node feature $x_i=(s_{\tau}^i,k_i)$, where $s_{\tau}^i$ is the state of entity $i$, and $k_i$ is the type of $v_i$. For example, if entity $i$ is an \textit{agent} and its $\bm{1}_\tau^{\text{active}}=1$, then $k_i$ becomes the \textit{active-agent} type. For the full list of the node types, refer to Appendix \ref{appendix:mtsp-graph-formulation}. \item $e_{ij}$ denotes the edge between the source node $v_j$ and the destination node $v_i$. The edge feature $w_{ij}$ is equal to the Euclidean distance between the two nodes. \end{itemize} In the following subsections, we omit the event iterator $\tau$ for notational brevity, since the action selection procedure is only associated with the current event index $\tau$. \subsection{Graph embedding using TGA} ScheduleNet computes the node embeddings from the agent-task graph ${\mathcal{G}}$ using the type-aware graph attention (TGA). The embedding procedure first encodes the features of ${\mathcal{G}}$ into the initial node embeddings $\{h_i^{(0)}|v_i \in {\mathbb{V}} \}$, and the initial edge embeddings $\{h_{ij}^{(0)}|e_{ij} \in \mathbb{E}\}$. ScheduleNet then performs TGA embedding $H$ times to produce final node embeddings $\{h_i^{(H)}|v_i \in {\mathbb{V}} \}$, and edge embeddings $\{h_{ij}^{(H)}|e_{ij} \in \mathbb{E}\}$. To be specific, TGA embeds the input graph using the type-aware edge update, type-aware message aggregation, and the type-aware node update as explained in the following paragraphs. \textbf{Type-aware edge update}. Given the node embedding ${h_i}$ and edge embedding $h_{ij}$, TGA computes the type-aware edge embedding $h'_{ij}$ and the attention logit $z_{ij}$ as follows: \begin{align} \label{eqn:gnn-edge-context} \begin{split} h'_{ij} &= \text{TGA}_{\mathbb{E}} ([h_i, h_j, h_{ij}], k_j) \\ z_{ij} &= \text{TGA}_{\mathbb{A}} ([h_i, h_j, h_{ij}], k_j) \\ \end{split} \end{align} where $\text{TGA}_{\mathbb{E}}$ and $\text{TGA}_{\mathbb{A}}$ are the type-aware edge update function and the type-aware attention function, respectively. $\text{TGA}_{\mathbb{E}}$ and $\text{TGA}_{\mathbb{A}}$ are parameterized as multilayer perceptrons (MLPs) whose first layer is the Multiplicative Interaction (MI) layer \citep{jayakumar2019multiplicative}. The MI layer, which is a bilinear instantiation of a hypernetwork \citep{ha2016hypernetworks}, adaptively generates parameters of $\text{TGA}_{\mathbb{E}}$ and $\text{TGA}_{\mathbb{A}}$ based on the type $k_j$. 
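For concreteness, a minimal PyTorch-style sketch of a multiplicative-interaction layer of this kind is given below. The dimensions, initialisation, and exact bilinear form are illustrative assumptions (not the implementation used in ScheduleNet); the type $k_j$ is assumed to enter as a one-hot or embedded vector $z$.
\begin{verbatim}
import torch
import torch.nn as nn

class MILayer(nn.Module):
    """Multiplicative-interaction layer: the type embedding z generates the
    affine map applied to the input x (a bilinear form of hypernetwork)."""
    def __init__(self, x_dim, z_dim, out_dim):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(z_dim, out_dim, x_dim))  # weight generator
        self.U = nn.Parameter(0.01 * torch.randn(z_dim, out_dim))         # bias generator
        self.V = nn.Linear(x_dim, out_dim)                                # type-independent term

    def forward(self, x, z):
        # x: (batch, x_dim) input features, z: (batch, z_dim) type embedding
        W_z = torch.einsum('bz,zox->box', z, self.W)   # per-example weight matrix
        b_z = z @ self.U                               # per-example bias
        return torch.einsum('box,bx->bo', W_z, x) + b_z + self.V(x)
\end{verbatim}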
This type-conditioned parameterisation allows us to use $\text{TGA}_{\mathbb{E}}$ and $\text{TGA}_{\mathbb{A}}$ for all node types, and thus reduces the number of embedding functions to be learned while maintaining the representational power of the GNN. \textbf{Type-aware message aggregation}. Each entity in the agent-task graph interacts differently with the other entities, depending on the type of the edge between them. To preserve the different relationships between the entities during the graph embedding procedure, TGA gathers messages $h_{ij}'$ via the type-aware message aggregation. First, TGA aggregates messages for each node type (\textit{per-type}) and produces the \textit{per-type} message $m_{i}^k$ as follows: \begin{align} \label{eqn:tga-intra-aggr} \begin{split} m_{i}^k = \sum_{j \in \mathcal{N}_k(i)}\alpha_{ij} h'_{ij} \end{split} \end{align} where $\mathcal{N}_k(i) = \{v_l| k_{l} = k, \forall v_l \in \mathcal{N}(i)\}$ is the type-$k$ neighborhood of $v_i$, and $\alpha_{ij}$ is the attention score that is computed using $z_{ij}$: \begin{align} \label{eqn:tga-attention} \begin{split} \alpha_{ij} &= \frac{\text{exp}(z_{ij})}{\sum_{l \in \mathcal{N}_k(i)}\text{exp}(z_{il})} \\ \end{split} \end{align} TGA then concatenates the per-type messages to produce the aggregated message $m_i$ as: \begin{align} \label{eqn:tga-inter-aggr} \begin{split} m_i &= \text{concat}(\{m_{i}^k | k \in {\mathbb{K}}\}) \\ \end{split} \end{align} where ${\mathbb{K}}$ is the set of node types. Since the number of node types is fixed, the size of $m_i$ is fixed regardless of the problem size. \textbf{Type-aware node update}. The aggregated message $m_i$ is then used to compute the updated node embedding $h_i'$ as follows: \vspace{-0.35cm} \begin{align} \label{eqn:gnn-edge-context} h'_{i} &= \text{TGA}_{\mathbb{V}} ([h_i, m_i], k_i) \end{align} where $\text{TGA}_{\mathbb{V}}$ is the type-aware node update function that is parametrized as an MLP. The first layer of $\text{TGA}_{\mathbb{V}}$ is an MI layer, similar to the edge update function. The detailed architectures of $\text{TGA}_{\mathbb{E}}$, $\text{TGA}_{\mathbb{A}}$, and $\text{TGA}_{\mathbb{V}}$ are provided in Appendix \ref{appendix:TGA}. \subsection{Computing assignment probability} Using the computed final node embeddings $\{h_i^{(H)}|v_i \in {\mathbb{V}} \}$ and edge embeddings $\{h_{ij}^{(H)}|e_{ij} \in \mathbb{E}\}$, ScheduleNet selects the best assignment action $a_\tau$ for the target agent. Specifically, ScheduleNet computes the probability of assigning the target \textit{idle} agent $i$ to the \textit{unassigned} task $j$ as follows: \begin{align} \label{eqn:policy} \begin{split} l_{ij} &= \text{MLP}_{\textit{actor}}(h^{(H)}_i, h^{(H)}_j, h^{(H)}_{ij}) \\ p_{ij} &= \text{softmax}(\{l_{ij}\}_{j \in {\mathbb{A}}({\mathcal{G}}_\tau)}) \end{split} \end{align} where $h^{(H)}_i$ is the final node embedding for $v_i$, and $h^{(H)}_{ij}$ is the final edge embedding for $e_{ij}$. In addition, ${\mathbb{A}}({\mathcal{G}}_\tau)$ denotes the set of feasible actions, defined as $\{v_j \,|\, k_j=\textit{Unassigned-task},\ \forall j \in {\mathbb{V}} \}$. Note that ScheduleNet learns how to process local information of the state and make a decentralized decision for each agent. This allows ScheduleNet to compute assignment probabilities for mSPs with an \textit{arbitrary} number of agents and tasks in a \textit{scalable} manner. \subsection{Reward normalization} We denote the makespan induced by policy $\pi_{\theta}$ as $M(\pi_{\theta})$. 
We observe that $M(\pi_{\theta})$ is highly volatile depending on the problem size ($N$, $m$) and $\pi_{\theta}$. To reduce the variance of the reward induced by the problem size, we propose to use the normalized makespan $\bar{M}(\pi_{\theta}, \pi_b)$ computed as: \begin{align} \label{eqn:inter-problem-norm} \begin{split} \bar{M}(\pi_{\theta}, \pi_{b}) = \frac{M(\pi_{\theta})-M(\pi_b)}{M(\pi_{b})}\ \end{split} \end{align} where $\pi_b$, a baseline policy, is the current policy $\pi_\theta$ in greedy (test) mode. $\bar{M}(\pi_{\theta}, \pi_b)$ measures the relative scheduling performance of $\pi_{\theta}$ with respect to $\pi_b$. Obviously, $\bar{M}(\pi_{\theta}, \pi_b)$ has smaller variance than $M(\pi_{\theta})$, especially when the number of agents $m$ changes. Using $\bar{M}(\pi_{\theta}, \pi_b)$, we compute the normalized return $G_\tau(\pi_{\theta}, \pi_b)$ as follows: \begin{align} \label{eqn:return} \begin{split} G_\tau(\pi_{\theta}, \pi_b) \triangleq -\gamma^{T-\tau} \bar{M}(\pi_{\theta}, \pi_b) \end{split} \end{align} where $T$ is the index of the terminal state, and $\gamma$ is the discount factor of the MDP. The minus sign is for minimizing the makespan. Note that, in the early phase of an mSP (when $\tau$ is small), it is difficult to estimate the makespan. Thus, we place a smaller weight (i.e., $\gamma^{T-\tau}$) on $G_\tau$ evaluated when $\tau$ is small. \subsection{Clip-REINFORCE} Even a small change in a single assignment can result in a dramatic change in makespan due to the combinatorial nature of the solution. Hence, training a value function that predicts $G_\tau$ reliably is difficult. See Appendix \ref{appendix:ablation-studies} for more information. We thus propose to utilize Clip-REINFORCE (CR), a variant of PPO \cite{schulman2017proximal} \textit{without} the learned value function, for training ScheduleNet. The objective of Clip-REINFORCE is given as follows: \begin{flalign} &\mathcal{L}(\theta) = \mathop{\mathbb{E}}_{({\mathcal{G}}_\tau, a_\tau) \sim \pi_\theta}[\text{min}(\text{clip}(\rho_\tau, 1-\epsilon, 1+\epsilon) G_{\tau}, \rho_\tau G_{\tau})] \end{flalign} where $G_{\tau}$ is a shorthand notation for $G_{\tau}(\pi_\theta, \pi_b)$ and $\rho_\tau = \pi_\theta(a_\tau|{\mathcal{G}}_\tau)/\pi_{b}(a_\tau | {\mathcal{G}}_\tau)$ is the ratio between the target and baseline policy. \begin{figure}[t] \begin{minipage}{0.7\textwidth} \centering \includegraphics[width=.99\linewidth]{images/mtsp-random-ebar.png} \caption{\textbf{Scheduling performance on random ($N \times m$) mTSP datasets (smaller is better)}. The y-axis shows the normalized makespans. The red, blue and orange bar charts demonstrate the performance of LKH3, ScheduleNet and OR-Tools, respectively. The green bars show the performance of the two-phase heuristics. The error bars show $\pm 1.0$ standard deviation of the makespans.} \label{fig:makesapn_on_random} \end{minipage}\hfill \begin{minipage}{0.25\textwidth} \centering \includegraphics[width=.99\linewidth]{images/mstp-random-comp-times.png} \caption{\textbf{Inference time of ScheduleNet and the baselines}} \label{fig:comp_time_on_random} \end{minipage} \end{figure} \subsection{mTSP experiments} \label{section:mtsp-experiments} \textbf{Training.} We denote ($N \times m$) as the mTSP with $N$ cities (tasks) and $m$ salesmen (agents). To generate a random mTSP instance, we sample $N$ and $m$ from $U(20, 25)$ and $U(2, 5)$, respectively. Similarly, the Euclidean coordinates of the $N$ cities are sampled uniformly at random from the unit square. 
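A minimal sketch of this instance-generation procedure is given below; the depot placement is an assumption of the sketch, as it is not specified above.
\begin{verbatim}
import numpy as np

def random_mtsp_instance(rng=None):
    """Random training instance as described above: N ~ U(20, 25) cities,
    m ~ U(2, 5) salesmen, and city coordinates uniform in the unit square."""
    rng = rng or np.random.default_rng()
    N = int(rng.integers(20, 26))      # number of cities (tasks), 25 inclusive
    m = int(rng.integers(2, 6))        # number of salesmen (agents), 5 inclusive
    cities = rng.random((N, 2))        # task coordinates in [0, 1]^2
    depot = rng.random(2)              # assumed: depot also uniform in the unit square
    return cities, depot, m
\end{verbatim}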
ScheduleNet is trained on random mTSP instances that are generated on-the-fly. For all experimental results, we evaluate the performance of a single trained ScheduleNet model. Please refer to Appendix \ref{appendix:mtsp-training} for more information. \textbf{Results on random mTSP datasets}. To investigate the generalization capacity of ScheduleNet to various problem sizes, we evaluate ScheduleNet on the randomly generated mTSP datasets. Each ($N \times m$) dataset consists of 100 random uniform mTSP instances. The baseline algorithms we considered are: \vspace{-0.2cm} \begin{itemize}[leftmargin=*] \setlength\itemsep{-0.1em} \item Lin-Kernighan-Helsgaun 3 (LKH3) \cite{helsgaun2017extension}, a state-of-the-art heuristic solver that is known to find the optimal solutions for mTSP instances whose optima have been identified. We use the makespans computed by LKH3 as proxies for the optimal solutions of the test mTSP instances. \item Google OR-Tools \cite{ortools}, a highly optimized meta-heuristic library. \item NI and RI, two-phase mTSP heuristics, which first group the cities with $K$-means clustering (where $K=m$) and then apply the well-known TSP heuristics, i.e., Nearest Insertion (NI) and Random Insertion (RI), per cluster. \end{itemize} From Figure \ref{fig:makesapn_on_random}, in the small-sized maps ($N < 200$), we can see that the scheduling performance of ScheduleNet is better than that of the two-phase heuristics and worse than that of OR-Tools. In the larger maps ($N \geq 200$), on the other hand, ScheduleNet outperforms OR-Tools and the two-phase heuristics as well. ScheduleNet exhibits small standard deviations of makespans for all problem sizes. In addition, Figure \ref{fig:comp_time_on_random} compares the average inference time and the standard deviation per problem size, clearly indicating that ScheduleNet is significantly faster, especially for large mTSP instances. \textbf{Results on Public Benchmarks}. Next, to explore the generalization of ScheduleNet to problems that come from completely different distributions (i.e. real-world data), we present the results on the mTSPLib dataset defined by \citet{necula2015tackling}. mTSPLib consists of four small-to-medium-sized problems from TSPLib \citep{reinelt1991tsplib}, each of which is extended to a multi-agent setup where $m$ equals 2, 3, 5, and 7. The baseline algorithms are: (1) the exact solver CPLEX, (2) LKH3, whose stopping criterion is set to the known best solutions, (3) OR-Tools, (4) the population-based meta-heuristics Self-Organizing Map (SOM), Ant Colony Optimization (ACO), and Evolutionary Algorithm (EA) \citep{lupoaie2019som}, and (5) the two-phase heuristics (NI and RI). From Table \ref{table:mtsp-benchmark-table-transpose}, we can observe that, at small-to-medium scale, OR-Tools produces near-optimal solutions, followed by ScheduleNet and ACO. However, the two-phase heuristics show drastic performance degradation, which is most likely due to the non-uniform distribution of the cities. From this experiment, we can observe that the ScheduleNet policy is equally effective in solving both generated and real-world mTSP problems. \input{tables/mtsp_benchmarks} \begin{figure}[!htb] \centering \begin{minipage}{.475\textwidth} \flushleft \includegraphics[width={1\linewidth}]{images/KNN-ablation-bars2.png} \caption{\textbf{Scheduling performance (top) and computational time (bottom) on the sparse graphs.} The left figure shows the scheduling performance w.r.t. $N_r$, and the right figure w.r.t. $N_t$. 
We normalize the makespan of each scheduling algorithm with the ScheduleNet makespan on the complete graph.} \label{fig:knn-graph-ablation} \end{minipage}% \hspace{0.5cm} \begin{minipage}{0.475\textwidth} \flushright \includegraphics[width={1.0\linewidth}]{images/train-ablation2.png} \caption{\textbf{Training ablation.} Makespan over the training steps (left) and scheduling performance on uniform random test datasets (right). The blue, orange, and green curves show the normalized makespan of {\fontfamily{lmtt}\selectfont ScheduleNet}, {\fontfamily{lmtt}\selectfont GN-CR}, and {\fontfamily{lmtt}\selectfont GN-PPO}, respectively.} \label{fig:training-ablation} \end{minipage} \vspace{-0.2cm} \end{figure} \textbf{Graph sparsity ablation}. From the aforementioned performance results, we can conclude that ScheduleNet is particularly effective in solving large-scale problems. However, as the problem size increases, the computational complexity for processing a complete graph also increases as $\mathcal{O}(N^2+m^2)$. We hypothesize that smaller values of $N_r$ (the number of agent connections) and $N_t$ (the number of task connections) reduce the computational cost significantly without a large loss in performance. To verify this idea, we evaluate the same ScheduleNet that is trained on small complete graphs (random mTSPs) on large mTSP instances ($300 \times 30$). The graph embedding procedure for the described sparse graph has complexity $\mathcal{O}((m+N)(N_r + N_t))$. Figure \ref{fig:knn-graph-ablation} shows the scheduling performance (colored) and the computational time (gray) of ScheduleNet for different values of $N_r$ and $N_t$, i.e. the level of sparsity. The results show that the performance of ScheduleNet does not deteriorate significantly even with a limited information scope. These results imply that ScheduleNet can be used to solve very large real-world mTSPs in a fast and computationally efficient manner with local information. We also evaluate ScheduleNet on random mTSPs of size ($500 \times 50$) and ($750 \times 70$) (see Appendix \ref{appendix:mtsp-random-results}). \textbf{RL training ablation}. To investigate the effectiveness of each ScheduleNet component (TGA and the training stabilizers), we train the following ScheduleNet variants: \vspace{-0.2cm} \begin{itemize}[leftmargin=*] \item {\fontfamily{lmtt}\selectfont GN-PPO} is a variant of ScheduleNet that has Attention-GN layers\footnote{AttentionGN does not use the MI layer and uses a different message aggregation scheme. AttentionGN computes attention scores w.r.t. \textit{all incoming edges} in a type-agnostic manner. In contrast, TGA attention scores are computed w.r.t. edge types with inter-type and intra-type aggregations. For more details about AttentionGN, refer to Algorithm 3 in Appendix D.} for graph embedding (instead of TGA) and is trained by PPO. \item {\fontfamily{lmtt}\selectfont GN-CR} is a variant of ScheduleNet that has Attention-GN layers and is trained by Clip-REINFORCE with the normalized makespan. \item {\fontfamily{lmtt}\selectfont TGA-CR} is the ScheduleNet setting. \end{itemize} Figure \ref{fig:training-ablation} (left) visualizes the validation performance of ScheduleNet and its variants over the training steps. It can be seen that {\fontfamily{lmtt}\selectfont GN-PPO} fails to learn a meaningful scheduling policy, which indicates that solving mSPs with standard actor-critic methods can be challenging. We see the same trend for TGA-PPO (result omitted). 
In contrast, the models trained with the stabilizers ({\fontfamily{lmtt}\selectfont GN-CR} and {\fontfamily{lmtt}\selectfont ScheduleNet}) exhibit stable training and converge to a similar performance level on the validation dataset. In addition, as shown in Figure \ref{fig:training-ablation} (right), TGA-CR ({\fontfamily{lmtt}\selectfont ScheduleNet}) consistently shows better performance than {\fontfamily{lmtt}\selectfont GN-CR} over the random test datasets, showing that TGA plays an essential role in generalizing to larger mTSPs. \subsection{JSP experiments} We employ ScheduleNet to solve JSP, another type of mSP, to evaluate its generalization capacity in terms of solving various types of mSPs. The goal of the JSP is to schedule machines (i.e. agents) in a manufacturing facility to complete the jobs that consist of a series of operations (tasks), while minimizing the total completion time (makespan). Solving JSP is considered to be challenging since it imposes additional constraints that the scheduler must obey: precedence constraints (i.e. an operation of a job cannot be processed until its preceding operation is done) and agent-sharing constraints (i.e. each agent has a unique set of feasible tasks). \textbf{Formulation}. We formulate JSP as a semi-MDP where $s_\tau$ is the partial solution of the JSP, $a_\tau$ is an agent-task assignment (i.e. assigning an idle machine to one of the feasible operations), and the reward is the negative makespan. Please refer to Appendix \ref{appendix:jsp-mdp-formulation} for the detailed formulation of the semi-MDP and the agent-task graph construction. \textbf{Training}. We train ScheduleNet using random JSP instances ($N \times m$) that have $N$ jobs and $m$ machines. We sample $N$ and $m$ from $U(7, 14)$ and $U(2, 5)$, respectively, to generate the training JSP instances. We randomly shuffle the order of the machines in a job to generate machine-sharing constraints. Please refer to Appendix \ref{appendix:jsp-training} for more information. \textbf{Results on random JSP datasets}. To investigate the generalization capacity of ScheduleNet to various problem sizes, we evaluate ScheduleNet on the randomly generated JSP datasets. We compare the scheduling performance with the following baselines: the exact solver CP-SAT \cite{ortools} with a one-hour time limit, and three JSP heuristics that are known to work well in practice: Most Operations Remaining (MOR), First-in-first-out (FIFO), and Shortest Processing Time (SPT). As shown in Figure \ref{fig:jsp-random}, ScheduleNet always outperforms the baseline heuristics for all problem sizes and even outperforms CP-SAT on the larger problems. The result confirms that ScheduleNet has good generalization capacity in terms of the problem size. \textbf{Results on Public Benchmarks}. We evaluate the scheduling performance on Taillard's dataset of 80 instances \citep{taillard1993benchmarks} to verify ScheduleNet's generalization capacity to unseen JSP distributions. We compare against two deep RL baselines \cite{Park2021learning, zhang2020learning}, as well as the aforementioned heuristics (MOR, FIFO, and SPT). Both baseline RL methods were specifically designed to solve JSP by utilizing the well-known disjunctive JSP graph representation \cite{roy1964problemes} and well-engineered dense reward functions. Nevertheless, we can observe that ScheduleNet outperforms all baselines in all of the cases while utilizing only a sparse episodic reward (see Table \ref{table:jsp-benchmark-table}). 
Please refer to Appendix \ref{appendix:jsp-benchmark-results} for the extended benchmark results, including ORB \cite{applegate1991computational}, SWV \cite{storer1992new}, FT \cite{fisher1963probabilistic}, LA \cite{lawrence1984resouce}, and YN \cite{yamada1992genetic}. \begin{figure*}[t] \begin{center} \includegraphics[width={1.0\linewidth}]{images/jsp-random.png} \end{center} \caption{\textbf{Scheduling performance on random ($N \times m$) JSP datasets (smaller is better).} Each dataset contains 100 instances. The blue and orange bar charts demonstrate the performance of ScheduleNet and CP-SAT, respectively. The green bars show the performance of the JSP heuristics.} \label{fig:jsp-random} \vspace{-0.2cm} \end{figure*} \begin{table}[t] \centering \scriptsize\addtolength{\tabcolsep}{-1pt} \renewcommand{\arraystretch}{1} \caption{\textbf{The average scheduling gaps of ScheduleNet and the baseline algorithms.} The gaps are measured with respect to the optimal (or the best-known) makespan \cite{vilimfailure, van2016dynamic}.} \begin{tabular}{c|cccccccc||c} \toprule TA Instances & 15$\times$15 & 20$\times$15 & 20$\times$20 & 30$\times$15 & 30$\times$20 & 50$\times$15 & 50$\times$20 & 100$\times$20 & Mean Gap\\ \midrule ScheduleNet &\textbf{1.153}& \textbf{1.194}& \textbf{1.172} & \textbf{1.180} & \textbf{1.187} & \textbf{1.138} & \textbf{1.135} & \textbf{1.066} & \textbf{1.154}\\ \midrule \cite{Park2021learning} & 1.171 & 1.202 & 1.228 & 1.189 & 1.254 & 1.159 & 1.174 & 1.070 & 1.181\\ \cite{zhang2020learning} & 1.259 & 1.300 & 1.316 & 1.319 & 1.336 & 1.224 & 1.264 & 1.136 & 1.269\\ \midrule MOR & 1.205 & 1.236 & 1.217 & 1.249 & 1.173 & 1.177 & 1.092 & 1.092 & 1.197\\ FIFO & 1.239 & 1.314 & 1.275 & 1.319 & 1.310 & 1.206 & 1.239 & 1.136 &1.255 \\ SPT & 1.258 & 1.329 & 1.278 & 1.349 & 1.344 & 1.241 & 1.256 & 1.144 & 1.275 \\ \bottomrule \end{tabular} \label{table:jsp-benchmark-table} \vspace{-0.2cm} \end{table}
\section{Introduction} In this paper, we consider the dynamical behavior of a gaseous system characterized by rotation (with a strong positive angular momentum gradient), a weak magnetic field, and a small negative entropy gradient. As is well known, these ingredients lead to the magnetorotational instability (MRI), possibly in its more general convective form (Balbus \& Hawley 1991, Balbus 1995). It might appear that when resistivity is present only the longest wavelengths can remain unstable. If the resistivity is too large then these wavelengths can be longer than the size of the system, and global stability results. (Note that the small negative entropy gradient is stabilized in the hydrodynamical limit by strong rotation.) Surprisingly, this is not an accurate description of the dynamical behavior of this system. Here we show that the presence of resistivity activates unstable buoyancy modes that rely on the arbitrarily small adverse entropy gradient as their free energy source. Moreover, these unstable modes lie on the same branch of the dispersion relation as the MRI, to which they revert at small wavenumbers. What is particularly striking is that at very large wavenumbers, resistivity never damps these modes. Ultimately, when thermal diffusion is considered, there is in fact a critical wavenumber above which damping occurs. But, at least in protostellar disks and planetary interiors, this still allows for a very broad range of unstable large-wavenumber modes. (The presence of both Ohmic resistivity and a much smaller thermal diffusivity is what gives this instability its `double-diffusive' character.) In this introductory paper, our interest is in a formal presentation of the properties of the instability in the simplest possible setting. Our fiducial model will be a protostellar disk. Such systems are both cold and dense; excepting the hot regions near the central star and in the surface layers, much of the gas is poorly ionised, and, as a consequence, the electrical conductivity is too low to develop classical MRI turbulence (Blaes and Balbus 1994, Gammie 1996, Igea and Glassgold 1999, Sano et al.~2000, Fleming and Stone 2003, Ilgner and Nelson 2006, Wardle 2007, Okuzumi 2009, Turner and Drake 2009). Most protostellar disk models hence consist of an `active' well-ionised envelope, in which the MRI could in principle be present, encasing an extensive body of poorly-ionised gas which remains `dead', at least as far as the MRI is concerned. The dead region nevertheless may support other dynamics, such as spiral density waves excited by the turbulence in the surface layers (Fleming and Stone 2003), streaming instability caused by settling dust grains (Youdin and Goodman 2005), and flows driven by torques applied by large-scale magnetic fields (Turner and Sano 2008). Generally, however, dead-zones have been considered magnetically `deactivated'. Nevertheless, even in the very resistive heart of the dead-zone we will see that magnetic fields can trigger local small-scale instability. The analysis is carried out in a local model, the details of which we present in Section 2. In Section 3, we investigate the dynamics of linear axisymmetric Boussinesq disturbances by solving their fifth-order dispersion relation numerically and by obtaining asymptotic estimates of the growth rates in the double-diffusive limit. The instability is subsequently related to the idea of magnetostrophic balance. In Section 4 our conclusions are drawn and future work outlined. 
\section{Physical and mathematical outline} Consider a weakly ionised region of a protostellar disk, a nominal `dead-zone'. A negative radial entropy gradient is assumed to be present, but the details of how this gradient is established and maintained do not concern us directly in this calculation: they may plausibly be present due to a combination of outwardly decreasing heating from the central star and an outwardly increasing opacity. The entropy gradient is presumed to be small relative to the angular momentum gradient of the disk. More specifically, the radial Brunt-V\"ais\"al\"a (BV) frequency \begin{equation} N^2_R = - \frac{1}{ \gamma\rho}{\dd P\over \dd R} {\dd \ln P\rho^{-\gamma} \over \dd R} \end{equation} is assumed to be small compared with the epicyclic frequency \begin{equation} \kappa^2 = 4\Omega^2 +{d\Omega^2\over d\ln R}. \end{equation} Here, $R$ is the radial coordinate in a standard cylindrical system $(R,\phi, z)$, $\rho$ is the mass density, $P$ is the gas pressure, $\Omega(R)$ is the angular velocity, and $\gamma$ is the adiabatic index (the ratio of the specific heat at constant pressure to that at constant volume). Recall that $\kappa=\Omega$ in a Keplerian disk, so the magnitude of $N_R$ is assumed much less than an orbital frequency ($N_R$ itself is, of course, imaginary here). We consider the stability of axisymmetric disturbances. If we were to treat this highly resistive system as purely hydrodynamic, we would conclude that it is stable: there is no instability if $N^2_R+\kappa^2>0$ (the H{\o}iland-Solberg criterion), which for small $|N_R|<\Omega$ is certainly true. On the other hand, MHD processes are generally also stabilized when the resistivity is high. The classical MRI, for example, is banished to unfeasibly long wavelengths in the presence of strong Ohmic dissipation. Consequently, as noted in the Introduction, one might be tempted to conclude that adding a very small negative entropy gradient to a magnetised resistive disk would make very little difference to the axisymmetric stability. In fact, it makes a dramatic difference. Consider two fluid elements at different vertical locations embedded in a negative radial entropy gradient. A weak magnetic field tethers these two elements to one another. When these elements are radially displaced, they will try to bring the magnetic field with them, but the field diffuses because of resistivity (which would quell the classical MRI). Nevertheless, because of the transient magnetic torque, some finite amount of angular momentum will have been exchanged between the two elements in the process. As a consequence, each element will settle into a new orbit at a new radial location --- and, most critically, in a \emph{new entropy environment}. If thermal diffusion is too slow to equilibrate the temperature of the displaced fluid with its surroundings, the elements will continue to move radially, because they are now buoyantly unstable. Subsequently, the diffusing field will ensure that these elements will magnetically connect to new fluid parcels, and further angular momentum will be exchanged. Thus the cycle continues and an instability is at hand: an instability that depends upon the transfer of angular momentum between magnetized fluid elements, yet whose seat of free energy is not the shear, but rather the unstable thermal stratification. 
Note that this instability is double-diffusive, relying on the speedy diffusion of angular momentum (accomplished by diffusing magnetic field) and the slow (or negligible) diffusion of heat. This field diffusion counters the stabilising tendency of an angular momentum gradient by breaking the constraint of specific angular momentum conservation. Though buoyantly driven, the instability clearly has features in common with the MRI. On sufficiently small vertical wavenumbers (sufficiently spaced fluid elements) the influence of resistive diffusion will be minor and we recover the MRI. In fact, we will show that the double-diffusive instability and the MRI are, in effect, two sides of {\em the same} instability. In the presence of a negative radial entropy gradient, one of the modes that emerges is, at small wavenumbers, the classical MRI and, at large wavenumbers, the double diffusive instability. The instability is suppressed only at extremely large wavenumbers, when thermal conduction becomes important. Thus the MRI and the resistive double-diffusive instability are intimately linked. In the following sections this idea is explored using a number of approaches, mainly within the framework of the linearised Boussinesq equations of resistive MHD. (An important omission, which we will examine in a later study, is the electromotive force associated with the Hall term in the induction equation.) The local modes under consideration satisfy a fifth-order dispersion relation, which we will study in some detail. It is possible to obtain a number of clean numerical and analytical results from which further physical insights emerge. \subsection{Model equations} Our model employs the resistive MHD equations, which comprise the continuity, momentum, and entropy equations, and the magnetic induction equation with Ohmic diffusion. As noted, the Hall effect is also of practical importance, but it is omitted in this first analysis. Ohmic resistivity will exceed viscosity and thermal diffusion by several orders of magnitude, hence the viscous stress is dropped. At least initially, however, it is helpful to retain the thermal diffusion term, which we model as the divergence of a radiative energy flux. The equations are \begin{align} \frac{\d\rho}{\d t} + \v\cdot\nabla\rho &= -\rho\nabla\cdot\v, \\ \frac{\d\v}{\d t}+\v\cdot\nabla\v & = -\frac{1}{\rho}\nabla\left(P + \frac{B^2}{8\pi}\right) - \nabla\Phi + \frac{(\ensuremath{\mathbf{B}}\cdot\nabla)\ensuremath{\mathbf{B}}}{4\pi\rho} \\ \frac{\d\ensuremath{\mathbf{B}}}{\d t} +\v\cdot\nabla\ensuremath{\mathbf{B}} &= \ensuremath{\mathbf{B}}\cdot\nabla\v - \ensuremath{\mathbf{B}}\nabla\cdot\v+\eta\,\nabla^2\ensuremath{\mathbf{B}}, \\ E\,\left(\frac{\d \sigma}{\d t} +\v\cdot\nabla \sigma\right) &= - \nabla\cdot \mathbf{F} + \eta|\nabla\times\ensuremath{\mathbf{B}}|^2.\label{mark} \end{align} Here $\v$ is the velocity, $E$ the internal energy density, $\Phi$ the gravitational potential, $\ensuremath{\mathbf{B}}$ the magnetic field, and $\sigma=\text{ln}P\rho^{-\gamma}$ is the entropy function. The resistivity is represented by $\eta$, and the radiative energy flux density by $\mathbf{F}$ which is defined through \begin{equation} \mathbf{F}= -\frac{16\,\sigma_B\,T^3}{3\,\rho\,\kappa_\text{op}}\,\nabla T, \end{equation} where $T$ is temperature, $\kappa_\text{op}$ is opacity, and $\sigma_B$ is the Stefan-Boltzmann constant (the latter two are not to be confused with the epicyclic frequency $\kappa$ and the entropy function $\sigma$). 
Finally, the equation of state is that of an ideal gas, which gives \begin{equation} E= \frac{1}{\gamma-1}\,P. \end{equation} In this study, we restrict the analysis to current-free initial equilibria, i.e.\ $\nabla\times\ensuremath{\mathbf{B}}=0$, and the last term of equation (\ref{mark}) will not be used. \subsection{Linearised equations} It is assumed that the disk is in near Keplerian equilibrium, with a small pressure gradient supplementing the star's gravity in the radial force balance. The equilibrium is $\v_0= R\,\Omega(R)\,\ensuremath{\mathbf{e}_{\phi}}$, $\rho=\rho(R)$, $P= P(R)$ and $S=S(R)$. It is also assumed from the outset that the radial gradients of $P$ and $S$ are negative and relatively small. In addition, weak magnetic fields thread the disk, but they are too small to influence the equilibrium balances. For simplicity, the magnetic field takes a uniform vertical configuration: $\ensuremath{\mathbf{B}}=B_0\,\ensuremath{\mathbf{e}_{z}}$. In order to examine local disturbances, we adopt the shearing sheet model (Goldreich and Lynden-Bell 1965), which represents a small block of disk as a Cartesian box with $(x, y, z)$ denoting the radial, azimuthal, and vertical coordinates. Inside the box $\rho$, $dP/dR$, and $dS/dR$ are assumed to be constant. This equilibrium is perturbed by a planar axisymmetric Boussinesq disturbance, $\rho'$, $u'_x$, $u'_y$, $B'_x$, $B'_y$, proportional to $e^{i k z + s t}$, the linearised equations of which read \begin{align} & s\,\v' = 2\Omega_0\, u_y'\,\ensuremath{\mathbf{e}_{x}} -\frac{\kappa^2}{2\Omega_0}\, u_x'\,\ensuremath{\mathbf{e}_{y}} +\frac{\rho'}{\rho_0^2}\left(\frac{d P}{d R}\right)_0\ensuremath{\mathbf{e}_{x}} +\frac{i\,k\,B_0}{4\pi\rho_0}\,\ensuremath{\mathbf{B}}', \label{linu} \\ & s\,\ensuremath{\mathbf{B}}' = i\,k\,B_0\,\v' +\left(\frac{d\Omega}{d\ln R}\right)_0 B_x'\,\ensuremath{\mathbf{e}_{y}} -\eta\,k^2\,\ensuremath{\mathbf{B}}', \label{linB}\\ & s\,\rho' = \frac{\rho_0}{\gamma}\left(\frac{d S}{d R}\right)_0 u_x' -\xi k^2 \rho' , \label{linS} \end{align} where the subscript $0$ indicates evaluation at the point on which the shearing sheet is anchored. Hereafter, the subscript 0 will be dropped. The thermal diffusivity $\xi$ has been introduced, defined by \begin{equation} \xi = \frac{16\,\sigma_B (\gamma-1)\,T^4}{3\,\gamma\,\kappa_\text{op}\,\rho\,P}. \end{equation} Notice that in the entropy equation \eqref{linS}, we have used the relation \begin{equation}\label{Tpert} T'= -(T/\rho)\rho', \end{equation} which comes from the perturbed ideal gas law. The pressure perturbation is identically zero for our planar disturbances. But, in any case, Boussinesq perturbations are in near pressure equilibrium with their surroundings, and so the fractional pressure perturbation is negligible in the ideal gas law. 
\section{Analysis} Equations \eqref{linu}--\eqref{linS} present an algebraic eigenvalue problem for $s$, which yields the following dispersion relation \begin{equation} \label{axisdispdim} s^5 + a_4\, s^4 + a_3\, s^3 + a_2\,s^2 + a_1\,s + a_0 =0, \end{equation} with the coefficients \begin{align} a_4 &= 2\eta\,k^2 + \xi\,k^2, \\ a_3 &= N^2_R+\kappa^2+ 2 v_A^2\,k^2 + \eta^2\,k^4 + 2\eta\xi\,k^4, \\ a_2 &= 2N^2_R\,\eta\,k^2 + 2(\eta + \xi)v_A^2\,k^4 +(2\eta+\xi)k^2\,\kappa^2 +\eta^2\xi\,k^6, \\ a_1 &= v_A^2\,k^2\left(\frac{d\Omega^2}{d\ln R}\right) + v_A^4\,k^4 + (\eta^2\,k^2 + v_A^2)\,k^2\,N^2_R +(2\eta\xi + \eta^2)\,k^4\,\kappa^2 + 2\eta\xi v_A^2\,k^6, \\ a_0 &= N^2_R\,\eta\,v_A^2\,k^4+ \xi\,v_A^4\,k^6 + \eta^2\xi\,k^6\,\kappa^2 + \xi\,v_A^2\,k^4\left( \frac{d\Omega^2}{d\ln R}\right), \label{a0} \end{align} where the Alfv\'en speed is defined through $v_A= B_0/\sqrt{4\pi\rho}$. Equation \eqref{axisdispdim} is a special case of the dispersion relation calculated by Menou et al.~(2004), which also accounts for viscosity, vertical variation in $\Omega$, $P$, and $S$, and radial wavenumbers. On the other hand, by taking $\eta=\xi=0$ we recover the ideal MHD relation of Balbus (1995), which yields the two convective modes and the two convective-MRI modes. The dispersion relation in this case is fourth order, which means that the inclusion of magnetic diffusion (and thermal diffusion) leads to the emergence of a new fifth mode. From simply inspecting the sign of the last coefficient $a_0$ it is already clear that the system possesses novel stability behaviour when $\xi$ is small. If we consider scales upon which the thermal diffusion has little influence, the stability discriminant is the first term on the right side of \eqref{a0}. It follows that stability is linked to the sign of $N^2_R$, which means that the Schwarzschild criterion is recovered --- \emph{the stabilising effect of rotation has vanished}. Notice also that the discriminant combines the three elements essential to the new instability: resistivity, entropy stratification, and magnetic fields. If any one of these is missing, $a_0$ vanishes and the system's stability properties revert to the familiar situation of Balbus (1995), which is governed by the coefficient $a_1$. In the following subsections we adopt a number of approaches in order to describe the mathematics and physics of this double-diffusive mode more fully. First, we briefly discuss the main parameters that appear in the model and estimate realistic values they may take in protostellar disks. Next we solve the dispersion relation Eq.~\eqref{axisdispdim} numerically, and obtain an analytical asymptotic result in the regime of interest. Following that, we show how the mode relies on a generalisation of `magnetostrophic balance' in order to function. Finally, we give a more physical explanation of the mechanism of instability. \subsection{Fiducial parameters and lengthscales} Time and wavenumber units are chosen so that $\Omega=1$ and the Alfv\'en wavenumber, defined through $k_A = \Omega/v_A$, is also $1$. The Alfv\'en length is denoted by $l_A$ and is defined by $l_A=2\pi/k_A$. Consequently, the governing parameters of the dispersion relation comprise the scaled squared BV frequency $N^2_R/\Omega^2$, the scaled squared epicyclic frequency $\kappa^2/\Omega^2$, the Roberts number $q$, and the Elsasser number $\Lambda$. The latter two are defined by \begin{equation} \label{RobandElsie} q = \frac{\xi}{\eta}, \qquad \Lambda = \frac{v_A^2}{\eta\,\Omega}. 
\end{equation} In order to estimate $q$ we use the following formulas for the resistivity and thermal diffusivity \begin{align}\label{eta} \eta &= 2.34\times 10^{17}\,\left(\frac{T}{100\,\text{K}}\right)^{1/2}\left(\frac{10^{-14}}{x_e}\right)\,\text{cm}^2\,\text{s}^{-1}, \\ \xi &= 2.39\times 10^{12}\,\left(\frac{\rho}{10^{-9}\,\text{g}\,\text{cm}^{-3}}\right)^{-2}\left(\frac{T}{100\,\text{K}} \right)^3 \,\left(\frac{\kappa_\text{op}}{\text{cm}^2\,\text{g}^{-1}} \right)^{-1}\,\text{cm}^2\,\text{s}^{-1}, \end{align} where $x_e$ denotes the electron fraction (Blaes and Balbus 1994). In the latter expression we have used $\gamma=7/5$, and endowed the gas with a mean molecular weight of 2.3. According to the commonly used minimum mass model, at a few AU the disk may be characterised by $T\sim 100$K and $\rho\sim 10^{-9}\,\text{g}\,\text{cm}^{-3}$ (Hayashi et al.~1985), while at these low temperatures $\kappa_\text{op}\sim 1\,\text{cm}^2\text{g}^{-1}$ (Henning and Stognienko 1996). The ionisation fraction $x_e$, on the other hand, is poorly constrained and extremely sensitive to dust grain size and abundance, the sources of ionisation, and vertical location. Various models show it ranging over many orders of magnitude: from $10^{-13}$ to values as low as $10^{-19}$ (Sano et al.~2000, Ilgner and Nelson 2006, Wardle 2007, Turner and Drake 2009). Given this uncertainty, we estimate the Roberts number by $q\sim x_e/10^{-9}$. This suggests an upper limit for $q$ of $10^{-4}$ and an extreme lower limit of $10^{-10}$. The Elsasser number requires knowledge of the strength of the latent magnetic field, which we parameterise by the plasma beta: \begin{equation} \beta=\frac{ 2\,c^2}{v_A^2}, \end{equation} where $c$ is the gas sound speed. This gives \begin{equation}\label{lamm} \Lambda \sim \frac{1}{\beta}\,\left(\frac{H^2\Omega}{\eta}\right) \end{equation} with $H= c/\Omega$ the disk scale height. The bracketed term in Eq.~\eqref{lamm} is the magnetic Reynolds number, for which we can obtain a rough estimate using Eq.~\eqref{eta} and assuming $H=0.1R$ and $R\approx 5$ AU. For $T\sim 100\,$K, this gives $\Lambda\sim \beta^{-1}(x_e/10^{-15})$. So one may expect $\Lambda$ to vary between $10^{-1}$ and $10^{-6}$ or even lower. Finally, it is necessary to estimate the squared BV frequency $N^2_R$, which represents the entropy gradient from which the double-diffusive modes derive their energy. Most $\alpha$-disk models in fact yield a positive squared BV frequency (Balbus and Hawley 1998), but this is because they omit the central star's radiation field, which will tend to drive a negative radial temperature gradient. However, it is difficult to quantify how this central heating is mediated by the turbulent inner radii of the disk, and the dead-zone itself, and so it is unclear which estimates one should use for the magnitude of $N^2_R$. We simply assume that $N^2_R/\Omega^2$ takes values between $-0.1$ and $-0.01$. Because of the pressure gradient the background flow $\Omega$ is not strictly Keplerian; however, the deviation should be minimal and so we set $\kappa\approx\Omega$. The Alfv\'en length scales as $$l_A\sim \beta^{-1/2}H.$$ Therefore modes that possess a wavelength comparable to or smaller than the Alfv\'en length may fit comfortably into the disk when the fields are sufficiently sub-equipartition. In contrast, the resistive length, defined by $l_\eta= \eta/v_A$, scales like $$l_\eta\sim \Lambda^{-1}l_A.$$ There can be no classical MRI on scales less than $l_\eta$. 
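Before continuing with the ordering of lengthscales, the estimates above can be collected in a short numerical sketch; the fiducial values (and the assumed $\beta$, $x_e$, and orbital frequency at $\sim 5$\,AU) are illustrative only:
\begin{verbatim}
# Illustrative evaluation of the fiducial estimates above (cgs units).
T, rho, kappa_op = 100.0, 1e-9, 1.0        # K, g cm^-3, cm^2 g^-1
x_e = 1e-14                                # electron fraction (highly uncertain)
beta = 1e3                                 # plasma beta (assumed)
R = 5.0 * 1.5e13                           # radius ~ 5 AU in cm
H = 0.1 * R                                # scale height, assuming H = 0.1 R
Omega = 1.8e-8                             # approx. Keplerian frequency at 5 AU, s^-1

eta = 2.34e17 * (T/100.0)**0.5 * (1e-14/x_e)               # resistivity formula above
xi  = 2.39e12 * (rho/1e-9)**-2 * (T/100.0)**3 / kappa_op   # thermal diffusivity formula above
q   = xi / eta                             # Roberts number, ~ x_e / 1e-9
Lam = (H**2 * Omega / eta) / beta          # Elsasser number via the plasma beta
print(q, Lam)
\end{verbatim}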
Using the above estimate for $\Lambda$ we derive $l_\eta\sim \beta^{1/2}(10^{-15}/x_e)\,H$. Then, for realistic parameter values $\beta>10^2$ and $x_e<10^{-14}$, the following ordering is established: \begin{equation} \label{ordering} \lambda\,\text{(MRI)}\,>\,\, l_\eta\, >\, H\, >\, l_A\,. \end{equation} Thus the classical MRI modes will not occur in the main body of the disk in most cases, giving rise to a dead-zone. The double-diffusive instability, on the other hand, will be present on scales from $l_\eta$ down to a critical length $l_c$ upon which radiative diffusion becomes important. This critical length can be estimated by setting the constant coefficient $a_0$ in \eqref{a0} to zero. We find \begin{equation}\label{kc} l_c\sim q^{1/2}\Lambda^{-1}\,\frac{\Omega}{|N_R|}\,l_A\,\sim \beta^{1/2} \left(\frac{\Omega}{|N_R|}\right)\left(\frac{10^{-21}}{x_e}\right)^{1/2}\,H. \end{equation} Unless magnetic fields and the entropy gradient are exceptionally weak, this scale $l_c$ should be less than the scale height. We conclude that unstable double-diffusive modes will generally fit into the dead-zone. \subsection{Numerical solutions} In this section, the dispersion relation \eqref{axisdispdim} is solved numerically. Once the parameters $\Lambda$, $\kappa^2/\Omega^2$, $N^2_R/\Omega^2$, and $q$ are stipulated, we may compute the growth rate $s$ as a function of vertical wavenumber $k$. Throughout we set $q$ to be a constant, $q=10^{-6}$, and $\kappa/\Omega=1$, while varying $\Lambda$ and $N^2_R/\Omega^2$. Hence, the Elsasser number $\Lambda$ may be interpreted as a surrogate for magnetic field strength. The results of this subsection are limited to the MRI/double-diffusive mode, which is the only mode that grows, and we neglect the other four decaying modes. \begin{figure} \scalebox{0.7}{\includegraphics{nsols1.eps}} \caption{Growth rate $s$ of the unstable MRI/double-diffusive mode as a function of $k$ for three illustrative cases. The purple dashed-dotted line represents the ideal MHD configuration $\Lambda=\infty$ with $N^2_R/\Omega^2=-0.1$. Here the unstable mode is the convective-MRI and it is extinguished when $k/k_A\gtrsim \sqrt{3}$. The red dashed line represents resistive MHD in the absence of a radial entropy gradient; here $\Lambda=1$ and $N^2_R=0$. Instability is extinguished on wavelengths roughly below the resistive scale. The solid blue line represents the case with both resistivity and the negative entropy gradient: $\Lambda=1$ and $N^2_R/\Omega^2=-0.1$. As is plain, instability continues into short scales; this is the double-diffusive instability. In the non-ideal case, the ratio of thermal to resistive diffusivity is set to $q=10^{-6}$.} \end{figure} Figure 1 illustrates the remarkable stability behaviour unlocked by the combination of resistivity and a negative entropy gradient. The purple dotted-dashed curve represents the MRI in ideal MHD ($\eta=0$ but with $N^2_R\neq 0$) and the red dashed curve represents the MRI in resistive MHD but with no radial stratification ($\eta\neq 0$ and $N^2_R=0$). In both cases instability is extinguished on relatively long scales: near an Alfv\'en length in the former case, and near the resistive length in the latter case. (Because here $\Lambda=1$ the two scales are comparable.) However, when resistivity \emph{and} stratification are present ($\eta\neq 0$ and $N^2_R\neq 0$) instability extends to much shorter scales, as witnessed by the blue solid curve in Fig.~1. We identify this `extension' of the MRI as the double-diffusive instability. 
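For reproducibility, growth rate curves such as those in Fig.~1 (and Figs 2 and 3 below) can be generated by solving the quintic \eqref{axisdispdim} directly. The following minimal sketch, written in units with $\Omega = k_A = 1$ and assuming Keplerian shear ($d\Omega^2/d\ln R = -3\Omega^2$), simply transcribes the coefficients $a_0$--$a_4$ and extracts the fastest-growing root; the default parameters correspond to the solid curve of Fig.~1.
\begin{verbatim}
import numpy as np

def growth_rate(k, Lam=1.0, q=1e-6, NR2=-0.1, kappa2=1.0, Omega=1.0, vA=1.0):
    """Largest Re(s) from the quintic dispersion relation, units Omega = k_A = 1."""
    eta = vA**2 / (Lam * Omega)            # resistivity from the Elsasser number
    xi = q * eta                           # thermal diffusivity from the Roberts number
    dOm2 = -3.0 * Omega**2                 # Keplerian: d(Omega^2)/dln R
    a4 = 2*eta*k**2 + xi*k**2
    a3 = NR2 + kappa2 + 2*vA**2*k**2 + eta**2*k**4 + 2*eta*xi*k**4
    a2 = (2*NR2*eta*k**2 + 2*(eta + xi)*vA**2*k**4
          + (2*eta + xi)*k**2*kappa2 + eta**2*xi*k**6)
    a1 = (vA**2*k**2*dOm2 + vA**4*k**4 + (eta**2*k**2 + vA**2)*k**2*NR2
          + (2*eta*xi + eta**2)*k**4*kappa2 + 2*eta*xi*vA**2*k**6)
    a0 = (NR2*eta*vA**2*k**4 + xi*vA**4*k**6 + eta**2*xi*k**6*kappa2
          + xi*vA**2*k**4*dOm2)
    return np.roots([1.0, a4, a3, a2, a1, a0]).real.max()

ks = np.logspace(-2, 4, 400)               # wavenumbers in units of k_A
s = [growth_rate(k) for k in ks]           # should follow the solid curve of Fig. 1
\end{verbatim}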
The MRI transforms smoothly into the new instability upon a transitional scale comparable to $l_\eta$; both instabilities occur on the \emph{same branch} of the dispersion relation. When the MRI mechanism is quenched (because of resistivity), the double-diffusive mechanism takes its place. Note also that the growth rate of the latter is effectively independent of wavenumber for a broad range of $k$. \begin{figure} \scalebox{0.7}{\includegraphics{nsols2.eps}} \caption{Growth rates of the double-diffusive mode $s$ as a function of $k$ for different Elsasser numbers $\Lambda$. The blue line represents the $\Lambda=0.1$ case, the purple line, $\Lambda=0.05$, and the red line, $\Lambda=0.01$. The squared BV frequency is $N^2_R/\Omega^2=-0.1$ for these cases and the Roberts number is $q=10^{-6}$. At very long scales, $k\ll k_\eta\sim\Lambda k_A$, we recover the familiar MRI mode, but as $k$ approaches $k_\eta$ this mode morphs into the double-diffusive mode which persists until $k_c$.} \end{figure} \begin{figure} \scalebox{0.7}{\includegraphics{nsols3.eps}} \caption{Growth rates of the double-diffusive mode $s$ as a function of $k$ for different squared BV frequencies $N^2_R$. The blue line represents the $N^2_R/\Omega^2=-0.1$ case, and the red line, $N^2_R/\Omega^2=-0.01$. The Elsasser number is $\Lambda=0.1$ and the Roberts number is $q=10^{-6}$.} \end{figure} In Figs 2 and 3, we present the behaviour of the growth rates as $\Lambda$ and $N^2_R$ vary. In both cases, the smaller $\Lambda$ and $N^2_R$, the smaller the double-diffusive growth rate. Numerically, it appears to scale as $$ \frac{s}{\Omega}\sim \left(\frac{-N^2_R}{\Omega^2}\right)\,\Lambda.$$ We show this analytically in the following subsection with an asymptotic analysis. Being the product of two small quantities, realistic growth rates will be much smaller than the orbital frequency, though still dynamically significant. For example, taking $\Lambda=10^{-3}$ and $N^2_R/\Omega^2=-10^{-2}$ furnishes a growth rate of $s\sim 10^{-5}\Omega$, and thus an e-folding time of some 15,000 orbits. The range of unstable wavenumbers associated with the double-diffusive instability is governed by the Roberts number $q$ and $N^2_R$. It extends from, roughly, the resistive wavenumber $k_\eta\sim\Lambda$ up to $k_c\sim |N_R/\Omega|\Lambda q^{-1/2}$ (in units of $k_A$). \subsection{Asymptotic analysis} In this section we derive an analytic expression for the growth rates in the double-diffusive regime: when wavelengths are much less than the resistive scale $l_\eta$ but much longer than the cutoff scale $l_c$. These modes are thus heavily influenced by electrical diffusion but are relatively insensitive to radiative diffusion. An asymptotic analysis of very long wavelength modes, which connects this paper to the local stability results of Balbus (1995), is presented in Appendix A. In order to calculate modes in the double-diffusive regime, we set their lengthscale to a value of order the Alfv\'en length, i.e.\ $k\sim k_A$. In addition, a small scaling parameter $\epsilon$ is introduced so that $0<\epsilon\ll 1$. From the outset we assume that both $\Lambda$ and $N^2_R/\Omega^2$ are equally small and of order $\epsilon$. Subsequently, the growth rate $s$ is expanded in powers of $\epsilon$, substituted into the dispersion relation, and then terms of equal order are collected. This procedure permits the calculation of the leading order terms of all five modes. 
We find that the two hydrodynamic convective modes oscillate at the epicyclic frequency at leading (zeroth) order and decay at a rate $-\Lambda\,\Omega$. There exist two other modes, on the other hand, which both decay at the rate $-\Lambda^{-1}\,(k/k_A)^2\,\Omega$. The fifth mode is the double-diffusive mode and it possesses a growth rate that scales like $\epsilon^2$. It can be determined by balancing the last two terms in the dispersion relation Eq.~\eqref{axisdispdim}. We find, to leading order, \begin{equation} \label{dds} s= -\frac{N^2_R}{\kappa^2}\,\Lambda\,\Omega. \end{equation} Stability of the mode is governed by the classical Schwarzschild criterion. So when $N^2_R<0$, the mode grows. Note also that the growth rate is independent of the wavenumber $k$ in this regime. The expression \eqref{dds} is, in fact, quite general and holds throughout most of the double-diffusive range of $k$, as confirmed by the numerical solution (Figs 2 and 3). When $k$ becomes small, however, the analysis breaks down and the growth rate connects to the classical-MRI curve. And when $k$ is very large, thermal diffusion stabilises the mode. \subsection{Magnetostrophic balances} In this subsection we characterise the instability in a physically illuminating way, by showing how it relies on certain steady balances. While the resistive MRI (or magnetostrophic MRI) works by balancing the Lorentz and inertial forces in the momentum equation (as discussed in Petitdemange et al.~2008), the double-diffusive instability requires \emph{in addition} the balancing of advection and diffusion in the induction equation. Recognising this fact helps simplify some of the algebra of the previous section, and adds support to the previous asymptotic estimate. \subsubsection{Balancing the momentum equation} Consider the linearised momentum equation \eqref{linu}. Assume the left-hand side is subdominant. This means that to leading order we have a steady balance between the Lorentz and inertial forces (magnetostrophic balance), also relevant to the Earth's core (Petitdemange et al.~2008). By keeping the growth rate explicit in the induction and entropy equations, but setting $\xi=0$, we may derive a (reduced) cubic dispersion relation, \begin{align} &\kappa^2\,s^3 + 2\kappa^2 \eta\,k^2\,s^2 + \left[(N^2_R+\widetilde{\Omega}^2)v_A^2\,k^2+ (v_A^4+\kappa^2\eta^2)k^4 \right]s +N^2_R\,\eta\,v_A^2\,k^4 \,=\,0, \end{align} with $ \widetilde{\Omega}^2 = (d\Omega^2/d\ln R)$. While an approximation, this equation captures the physical essence of the two MRI modes and the double-diffusive mode. We may obtain an analytic expression by solving for $k^2$, which, after inversion, yields the growth rates of these modes. For purely real $s$: \begin{align} k^2_{\pm} &= -\frac{2\kappa^2\eta\, s^2+(N^2_R+\widetilde{\Omega}^2)v_A^2\,s} {2(v_A^4s+\eta N^2_R v_A^2 + \kappa^2\eta^2 s)} \pm\frac{v_A\,s\sqrt{4\kappa^2\eta\widetilde{\Omega}^2s +(N^2_R+\widetilde{\Omega}^2)^2\,v_A^2 - 4\kappa^2\,v_A^2\,s^2 }} {2(v_A^4s+\eta N^2_R v_A^2 + \kappa^2\eta^2 s)}. \label{cubic} \end{align} \begin{figure} \scalebox{0.7}{\includegraphics{NSOL4.eps}} \caption{Growth rate of the purely real unstable mode $s$ as a function of $k$ as determined from the inverse of \eqref{cubic}. The red dashed line indicates the plus branch and the solid blue line the minus branch in \eqref{cubic}. The dotted black curve represents the full solution to the fifth-order dispersion relation \eqref{axisdispdim}. 
The parameters chosen are $N^2_R/\Omega^2=-0.1$, $\Lambda=0.1$, $\widetilde{\Omega}^2/\Omega^2=-3$, $\kappa/\Omega=1$.} \end{figure} There are thus two branches, which we plot in Fig.~4 for representative parameters. The solid blue line indicates the `minus' branch and the dashed red line the `plus' branch. In addition, we plot the full solution to the fifth-order dispersion relation \eqref{axisdispdim} (the dotted black line). As is clear, the two branches of the reduced magnetostrophic solution present a reasonable approximation to the actual MRI and double-diffusive modes for all $k$. This strengthens the claim that magnetostrophic balance is central to the mechanism of instability in both cases. \subsubsection{Balancing the induction equation} It is also instructive to consider the limit in which the time derivatives are negligible in the induction equation as well. We thus set the left-hand side of the linearised equation \eqref{linB} to zero. This means that in addition to the steady balance between the Lorentz force and the inertial forces in the momentum equation, there is a steady balance between magnetic diffusion and magnetic advection/distortion. This leaves only one time-derivative in the linearised equations \eqref{linu}--\eqref{linS}. As a result, the growth rate is straightforward to calculate: \begin{equation} \label{rmsbal} s= \frac{-N^2_R\,\eta\,v_A^2\,k^2}{\widetilde{\Omega}^2 v_A^2 + (v_A^4+\eta^2\kappa^2)\,k^2}. \end{equation} This expression yields the (scaled) asymptotic growth rate of the double-diffusive instability \eqref{dds} in the limit of large resistivity and for $k \gtrsim \Omega/v_A$. On the other hand, for sufficiently small $k$ we obtain the growth rate of a decaying purely resistive mode described in the Appendix by Eq.~\eqref{resis}. These two modes, however, occur on different solution branches and the transition between them, in the above, is discontinuous. The two steady balances that the double-diffusive mode exploits may be understood as a cancelling of the forces associated with the magnetic field and differential rotation. Differential rotation will attempt to stretch out a field line, and magnetic diffusion will relax the ensuing magnetic tension. In the meantime, however, what magnetic tension is generated can counterbalance the inertial forces in the momentum equation. This is important because these inertial forces would otherwise stabilise the mode. \subsubsection{Angular momentum and heat transport} The solution obtained in the previous subsection presents a convenient framework to compute the angular momentum and heat fluxes generated by a single linear mode. These may offer an insight into the direction of these fluxes in the saturated nonlinear state. We define the thermal heat flux associated with a single Fourier mode of the double-diffusive instability as \begin{equation} \mathcal{F}_\text{T} = 2\,\text{Re}(u_x'\, \overline{T}') = -2\,\text{Re}(u_x'\,\overline{\rho}'), \end{equation} where the overline indicates the complex conjugate. From the linearised equations, with $s$ negligible in the momentum and induction equations, we obtain \begin{equation} \mathcal{F}_\text{T} = 2\, |u_x'|^2\,\left(\frac{-\rho\,\d_R S}{\gamma\,s}\right). \end{equation} With $s$ real and positive, as follows from Eq.~\eqref{rmsbal} for a negative radial entropy gradient, the double-diffusive mode always transports heat outwards, at a rate proportional to the gradient itself. 
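As a simple numerical cross-check of this sign argument, the following minimal sketch (illustrative only) evaluates the reduced growth rate \eqref{rmsbal} for the parameters of Fig.~4, assuming the standard identification $\Lambda=v_A^2/(\eta\,\Omega)$ and units in which $\Omega=v_A=1$; it confirms that $s>0$ throughout $k\gtrsim k_A$ and that $s$ approaches the asymptotic value \eqref{dds} there.
\begin{verbatim}
# Minimal sketch: reduced growth rate s(k) of Eq. (rmsbal), units Omega = v_A = 1.
# Assumption: Elsasser number Lambda = v_A^2/(eta*Omega); parameters as in Fig. 4.
import numpy as np

Omega, vA, kappa = 1.0, 1.0, 1.0
NR2   = -0.1                 # N_R^2/Omega^2 (negative radial entropy gradient)
Lam   = 0.1                  # Elsasser number
Otil2 = -3.0                 # d(Omega^2)/d ln R, Keplerian-like shear
eta   = vA**2/(Lam*Omega)    # resistivity implied by Lambda

def s_reduced(k):
    """Growth rate obtained from the steady momentum and induction balances."""
    return -NR2*eta*vA**2*k**2/(Otil2*vA**2 + (vA**4 + eta**2*kappa**2)*k**2)

s_asym = -NR2/kappa**2*Lam*Omega           # asymptotic rate, Eq. (dds)
for k in (1.0, 3.0, 10.0, 30.0):           # k in units of k_A = Omega/v_A
    print(k, s_reduced(k), s_asym)         # s(k) is positive and tends to 0.01
\end{verbatim}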
This outward transport of heat is as expected: the mode will act against the destabilising condition from which it arose. We define the angular momentum flux as \begin{equation} \mathcal{F}_\text{h} = 2\left[\text{Re}(u_x'\,\overline{u}_y') -\frac{1}{4\pi\,\rho_0}\,\text{Re}(B_x'\,\overline{B}_y') \right]. \end{equation} Using the eigenfunction of the double-diffusive instability, this can be reexpressed as \begin{equation} \mathcal{F}_\text{h} = -|u_x'|^2\left[\frac{1}{\Lambda}\left(\frac{\kappa^2}{\Omega^2}\right) - 4\Lambda\,\frac{\Omega^2}{v_A^2\,k^2}\right]. \end{equation} The double-diffusive mode operates at wavenumbers $k\gtrsim \Omega/v_A$, and we take $\Lambda$ to be small. Thus the first term (associated with the Reynolds stress) in the square brackets will dominate the second, which ensures that angular momentum is transported inwards. However, for very small $k$, i.e.\ for the decaying resistive mode, the situation is reversed. \begin{figure} \scalebox{0.7}{\includegraphics{cartoon.eps}} \caption{Four panels showing schematically the physical mechanism of the double-diffusive instability. Panel (a) presents two fluid blobs, embedded in a negative entropy gradient: $S_{+2}>S_{+1}>S_0>S_{-1}>S_{-2}$. The fluid elements are separated vertically but connected by a magnetic tether. The following panels, in order, show the development of a horizontal perturbation in time, under the influence of speedy angular momentum transport (by the diffusing magnetic field) and the radial entropy gradient. See Section 3.5.} \end{figure} \subsection{Physical description of the unstable mode} Figure 5 presents a schematic diagram of how the double-diffusive instability works in a physically intuitive way, developing the ideas put forward at the beginning of Section 2. Panel (a) shows two fluid blobs at different vertical locations but at the same radial location (the violet circles). They are connected by a vertical magnetic field line. The two blobs are embedded in a negative radial entropy gradient, but they possess the same entropy $S_0$ as each other, because they inhabit the same radius. Fluid at smaller radii (to their left) possesses greater entropy, $S_{+1}$ and $S_{+2}$, and fluid at greater radii (to their right) possesses lesser entropy, $S_{-1}$ and $S_{-2}$. In panel (b) the blobs are perturbed horizontally, initially deforming their magnetic tether. Because of the large resistivity, the blobs will slip through the magnetic field, and so the magnetorotational instability does not set in. Nevertheless, the field will permit some amount of angular momentum to be exchanged between the two blobs and they adopt new orbits at their new radii. But the displaced fluid blobs now possess a different entropy to their surroundings: too little entropy in the case of the upper blob ($S_0<S_{+1}$), and too much in the case of the lower ($S_0>S_{-1}$). Unless thermal diffusion acts rapidly to equilibrate them with their ambient gas, they cannot sustain their current positions. Instead they will continue moving radially (pink arrows). This motion will bring them into contact with new diffusing magnetic field lines, panel (c), which will tether them to new fluid partners (the grey circles). Now the new magnetic field lines will facilitate another transient torque, which will transfer more angular momentum. As earlier, the violet blobs adopt new radii but the disparity between their entropy and their environments' entropy is even larger than before, panel (d), and they are compelled to continue drifting. 
The cycle continues, leading to instability. Notice that the cycle incorporates the $B$ field, resistivity, and $dS/dR<0$, three ingredients which are represented in the stability discriminant in $a_0$, Eq.~\eqref{a0}. The essential features of the instability are: the suppression of stabilising rotation by the diffusing magnetic field, and slow (or negligible) thermal diffusion. This justifies calling the instability double-diffusive: angular momentum is diffused rapidly, eliminating the stable angular momentum stratification. But heat is diffused slowly, allowing the unstable entropy stratification to do its work unhindered. It can be shown that a sufficiently large viscosity can do the same job as the magnetic field here: and, in fact, a hydrodynamical disk in which $\xi/\nu\ll 1$ exhibits an analogous instability (where $\nu$ is kinematic viscosity). Consequently, we may interpret the instability as the `dual' of the Goldreich and Schubert instability (Goldreich and Schubert 1967), which relies on the negation of a stabilising entropy gradient, by rapid thermal diffusion, and the tapping of an unstable angular momentum gradient (unimpeded by the weak viscosity). The dead-zone double-diffusion instability is also related to salt-fingering in the ocean: and, in fact, the linear stage of a single unstable mode will be characterised by a vertical sequence of horizontal `fingers' or channel flows, much as in the MRI. However, these flows, unlike classical MRI, are not nonlinear solutions, and will quickly break down into some form of disordered mixing. Another important difference is that nearly all the unstable modes possess comparable growth rates, so no one mode will dominate. Noticeable channel flows, or fingers, are therefore unlikely to be observed in simulations. \section{Conclusion} This paper introduces a novel linear instability which occurs on short length-scales and which could be at work within the dead-zones of protostellar disks. The instability is double-diffusive and relies on (a) the energy provided by a weak negative entropy gradient, (b) a diffusing magnetic field to relax angular momentum conservation, and (c) very slow thermal diffusion. It grows on a range of lengthscales extending from the resistive length down to a length related to the thermal scale. Across this potentially large range the growth rate scales like $s\sim\Lambda\,(N^2_R/\Omega^2)\,\Omega$. Though the unstable mode will almost always physically fit into the dead-zone (unlike the MRI), it may grow quite slowly. Nevertheless it may give rise to dynamically interesting velocity amplitudes after some $10^4$ orbits (depending on the parameters). In principle, the instability's nonlinear saturation may play a significant role in the dynamics of dead-zones, leading, in particular, to radial and vertical mixing, which in turn may influence dust grain dynamics (settling and clumping) on one hand, and the global temperature structure of the disk, on the other. However, a number of issues need to be settled, and these form the basis for future work. First, the governing parameters of the system need to be better constrained, so that one can be assured the instability grows on dynamically meaningful timescales. This requires realistic estimates for the magnetic field strength, the radial entropy gradient, and the resistivity. 
Second, the linear analysis should be extended to the peculiar nonideal MHD conditions present in the interior of protostellar disks, where the instability will be altered by Hall effects and ionisation and dust grain chemistry. Third, the interaction between the instability and much faster processes in the active zones needs to be clarified. For instance, the turbulent motions in these regions may transport the large-scale magnetic field on a similar time to the growth time (Fromang and Stone 2009, Lesur and Longaretti 2009). On the other hand, rapid density waves, stimulated by the turbulence, may also interact with the instability (Fleming and Stone 2003, Heinemann and Papaloizou 2009a,b). Fourth, the instability's nonlinear saturation should be simulated numerically in order to establish whether it leads to sustained disordered flows. Finally, this instability is not restricted to protostellar disks, as the necessary ingredients for instability are rather generic. In particular, double-diffusive modes may be excited within the fluid cores of rapidly rotating planets and protoplanets, possibly modulating pre-existing dynamo activity, or perhaps directly instigating convection itself in bodies that spin so rapidly that regular buoyancy instabilities are quenched. These issues we address in future work. \section*{Acknowledgements} The authors would like to thank the anonymous referee whose critical suggestions improved the quality of the manuscript. HNL thanks Geoffroy Lesur, John Papaloizou, and Pierre Lesaffre for much useful advice. Funding from the Conseil R\'egional de l'Ile de France is acknowledged.
\section{Introduction and statement of results}\label{intro} The Julia set $J(f)$ of an entire function $f$ is defined as the set of all points in $\mathbb C$ where the iterates $f^{k}$ of $f$ do not form a normal family. An equivalent definition was given in~\cite{E}: $J(f)=\partial I(f)$ where $I(f)=\{ z:f^n(z)\to\infty\}$ is the set of escaping points; see~\cite{Ber93} for an introduction to the dynamics of entire and meromorphic functions. Devaney and Krych \cite{DK} showed that $J(\lambda e^z)$ is a ``Cantor bouquet'' for $0<\lambda<1/e$. To give a precise statement of their result we say that a subset $H$ of $\mathbb C$ (or $\mathbb R^d$) is a {\em hair} if there exists a continuous injective map $\gamma:[0,\infty)\to\mathbb C$ (or $\mathbb R^d$) such that $\lim_{t\to\infty}\gamma(t)=\infty$ and $\gamma([0,\infty))=H.$ We call $\gamma(0)$ the {\em endpoint} of the hair. The result of Devaney and Krych is the following. \begin{thma} Let $0<\lambda< 1/e$. Then $J(\lambda e^z)$ is an uncountable union of pairwise disjoint hairs. \end{thma} We denote by $\operatorname{dim} X$ the Hausdorff dimension of a set $X$ in $\mathbb C$ (or in~$\mathbb R^d$). The following result is due to McMullen~\cite[Theorem~1.2]{Mc}. \begin{thmb} For $\lambda\in\mathbb C\backslash\{0\}$ we have $\operatorname{dim} J(\lambda e^z)=2.$ \end{thmb} Karpi\'nska~\cite[Theorem~1.1]{K} proved the following surprising result. \begin{thmc} Let $0<\lambda<1/e$ and let $E_\lambda$ be the set of endpoints of the hairs that form $J(\lambda e^z)$. Then $\operatorname{dim} E_\lambda=2$ and $\operatorname{dim}(J(\lambda e^z)\backslash E_\lambda)=1$. \end{thmc} The conclusion of Theorem B holds more generally for entire functions of finite order for which the set of critical and asymptotic values is bounded; see~\cite[Theorem~A]{Bar08} and~\cite{Schubert07}. If, in addition, this set is compactly contained in the immediate basin of an attracting fixed point, then the conclusions of Theorems A and C also hold~\cite{Bar07,Bar08}. These results apply in particular to trigonometric functions. However, the analogue of Theorem A for trigonometric functions had been obtained already much earlier by Devaney and Tangerman~\cite{DT}. \begin{thmd} Let $0<\lambda< 1$. Then $J(\lambda\sin z)$ is an uncountable union of pairwise disjoint hairs. \end{thmd} McMullen~\cite[Theorem~1.1]{Mc} and Karpi\'nska~\cite[Theorem~3]{Karp1} also considered the case of trigonometric functions. Their results are as follows. Here $\operatorname{area} X$ stands for the Lebesgue measure of a measurable subset $X$ of~$\mathbb C$. \begin{thme} Let $\lambda,\mu\in\mathbb C,\;\lambda\neq 0$. Then $\operatorname{area} J(\lambda\sin z+\mu)>0$. \end{thme} \begin{thmf} For $0<\lambda<1$ let $E_\lambda$ be the set of endpoints of hairs that form $J(\lambda\sin z)$. Then $\operatorname{area} E_\lambda>0$. \end{thmf} The argument in~\cite{K} shows that under the hypothesis of Theorem~F we also have $\operatorname{dim}(J(\lambda \sin z)\backslash E_\lambda)=1$. The conclusions of Theorems D and F, as well as the last remark, hold more generally for functions of the form $f(z)=\lambda\sin z+\mu$ if the parameters are chosen such that the critical values $\pm\lambda+\mu$ of $f$ are contained in the immediate basin of the same attracting fixed point. If this condition on the critical values is not satisfied, then the hairs in the Julia set of $f$ may still exist, but in general distinct hairs may share their endpoints~\cite{RS}. 
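For orientation, the escaping set $I(\lambda\sin z)$ appearing in these results is easy to explore numerically. The following minimal sketch is illustrative only: flagging points whose orbits reach $|\operatorname{Im} z|>50$ within a fixed number of iterations is a crude heuristic for escape, not a proof, and none of the arguments below depend on it.
\begin{verbatim}
# Illustrative sketch: approximate escaping set of z -> lambda*sin(z), 0 < lambda < 1.
# Heuristic escape test only: |Im z| exceeding a large bound within max_iter steps.
import numpy as np

lam, max_iter, bound = 0.9, 60, 50.0
x = np.linspace(-np.pi, np.pi, 800)
y = np.linspace(-3.0, 3.0, 800)
Z = x[None, :] + 1j*y[:, None]
escaped = np.zeros(Z.shape, dtype=bool)
for _ in range(max_iter):
    Z = lam*np.sin(Z)
    escaped |= np.abs(Z.imag) > bound
    Z[escaped] = 0.0              # freeze flagged points to avoid overflow
print(escaped.mean())             # fraction of grid points flagged as escaping
\end{verbatim}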
If the critical values of $f(z)=\lambda\sin z+\mu$ are strictly preperiodic, then $J(f)=\mathbb C$. Schleicher~(\cite{S}, see also~\cite{S1}) showed that $J(f)$ is still a union of hairs which are pairwise disjoint except for their endpoints, and the Hausdorff dimension of the hairs without their endpoints is~$1$. Thus he obtained the following result. \begin{thmg} There exists a representation of the complex plane $\mathbb C$ as a union of hairs with the following properties: \begin{itemize} \item the intersection of two hairs is either empty or consists of the common endpoint; \item the union of the hairs without their endpoints has Hausdorff dimension~$1$. \end{itemize} \end{thmg} Zorich~\cite{Z} introduced a quasi\-regular analog $F:\mathbb R^3\to\mathbb R^3\backslash\{0\}$ of the exponential function. It was shown in~\cite{B} that the results about the dynamics of the exponential function quoted above (Theorems A, B and~C) have analogs in the context of Zorich maps. In this paper we introduce a higher dimensional analog of the trigonometric functions. The dynamics of this map are then used to extend Theorem G to all dimensions greater than~$1$. \begin{theorem} For each $d\in\mathbb N$, $d\geq 2$, there exists a representation of $\mathbb R^d$ as a union of hairs with the following properties: \begin{itemize} \item the intersection of two hairs is either empty or consists of the common endpoint; \item the union of the hairs without their endpoints has Hausdorff dimension~$1$. \end{itemize} \end{theorem} The construction of our higher dimensional analog of the trigonometric functions is similar to the construction of Zorich's map as given in~\cite[Section 6.5.4]{Iwaniec01}. We begin with a bi-Lipschitz map $F$ from the half-cube $$ \left\{x=(x_1,\dots,x_d)\in\mathbb R^d:\|x\|_\infty\leq 1, x_d\geq 0 \right\} =[-1,1]^{d-1}\times [0,1]$$ to the upper half-ball $$\{x\in\mathbb R^d:\|x\|_2 \leq 1,x_d\geq 0\}$$ which maps the face $[-1,1]^{d-1}\times \{1\}$ to the hemisphere $\{x\in\mathbb R^d:\|x\|_2 = 1,x_d\geq 0\}$. We will give an explicit construction of such a bi-Lipschitz map $F$ in Section~\ref{explicit}. Next we define $F:[-1,1]^{d-1}\times (1,\infty)\to\mathbb R^d$ by $$F(x)=\exp(x_d-1)F(x_1,\dots,x_{d-1},1).$$ The map $F$ is now defined on $[-1,1]^{d-1}\times [0,\infty)$, and it maps $[-1,1]^{d-1}\times [0,\infty)$ bijectively onto the upper half-space $H^+:=\{x\in\mathbb R^d:x_d\geq 0\}$. Using repeated reflections at hyperplanes we can extend $F$ to a map $F:\mathbb R^d\to\mathbb R^d$. It turns out that the map $F$ is quasi\-regular. However, we shall not actually use this fact. On the other hand, the quasi\-regularity of $F$ is one of the underlying ideas in the proofs, and thus we make some remarks about quasiregular maps in Section~\ref{quasi}. We also show there that our map $F$ is indeed quasiregular. We note that since $F$ is locally bi-Lipschitz, the restriction of $F$ to any line is absolutely continuous, and $F$ is differentiable almost everywhere. We denote by $$\|DF(x)\|:=\sup_{\| y\|=1}\| DF(x)(y)\|$$ the operator norm of the derivative $DF(x)$. (Here and in the following $\|y\|=\|y\|_2$ for $y\in\mathbb R^d$; that is, unless specified otherwise we consider the Euclidean norm in $\mathbb R^d$.) 
We also put $$\ell(DF(x)):=\inf_{\| y\|=1}\| DF(x)(y)\|.$$ We note that it follows from the definition of $F$ that if $x,x' \in (-1,1)^{d-1}\times (1,\infty)$ and $x_j=x_j'$ for $1\leq j\leq d-1$, then \begin{equation}\label{DFexp} DF(x')=\exp(x_d'-x_d) DF(x) \end{equation} whenever these derivatives exist. It is easy to see that $$\beta:=\operatorname*{ess\,inf}_{ x\in\mathbb R^d}\ell(DF(x))>0$$ for our map~$F$. We choose $\lambda >1/\beta$ and consider the map $f=\lambda F$. Clearly $f$ is quasi\-regular and \begin{equation} \label{1a} \alpha:=\operatorname*{ess\,inf}_{x\in\mathbb R^d}\ell(Df(x))=\lambda \beta>1, \end{equation} that is, $f$ is locally uniformly expanding in $\mathbb R^d$. We put $S:=\mathbb Z^{d-1}\times\{-1,1\}$ and for $r=(r_1,\dots,r_d)\in S$ we define $$T(r):=\{x\in\mathbb R^d:|x_j-2r_j|\leq 1 \text{ for } 1\leq j\leq d-1,\; r_dx_d\geq 0\}.$$ We find that if $$\sigma(r):=\sum_{j=1}^{d-1}r_j+\frac12(r_d-1)$$ is even, then $f$ maps $T(r)$ bijectively onto $H^+$. If $\sigma(r)$ is odd, then $f$ maps $T(r)$ bijectively onto $H^-:=\{x\in\mathbb R^d:x_d\leq 0\}$. For a sequence $\underline{s}=(s_k)_{k\geq 0}$ of elements of $S$ we put $$H(\underline{s}):=\left\{ x\in\mathbb R^d:f^k(x)\in T(s_k)\ \mbox{for all}\ k\geq 0 \right\}.$$ Evidently $\mathbb R^d=\bigcup_{\underline{s}\in \underline{S}} H(\underline{s})$, where $\underline{S}$ is the set of all sequences with elements in~$S$ for which $H(\underline{s})$ is not empty. \begin{proposition}\label{prop1} If $\underline{s}\in\underline{S}$ then $H(\underline{s})$ is a hair. \end{proposition} For $\underline{s}\in\underline{S}$ we denote by $E(\underline{s})$ the endpoint of $H(\underline{s})$. \begin{proposition}\label{prop2} If $\underline{s}'\neq\underline{s}''$ then $H(\underline{s}')\cap H(\underline{s}'')=\emptyset$ or $H(\underline{s}')\cap H(\underline{s}'')=\{E(\underline{s}')\}=\{E(\underline{s}'')\}$. \end{proposition} \begin{proposition}\label{prop3} $\operatorname{dim}\left( \bigcup_{\underline{s}\in \underline{S}} H(\underline{s})\backslash \{ E(\underline{s})\}\right)=1$. \end{proposition} Theorem 1 follows from these propositions. \section{Preliminaries} It follows from the definition of $F$ that $$\| F(x)\|= \exp(|x_d|-1),\quad x \in\mathbb R^d,\ |x_d|\geq 1,$$ so that \begin{equation}\label{2b} \| f(x)\|= \lambda \exp(|x_d|-1),\quad x \in\mathbb R^d,\ |x_d|\geq 1. \end{equation} For $r\in S$ we denote by $\Lambda^r$ the inverse function of $f|_{T(r)}$. Thus $\Lambda^r:H^+\to T(r)$ or $\Lambda^r:H^-\to T(r)$, depending on whether $\sigma(r)$ is even or odd. For $x\in T(r)$ and $y=f(x)$ we have $$\| D\Lambda^r(y)\|=\frac{1}{\ell(Df(x))}$$ and thus \begin{equation}\label{n1} \| D\Lambda^r(y)\|\leq \frac{1}{\alpha} \end{equation} by~\eqref{1a}. It follows from~\eqref{n1} that if $a,b\in T(r)$, then $$\| a-b\|=\left\|\Lambda^r(f(a))-\Lambda^r(f(b))\right\| \leq \frac{1}{\alpha}\| f(a)-f(b)\|.$$ Hence \begin{equation}\label{2a} \| f(a)-f(b)\|\geq\alpha\| a-b\|\quad\mbox{for}\ a,b\in T(r),\ r\in S. \end{equation} If $|x_d|\geq 1$ then we have $$\ell(Df(x))\geq \alpha \exp(|x_d|-1)=\frac{\alpha\| f(x)\|}{\lambda} =\beta \| f(x)\|$$ by~\eqref{DFexp}, \eqref{1a} and~\eqref{2b}. Note that the condition $|x_d|\geq 1$ is equivalent to $\|y\|\geq \lambda$. Thus \begin{equation}\label{n2} \| D\Lambda^r(y)\|\leq\frac{1}{\beta \| y\|},\quad y\in\mathbb R^d,\ \|y\|\geq \lambda. 
\end{equation} Similarly we deduce from~\eqref{DFexp} that there exists a positive constant $\delta$ such that \begin{equation}\label{n3} \ell\left( D\Lambda^r(y)(x)\right)\geq \frac{\delta}{\| y\|} ,\quad y\in\mathbb R^d,\ \|y\|\geq \lambda. \end{equation} We shall also need the following result. \begin{lemma}\label{lemma1} Let $\underline{s}=(s_k)_{k\geq 0}$ be an element of $\underline{S}$ and let $x,y\in H(\underline{s})$. For $k\geq 0$ we put $x^k=(x_1^k,\dots,x_d^k):=f^k(x)$ and $y^k=(y_1^k,\dots,y_d^k):=f^k(y)$. There exists $M>0$ with the following property: if \begin{equation} \label{2c} |y_d^k|>|x_d^k|+M \end{equation} for some $k\geq 0$ then $$ |y_d^{k+1}|>\frac{\lambda }{3}\exp{|y^k_d|}+M\geq 5|x_d^{k+1}|+M.$$ \end{lemma} \noindent {\em Proof.} We will denote by $p$ the projection $$p:\mathbb R^d\to\mathbb R^{d-1},\ (x_1,\dots,x_{d-1},x_d)\mapsto(x_1,\dots,x_{d-1}).$$ Since $|x^k_j-y^k_j|\leq 2$ for $1\leq j\leq d-1$ and all $k$ we have \begin{equation} \label{2c1} \left\| p(x^{k})- p(y^{k})\right\| \leq 2\sqrt{d-1} \end{equation} for all~$k$. Suppose now that (\ref{2c}) holds. Then using (\ref{2b}) and (\ref{2c1}) we obtain \begin{eqnarray*} \left|y^{k+1}_d\right|&\geq&\left\| y^{k+1}\right\| -\left\| p(y^{k+1})\right\|\\ &\geq & \lambda \exp\left({\left|y^k_d\right|}-1\right)- \left\| p(x^{k+1})\right\|-2\sqrt{d-1}\\ &\geq& \lambda \exp\left(\left|y^k_d\right|-1\right)- \lambda \exp\left({\left|x^k_d\right|}-1\right)-2\sqrt{d-1}\\ &\geq& \lambda \exp\left(\left|y^k_d\right|-1\right)- \lambda \exp\left({\left|y^k_d\right|}-M-1\right)-2\sqrt{d-1}\\ &=& \frac{\lambda}{e} \left(1-e^{-M}\right)\exp\left|y^k_d\right| -2\sqrt{d-1}. \end{eqnarray*} Noting that $|y^k_d|>M$ by (\ref{2c}) we find that if $M$ is sufficiently large then $$\left|y^{k+1}_d\right|\geq \frac{\lambda }{3}\exp{\left|y^k_d\right|}+M.$$ Since $$\frac{\lambda }{3}\exp{\left|y^k_d\right|}> \frac{\lambda }{3}e^M\exp{\left|x^k_d\right|} =\frac{\lambda e}{3}e^M\left\|x^{k+1}\right\| \geq \frac{\lambda e}{3}e^M\left|x^{k+1}_d\right| $$ the last inequality in the conclusion of the lemma also holds if $M$ is large. \section{Proof of the Propositions} \begin{proof}[Proof of Proposition 1] For a sequence $\underline{s}=(s_k)$ in $\underline{S}$ we have $$H(\underline{s})=\bigcap_{ k\geq 0} \left(\Lambda^{s_0}\circ\Lambda^{s_1}\circ \ldots\circ\Lambda^{s_k}\right)(T(s_{k+1})).$$ Thus $X:= H(\underline{s})\cup\{\infty\}$ is an intersection of nested, connected, compact subsets of $\overline{\mathbb R^d}:= \mathbb R^d\cup\{\infty\}$. This implies that $X$ is compact and connected. To prove that $H(\underline{s})$ is a hair we follow Rottenfu{\ss}er, R\"uckert, Rempe and Schleicher~\cite{R3S} and use the following lemma from~\cite{N}. \begin{lemma}\label{lemma2} Let $X$ be a non-empty, compact, connected metric space. Suppose that there is a strict linear ordering $\prec$ on $X$ such that the order topology on $X$ agrees with the metric topology. Then either $X$ consists of a single point or there is an order-preserving homeomorphism from $X$ onto $[0,1]$. \end{lemma} To define the linear ordering on $X=H(\underline{s})\cup\{\infty\}$ we choose $M$ according to Lemma~\ref{lemma1}. For $x,y\in H(\underline{s})$ we say that $x\prec y$ if there exists $k\geq 0$ such that $|y_d^k|>|x_d^k|+M$, and we define $x\prec\infty$ for all $x\in H(\underline{s})$. Lemma~\ref{lemma1} implies that $x\prec y$ and $y\prec x$ cannot hold simultaneously. Another easy consequence of Lemma~\ref{lemma1} is that our relation $\prec$ is transitive. 
To show that it is a linear ordering we notice that $\| x^k-y^k\|\geq\alpha^k\| x-y\|$ by~(\ref{2a}). Using~\eqref{2c1} we obtain \begin{equation}\label{2c3} |x^k_d-y^k_d|\geq \left\|x^k-y^k\right\| -\left\|p(x^k)-p(y^k)\right\| \geq \alpha^k\| x-y\|-2\sqrt{d-1}. \end{equation} Thus $x\neq y$ implies either $x\prec y$ or $y\prec x$. Now we prove that the order topology on $X$ is the same as the topology induced from~$\overline{\mathbb R^d}$. We have to show that the identity map from $X$ with the induced topology to $X$ with the order topology is a homeomorphism. Since $X$ with the induced topology is compact and since $X$ with the order topology is Hausdorff, it suffices to show that the identity map is continuous~\cite[p.~141, Theorem~8]{Kelley}. Thus we only have to show that the sets $$ U^-(a):=\left\{ w\in X:w\prec a\right\} \quad\text{and}\quad U^+(a):=\left\{ w\in X:w\succ a\right\} $$ are open with respect to the induced topology for all $a\in X$. In order to do so, let $w\in U^-(a)$ and choose the minimal $k$ such that $|w^k_d|<|a^k_d|-M$. Then there is a neighborhood $V$ of $w$ in $\mathbb R^d$ where the same inequality is satisfied. The intersection $V\cap H(\underline{s})$ is a neighborhood of $w$ that is contained in $U^-(a)$. Thus $U^-(a)$ is open with respect to the induced topology. The proof for $U^+(a)$ is similar. Thus the order topology on $X$ agrees with the topology induced from~$\overline{\mathbb R^d}$. Proposition~\ref{prop1} now follows from Lemma~\ref{lemma2}. \end{proof} \begin{proof}[Proof of Proposition 2] Let $y\in H(\underline{s}')\cap H(\underline{s}'')$. Let $m$ be the smallest subscript such that $s^\prime_m\neq s^{\prime\prime}_m$. Then $f^m(y)$ belongs to the common boundary of $T(s^\prime_m)$ and $T(s^{\prime\prime}_m)$. From the definition of $f$ we conclude that $f^k(y)$ belongs to the hyperplane $\{x\in\mathbb R^d:x_d=0\}$ for all $k\geq m+1$. This implies that $x\prec y$ is impossible for any~$x$. So $y$ is the minimal element of the order $\prec$ and thus an endpoint of $H(\underline{s}')$ and $H(\underline{s}'')$. \end{proof} \begin{proof}[Proof of Proposition 3] We follow the argument in~\cite{B} and with $\psi:[1,\infty)\to\mathbb R$, $$\psi(t):=\exp\left(\sqrt{\log t}\right)$$ and $M:=\max\{e,4\lambda\}$ we put $$\Omega:=\left\{x\in\mathbb R^d:|x_d|\geq M,\ \|p(x)\| \leq \psi\left(|x_d|\right)\right\}.$$ We then have \begin{equation}\label{Me} \|x\|\leq |x_d|+\|p(x)\|\leq|x_d|+\psi\left(|x_d|\right)\leq 2|x_d|, \quad x\in\Omega. \end{equation} The following result is analogous to~\cite[Lemma 5.3]{B}. \begin{lemma}\label{lemma3} If $y\in H(\underline{s})\backslash \{E(\underline{s})\}$ then $f^k(y)\to\infty$ as $k\to\infty$. Moroever, $f^k(y)\in\Omega$ for all large~$k$. \end{lemma} \begin{proof} Let $\underline{s}=(s_k)_{k\geq 0}\in\underline{S}$ such that $y\in H(\underline{s})$. With $x=E(\underline{s})$ and the ordering $\prec$ as in Section~3 we have $x\prec y$. As before, we put $x^k=f^k(x)$ and $y^k=f^k(y)$ for $k\geq 0.$ By Lemma~\ref{lemma1} we have $$|y^k_d|\geq 5|x^k_d|+M$$ for all large~$k$. Using~\eqref{2c3} we see that $| y_d^k|\to\infty$ and hence $y^k\to\infty$ as $k\to\infty$. Since $$\left\| p(y^k)\right\|\leq \left\| p(x^k)\right\|+2\sqrt{d-1} \leq\left\|x^k\right\|+2\sqrt{d-1}$$ by~\eqref{2c1} we see that $f^k(y)\in \Omega$ holds for large~$k$ if $\| x^k\|\leq R$, where $R$ is any fixed constant. 
Noting that $$\left\| x^k\right\|\leq \lambda \exp\left|x^{k-1}_d\right|\leq \lambda \exp\left\| x^{k-1}\right\|,$$ we also find that $f^k(y)\in \Omega$ holds for all large $k$ for which $\| x^{k-1}\|\leq \log (R/\lambda) $. We may thus suppose that $\min\{\| x^k\|,\|x^{k-1}\|\}$ is large. Lemma~\ref{lemma1} now yields for large~$k$ that \begin{eqnarray*} \left|y^{k-1}_d\right|&\geq&\frac{\lambda }{3}\exp{\left|y^{k-2}_d\right|}+M\\ &\geq& \frac{\lambda }{3}\exp\left(5\left|x^{k-2}_d\right|+M\right)+M\\ &\geq&\frac{e^M}{3\lambda ^4}\left(\lambda \exp{\left|x^{k-2}_d\right|}\right)^5+M\\ &\geq&\left\|x^{k-1}\right\|^4\\ &\geq&\left|x^{k-1}_d\right|^4, \end{eqnarray*} and hence that \begin{eqnarray*} \left|y^k_d\right|&\geq&\frac{\lambda }{3}\exp{\left|y^{k-1}_d\right|}+M\\ &\geq& \frac{\lambda }{3}\exp\left(\left|x^{k-1}_d\right|^4\right)\\ &\geq&\frac{\lambda }{3}\exp\left(\left(\log\left\| x^k\right\|\right)^4\right)\\ &\geq& \exp\left(\left(\log\left\|x^k\right\|\right)^3\right). \end{eqnarray*} Thus \begin{eqnarray*} \left\| p(y^k)\right\|&\leq&\left\| p(x^k)\right\|+2\sqrt{d-1}\\ &\leq&\left\| x^k\right\|+ 2\sqrt{d-1}\\ &\leq&\exp\left(\left(\log\left| y_d^k\right|\right)^{1/3}\right)+2\sqrt{d-1}\\ &\leq& \exp\sqrt{\log\left|y_d^k\right|} \end{eqnarray*} for large~$k$. This means that $y_k\in\Omega$, and the proof of Lemma~\ref{lemma3} is completed. \end{proof} The following result~\cite[Lemma~5.2]{B} is a simple consequence of some classical covering lemmas. Here we denote by $B(x,r)$ the open ball of radius $r$ around a point $x\in\mathbb R^d$. \begin{lemma} \label{lemma5} Let $Y\subset \mathbb R^d$ and $\rho>1$. Suppose that for all $y\in Y$ and $\eta >0$ there exist $r(y)\in (0,1)$, $d(y)\in (0,\eta)$ and $N(y)\in \mathbb N$ satisfying $d(y)^\rho N(y)\leq r(y)^d$ such that $B(y,r(y))\cap Y$ can be covered by $N(y)$ sets of diameter at most $d(y)$. Then $\operatorname{dim} Y\leq\rho$. \end{lemma} In~\cite[Lemma~5.2]{B} it is additionally assumed that $Y$ is bounded, but this hypothesis can be omitted, since the Hausdorff dimension of a set is the supremum of the Hausdorff dimensions of its bounded subsets. We now begin with the actual proof of Proposition~\ref{prop3}, following the argument in~\cite{B}. Since $f$ is locally bi-Lipschitz, and since the Hausdorff dimension is invariant under bi-Lipschitz maps, Lemma~\ref{lemma3} implies that it suffices to show that $$Y:=\left\{y\in H(\underline{s})\backslash \{E(\underline{s})\}: f^k(y)\in\Omega \text{ for all }k\geq 0\right\}$$ has Hausdorff dimension~$1$. We shall prove this using Lemma~\ref{lemma5}. Let $y\in Y\cap H(\underline{s})$ and, as before, put $y^k=f^k(y)$. With $x=E(\underline{s})$ we deduce from Lemma~\ref{lemma1} that \begin{equation} \label{yk} |y_d^{j+1}|>\frac{\lambda }{3}\exp{|y^j_d|}+M \end{equation} for large~$j$. We now fix a large $k$ and denote by $B_k$ the closed ball of radius $\frac12 |y^k_d|$ around $y_k$. We cover $B_k\cap \Omega$ by closed cubes of sidelength $1$ lying in $\{x\in \mathbb R^d:|x_d|\geq \frac12 |y^k_d|\}$. If $c>2^{d-1}$, then the number $N_k$ of cubes required satisfies $$N_k\leq c\; |y^k_d|\; \psi\left(2 \left|y^k_d\right|\right)^{d-1},$$ provided $k$ is large enough. Given $\varepsilon>0$ we thus can achieve that \begin{equation} \label{Nk} N_k\leq \left|y^k_d\right|^{1+\varepsilon} \end{equation} by choosing $k$ large. Let $B_0$ be the component of $f^{-k}(B_k)$ that contains~$y$. 
With $$\varphi:=\Lambda^{s_0}\circ \Lambda^{s_1}\circ \dots \circ \Lambda^{s_{k-1}}$$ we have $B_0=\varphi(B_k)$. Using~\eqref{n1} and~\eqref{n2} we find that if $C$ is one of the cubes of sidelength $1$ used to cover $B_k\cap \Omega$, then $$ \operatorname{diam} \varphi(C) \leq \frac{1}{\alpha^{k-1}}\frac{2}{\beta \left|y^k_d\right|} \operatorname{diam} C \leq \frac{1}{\left|y^k_d\right|} $$ if $k$ is sufficiently large. Thus we can cover $B_0\cap Y$ by $N_k$ sets of diameter $d_k$, where \begin{equation} \label{dk} d_k\leq \frac{1}{\left|y^k_d\right|}. \end{equation} In order to apply Lemma~\ref{lemma5} we estimate the radius $r_k$ of the largest ball around $y$ that is contained in~$B_0$. Let $z\in \partial B_0$ with $\|z-y\|=r_k$ and let $\sigma_0$ be the straight line connecting $y$ and~$z$. For $1\leq j\leq k$ we put $\sigma_j=f^j(\sigma_0)$, $B_j=f^j(B_0)$ and $z^j=f^j(z)$. Then $\sigma_k$ connects $y^k$ to $z^k\in \partial B_k$ and thus \begin{equation} \label{lsk} \operatorname{length}(\sigma_k)\geq \frac12\left|y_d^k\right|. \end{equation} We deduce from~\eqref{n2} that $$ \operatorname{diam} B_{k-1}= \operatorname{diam} \Lambda^{s_{k-1}}\left(B_k)\right) \leq \frac{2}{\beta \left|y^k_d\right|} \operatorname{diam} B_k = \frac{2}{\beta}$$ and hence $$\operatorname{diam} B_j\leq \frac{2}{\beta}$$ for $j\leq k-1$ by~\eqref{n1}. Since $|y_d^j|\geq M>4/\beta$ this implies that $$\sigma_j\subset B_j\subset B\left(y^j,\frac12\left|y_d^j\right|\right) \subset B\left(y^j,\frac12\left\|y^j\right\|\right)$$ for $j\leq k-1$. It thus follows from~\eqref{n3} and~\eqref{Me} that $$\operatorname{length} \sigma_{j} =\operatorname{length} \Lambda^{s_{j}}(\sigma_{j+1}) \geq \frac{2\delta}{3\left\|y^{j+1}\right\|} \operatorname{length} \sigma_{j+1} \geq \frac{\delta}{3\left|y_d^{j+1}\right|} \operatorname{length} \sigma_{j+1}$$ for $j\leq k-1$ and this implies that $$ \operatorname{length} \sigma_k \leq \left(\frac{3}{\delta}\right)^k \left(\prod_{j=1}^{k} \left|y_d^j\right|\right) \operatorname{length} \sigma_0 $$ Combining this with~\eqref{lsk} we find that $$r_k=\operatorname{length} \sigma_0 \geq \frac12 \left(\frac{\delta}{3}\right)^k\frac{1}{\prod_{j=1}^{k-1} \left|y_d^j\right|}. $$ Using~\eqref{yk} we see that we can achieve \begin{equation} \label{rk} r_k\geq \frac{1}{\left|y_d^k\right|^{\varepsilon}} \end{equation} by choosing $k$ large. We thus find that we can cover $B(y,r_k)\cap Y$ by $N_k$ sets of diameter at most $d_k$, where $N_k$, $d_k$ and $r_k$ satisfy \eqref{Nk}, \eqref{dk} and~\eqref{rk}. With $\rho=1+(d+1)\varepsilon$ it follows from \eqref{Nk}, \eqref{dk} and~\eqref{rk} that $$(d_k)^\rho N_k \leq \left|y_d^k\right|^{1+\varepsilon-\rho} = \left|y_d^k\right|^{-d \varepsilon}\leq (r_k)^d.$$ Given $\eta>0$ we can also achieve that $r_k<1$ and $d_k<\eta$ by choosing $k$ large. We thus see that the hypothesis of Lemma~\ref{lemma5} are satisfied with $r(y)=r_k$, $d(y)=d_k$ and $N(y)=N_k$. It follows that $\operatorname{dim} Y\leq \rho=1+(d+1)\varepsilon$. Since $\varepsilon>0$ was arbitrary, we obtain $\operatorname{dim} Y\leq 1$. \end{proof} \section{An explicit bi-Lipschitz map} \label{explicit} Let $B^+:=[-1,1]^{d-1}\times [0,1]$, $B^-:=[-1,1]^{d-1}\times [-1,0]$, $U^+:=\{x\in\mathbb R^d:\|x\|_2 \leq 1,x_d\geq 0\}$ and $U^-:=\{x\in\mathbb R^d:\|x\|_2 \leq 1,x_d\leq 0\}$. 
Then $h_1:B^+\to B^-$, $x \mapsto x- (0,\dots,0,1)$, and $h_2:B^-\to U^-$, $x\mapsto (\|x\|_\infty/\|x\|_2)x$, are both bi-Lipschitz, and with $X:=[-1,1]^{d-1}\times \{1\}$ and $Y:=\{x\in\mathbb R^d:\|x\|_2 \leq 1,x_d=0\}$ we have $h_2(h_1(X))=Y$. It remains to define a bi-Lipschitz map $h_3:U^-\to U^+$ with $h_3(Y)=\{x\in\mathbb R^d:\|x\|_2 =1,x_d\geq 0\}$. Then $h:=h_3\circ h_2\circ h_1$ has the desired properties. In order to define $h_3$ we note that $$T(z)=\frac{z+i}{iz+1}$$ defines a bi-Lipschitz map from the lower half-disk $\{z\in \mathbb C:|z|\leq 1,\operatorname{Im} z\leq 0\}$ to the upper half-disk $\{z\in \mathbb C:|z|\leq 1,\operatorname{Im} z\geq 0\}$, with $\{z\in \mathbb C:|z|\leq 1,\operatorname{Im} z=0\}$ being mapped onto $\{z\in \mathbb C:|z|=1,\operatorname{Im} z\geq 0\}$. With $x=(x_1,\dots,x_d)=(p(x),x_d)$ and $z=\|p(x)\|_2+ix_d$ it follows that $$h_3(x)=\left(\frac{p(x)}{\|p(x)\|_2} \operatorname{Re} T(z), \operatorname{Im} T(z)\right)$$ has the desired properties. \section{Quasiregular maps}\label{quasi} Let $\Omega\subset \mathbb R^d$ be open. A continuous map $f:\Omega\to\mathbb R^d$ is called quasi\-regular if it belongs to the Sobolev space $W^1_{d,\mathrm{loc}}(\Omega)$ and if there exists a constant $K_O\geq 1$ such that \begin{equation} \label{dil} \| DF(x)\|^d\leq K_O\,J_F(x)\quad\mbox{a.e.}, \end{equation} where $J_F=\det DF$ denotes the Jacobian determinant. Equivalently, there exists $K_I\geq 1$ such that \begin{equation} \label{dil2} J_F(x)\leq K_I\,\ell(DF(x))^d\quad\mbox{a.e.} \end{equation} The smallest constants $K_O$ and $K_I$ for which the above estimates hold are called the outer and inner dilatation. For a thorough treatment of quasi\-regular maps we refer to~\cite{Rick}. To see that our map $F$ defined in Section~\ref{intro} is quasi\-regular, we first note that~\eqref{dil} holds on the half-cube $(-1,1)^{d-1}\times (0,1)$ since $F$ is bi-Lipschitz there. For the same reason, \eqref{dil} holds on the bounded set $(-1,1)^{d-1}\times (1,2)$. Using~\eqref{DFexp} we deduce that~\eqref{dil} holds on $(-1,1)^{d-1}\times (1,\infty)$. Thus~\eqref{dil} holds on $(-1,1)^{d-1}\times (0,\infty)$ and in the sets obtained from this by reflection. We deduce that $F$ is indeed quasi\-regular. We mention that it follows from~\eqref{dil} and~\eqref{dil2} that if $F$ is quasiregular, then \begin{equation} \label{dil3} \| DF(x)\|\leq K\: \ell(DF(x))\quad\mbox{a.e.} \end{equation} where $K=(K_OK_I)^{1/d}$. We could also use~\eqref{dil3} instead of~\eqref{dil} or~\eqref{dil2} in the definition of quasiregularity. It follows from~\eqref{n2} and~\eqref{n3} that~\eqref{dil3} holds for $|x_d|\geq 1$ with $K=1/(\beta\delta)$. This is one reason why we said in the introduction that the quasiregularity of~$F$ is among the underlying ideas of the proof. We note that for quasi\-regular maps there is no obvious definition of the Julia set; see, however,~\cite{Be10,SunYang00}. On the other hand, the escaping set $I(f)$ can be defined. It was shown in~\cite{Bergweiler08} that if $f$ is a quasi\-regular self-map of~$\mathbb R^d$ with an essential singularity at~$\infty$, then $I(f)\neq\emptyset$. In fact, $I(f)$ has an unbounded component. Fletcher and Nicks~\cite{FletcherNicks} have shown that for quasiregular maps of polynomial type the boundary of the escaping set has properties similar to the Julia set of polynomials. 
We mention that for the entire functions $f(z)=\lambda e^z$ or $\lambda\sin z+\mu$ considered in Theorems A--G we have $I(f)\subset J(f)$ and thus $J(f)=\overline{I(f)}$; see~\cite[Theorem~1]{Eremenko92}. This plays an important role in the proofs of these theorems. For example, McMullen actually proved that the conclusions of Theorems~B and E hold with $J(f)$ replaced by $I(f)$. Also, a crucial part of the proofs of Theorems~C, F and G is based on the fact that points which are on a hair but which are not endpoints escape to infinity under iteration very fast. This also played an important role in our proof. In particular, for the map $f$ considered in this paper we have $$\bigcup_{\underline{s}\in \underline{S}} H(\underline{s})\backslash \{ E(\underline{s})\}\subset I(f)$$ by Lemma~\ref{lemma3}. On the other hand, it is not difficult to see that $\{E(\underline{s}): \underline{s}\in \underline{S}\}$ intersects both $I(f)$ and the complement of $I(f)$.
\section{Introduction} We consider the interplay between three paradigmatic quantum states of bosons in rotating lattices: Mott insulators, superfluids, and fractional quantum Hall states. The Mott insulator is found when there is an integer number of particles per lattice site, and the tunneling is sufficiently suppressed relative to the interactions. It is an incompressible state, where interactions freeze the particles in place. In the standard cartoon, when the density of such a system is tuned away from commensurability, the excess particles (or holes) ``skate'' across the frozen Mott sea, forming a superfluid. If the system is rotating, one expects that the collective motion of this superfluid will produce a vortex lattice. In 2007, Umucal{\i}lar and Oktel \cite{Umucalilar} argued that when the rotation rate is high enough that the number of vortices is comparable to the number of excess particles, then the superfluid will be unstable to forming a correlated state of matter with particles bound to vortices -- a situation analogous to that found in the fractional quantum Hall state. They supported this argument by estimating the energy of the superfluid and the correlated state. Here we confirm this scenario through more rigorous calculations. By using Monte-Carlo techniques we compare the energy of variational states describing fractional quantum Hall states and superfluid vortex lattices with each other. We also compare these energies with exact results calculated for small numbers of particles. We find that there is a range of parameters for which the fractional quantum Hall states are more favorable than superfluid states. We note, however, that the energy differences between these states scale as the tunneling energy, and can be quite small. Most previous studies of analogs of fractional quantum Hall states in optical lattices have focussed on the low density limit, where there is much less than one particle per site. In the context of cold atoms, Hafezi, S{\o}rensen, Demler and Lukin \cite{Hafezi} gave an excellent review of the basic physics of this limit (including symmetry and topology arguments), and argued that one can continuously deform a Mott insulating state into a fractional quantum Hall state by varying the strength of an additional superlattice potential \cite{Sorensen}. They also proposed using Bragg spectroscopy to probe these states. Palmer, Klein, and Jaksch \cite{Palmer} performed a number of calculations focussed on the role of the trap, detection schemes, and on inhomogeneities which can spontaneously appear in these systems. Bhat et al. \cite{Bhat} carried out full configuration-interaction calculations for a small number of particles in a rotating lattice with hard-wall boundary conditions. M\"oller and Cooper analyzed the relevance of composite fermion wavefunctions to describing these systems \cite{Moller}. Nigel Cooper recently produced a review of the physics of rotating cold atom clouds including analogs of the quantum Hall effect in lattices \cite{Cooper}. These, and our present study, build on initial works motivated by solid state systems \cite{solidstate}. Translating these arguments to higher densities is not completely trivial. The superfluid near the Mott state is more complicated than the standard cartoon suggests. For example, the mean-field description treats it as a two-component plasma of particles and holes, with a small imbalance between the density of particles and holes. 
Despite these complications, we find that when the deviation of the particle density from an integer value is commensurate with the magnetic flux one can indeed see analogs of the fractional quantum Hall effect. We start our analysis with the well-known Bose-Hubbard Hamiltonian in an effective magnetic field \begin{eqnarray} \label{Bose-Hubbard} H_0 = -t\sum_{\langle i,j\rangle}a_i^\dag a_j e^{i A_{ij}}+\frac{U}{2}\sum_i \hat{n}_i (\hat{n}_i-1), \end{eqnarray} where $a_i$ ($a_i^\dag$) is the bosonic annihilation (creation) operator at site $i$ and $\hat{n}_i = a_i^\dag a_i$ is the number operator. The tunneling is parameterized by $t$ and on-site interactions by $U$. We use the Landau gauge $\mathbf{A} = (-By,0)$, so the phases $A_{ij} = (e/\hbar c) \int^{\mathbf{r}_i}_{\mathbf{r}_j}\mathbf{A}\cdot d\mathbf{l}$ acquired when hopping in $\pm$ $x$-direction are $\mp 2 \pi \alpha i_y$, where $i_y $ is the $y$ coordinate scaled by the lattice constant $a$, and in $y$-direction $A_{ij} = 0$. Here, $\alpha = Ba^2/(hc/e) = p/q$ is the flux quantum per plaquette and we take $p$ and $q$ to be relatively prime integers. The single particle spectrum for this problem is the famous Hofstadter butterfly \cite{Hofstadter}. The phase boundary between the Mott insulator and superfluid carries signatures of this single-particle physics \cite{Niemeyer,Oktel,Umucalilar,Goldbaum}. Away from the tips of the Mott lobes, the physics of the superfluid-Mott transition of the non-rotating system is in the universality class of the dilute Bose gas. Thus we expect that phenomena which can be seen in the dilute Bose gas will occur there, including the analogs of fractional quantum Hall physics which we are exploring here. \section{Variational Wavefunction} \subsection{Laughlin State}\label{variational} We consider the variational ansatz \begin{equation}\label{FQH ansatz} |\Psi\rangle = \sum_{z_1,...,z_N}\psi(z_1,...,z_N)a^{\dag}_{z_1}...a^{\dag}_{z_N}|\Psi_{MI}\rangle, \end{equation} where $|\Psi_{MI}\rangle=\prod_j (a_j^\dagger)^{n_0}/\sqrt{n_0!} |{\rm vac}\rangle$ is the Mott insulator state with $n_0$ particles per site and $\psi$ is the Laughlin wavefunction \cite{Laughlin} with filling $\nu=1/m$. To describe bosons, $m$ must be even. The complex coordinate $z_i = x_i+i y_i$ specifies the location of the $i$th particle, with $i$ running from 1 to $N$, where $N$ is the number of excess particles. The sum over $z_i$ is a sum over all lattice sites. To describe a state with excess holes, we replace $a^\dag$ with $a$. To minimize the role of boundaries, the model in (\ref{Bose-Hubbard}) is typically solved on either a sphere or a torus \cite{solidstate,Haldane}. We will work in an $L\times L$ torus geometry, corresponding to quasiperiodic boundary conditions \begin{eqnarray}\label{boundary conditions} \psi(...,z_k+L,...)&=&\psi(...,z_k,...) \\ \psi(...,z_k+iL,...)&=& e^{-i\frac{2\pi m N}{L}x_k}\psi(...,z_k,...). \nonumber \end{eqnarray} For these boundary conditions the Laughlin wavefunction can explicitly be written as \cite{Haldane} \begin{widetext} \begin{eqnarray}\label{Wavefunction} \psi(z_1,...,z_N) = \mathcal{N}e^{iK_x\sum_i x_i}e^{-K_y\sum_i y_i}e^{-\frac{\pi m N}{L^2}\sum_i y_i^2}\prod_{\beta=1}^m \vartheta_1\big[(Z-Z_{\beta})\frac{\pi}{L}\big]\prod_{i<j}^{N}\Big\{\vartheta_1\big[(z_i-z_j)\frac{\pi}{L}\big]\Big\}^m. 
\end{eqnarray} \end{widetext} Here, $\mathcal{N}$ is the normalization factor, $Z = \sum_{i}z_i$ is $N$ times the center-of-mass coordinate, $Z_{\beta} = X_{\beta}+i Y_{\beta}$ are the {\em a-priori} arbitrary locations of the center-of-mass zeros. To satisfy the boundary conditions, one requires $\sum_{\beta}X_\beta = n_1 L$ ($n_1\in \mathbb{Z}$), $K_x = 2\pi n_2/L$ ($n_2\in \mathbb{Z}$), and $K_y = -2\pi\sum_{\beta}Y_\beta/L^2$. The quasi-periodic Jacobi theta functions are defined by \begin{eqnarray}\label{Theta function} \vartheta_1(z,e^{i\pi \tau}) = \sum_{-\infty}^{\infty}(-1)^{n-1/2}e^{i\pi\tau (n+1/2)^2}e^{(2n+1)iz}. \nonumber \end{eqnarray} For our square geometry $\tau = i$. This function is odd with respect to $z$ and has the following quasi-periodicity properties: $\vartheta_1(z+\pi)=-\vartheta_1(z)$ and $\vartheta_1(z+\tau \pi)=-e^{-i\pi\tau} e^{-2iz}\vartheta_1(z)$. The relation between the flux quantum per plaquette $\alpha = N_{\phi}/L^2$, filling fraction $\nu = N/N_{\phi}$, and excess particle density $\varepsilon = N/L^2$ is succinctly given by $\alpha \nu = \varepsilon$, where $N_{\phi}$ denotes the number of flux quanta in the $L\times L$ lattice we consider. In what follows, we will restrict ourselves to the $\nu = 1/2$ Laughlin state ($m = 2$), so that the commensurability requirement between the magnetic flux and particle density becomes $\alpha = 2\varepsilon$. \subsection{Superfluid State} We will compare the Laughlin state introduced in Sec.~\ref{variational} with a Gutzwiller mean field state \begin{equation}\label{gutz} |\Psi_{MF}\rangle = \prod_i\left(\sum_n f^i_n |n\rangle_i\right), \end{equation} where $f^i_n$ are variational parameters. This wavefunction is commonly used to describe the superfluid in the Bose-Hubbard model \cite{Bloch}. It is exact in the non-interacting limit and captures the effect of number squeezing. Its main deficit is that it does not capture any of the short-range correlations in the superfluid. Regardless, the energies it produces are good estimates of the superfluid energy. In the non-rotating case, the superfluid is translationally invariant, and the coefficients $f_n^i$ are independent of $i$. In our case, where the lattice is rotating, a vortex lattice forms, breaking translational invariance. Near the Mott lobe, the site occupations are dominated by $n=n_0$ and $n=n_0\pm1$: that is it is extremely unlikely to have more than one extra particle or hole on a given site. We therefore truncate our basis to only these three values of $n$. This will also facilitate direct comparison with configuration-interaction calculations using the same truncated basis. We work in an $L\times L$ lattice, using the boundary conditions which are equivalent to those in Eq.~(\ref{boundary conditions}). The numerical techniques for optimizing the $f_n^{i}$ are well documented \cite{Goldbaum}, and we will not repeat the detailed discussion here. These can be described in terms of a variational calculation where one minimizes $\langle \Psi_{MF}|H_0 |\Psi_{MF}\rangle$ with the constraints that the total number of particles $M$ and normalization $\langle\Psi_{MF}|\Psi_{MF}\rangle$ are fixed: this involves introducing the chemical potential $\mu$ and a number of other Lagrange multipliers. In practice it is more convenient to write $H=H_0-\mu M$, and follow an iterative procedure based upon mean field theory. These two approaches are completely equivalent. 
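The iterative procedure itself is easily illustrated. The sketch below is a minimal, uniform, non-rotating version in the same three-state truncation $n_0-1$, $n_0$, $n_0+1$; the coordination number $z=4$, the initial guess, and the tolerance are illustrative choices, and the full calculation instead keeps site-dependent $f_n^i$ together with the hopping phases $A_{ij}$.
\begin{verbatim}
# Minimal sketch of a Gutzwiller mean-field self-consistency loop (uniform,
# non-rotating case, three-state truncation n0-1, n0, n0+1).
import numpy as np

def gutzwiller_phi(t, mu, U=1.0, n0=1, z=4, tol=1e-10, max_iter=500):
    ns = np.array([n0-1, n0, n0+1])              # allowed occupations
    a = np.zeros((3, 3))                         # annihilation operator in this basis
    for i in range(2):
        a[i, i+1] = np.sqrt(ns[i+1])             # <n-1| a |n> = sqrt(n)
    diag = 0.5*U*ns*(ns-1) - mu*ns               # on-site part of H = H_0 - mu M
    phi = 0.1                                    # initial guess for <a>
    for _ in range(max_iter):
        H = np.diag(diag) - z*t*phi*(a + a.T)    # mean-field decoupled hopping
        w, v = np.linalg.eigh(H)
        g = v[:, 0]                              # Gutzwiller amplitudes f_n
        phi_new = g @ a @ g                      # updated order parameter <a>
        if abs(phi_new - phi) < tol:
            return phi_new
        phi = phi_new
    return phi

# <a> -> 0 deep in the Mott lobe, <a> != 0 in the superfluid (here U = 1):
print(gutzwiller_phi(t=0.01, mu=0.4), gutzwiller_phi(t=0.20, mu=0.4))
\end{verbatim}
A vanishing converged $\langle a\rangle$ signals the Mott insulator, while a nonzero value signals the superfluid.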
In comparing energies with our other variational state, one must be cautious and be sure to use $\langle H_0\rangle=\langle H\rangle+\mu M$. \section{Exact results on small systems}\label{exact} \subsection{Approach and Results} For small systems we can exactly diagonalize the Hamiltonian in Eq.~(\ref{Bose-Hubbard}), taking a configuration-interaction approach where we truncate the allowed number of particles on a given site to be $n_0$, $n_0-1$, or $n_0+1$. For definiteness we take $n_0=1$: changing this value just scales the hopping matrix elements $t$. For these small system sizes we can also directly calculate $\langle\Psi|H_0|\Psi\rangle$. In Sec.~\ref{monte-carlo} we will discuss larger systems where we need to resort to a Monte-Carlo algorithm for calculating this energy. We consider 12 particles in a $3\times 3$ lattice, so that the excess particle density is $1/3$. We take $\nu=1/2$ and accordingly the number of quanta of flux per plaquette is $\alpha=2/3$. Fig. \ref{exact3x3} displays the energies (measured in units of $U$) of the first few hundred exact energy eigenstates together with the energies of our two variational wavefunctions: Eqs.~(\ref{FQH ansatz}) and (\ref{gutz}). We emphasize that our ansatz for the fractional quantum Hall state is not just the Laughlin state, where flux is bound to each particle, but is rather the coexistence of a Mott state and a Laughlin state, with flux bound only to the excess particles. In Fig. \ref{exact3x3} we also show the estimate from Ref.~\cite{Umucalilar}, which is supposed to describe the correlated state near the Mott insulator, \begin{equation} \label{est} \Delta E = U n_0 \varepsilon - t (n_0 + 1) f(\alpha)\varepsilon, \end{equation} where the first term represents the on-site interaction of excess particles with the Mott insulator and the second term is the hopping energy of particles in the Hofstadter ground state denoted by $-t f(\alpha)$, $f(\alpha)>0$ being the dimensionless maximum eigenvalue of the Hofstadter spectrum. Note that $t$ is enhanced by a factor of $(n_0+1)$ due to the Mott background. No interaction energy is included, as it is expected that in this regime the excess atoms avoid one-another. It is remarkable how closely this estimate matches the results of the exact diagonalization for small $t$. \begin{figure} \includegraphics[scale=.42] {exact3x3} \caption{(Color online) Exact many-body spectrum for 12 particles in a $3\times3$ lattice with $\alpha = 2/3$ considering only 0,1, and 2 atoms per site (for $\nu = 1/2$, excess particle density is $\varepsilon = \alpha \nu = 1/3$). Also shown by the black solid line is our variational estimate of the energy of a fractional quantum Hall state of excess particles in the presence of a Mott background. Dash-dotted blue line shows the Gutzwiller mean-field superfluid energy for the same density (1+$\varepsilon$), corresponding to a vortex lattice where the cores are filled with Mott insulator. The dashed red line is the estimate of the ground state energy from Eq.~(\ref{est}), first introduced in \cite{Umucalilar}. For low enough $t$ the variational energy of the correlated state of excess particles is lower than the superfluid energy. \label{exact3x3}} \end{figure} For $t \lesssim 0.13$ the energy of our candidate fractional quantum Hall state (with optimized $Z_\beta$) is lower than that of the superfluid, while the opposite holds for larger $t$. 
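For reference, the Hofstadter factor $f(\alpha)$ entering Eq.~(\ref{est}) is easily evaluated numerically. A minimal sketch follows (one standard Harper-matrix convention is assumed; the band maximum does not depend on that choice). For $\alpha=2/3$ it gives $f\approx 2.73$, compared with $f=4$ at zero flux.
\begin{verbatim}
# Minimal sketch: f(alpha) = largest dimensionless eigenvalue of the Hofstadter
# (Harper) hopping problem at flux alpha = p/q, as used in the estimate Eq. (est).
import numpy as np

def hofstadter_f(p, q, nk=60):
    alpha, best = p/q, -np.inf
    for kx in np.linspace(0, 2*np.pi, nk, endpoint=False):
        for ky in np.linspace(0, 2*np.pi, nk, endpoint=False):
            M = np.zeros((q, q), dtype=complex)
            for m in range(q):
                M[m, m] = 2*np.cos(kx - 2*np.pi*alpha*m)   # hops along x
                M[m, (m+1) % q] += np.exp(1j*ky)           # hop y -> y+1
                M[(m+1) % q, m] += np.exp(-1j*ky)          # hop y -> y-1
            best = max(best, np.linalg.eigvalsh(M).max())
    return best

print(hofstadter_f(2, 3))   # ~2.73 for alpha = 2/3
\end{verbatim}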
Our physical picture of this is that as $t$ grows the Mott insulator melts, and the density of mobile atoms is no longer commensurate with the magnetic field. For very small $t$, the fractional quantum Hall state's energy agrees very well with the exact ground state energy: this is shown more clearly in Fig.~\ref{overlap_t}(b). In Fig.~\ref{overlap_t}(a) we show the overlap between our variational state and the exact ground state. At low $t$ the overlap is greater than 95\%, but it falls off with increasing $t$, presumably due to the increasing importance of particle-hole excitations. The overlap between the ground state and the mean-field superfluid (inset of Fig.~\ref{overlap_t}(a)) is never large, and their energies in Fig.~\ref{exact3x3} never approach one another. We believe this is due in part to the fact that the mean-field state breaks translational invariance, and consequently involves a superposition of many eigenstates \cite{Mueller}. A quantum superposition of vortex lattices may in fact be a good alternative description of the fractional quantum Hall state. Given the small difference between the energies of our two variational states, one must be somewhat cautious about ascribing too much significance to the crossing at $t \sim 0.13$. One might also be concerned that at that value of $t$, both variational states have an energy which is significantly higher than that of the ground state, suggesting that neither may be a particularly good description of the true ground state. A third concern is that there is no sign of a phase transition in Fig.~\ref{overlap_t}(a): the overlap between the fractional quantum Hall state and the exact ground state remains above 75\% out to $t\sim 0.15$. Despite these caveats, the large overlap at small $t$ is convincing evidence that the ground state at low $t$ is a fractional quantum Hall state of excess particles, and it would be surprising if the system formed a correlated state at large $t$. \begin{figure} \includegraphics[scale=.42]{overlap_t} \caption{(Color online) a) Overlap between the $\nu = 1/2$ FQH + MI state [$|\Psi\rangle$ from Eq.~(\ref{FQH ansatz})] and the exact ground state [$|GS\rangle$, determined from diagonalizing Eq.~(\ref{Bose-Hubbard}) in a truncated basis] as a function of tunneling strength $t$, using the same parameters as Fig.~\ref{exact3x3}. Also shown in the inset is the overlap between a superfluid vortex lattice and $|GS\rangle$. b) Comparison of the variational and exact energies -- from Fig.~\ref{exact3x3}.\label{overlap_t}} \end{figure} \subsection{Variational Parameters} In Fig.~\ref{overlap} we show how the energy of the variational state depends on the parameters $Z_\beta$, which represent where ``vortices'' can be found around which the center of mass flows. The boundary conditions in Eq.~(\ref{boundary conditions}) force the wavefunction to have $m=1/\nu$ of these zeros (in the present case $m=2$). In the absence of the lattice, the energy is invariant under changing these parameters, leading to an $m$-fold degeneracy of the ground state \cite{Haldane2}. Here this symmetry is absent and the energy depends on $Z_\beta$. Not surprisingly, the overlap between the variational state and the exact ground state is directly correlated with the energy. This overlap has a maximum when the variational energy has a minimum. 
\begin{figure} \includegraphics[scale=.42] {overlap} \caption{(Color online) Variational energy (a) and the overlap with the exact ground state (b) as a function of the center of mass zeros $Z_1 = X_1+iY_1$, $Z_2 = L-Z_1$, measured in units of the lattice constant. As with Fig.~\ref{exact3x3}, we consider an $L\times L$ cell with $L = 3$, flux per plaquette $\alpha = 2/3$, filling factor $\nu=1/2$ and total particle number $M=12$. We take $K_x = 0$, $K_y = 0$, and $t = 0.01 U$. The lower the variational energy, the higher the overlap. The lowest energy occurs for $X_1 = Y_1 = L/2$ where the overlap is $96.4\%$. At this point, the variational energy is $0.3186 U$, which is very close to the exact ground state energy of $0.3176 U$.\label{overlap}} \end{figure} \section{Variational Monte-Carlo}\label{monte-carlo} Unfortunately the maximum size of the system which can be treated by the techniques of Sec.~\ref{exact} is quite limited. Our preceding results for small system size predominantly serve as a guide for physical intuition, and cannot quantitatively describe the physics of the infinite system. Here we introduce a variational Monte-Carlo (VMC) algorithm \cite{Monte Carlo} in order to calculate the energy $\langle \Psi|H|\Psi\rangle=\langle \Psi|H_0-\mu M|\Psi\rangle$, where $M$ is the total number of particles. This will allow us to make a more solid comparison of the energies of the superfluid and correlated states, and draw the phase diagram in Fig.~\ref{phasediagram}. This phase diagram illustrates the regions of the $t$--$\mu$ plane where either the superfluid or the correlated state has the lower energy. We begin by introducing a basis $|R=\{z_1,\cdots,z_N\}\rangle$ where the $N$ excess particles are at sites $z_1$ through $z_N$. This allows us to write \begin{eqnarray}\label{VMC expectation} \langle\Psi| H |\Psi\rangle &=& \sum_{RR^\prime} \langle \Psi|R\rangle \langle R |H|R^\prime\rangle \langle R^\prime |\Psi\rangle = \sum_R P_R E_R,\nonumber\\ P_R&=&|\langle R|\Psi\rangle|^2, \\\nonumber E_R&=&\sum_{R^\prime} \frac{ \langle R |H|R^\prime\rangle \langle R^\prime |\Psi\rangle}{ \langle R |\Psi\rangle}. \end{eqnarray} We use a Metropolis algorithm to sample the sum over $R$. Starting from some configuration $R_0$ we generate a new one $R_1$ by attempting to move a single particle by one site. We accept the move with probability $\min\{1, P_{R_{1}}/P_{R_{0}}\}$; we then continue the procedure to generate $R_2$, $R_3$, \ldots. In the resulting Markov chain each configuration $R$ will appear with probability $P_R$. After $S$ steps, the energy is then estimated as $E_S=\sum_{i=1}^S E_{R_i}/S$. As is usual, we discard the first few thousand steps so as not to bias the sum by our choice of initial configuration. We use a binning analysis to estimate the statistical error on our sum \cite{Ambegaokar}. For each $R$, we calculate $E_R$ directly. The Hamiltonian only connects a finite number of different configurations (those which differ by moving one particle by one site), and the sum is straightforward numerically. As a further simplification we note that $E(\mu,t)= E_0(\mu)-(1+n_0)t K$, where $E_0=U n_0 \varepsilon + U (n_0-1)n_0/2 -\mu(n_0+\varepsilon)$ is the expectation value of the on-site terms in $H$ and $-(1+n_0)t K$ is the expectation value of the hopping energy. $K$ is independent of $n_0$, as the only role of the Mott background is to provide a Bose-enhancement factor of $(1+n_0)$. 
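The structure of this procedure is summarized in the sketch below (illustrative Python, not the production code). Here \texttt{psi\_amp}, \texttt{local\_energy} and \texttt{propose\_move} are hypothetical user-supplied callables standing for the amplitude $\langle R|\Psi\rangle$ of Eq.~(\ref{FQH ansatz}), the local energy $E_R$ of Eq.~(\ref{VMC expectation}), and the single-particle move; the helper \texttt{grand\_energy} simply evaluates the relation just quoted, and all default parameter values are arbitrary.

\begin{verbatim}
import numpy as np

def vmc_energy(psi_amp, local_energy, propose_move, R0,
               nsteps=200000, nburn=5000, nbins=20, rng=None):
    """Metropolis estimate of <Psi|H|Psi> = sum_R P_R E_R, with a
    binning analysis of the statistical error."""
    rng = np.random.default_rng() if rng is None else rng
    R, pR = R0, abs(psi_amp(R0)) ** 2
    samples = []
    for step in range(nsteps):
        Rnew = propose_move(R, rng)
        pnew = abs(psi_amp(Rnew)) ** 2
        if rng.random() * pR < pnew:      # accept with prob min(1, pnew/pR)
            R, pR = Rnew, pnew
        if step >= nburn:                 # discard the equilibration steps
            samples.append(np.real(local_energy(R)))
    samples = np.asarray(samples)
    bins = np.array_split(samples, nbins)
    means = np.array([b.mean() for b in bins])
    return samples.mean(), means.std(ddof=1) / np.sqrt(nbins)

def grand_energy(K, mu, t, n0=1, eps=0.125, U=1.0):
    """E(mu, t) per site = E0(mu) - (1 + n0) * t * K."""
    e0 = U * n0 * eps + 0.5 * U * (n0 - 1) * n0 - mu * (n0 + eps)
    return e0 - (1 + n0) * t * K
\end{verbatim}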
By using the Monte-Carlo algorithm to calculate $K$, rather than $E$, we produce $E(\mu,t)$ for all $n_0$, $\mu$, and $t$ at once. Table~\ref{VMCtable} lists the parameters for which we have performed VMC calculations. For the smallest system sizes ($L=3,4,5$) we find that the VMC agrees with the direct calculation of the variational energy. From the table, we conclude that finite size effects are significant in the $L=3$ cases, but for larger $L$ the differences between the energies of the two system sizes are within a few percent. We have not extrapolated to $L=\infty$. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline $\alpha$&$\varepsilon$&$L$&$N$&$K$&$\delta K$\\ \hline 2/3&1/3&3&3&0.7376&$6\cdot 10^{-4}$\\ &&6&12&0.5187&$4\cdot 10^{-4}$\\ \hline 4/9&2/9&3&2&0.4419&$4\cdot 10^{-4}$\\ &&6&8&0.4455& $2\cdot 10^{-4}$\\ \hline 8/25&4/25&5&4&0.3874&$1\cdot 10^{-4}$\\ &&10&16&0.3873&$5\cdot 10^{-5}$\\ \hline 1/4&1/8&4&2&0.3483&$2\cdot 10^{-4}$\\ &&8&8&0.3375& $4\cdot 10^{-5}$\\ \hline \end{tabular} \caption{Results of our variational Monte-Carlo calculation. $\alpha$ is the number of flux quanta per plaquette, $\varepsilon$ is the density of excess particles, $L$ is the system size, $N$ is the number of excess particles, and $-(1+n_0)t K$ is the hopping energy per site, where $t$ is the hopping matrix element and $n_0$ is the number of particles per site in the underlying Mott state. $K$ is dimensionless. Our estimates of the statistical error in $K$ from a binning analysis of 80000 samples are given by $\delta K$.} \label{VMCtable} \end{table} \begin{figure} \includegraphics[scale=.42] {phasediagram} \caption{Phase diagram for $\alpha = 1/4$ and $\nu = 1/2$. The boundary between the Mott insulator (MI) and superfluid (SF) states is found from a mean-field calculation. The excess particle (or hole) density is $\varepsilon = \alpha \nu = 0.125$. The boundary of the coexistent $\nu = 1/2$ FQH state of excess particles (holes) and $n_0 = 1$ MI state, centered around the 1.125 (0.875) constant density line, is determined from a comparison of VMC and mean-field energies. We consider 8 particles in an $8\times8$ lattice in the VMC calculation. \label{phasediagram}} \end{figure} Fig.~\ref{phasediagram} illustrates our results for $\alpha=1/4$. Near the constant density line $n=1+\varepsilon$, there is a region where our variational wavefunction has a lower energy than the Gutzwiller mean field vortex lattice. This corresponds to an incompressible $\nu = 1/2$ bosonic Laughlin state above the $n_0 = 1$ Mott insulator. The same argument can be advanced for holes by simply replacing the creation operators in Eq.~(\ref{FQH ansatz}) with annihilation operators, leading to a coexisting Mott insulator and FQH state of holes near the $1-\varepsilon$ line, although this region is less pronounced than in the particle case. \section{Creation and Observation} Several labs currently have the technology to create a rotating optical lattice \cite{Williams, Tung}, which can be directly used to create the system described here. Those experiments are still far from the Mott regime, but they are progressing rapidly. An alternative approach to implementing Eq.~(\ref{Bose-Hubbard}) is to use a non-rotating lattice, and generate the phases on the hopping matrix elements by some other means \cite{Artificial field}. The most advanced demonstration of this technique is that of Lin et al. \cite{Lin}. 
One of the more promising schemes for experimentally observing the incompressible states described here is through in-situ imaging of the density profile of a trapped gas \cite{Palmer,Umucalilar2}. The fractional quantum Hall states should appear as extra steps in the density profile near the Mott insulator plateaus. Moreover, as the magnitude of the effective magnetic field increases, these steps move in predictable ways: the density is set by the magnetic flux, and the size of the gap (hence the spatial size of the plateau) increases with magnetic field. One can even imagine that for a fixed flux there will appear a sequence of FQH states with larger even denominators, and thus with smaller densities, all the way up to the MI-SF phase boundary; however, their size will be much smaller and they may not be discernible at all. Other probes for the FQH states may be noise correlations in time-of-flight experiments, measurement of the Hall conductance for the mass current in a tilted lattice, or Bragg spectroscopy \cite{Hafezi,Sorensen,Palmer,Bhat, Moller, Cooper, Umucalilar2}. A major impediment to observing these states is the need to reduce the temperature to below the scale of the gap, which is a fraction of the hopping matrix element $t$. Such temperatures are currently hard to reach reliably. \section{Summary} In summary, we have predicted that experiments on bosons in rotating lattices (or in lattices with an artificial gauge field) will see a phase where the excitations on top of a Mott insulator form a bosonic fractional quantum Hall state. We base our prediction on a set of variational calculations, supplemented by exact diagonalization of small systems. We find that the MI + $\nu=1/2$ Laughlin state has a lower energy than the Gutzwiller mean-field vortex lattice when the density of excess particles/holes, $\varepsilon=N/L^2$, is chosen appropriately ($\varepsilon=\alpha\nu=\alpha/2$), and the hopping $t$ is sufficiently small compared to the interactions $U$. In this regime we find that the overlap between the exact ground state and the Laughlin state is as large as 96\%, but the overlap with the superfluid is smaller than 10\%. We produced a phase diagram (Fig.~\ref{phasediagram}), illustrating where this novel phase should be found at low temperatures. \begin{acknowledgements} R.O.U. is supported by T\"{U}B\.{I}TAK. R.O.U. also wishes to thank Eliot Kapit for useful discussions and acknowledges the hospitality of the LASSP, Cornell University, where this work has been completed. This material is based in part upon work supported by the National Science Foundation under Grant Number PHY-0758104. \end{acknowledgements}
\section{Introduction} Rare events caused by rare realizations of impurities often govern the properties of random systems and play an essential role in their study. The numerical computation of the probabilities of rare events is, however, expensive. When the probability takes very small values, say, $10^{-15}$ or less, it is virtually impossible to calculate the correct value by naive random sampling. Recently, approaches based on dynamic Monte Carlo (Markov chain Monte Carlo)~\cite{metropolis1953equation, landau2005guide, *newman1999monte, *xgilks1996markov} have been shown to be useful for sampling rare events and calculating large deviations in the corresponding statistics. The novelty of the approach is that a dynamic Monte Carlo algorithm is used for calculating sample averages over configurations of impurities, instead of computing thermal averages. Successful examples in physics include applications in spin glass~\cite{koerner2006probing, matsuda2008distribution}, diluted magnets~\cite{hukushima2008monte}, and directed random walks in random media~\cite{monthus2006probing}. Some references~\cite{hartmann2002sampling, holzloohner2003use, *holzlohner2005evaluation, yukito2008testing, driscoll2007searching} also discuss applications in information processing and other engineering problems. The aim of this paper is to apply the method to sample rare events in random matrices. Random matrices have been a classical subject with a number of applications in physics and other fields~\cite{wigner1955characteristic, *wigner1958distribution, dyson1996selected, mehta2004random}. Specifically, large deviations in the maximum eigenvalue of random matrices are a subject of recent interest in various fields such as ecology \cite{may}, cosmology~\cite{aazami2006cosmology}, mathematical statistics~\cite{roy1957some}, and information compression~\cite{candes2006near}. The tail of the distribution of the maximum eigenvalue is important because it gives the probability of all eigenvalues being negative, which is often related to the stability condition of complicated systems~\cite{aazami2006cosmology, may}. A well-known study by Tracy and Widom established the celebrated ``1/6 law''~\cite{tracy1994level,*tracy1996orthogonal} on small deviations in the maximum eigenvalue of random matrices. On the other hand, analytical studies~\cite{dean2006large, dean2008extreme, majumdar2009large, *vivo2007large} of large deviations give estimates of the tails of the probabilities in special cases such as the Gaussian orthogonal ensemble (GOE) and the ensemble of random Wishart matrices. However, the techniques based on the Coulomb gas representation are difficult to generalize to ensembles with other distributions of the components. Other results by mathematicians and physicists are also limited to special ensembles and/or give only upper bounds on the probabilities~\cite{candes2006near}. Thus, an efficient numerical approach that enables exploration of extreme tails of the density is necessary. We propose a method based on multicanonical Monte Carlo~\cite{berg1991multicanonical, *berg1992multicanonical, berg1992new} as a promising approach to the problem. As we will show in this study, quantitative results are obtained in examples of sparse random matrices and matrices whose components are subject to the uniform density. A similar method is used in~\cite{driscoll2007searching} to calculate large deviations in the \textit{growth ratio} of matrices. 
The paper~\cite{driscoll2007searching}, however, focuses on applications in numerical analysis and does not compute large deviations in the largest eigenvalues. The organization of this paper is as follows: In Sec.~\ref{sec:multi}, we summarize the multicanonical Monte Carlo algorithm. In Sec.~\ref{sec:dev}, we discuss how multicanonical Monte Carlo is used to calculate large deviations and show the results of numerical experiments on the tails of the distribution of the largest eigenvalues. Sec.~\ref{sec:prob} covers the computation of the probability that all eigenvalues are negative; as noted above, this is a typical application of the proposed method. In Sec.~\ref{sec:sparse}, sparse matrices are treated. In Sec.~\ref{sec:con}, concluding remarks are given. \section{Multicanonical Monte Carlo} \label{sec:multi} Let us summarize the idea of multicanonical Monte Carlo~\cite{berg1991multicanonical, *berg1992multicanonical, berg1992new}. Given the energy $E(x)$ of a state $x$, our task is to calculate the density $D(E)$ of states defined by \begin{equation}\label{ddd} D(E)= \int \delta(E(x)-E) \, dx, \end{equation} where $\delta$ is the Dirac $\delta$-function, and $\int \cdots dx$ denotes a multiple integral in the space of states $x$. A key quantity of multicanonical Monte Carlo is the weight function $f(E)$ of the energy $E$. Performing dynamic Monte Carlo sampling with the weight $f(E(x))$, we modify $f(E)$ step-by-step until the marginal density $q(E)$ of $E$ is almost flat in a prescribed interval $E_{\min}<E<E_{\max}$. The initial form of $f(E)$ is arbitrary, and we can start, for example, from a constant function. Several methods have been proposed for optimizing a univariate function $f(E)$, among which the method proposed by Wang and Landau~\cite{wang2001efficient, *wang2001determining} is the most useful and is the one used in this paper. After a weight function $f^*(E)$ that gives a sufficiently flat $q(E)$ is obtained, we compute an accurate estimate $q^*(E)$ of $q(E)$ by a long simulation run with the weight $f^*(E(x))$. Then, $D(E)$ is estimated by the relation $$ D(E) \propto \frac{q^*(E)}{f^*(E)} {}_, \qquad (E_{\min}<E<E_{\max}) {}_. $$ This simple algorithm has significant advantages over conventional methods for the estimation of $D(E)$. First, it realizes accurate sampling of the tails of $D(E)$ without estimating densities in the high-dimensional state space of $x$. Second, when we include the region of $E$ with large values of $D(E)$ in the interval $E_{\min}<E<E_{\max}$, the mixing of the dynamic Monte Carlo is dramatically facilitated. This ``annealing'' effect is the reason that multicanonical Monte Carlo is successfully used to calculate thermal averages at low temperatures in the studies of spin glass~\cite{berg1992new} and biomolecules~\cite{hansmann1993prediction}. \section{Large deviations in the largest eigenvalues} \label{sec:dev} An essential observation in the present approach is that the energy $E$ in multicanonical Monte Carlo need not be an energy in the ordinary sense. That is, we can substitute for $E$ any quantity whose rare fluctuations or large deviations from the average we are interested in; similar approaches to other problems are found in~\cite{ holzloohner2003use, *holzlohner2005evaluation, driscoll2007searching, hukushima2008monte, yukito2008testing, matsuda2008distribution}. In this study, we regard the maximum eigenvalue $\lambda_1(x)$ of a matrix $x$ as a fictitious ``energy'' of the state $x$. 
Also, we can introduce an underlying density $p(x)$ that gives the probability of $x$ under random sampling. While $p(x)$ is the uniform density in statistical mechanics, $p(x)$ in the present case characterizes an ensemble of matrices. Hereafter, we assume the factorization $p(x)= \prod_{ij} p_{ij}(x_{ij})$. The normalized density $D(\lambda_1)$ of the states is written as \begin{equation}\label{ddd1} D(\lambda_1)= \int \delta(\lambda_1(x)-\lambda_1) p(x) \, dx, \end{equation} where we replace $E(x)$ in~(\ref{ddd}) with $\lambda_1(x)$. $D(\lambda_1)$ is simply the probability distribution of $\lambda_1$, whose extreme tails we are interested in. Now the application of multicanonical Monte Carlo is straightforward. We employ a Metropolis-Hastings algorithm to generate samples according to the weight $f(\lambda_1(x))p(x)$. A single component $x_{ij}$ of the random matrix $x$ is chosen and changed at each step; in ensembles of symmetric matrices, $x_{ji}$ should also be changed if $i \neq j$, which is necessary to keep the symmetry of the matrix. The candidate $x_{ij}^{new}$ of $x_{ij}$ is generated according to the proposal density $r_{ij}(x_{ij}^{new}|x^{old})$, where $x^{old}$ is the current value of $x$; $x_{ij}^{new}$ is accepted if and only if the Metropolis ratio $$ R= \frac{p_{ij}(x_{ij}^{new})}{p_{ij}(x_{ij}^{old})} \frac{r_{ij}(x^{old}|x_{ij}^{new})}{r_{ij}(x_{ij}^{new}|x^{old})} \frac{f(\lambda_1(x^{new}))}{f(\lambda_1(x^{old}))} $$ is larger than a random number uniformly distributed in $(0,1]$, i.e., the candidate is accepted with probability $\min\{1,R\}$. Repeating this procedure, the function $f(\lambda_1)$ is tuned by the method of Wang-Landau~\cite{wang2001efficient, *wang2001determining}. Once a weight function $f^*(\lambda_1)$ that gives a sufficiently flat $q(\lambda_1)$ is obtained, we estimate $D(\lambda_1)$ using the formula $$ D(\lambda_1) \propto \frac{q^*(\lambda_1)} {f^*(\lambda_1)} {}_, \qquad (\lambda_1^{\min}<\lambda_1<\lambda_1^{\max}) {}_, $$ where $q^*(\lambda_1)$ is the density of $\lambda_1$ estimated by a long run with the fixed weight function $f^*(\lambda_1)$. A simple choice of the proposal density is $r_{ij}(x_{ij}^{new}|x^{old})=p_{ij}(x_{ij}^{new})$, which results in a simple form of the Metropolis ratio $$ R=\frac{f(\lambda_1(x^{new}))}{f(\lambda_1(x^{old}))}. $$ However, this choice may not be adequate in some cases where the support of the densities $p_{ij}(x_{ij})$ is not finite, because very large deviations in an element $x_{ij}$ can be relevant for large deviations of the largest eigenvalue. An alternative choice is to use $r_{ij}(x_{ij}^{new}|x^{old})=\tilde{r}_{ij}(x_{ij}^{new}-x_{ij}^{old})$, where $\tilde{r}_{ij}( \cdot)$ is an even function; hereafter we will call an algorithm using this proposal density a {\it random walk scheme}. In this case, the Metropolis ratio is given by $$ R=\frac{p_{ij}(x_{ij}^{new})} {p_{ij}(x_{ij}^{old})} \frac{f(\lambda_1(x^{new}))}{f(\lambda_1(x^{old}))}. $$ We test the proposed method with the Gaussian orthogonal ensemble (GOE); the GOE is an ensemble of real symmetric matrices whose entries are independent Gaussian variables~\cite{mehta2004random}. The Householder method is used to diagonalize the matrix in each step; it is also used in other examples in this paper. 
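To make the procedure concrete, the following is a compact, self-contained sketch (illustrative Python, not the authors' code) of the Wang-Landau tuning of the weight for the largest eigenvalue of a GOE-type matrix, using the independence proposal $r_{ij}=p_{ij}$. The entry variances (1 on the diagonal, 1/2 off it, as implied by the variance matching in Sec.~\ref{sec:prob}), the $\lambda_1$ window, the number of bins and the fixed stage schedule are all illustrative choices; in practice a flatness check on the sampled histogram controls when the modification factor is reduced.

\begin{verbatim}
import numpy as np

def wang_landau_lambda_max(N=8, lam_window=(-2.0, 5.5), nbins=60,
                           n_stages=12, steps_per_stage=20000, seed=1):
    """Wang-Landau estimate of log D(lambda_1) for an N x N real symmetric
    matrix with independent Gaussian entries (variance 1 on the diagonal,
    1/2 off the diagonal).  Independence proposal, so the acceptance
    ratio involves only the weights."""
    rng = np.random.default_rng(seed)
    lo, hi = lam_window

    def draw_entry(i, j):
        return rng.normal(0.0, 1.0 if i == j else np.sqrt(0.5))

    def bin_of(x):
        lam = np.linalg.eigvalsh(x)[-1]       # full diagonalization each step
        b = int((lam - lo) / (hi - lo) * nbins)
        return b if 0 <= b < nbins else None  # outside the window -> reject

    x = np.zeros((N, N))                      # initial matrix from the ensemble
    for i in range(N):
        for j in range(i, N):
            x[i, j] = x[j, i] = draw_entry(i, j)
    b = bin_of(x)
    assert b is not None, "enlarge lam_window to contain typical lambda_1"
    log_g = np.zeros(nbins)                   # running estimate of log D
    ln_f = 1.0                                # Wang-Landau modification factor
    for _ in range(n_stages):
        for _ in range(steps_per_stage):
            i, j = rng.integers(N), rng.integers(N)
            old = x[i, j]
            x[i, j] = x[j, i] = draw_entry(i, j)
            b_new = bin_of(x)
            # sampling weight f = exp(-log_g); accept with prob min(1, f_new/f_old)
            if b_new is not None and np.log(rng.random()) < log_g[b] - log_g[b_new]:
                b = b_new
            else:
                x[i, j] = x[j, i] = old       # reject the move
            log_g[b] += ln_f
        ln_f *= 0.5                           # refine the modification factor
    return np.linspace(lo, hi, nbins + 1), log_g - log_g.max()
\end{verbatim}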
We employ two different forms of the proposal density: (1)~$r_{ij}(x_{ij}^{new}|x^{old})=p_{ij}(x_{ij}^{new})$ and (2)~$r_{ij}(x_{ij}^{new}|x^{old})=p_{ij}(x_{ij}^{new}-x_{ij}^{old})$; the latter is a special case of the random walk scheme, where $\tilde{r}_{ij}(\cdot) =p_{ij}(\cdot)$~\footnote{This choice of the proposal density is somewhat arbitrary; for example, we could instead use Gaussian densities with zero mean and different variances.}. No significant difference between (1) and (2) is seen in our study. In Fig.~\ref{fig:1}, a result estimated with the proposed method with $r_{ij}(x_{ij}^{new}|x^{old})=p_{ij}(x_{ij}^{new})$ is compared with the corresponding result of naive random sampling. The total number of matrix diagonalizations is $4.5\times10^8$ for $N=64$ and $2.25\times10^8$ for $N=128$; these numbers are the same for the proposed method and for naive random sampling. In the proposed method, two thirds of the diagonalizations are used to optimize the weight, while the rest are used to calculate the estimates. We confirmed that the modification factors in the Wang-Landau method are sufficiently close to unity at the end of the weight optimization. For $N=64$, we also apply the random walk scheme $r_{ij}(x_{ij}^{new}|x^{old})=p_{ij}(x_{ij}^{new}-x_{ij}^{old})$ and obtain the same result, but the computational time increases. The results in Fig.~\ref{fig:1} show that the proposed method enables us to estimate the tails of the density down to $\sim 10^{-15}$, a region that is scarcely sampled by the naive method. \begin{figure} \begin{center} \includegraphics{graph_TW_64Gibbs128Gibbs_ver2.eps} \small \caption{Density $D(\lambda_1)$ in the GOE. Results of the proposed method and a naive random sampling method are compared for $N=64$ and $128$; the two system sizes almost overlap under the $N^{1/6}$ scaling. The symbols \mbox{+} and $\times$ appear only in the region where naive random sampling gives meaningful results.} \label{fig:1} \end{center} \end{figure} \section{The probability that all eigenvalues are negative} \label{sec:prob} The proposed strategy also allows us to calculate the probability $P(\forall i, \lambda_i < 0)$ that all eigenvalues of a random matrix are negative, which is important in applications in a variety of fields~\cite{aazami2006cosmology, may}. Using the relation $\forall i, \lambda_i \leq \lambda_1$, this probability is calculated by $$ P(\forall i, \lambda_i < 0)= {\int_{\lambda_1^{\min}}^{0} D(\lambda_1) \,d\lambda_1} {}_. $$ Here we assume that the density $D(\lambda_1)$ of the maximum eigenvalue is estimated in an interval $[\lambda_1^{\min},\lambda_1^{\max}]$ by the proposed method. The probabilities {$P(\lambda_1<\lambda_1^{\min})$} and {$P(\lambda_1^{\max}<\lambda_1)$} are also assumed to be negligibly smaller than {$P(\lambda_1^{\min}<\lambda_1<0)$} and {$P(\lambda_1^{\min}<\lambda_1<\lambda_1^{\max})$}, respectively. The probability that all eigenvalues are positive can also be calculated in a similar way; it coincides with the probability that all eigenvalues are negative when the distribution of the components is symmetric with respect to the origin. First, we test the proposed method with the GOE, where the asymptotic behavior of $P(\forall i, \lambda_i < 0)$ for large $N$ is given by Dean and Majumdar \cite{dean2006large, dean2008extreme} as $$ P(\forall i, \lambda_i < 0)\sim \exp\left ( -aN^2 \right ) {}_, $$ where $a=\frac{\ln 3}{4} =0.274653\cdots$ . This expression is derived by interpreting the eigenvalues as a Coulomb gas, a method that obviously does not apply to general distributions of the components of the matrices. 
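For small $N$ the leading-order formula can be checked directly by naive sampling, as in the minimal sketch below (illustrative Python; GOE-type entries with the variance ratio used in this section; the sample size is an arbitrary choice). Since $P(\forall i, \lambda_i < 0)$ is unchanged by an overall rescaling of the matrix, only the ratio of the diagonal to off-diagonal variances matters here.

\begin{verbatim}
import numpy as np

def p_all_negative_naive(N, nsamples=200000, seed=0):
    """Fraction of sampled GOE-type matrices (diagonal variance 1,
    off-diagonal variance 1/2) whose eigenvalues are all negative."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(nsamples):
        a = rng.normal(size=(N, N))
        x = (a + a.T) / 2.0              # symmetric, correct variance ratio
        if np.linalg.eigvalsh(x)[-1] < 0.0:
            hits += 1
    return hits / nsamples

a_coef = np.log(3.0) / 4.0
for N in (2, 3, 4, 5):
    # leading order only: the subleading O(N) corrections quoted below
    # are not negligible at these small N
    print(N, p_all_negative_naive(N), np.exp(-a_coef * N * N))
\end{verbatim}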
Confirming the result by numerical methods is difficult because we must sample very rare events to estimate the tails of the distribution. Dean and Majumdar~\cite{dean2006large, dean2008extreme} (and Aazami and Easther~\cite{aazami2006cosmology}) provided numerical results by naive random sampling, but their results are limited to small $N$, such as $N=7$ in~\cite{dean2006large, dean2008extreme} ($N=8$ with an additional assumption~\cite{dean2008extreme}). Dean and Majumdar also performed numerical computations up to $N=35$ based on the Coulomb gas representation; their computation does not, however, provide an independent check of the theory and cannot be generalized to an arbitrary ensemble. Fig.~\ref{fig:exceed0_GOE} shows our numerical results for the GOE. We can treat matrices up to $N = 40$, a size that cannot be reached by naive random sampling. Here we use the random walk scheme $r_{ij}(x_{ij}^{new}|x^{old})=p_{ij}(x_{ij}^{new}-x_{ij}^{old})$, and the total number of matrix diagonalizations is $3\times 10^9$ for $N=4 \sim 20$ and $4.5\times 10^9$ for $N=22 \sim 40$. The results for $N \leq 7$ coincide with those obtained by naive random sampling. They are also consistent with the fit $-0.272N^2-0.493N+0.244$ of the numerical results calculated in~\cite{dean2008extreme} with the Coulomb gas representation. Hence, probabilities as tiny as $\sim 10^{-200}$ are estimated by the proposed method and agree well with the known results. \begin{figure} \begin{center} \includegraphics{p_GOE_4.eps} \small \caption{Probability $P(\forall i, \lambda_i < 0)$ for the GOE versus the size $N$ of the matrices. The results of the proposed method (+) and the naive random sampling method ($\odot$) are shown. The results of naive random sampling are available only in the region $N \leq 7$. The curve indicates a quadratic fit to the results with the Coulomb gas representation given in Dean and Majumdar~\cite{dean2008extreme}.} \label{fig:exceed0_GOE} \end{center} \end{figure} Next, to show the flexibility of the proposed method, we calculate the probability $P(\forall i, \lambda_i < 0)$ for an ensemble of real symmetric matrices whose entries are independently distributed with the uniform distribution $p_{ij}(x_{ij})$ defined by $$ p_{ij}(x_{ij})= \left\{ \begin{array}{ccc} \frac{1}{2\sqrt{3}L} {}_, & \mbox{$|x_{ij}|<\sqrt{3}L$} & \mbox{and $i=j$,} \\ \frac{1}{\sqrt{6}L} {}_, & \mbox{$|x_{ij}|<\sqrt{\frac{3}{2}}L$} & \mbox{and $i \neq j$,} \\ 0 & \mbox{else.} &\\ \end{array} \right. $$ Hereafter, the value of the parameter $L$ is unity, which fits the variances of the components to those of the GOE. Results of the proposed method for this ensemble are shown in Fig.~\ref{fig:exceed0_uniform}. The proposal density $r_{ij}(x_{ij}^{new}|x^{old})=p_{ij}(x_{ij}^{new})$ is used. The total number of matrix diagonalizations is $4.5\times 10^9$ for each value of $N$; two thirds of these are used to optimize the weight. Fitting the results yields the asymptotic behavior of the probability, $$ P(\forall i, \lambda _i < 0) \sim \exp \left ( -aN^2-bN-c \right ) $$ for large $N$, where $a=0.679$, $-b=4.76$, and $c=17.31$. As shown in Fig.~\ref{fig:exceed0_uniform}, these probabilities differ significantly from those for the GOE with the same variance. \begin{figure} \begin{center} \includegraphics{uniform_3.eps} \small \caption{ Probability $P(\forall i, \lambda_i < 0)$ for an ensemble of matrices whose components are uniformly distributed. The horizontal axis corresponds to the size $N$ of the matrices. 
The results of the proposed method (+) and the naive random sampling method ($\odot$) are shown. The results of naive random sampling are shown for $4 \leq N \leq 7$. The solid curve indicates the probability for the GOE with the same variance. } \label{fig:exceed0_uniform} \end{center} \end{figure} \section{Sparse Random Matrices} \label{sec:sparse} We also study ensembles of sparse random matrices. Once the matrices become sparse, the Coulomb gas approach is not applicable even in Gaussian cases. The proposed approach allows us to calculate the probability $P(\forall i, \lambda_i < 0)$ in these cases. In this section, we use the proposal density $r_{ij}(x_{ij}^{new}|x^{old})=p_{ij}(x_{ij}^{new})$, but some of the results are also checked with random walk schemes. Various ways of defining sparse random matrices are available. Among them, we consider two types of definitions in this study. The first is as follows: (1)~The matrix is symmetric. (2)~All diagonal entries are $-1$. (3)~Nonzero off-diagonal entries in the upper half of the matrix are mutually independent Gaussian variables with zero mean and unit variance. (4)~The total number of nonzero entries is fixed at $\gamma N$, where $\gamma$ is the average number of nonzero entries per row. (5)~The positions of nonzero off-diagonal entries in the upper half of the matrix are randomly chosen. The total number of nonzero components should be preserved with this definition. Hence, the single-component update of the previous sections is replaced by a trial exchange of a zero and a nonzero component, with resampling of the nonzero component. Other parts of the algorithm remain essentially the same. An example of the density $D(\lambda_1)$ computed by this modified method is shown in Fig.~\ref{fig:old_sparse}. \begin{figure} \begin{center} \includegraphics{sparse_DOS_dminus1_N30p3_Gibbs.eps} \small \caption{ Density $D(\lambda_1)$ for a case of sparse random matrices. The first definition is applied; results of the proposed method and the naive random sampling method are compared for $N=30$ and $\gamma=3$. The symbol \mbox{+} appears only in the region where naive random sampling gives nonzero results. } \label{fig:old_sparse} \end{center} \end{figure} The probability $P(\forall i, \lambda _i < 0)$ that all eigenvalues are negative is also successfully calculated by this algorithm for $\gamma=3,4$, and $5$, as shown in Fig.~\ref{fig:exceed0_sparse}. These results indicate that for sparse random matrices, the probability $P(\forall i, \lambda _i < 0)$ behaves as $$ P(\forall i, \lambda _i < 0) \sim \exp \left ( -a _\gamma N \right ) $$ for large $N$, where the estimated values of the constants $a _\gamma$ are $a _3=0.68$, $a _4=1.20$, and $a_5 =1.81$ for $\gamma=3, 4$, and $5$, respectively. \begin{figure} \begin{center} \includegraphics{sparse_olddef_Gibbs.eps} \small \caption{ Probabilities $P(\forall i, \lambda_i < 0)$ for an ensemble of sparse random matrices estimated by the proposed method. The first definition is applied; the results with $\gamma=3$, $4$, and $5$ versus the size $N$ of the matrices are shown. The lines show linear fits of the data. } \label{fig:exceed0_sparse} \end{center} \end{figure} In the case of sparse matrices, the log-probability $\log P(\forall i, \lambda _i < 0)$ is linear in $N$, which is apparently different from the behavior proportional to $N^2$ seen in the previous two examples. 
However, if we plot the probability $P(\forall i, \lambda _i < 0)$ against the number $M$ of nonzero components instead of the size $N$, the dependence is linear in all examples. Because $M \propto N$ in a sparse case and $M \propto N^2$ in a dense case, the obtained results are naturally explained. The definition of sparse random matrices most frequently used in the literature~\cite{ rodgers1988density, *mirlin1991universality, *semerjian2002sparse} differs from that given above. Here, a second definition of sparse random matrices is given by assigning the probability $$ p_{ij}(x_{ij})=\left (1-\frac{\gamma}{N} \right )\delta (x_{ij})+\frac{\gamma}{N}\pi (x_{ij}) $$ to all components $x_{ij}, \,\, \mbox{\small ($ i \geq j$)}$, where $\delta$ and $\pi$ denote Dirac's delta function and a Gaussian density with zero mean and unit variance, respectively; each component is assumed to be an independent sample from this distribution. In this case, all components are mutually independent and the modification for keeping the number of nonzero components fixed is not necessary. However, since the diagonal elements can vanish, singular behavior of the density $D(\lambda_1)$ of states appears at $\lambda_1=0$, which affects the efficiency of the proposed method. Fortunately, when we are interested in $P(\forall i, \lambda _i < 0)$, this difficulty is easily treated; we use the fact that the condition that all diagonal elements are negative, \mbox{$\forall i, x_{ii}<0$}, is a necessary condition for \mbox{$\forall i, \lambda _i < 0$}. By using this condition, the following two-stage method is introduced. First, we calculate the conditional probability \mbox{$P(\forall i, \lambda _i < 0 \, | \, \forall i, x_{ii}<0)$}. This conditional probability can be calculated with a multicanonical algorithm in which we reject any state with $\exists i, x_{ii} \geq 0$. The second step is to calculate the probability \mbox{$P( \forall i, x_{ii}<0)$}. Elementary calculation shows that \begin{equation} \label{eq:relation} P(\forall i, x_{ii}<0) = \left(\frac{\mathstrut 1}{ \mathstrut2}\right)^N \times \left (\frac{\mathstrut \gamma}{\mathstrut N} \right )^N. \end{equation} Then, the probability $P(\forall i, \lambda _i < 0)$ is given by the product \mbox{$P(\forall i, \lambda _i < 0 \, | \, \forall i, x_{ii}<0) \times P(\forall i, x_{ii}<0)$}. Fig.~\ref{fig:exceed00new_sparse} shows examples of the conditional probability \mbox{$P(\forall i, \lambda _i < 0 \, | \, \forall i, x_{ii}<0)$} calculated in the first step; it is linear in $N$ on a semi-log scale, as expected. The resulting probability \mbox{$P( \forall i, \lambda _i<0)$} is shown in Fig.~\ref{fig:exceed0new_sparse}. In this case, \mbox{$\log P( \forall i, \lambda _i<0)$} is no longer linear in $N$ because of an $O(N\log N)$ term arising from (\ref{eq:relation}). These probabilities are fitted as $$ P( \forall i, \lambda _i<0) \sim \left (\frac{\gamma}{2N} \right )^N \exp(-a_{\gamma} N) {}_, $$ where $a_3=0.845$, $a_4=1.14$, $a_5=1.44$, and $a_6=1.75$ for $\gamma=3, 4, 5$, and $6$, respectively. \\ \begin{figure} \begin{center} \includegraphics{sparse_dneg_Gibbs.eps} \small \caption{ Probabilities $P(\forall i, \lambda _i < 0 \, | \, \forall i, x_{ii}<0)$ for an ensemble of sparse random matrices estimated by the proposed method. The second definition is applied; the results with $\gamma=3$, $4$, $5$, and $6$ versus the size $N$ of the matrices are shown. The lines show linear fits to the data. The results of naive random sampling are also shown. 
} \label{fig:exceed00new_sparse} \end{center} \end{figure} \begin{figure}[thb] \begin{center} \includegraphics{sparse_dneg_Gibbs2.eps} \small \caption{ Probabilities $P(\forall i, \lambda_i < 0)$ for an ensemble of sparse random matrices estimated by the proposed method. The second definition is applied; the results with $\gamma=3$, $4$, $5$, and $6$ versus the size $N$ of the matrices are shown. } \label{fig:exceed0new_sparse} \end{center} \end{figure} \section{Concluding Remarks} \label{sec:con} A method based on multicanonical Monte Carlo is proposed and applied to the estimation of large deviations in the largest eigenvalue of random matrices. The method is successfully tested with the Gaussian orthogonal ensemble (GOE), an ensemble of matrices whose components are uniformly distributed in an interval, and an ensemble of sparse random matrices. The probabilities that all eigenvalues of a matrix are negative are successfully estimated in cases where naive random sampling is largely ineffective; the smallest values of the obtained probabilities are $\sim 10^{-200}$. The method can be applied to any ensemble of matrices. Moreover, it enables sampling of rare events defined by any statistic. Hence, it will be interesting to apply the method to large deviations in other quantities, such as statistics involving the eigenvectors or the spacing of the eigenvalues. \begin{acknowledgments} We thank Prof.~M.~Kikuchi for his support and encouragement. This work is supported by Grants-In-Aid for Scientific Research (KAKENHI, No.17540348 and No.18079004) from MEXT of Japan. This work is also supported in part by the Global COE Program (Core Research and Engineering of Advanced Materials-Interdisciplinary Education Center for Materials Science), MEXT, Japan. All simulations were performed on a PC cluster at the Cybermedia Center, Osaka University. \end{acknowledgments} \input{iba_Random_matrix_ver23_mono4.bbl} \end{document}
\section{Introduction} The brightest Hyper Luminous X-ray source (HLX-1) was discovered serendipitously with {\it XMM-Newton} on 2004 November 23 \citep{Farrell09} in the outskirts of the edge-on spiral galaxy ESO 243--49, at a redshift of 0.0224 \citep{Afonso05}. Its 0.2 -- 10 keV unabsorbed X-ray luminosity, assuming isotropic emission and the galaxy distance (95 Mpc), exceeded $1.1 \times 10^{42}$ ergs s$^{-1}$. Follow-up observations with {\it XMM-Newton} and the {\it Swift} X-ray telescope (XRT) revealed that the source was variable in X-rays by more than one order of magnitude, and that luminosity changes were accompanied by changes in the spectral shape, in a way similar to Galactic black hole systems \citep{godetapjl09}. As argued before, either super-Eddington accretion or beaming of the X-ray emission could account for X-ray luminosities up to $\sim 10^{40}$ ergs s$^{-1}$ for stellar mass black holes (10-50 M$_\odot$), but would require extreme tuning to explain an X-ray luminosity of $10^{42}$ ergs s$^{-1}$. Hence, HLX-1 is an excellent intermediate mass black hole (IMBH) candidate \citep{Farrell09}. To verify the IMBH nature it is essential that its association with the host galaxy be confirmed. To do this, an improved position is crucial in order to carry out multi-wavelength follow-up observations. Subsequently, it is necessary to determine whether the source resides in an environment which can provide sufficient material to power such a massive black hole \citep{milos09}. Again, multi-wavelength observations are the key to determining whether this object is in a star cluster, a star forming region or a globular cluster. As ULXs emit mostly in the X-ray domain, X-ray observations are the key to understanding how much material is required to feed the black hole, as the luminosity is directly related to the mass accretion rate. Therefore constant monitoring of the X-ray emission will allow us to place constraints on the quantity of material necessary in the neighbouring environment. Here we present {\it Chandra} observations of HLX-1 with the High Resolution Camera (HRC) accorded under the Director's Discretionary Time (DDT) program, which have allowed us to revise and improve the position of HLX-1, along with {\em Swift} XRT data revealing that the X-ray luminosity of this source remains above $10^{42}$ ergs s$^{-1}$, following its recent rebrightening in August 2009 \citep{godetapjl09}. We discuss the mass accretion rate of HLX-1 using these observations. We also reveal evidence for the nature of the environment around HLX-1 through ultra-violet imaging with the {\it Swift} UV/Optical Telescope (UVOT), a finding independently supported by archival {\em Galaxy Evolution Explorer} ({\it GALEX}) observations. \section{Data analysis} \subsection{The first HRC-I observation} We obtained a 1 ks DDT observation of HLX-1 with the HRC-I camera onboard {\it Chandra} on 2009 July 4 (ObsID: 10919). We extracted all the events in the detector (i.e.\ the background) and plotted the number of counts per energy channel (PI) with the {\scriptsize CIAO} v4.1.1 task {\scriptsize WAVDETECT}. Between PI = 25 and 120 the background is lower and the sensitivity of the instrument is higher \citep[see e.g.][]{cameron}. We thus filtered the event list to include only these energies and performed source detection with {\scriptsize WAVDETECT} utilising an exposure map. No source is detected within the {\it XMM-Newton} error circle of HLX-1, and only 2 sources are detected in the field of view. 
The two sources that are detected correspond to known {\it XMM-Newton} sources: 2XMM J011050.4-460013, which was similar in flux to the {\it XMM-Newton} detection of HLX-1, and 2XMM J010953.9-455538. These sources have 17 and 6 counts, respectively, in the HRC-I image, which gives 0.018 and 0.006 ct s$^{-1}$. Therefore, HLX-1 must have dropped in flux down to at most 0.006 ct s$^{-1}$ (the faintest source detected), compared to 0.03 ct s$^{-1}$ expected from the flux in the most recent {\it XMM-Newton} observation. This faint flux is confirmed using {\em Swift} observations taken one month later \citep{godetapjl09}. \subsection{The second HRC-I observation} Following the re-brightening of HLX-1 \citep{godetapjl09} we obtained a second, deeper DDT observation of 10 ks with the {\it Chandra} HRC-I on 2009 August 17 (ObsID: 11803). We generated several images of the HRC-I field of view with a binning of 4, 8, 16 and 32 pixels, and performed source detection using {\scriptsize WAVDETECT}. A total of 11 sources were detected, including a source consistent with the position of HLX-1 with a net count rate of 0.098 $\pm$ 0.003 ct s$^{-1}$. We cross-matched the entire source list with the 2MASS catalogue \citep{skrutskie} and found 3 matching objects which appear to be the real counterparts to the X-ray sources. A boresight correction was applied to the X-ray image, which led to a small shift of 0.285\arcsec. After applying the astrometry correction to the image, {\scriptsize WAVDETECT} was run again. The final corrected position for HLX-1 was found to be RA = 01$^h$10$^m$28$\fs$3 and Dec = -46$^\circ$04'22$\farcs$3, which is consistent with the previously derived positions. The error on this position was then estimated from the 1 $\sigma$ error of the 2MASS catalogue at that position \citep[0.1\arcsec\ see][]{skrutskie}, the 1 $\sigma$ {\scriptsize WAVDETECT} 2D error (0.030\arcsec), and the root mean square (RMS) of the alignment of the X-ray image on the 2MASS reference stars (0.003\arcsec). Combining these errors, a total 95\% error of 0.3\arcsec\ was derived. \subsection{The {\it Swift} UVOT data} The {\it Swift} UVOT observed the field of ESO 243-49 on 2009 August 5, 6, 16, 18, 19 and 20 for a total exposure of 38~ks. Observations were performed in the {\it uvw2 } ($\sim$1600-2500\AA) filter only. Data reduction and analysis were performed using the software release {\sc headas} 6.6.2 and version 20090630 of the UVOT calibration files. At the location of the core of ESO 243-49 there is an extended object (Figure \ref{fig2}), with some hints of elongated emission towards the position of HLX-1. No other source is observed above the flux level of this galaxy at the {\it Chandra} position of HLX-1. We cannot determine an upper limit by subtracting the galaxy contribution at that position as we do not know when or even if HLX-1 stops emitting in the {\it uvw2 } wavelength range. A source at the X-ray position would only be discernible from the galaxy if the source were significantly brighter than the emission at that position. Therefore, to determine an upper limit to the detection of the ULX above the flux of the galaxy, we first co-added the individual observations. We then placed 7 circular apertures of 3\arcsec\ radius on the image, one at the location of the X-ray source and 6 others around an ellipse on the galaxy, at a similar isophotal brightness to the region containing the source. 
Then, using a background region of 20\arcsec\ located in a source-free region close to the galaxy, we determined the magnitude of the galaxy in each aperture using the {\it Swift} tool {\sc uvotsource}. To be compatible with the calibration, which is determined for a 5\arcsec\ aperture \citep{poole}, we applied an aperture correction to the count rate using a table of aperture correction factors contained within the calibration files. We found the standard deviation of the magnitudes from the 7 apertures to be 0.10 mag. We then took the 3 $\sigma$ upper-limit for detecting a source at the {\it Chandra} position to be the magnitude determined from the source region located at the {\it Chandra} position minus 3 times the standard deviation, which gives a 3 $\sigma$ upper-limit of $20.25$ mag. \subsection{The {\it GALEX} data} Given the possible detection of extended emission in the {\it uvw2}, we retrieved archival data taken by {\it GALEX} in the near- (NUV, $\sim$ 1800-2800\AA) and far-UV (FUV, $\sim$1500-2000\AA) bands. The field of HLX-1 was observed by {\it GALEX} as part of the deep survey on 2004 September 27 for $\sim$13 ks in the NUV and $\sim$8 ks in the FUV. A clear extension towards the location of HLX-1 from the bulge of ESO 243-49 can be seen in both images (see Figure \ref{fig3}), with the dominant emission occurring in the FUV. No point source was detected coincident with the position of HLX-1 in either band. In order to determine magnitude upper limits, we measured the counts within regions centered on the {\it Chandra} position with radii corresponding to the 80$\%$ encircled energy radii (3'' and 4'' for the NUV and FUV respectively). The total count rates inside six similar-sized regions located around the galaxy, at the same isophotal brightness as the location of HLX-1, were also determined. Using a source-free area of the image near the galaxy with radii of 6'' and 8'' for the NUV and FUV respectively, we measured the total background rates. These were scaled to the same extraction region size as used for the source regions. All the count rates (both source and background) were then corrected for the wings of the PSF outside the extraction region. Net count rates were calculated by subtracting the background rates from the source rates. Magnitudes were then calculated for the net rates using the zero points given in \citet{Morrissey07}. The standard deviations of the magnitudes from the 7 regions were 0.3 and 0.2 mag for the NUV and FUV respectively. We then determined the 3 $\sigma$ upper limits as the magnitudes derived for the region centered on the HLX-1 {\it Chandra} position minus 3 times the standard deviation, in the same way as for the UVOT data. This gives upper limits of 20.4 mag and 21.4 mag for the NUV and FUV respectively. \subsection{The XRT data} The third Target-of-Opportunity (ToO) observation of our {\it Swift}-XRT monitoring campaign was performed on 2009 October 02 \citep[hereafter designated S4 as it is the fourth of our {\em Swift} observations of this source, see][]{godetapjl09} for 9.4 ks. All the {\it Swift}-XRT Photon Counting data were processed using the tool {\scriptsize XRTPIPELINE} v0.12.3. To analyse the data, we used the same method as described in \citet{godetapjl09}. This new observation revealed that the HLX-1 0.3--10 keV count rate dropped from $\sim 3.3\times 10^{-2}$ to $\sim 1.9\times 10^{-2}$ count s$^{-1}$ when compared to the previous observation taken on 2009 August 16 \citep[S3; see ][]{godetapjl09}. 
Along with this reduction in count rate, we also observe a spectral softening as illustrated in Fig.~\ref{fig1}. The hardness, defined as the ratio of the 1--10 keV count rate over the 0.3--1 keV count rate, is $0.26\pm 0.05$ in S4, compared to a value of $0.42\pm 0.04$ in S3. The errors are quoted at the 1 $\sigma$ level. A single absorbed disk blackbody (DBB) model and an absorbed power-law (PL) model give comparable fits ($\chi^2/dof = 9.8/6$ and $\chi^2/dof = 10/6$, respectively). The $n_H$-value was fixed at $4\times 10^{20}$ cm$^{-2}$ \citep{Farrell09} as the poor quality of the spectra meant that we were not able to derive a meaningful value for this quantity, although the allowed values always included the fixed one. For the DBB model, we obtained a temperature of $kT = 0.20^{+0.03}_{-0.02}$ keV, consistent with the disc blackbody temperature found in previous observations with {\em XMM-Newton} \citep[see][]{Farrell09} and a 0.2--10 keV unabsorbed luminosity of $(1.0\pm 0.3)\times 10^{42}$ erg s$^{-1}$, while for the PL model we obtained a value for the photon index of $\Gamma = 3.5\pm 0.3$ and a 0.2--10 keV unabsorbed luminosity of $1.9^{+0.5}_{-0.4}\times 10^{42}$ erg s$^{-1}$. A full discussion of the spectral nature of HLX-1 can be found in \citet{godetapjl09}. We add here that the spectral state observed for S4 is intermediate between S1 and S3, which appears to portray smooth spectral evolution that resembles that of black hole binaries, supporting the interpretation in \citet{godetapjl09}. However, here we exploit only the luminosity so as to constrain mass accretion limits, as discussed below. \begin{figure}[!t] \includegraphics[angle=-90,width=\columnwidth]{webb_f1.ps} \caption{{\it Swift}-XRT PC grade 0--12 unfolded spectra of HLX-1: S4 $=$ 2009-10-02 (black) and S3 $=$ 2009-08-16 (red). The solid lines correspond to the best-fit models. For S4, we used a single absorbed disk blackbody model. The $n_H$-value was fixed at $4\times 10^{20}$ cm$^{-2}$ in both cases. } \label{fig1} \end{figure} \begin{figure}[!h] \includegraphics[width=\columnwidth]{webb_f2.ps} \caption{{\it Swift}-UVOT un-smoothed {\it uvw2 } image of ESO 243-49. The white contours show the orientation of the galaxy in the J-band (Webb et al. in preparation). The black circle indicated by the white arrow is centered on the {\it Chandra} position of HLX-1, with the radius representing the 95$\%$ error bounds.} \label{fig2} \end{figure} \begin{figure*}[!t] \centerline{\includegraphics[width=\columnwidth]{webb_f3.ps}} \caption{Archival {\it GALEX} near-UV (left) and far-UV (right) images of ESO 243-49. Both images have been smoothed using a 2 pixel radius Gaussian smoothing function. The white contours show the orientation of the galaxy in the J-band (Webb et al. in preparation). The black circles indicated by the white arrows are the {\it Chandra} position of HLX-1, with the radii indicating the 95\% positional error of 0.3". } \label{fig3} \end{figure*} \section{Discussion and conclusions} Using {\em Chandra} observations of HLX-1, we have determined an improved position with a 95\% confidence error radius of 0.3\arcsec. This is sufficiently small that we should eventually be able to identify the source at other wavelengths and perform broad-band spectroscopic observations. 
Although no point-like source is detected with UV observations at this position, HLX-1 appears to be situated near the edge of a region of UV excess stretching from the nucleus of ESO 243-49 towards the {\em Chandra} position. However, it is not yet certain that this emission is related to HLX-1; the fact that no similar extended emission is seen in the radio, infra-red, optical, or X-ray domains \citep[][Webb et al. in preparation]{Farrell09} makes it unlikely that it is either a foreground or background source. Follow-up observations with a higher resolution instrument would also help to confirm the association and the extended nature, as the low resolution of {\em GALEX} (5\arcsec\ Full Width Half Maximum, FWHM) and the {\em UVOT} (somewhat better, at 2.9\arcsec\ FWHM) means that it would be difficult to resolve a source at the centre of the galaxy and a second towards the position of HLX-1. However, the fact that this emission appears to be stronger in the FUV could hint towards star formation taking place in that region, as the UV flux primarily originates from the photospheres of O- through later-type B-stars (M$>$3M$_\odot$), and thus measures star formation averaged over a $\sim$10$^8$ yr timescale \citep[e.g.][]{lee09}. Starburst environments are thought to be able to generate IMBHs through runaway collisions and mergers of massive stars \citep{freitag06}. Further, the massive stars present in such environments can supply the necessary material to be accreted onto the black hole to provide the luminosities observed if one is captured by the black hole and then proceeds to main sequence Roche-lobe overflow \citep{hopman04}. An alternative situation is described by \citet{sun10} where a trail of star formation could be created by ram-pressure stripping of gas and stars from a dwarf galaxy which has recently interacted with ESO 243-49. In this case HLX-1 could have been an intermediate mass black hole which was once at the centre of the dwarf galaxy, but has now had most of the gas and stars stripped from it via the gravitational interaction with ESO 243-49. This may resemble a globular cluster if detected in the optical domain. To determine whether accretion from the medium around the stripped galaxy is possible, we use the latest X-ray observation of HLX-1, which has a 0.2--10 keV unabsorbed luminosity of $1.9^{+0.5}_{-0.4}\times 10^{42}$ erg s$^{-1}$ if we fit with a power law model. This is the highest luminosity that we have observed over the last five years whilst this source has been bright (previous observations with {\em ROSAT} in the early nineties gave non-detections, confirming that the source was more than a factor of 10 fainter than in these observations, Webb et al. in preparation). HLX-1 is not bright in the radio, infra-red, optical or NUV domains \citep[][Webb et al. in preparation]{Farrell09}, so we can assume that the majority of the emission is in the X-rays and we can use this luminosity (L) to deduce the approximate mass accretion rate (\.M), where L = $\eta$ \.M c$^2$ and $\eta$ = GM/(Rc$^2$) (M is the mass of the black hole and R the radius of the innermost stable circular orbit, i.e.\ $6GM/c^2$ or 3 Schwarzschild radii, therefore supposing a non-rotating black hole). We assume a mass of 500 M$_\odot$. We find \.M $\sim$ 2.1 $\times$ 10$^{-4}$ M$_\odot$ yr$^{-1}$. 
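This estimate is simple to reproduce. The sketch below (plain Python, for illustration only) evaluates $\dot M = L/(\eta c^2)$ with $\eta = 1/6$, the value implied by taking $R$ at the innermost stable circular orbit of a non-rotating black hole; note that with $\eta$ fixed, the result is independent of the assumed black hole mass.

\begin{verbatim}
L = 1.9e42          # 0.2-10 keV unabsorbed luminosity, erg/s (power-law fit)
c = 2.998e10        # speed of light, cm/s
eta = 1.0 / 6.0     # GM/(R c^2) with R = 6 GM/c^2 (non-rotating black hole)
msun = 1.989e33     # solar mass, g
year = 3.156e7      # seconds per year

mdot = L / (eta * c * c)          # accretion rate in g/s
print(mdot * year / msun)         # ~2e-4 Msun/yr, as quoted in the text
\end{verbatim}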
This is well within the ranges of the matter available for accretion predicted by \citet{sun10}, therefore supporting the idea that HLX-1 {\em could} be embedded in a star forming region and that it may have originated from a stripped dwarf galaxy. \acknowledgments We thank Neil Gehrels and the {\em Swift team} for according us the {\em Swift} observations as well as Harvey Tananbaum and the {\em Chandra} team for approving the two {\em Chandra} DDT observations. S.A.F. and S.O. acknowledge STFC funding. This research has made use of data obtained from the {\it Chandra} Data Archive and software provided by the {\it Chandra} X-ray Center. We thank the {\it GALEX} collaboration and the Space Telescope Science Institute for providing access to the UV images used in this work. {\it GALEX} is a NASA Small Explorer Class mission. We thank Simon Rosen for valuable discussions and we are grateful to the referee for comments which improved this paper. {\it Facilities:} \facility{{\it Chandra}}, \facility{{\it Swift}}, \facility{{\it GALEX}}
\section{Introduction} \label{sec:intro} This work is dedicated to Albrecht Dold whose enormous influence on the development of topology continues to this very day. In fact, it was his stimulating paper~\cite {DG} (written jointly with Daciberg Gon\c calves) which got me interested in coincidences of maps and which inspired me to introduce a related obstruction theory (cf.~\cite {K2}--\cite {K7}). Here we apply it to the special case where the domain is a sphere. Throughout this paper let $f_1,f_2\:S^m\to N$ be two (continuous) maps into an $n$-dimensional smooth connected manifold $N$ without boundary. We will also assume that $m,n\ge2$ (the remaining cases being well understood). We want to make the coincidence locus $$ \refstepcounter {Thm} \label {eq:1.1} \mathrm {C}(f_1,f_2) = \{x\in S^m\mid f_1(x)=f_2(x)\} \leqno (\theThm) $$ as small as possible (in some sense) by deforming $f_1$ and $f_2$. \begin {Def} \label {def:1.2} (compare~\cite {DG}, p.~293--296 and~\cite {GR2}, \S1, where a different terminology is used for the same concepts; see also~\cite {K7}, 5.3). (i) The pair $(f_1,f_2)$ is called {\it loose} if there exist maps $f_i'$ homotopic to $f_i$, $i=1,2$, whose coincidence locus $\mathrm {C}(f_1',f_2')$ is empty (i.e.\ ``$f_1$ and $f_2$ can be deformed away from one another''). (ii) In the special case $f_1=f_2=:f$ we have a refined notion: the pair $(f,f)$ is {\it loose by small deformation} if for every metric on $N$ and for every $\varepsilon>0$ there exists an $\varepsilon$-approximation $\bar f$ of $f$ such that $\bar f(x)\ne f(x)$ for all $x\in S^m$. \end{Def} \begin{Pro} \label {pro:1.3} All pairs of maps $f_1,f_2\:S^m\to N$ are loose if at least one of the following conditions hold: {\rm (i)} $m<n$; or {\rm (ii)} $N$ is not compact; or {\rm (iii)} the fundamental group $\pi_1(N)$ is not finite; or {\rm (iv)} $N$ is the total space of a Serre fibration with a section and with strictly positive dimensions of the fiber and base (e.g.\ if $N$ is the product of such spaces). \end{Pro} \begin{Pro} \label {pro:1.4} Assume that {\rm (v)} $N$ is noncompact or has trivial Euler characteristic $\chi(N)$ (this holds e.g.\ when $n$ is odd), or {\rm (vi)} $\pi_{m-1}(S^{n-1})=0$. Then for all maps $f\:S^m\to N$ the pair $(f,f)$ is loose by small deformation. \end{Pro} \begin{Ex} [surfaces] \label {ex:1.5} Let $n=2$ and $m\ge1$. Then all such pairs $(f,f)$ are loose by small deformation except when $m=2$ and $N=S^2$ or $\mathbb{RP}(2)$. \end{Ex} It is not hard to verify the looseness claims in~\ref {pro:1.3}, \ref {pro:1.4} and~\ref {ex:1.5} by elementary considerations (see section~\ref {sec:2} below). However, in general a deeper analysis is needed and the following approach has proved to be fruitful. For every pair of (base point preserving) maps $f_1,f_2\:S^m\to N$ we defined (in~\cite {K6}) an invariant $$ \refstepcounter {Thm} \label {eq:1.6} \omega^\#(f_1,f_2) \in \pi_m\left(S^n\wedge(\Omega N)^+\right) \leqno (\theThm) $$ which reflects the geometry of a (generic) coincidence submanifold $C\subset S^m$, its normal bundle as well as certain path space data (for details see~\cite {K6} or section~\ref {sec:3} below). The invariant $\omega^\#(f_1,f_2)$ depends only on the (base point preserving) homotopy classes of $f_1,f_2$ and must vanish if $(f_1,f_2)$ is loose (and, in particular, in all cases listed in proposition~\ref {pro:1.3}). The converse holds in a ``stable'' dimension range. \begin{Thm} \label {thm:1.7} {\rm (compare~\cite {K3}, 1.10 and~\cite{K2}, 2.2). 
} Assume $m<2n-2$. Then a pair $(f_1,f_2)$ is loose precisely if $\omega^\#(f_1,f_2)=0$. In the special case $f_1=f_2=:f$ the pair $(f,f)$ is loose by small deformation precisely if $\omega^\#(f,f)=0$. \end{Thm} Our looseness obstruction is always compatible with additions in homotopy groups: $$ \refstepcounter {Thm} \label {eq:1.8} \omega^\#(f_1+f_1',f_2+f_2') = \omega^\#(f_1,f_2) + \omega^\#(f_1',f_2') \leqno (\theThm) $$ for all $[f_i],[f_i']\in \pi_m(N)$, $i=1,2$. There is also a canonical involution \ $\mathrm {inv}$ \ on the group $\pi_m(S^n\wedge(\Omega N)^+)$. It plays a r\^ole e.g.\ in the symmetry relation $$ \refstepcounter {Thm} \label {eq:1.9} \omega^\#(f_2,f_1) = \mathrm {inv}\left(\omega^\#(f_1,f_2)\right). \leqno (\theThm) $$ \medskip Two special settings are very central in coincidence theory. \bigskip {\bf I. The root case: $f_2=y_0$ } (where the fixed value $y_0\in N$ is given). Here our invariant yields the degree homomorphism $$ \refstepcounter {Thm} \label {eq:1.10} \deg^\#:=\omega^\#(-,y_0)\ \:\ \pi_m(N)\to\pi_m(S^n\wedge(\Omega N)^+). \leqno (\theThm) $$ (For a purely homotopy theoretical interpretation in terms of an enriched Hopf-Ganea invariant see theorem~7.2 in~\cite {K6}.) Clearly $\deg^\#$ vanishes on $i_*(\pi_m(N\setminus\{*\}))$ where $i\:N\setminus\{*\}\hookrightarrow N$ denotes the inclusion of the complement of a point $*$ in $N$. It turns out that the resulting sequence $$ \refstepcounter {Thm} \label {eq:1.11} \pi_m(N\setminus\{*\}) \stackrel {i_*}{\longrightarrow} \pi_m(N) \stackrel {\deg^\#}{\longrightarrow} \pi_m(S^n\wedge(\Omega N)^+) \leqno (\theThm) $$ is very often exact, e.g.\ when $m\le2n-3$ or $n\le2$ or $N$ is a sphere or a (real, complex or quaternionic) projective space of arbitrary dimension (cf.~\cite {K6}, (6.5)). Thus in these cases $\deg^\#(f)$ is the {\it complete} looseness obstruction for the pair $(f,y_0)$. \begin {Que} \label {que:1.12} Is the sequence~(\ref {eq:1.11}) {\it always} exact? \end{Que} The degree $\deg^\#$ can be considered to be our basic coincidence invariant since (in view of~(\ref {eq:1.8}) and~(\ref {eq:1.9})) $$ \refstepcounter {Thm} \label {eq:1.13} \begin {array} {lcl} \omega^\#(f_1,f_2)&=&\omega^\#(f_1,y_0) + \omega^\#(y_0,f_2) \\ &=&\deg^\#(f_1) + \mathrm {inv}(\deg^\#(f_2)) \end{array} \leqno (\theThm) $$ for all $[f_1],[f_2]\in\pi_m(N,y_0)$. \bigskip {\bf II. The selfcoincidence case: $f_1=f_2=:f$. } Consider the bundle $ST(N)$ of unit tangent vectors over $N$ (with respect to some metric) and the resulting exact (horizontal) homotopy sequence as well as the Freudenthal suspension homomorphism $E$: $$ \refstepcounter {Thm} \label {eq:1.14} \xymatrix{ \ldots \ar[r] & \pi_m(ST(N)) \ar[r] & \pi_m(N) \ar[r]^{\!\!\!\!\!\!\d} & \pi_{m-1}(S^{n-1}) \ar[r] \ar[d]^E & \ldots \\ & & & \pi_m(S^n) & . } \leqno (\theThm) $$ \begin {Thm} \label {thm:1.15} Given $[f]\in\pi_m(N)$, we have the following logical implications: \medskip {\rm (i)} $\d([f])\in\pi_{m-1}(S^{n-1})$ vanishes; $\phantom{j}\!$$\Updownarrow$ {\rm (ii)} $(f,f)$ is loose by small deformation; \medskip $\phantom{i}$$\Downarrow$ \quad $\left(\Updownarrow \mbox { if } N=\mathbb{RP}(n)\right)$ \medskip {\rm (iii)} $(f,f)$ is loose (by any deformation); \medskip $\phantom{i}$$\Downarrow$ \quad $\left(\Updownarrow \mbox { if } N=S^n\right)$ \medskip {\rm (iv)} $\omega^\#(f,f)=0$; $\phantom{i}$$\Updownarrow$ {\rm (v)} $E(\d([f])=0$. \end{Thm} Thus $\omega^\#(f,f)$ is just ``one desuspension short'' of being the {\it complete} looseness obstruction. 
The equivalence of~(i) and~(ii) was already noted by Dold and Gon\c calves in~\cite {DG}. Observe also that all conditions (i)--(v) in~\ref {thm:1.15} {\it except} (iii) are compatible with covering projections $p\:\tilde N\to N$ (compare~\cite {K7},1.22). \begin{Cor} \label {cor:1.16} The conditions {\rm (i)}$,\ldots,${\rm (v)} in~$\ref {thm:1.15}$ are all equivalent if the suspension homomorphism $E$, when restricted to $\d(\pi_m(N))$ (cf.~$(\ref {eq:1.14})$), is injective and, in particular, if $m\le n+3$ or if $m=n+4\ne10$ or in the stable dimensional range $m\le 2n-3$. \end{Cor} Indeed, in these three dimension settings $E$ is injective whenever $n\equiv 0(2)$. Clearly conditions (i)--(v) in~\ref {thm:1.15} are automatically satisfied under the assumptions of proposition~\ref {pro:1.4}. \begin{Ex} [$N=\mathbb{RP}(n)$ or $S^n$] \label{ex:1.17} If $m\le n+4$, then the five conditions in~\ref {thm:1.15} are equivalent for all maps $f\:S^m\to\mathbb{RP}(n)$ and $\tilde f\:S^m\to S^n$ (even in the exceptional case $m=n+4=10$ since $\pi_{10}(S^6)=0$). However, this is no longer true for $m=n+5=11$. Indeed, according to~\cite {T} and~\cite{P} we have in~(\ref {eq:1.14}) $$ \frac12H \: \pi_{11} (S^6) \stackrel {\cong}{\longrightarrow} \mathbb {Z}; \quad \pi_{10} (S^5) \cong \mathbb {Z}_2; \quad \pi_{10} (V_{7,2}) =0 $$ where $H$ denotes the Hopf invariant. Thus $\d$ is onto, but $E$ and hence $E\circ\d$ is trivial here. Therefore, given any map $f\:S^{11}\to\mathbb{RP}(6)$ and a lifting $\tilde f\:S^{11}\to S^6$ of it, the invariants $\omega^\#(f,f)$ and $\omega^\#(\tilde f,\tilde f)$ vanish and the pair $(\tilde f,\tilde f)$ is loose (see~\cite{K6}, 1.12; compare also~\cite {GW}). But $(f,f)$ is loose and, equivalently, $(\tilde f,\tilde f)$ is loose by small deformation only (and precisely) if $[f]\in2\cdot\pi_{11}(\mathbb{RP}(6))$ or, equivalently, if $H(\tilde f)\equiv0(4)$ (compare~\cite {GR1} and~\cite{K7}, 1.19; the delicate difference between conditions (ii) and (iii) in~\ref {thm:1.15} is further illustrated e.g.\ in~\cite{GR2} and~\cite{K7}, 1.22). For simple examples of nontrivial $\omega^\#$-values consider the case $m=n$. If a map $\tilde f\:S^n\to S^n$ has (standard) mapping degree $d$, then $$ \pm\omega^\#(\tilde f,\tilde f) = E\circ\d(\tilde f) = \left(1+(-1)^n\right)d $$ in $\pi_n(S^n\wedge(\Omega S^n)^+)\cong\pi_n(S^n)\cong\mathbb {Z}$. Assume that $n$ is even. Then $(\tilde f,\tilde f)$ is loose if and only if $\tilde f$ is nullhomotopic; the same holds for maps from $S^n$ into $\mathbb{RP}(n)$. This shows that the exception made in example~\ref {ex:1.5} is indeed necessary. \mbox{}\hfill$\square$ \end{Ex} \medskip In previous work (cf.~\cite{K6}) we simplified our $\omega^\#$-invariant by {\it assuming} that $f_1$ or $f_2$, say $f_2$, is ``not coincidence producing'' (cf.~\cite{BS}), i.e.\ $$ \refstepcounter {Thm} \label {eq:1.18} (\bar f_2,f_2) \mbox{ is loose for {\it some} map }\bar f_2\:S^m\to N. \leqno (\theThm) $$ Then $(f_1,f_2)$ and $(f_1-\bar f_2,(f_2-f_2)\sim y_0)$ have a similar coincidence behaviour and $$ \refstepcounter {Thm} \label {eq:1.19} \omega^\#(f_1,f_2) = \deg^\#(f_1-\bar f_2). \leqno (\theThm) $$ In this paper we start from the decomposition (up to homotopy) $$ (f_1,f_2) \sim (f_1-f_2,y_0) + (f_2,f_2) $$ which is always available without any assumption. In view of~(\ref {eq:1.8}) this implies the basic equation $$ \refstepcounter {Thm} \label {eq:1.20} \omega^\#(f_1,f_2) = \deg^\#(f_1-f_2)+\omega^\#(f_2,f_2). 
\leqno (\theThm) $$ Vanishing results concerning the second (``selfcoincidence'') summand are not only interesting in view of theorem~\ref {thm:1.15}, but they also allow us to reduce our analysis to studying the degree homomorphism $\deg^\#$ or, equivalently, Hopf-Ganea invariants (cf.~\cite{K6}, 7.2). It turns out that the cardinality $\#\pi_1(N)\in\mathbb {Z}\cup\{\infty\}$ of the fundamental group of $N$ plays a crucial r\^ole. \begin{Thm} \label {thm:1.21} Assume that {\rm (i)} $\#\pi_1(N)>2$ and $N$ is orientable or not; or {\rm (ii)} $\#\pi_1(N)\ge2$ and $N$ is orientable. Then $\omega^\#(f,f)=0$ for all $f\:S^m\to N$. In particular $$ \omega^\#(f_1,f_2)=\deg^\#(f_1-f_2) $$ for all $[f_1],[f_2]\in\pi_m(N)$. Also the set of all possible values of our $\omega^\#$-invariant is restricted by the relation $$ \{\omega^\#(f_1,f_2)\}\ =\ \im(\deg^\#)\ \subset\ \ker(\mathrm {id} + \mathrm {inv}) $$ where \ $\mathrm {id}=$ identity and \ $\mathrm {inv}$ \ denotes the canonical involution of $\pi_m(S^n\wedge(\Omega N)^+)$ (cf.~$(\ref {eq:1.9})$ and~$(\ref {eq:1.13})$). If in addition $m<2n-2$ or $m\le n+3$, then the pair $(f,f)$ is loose by small deformation for every map $f\:S^m\to N$. \end{Thm} On the one hand, the proof (given in section~\ref {sec:4} below) compares the contributions of the pathcomponents of the loop space $\Omega N$ to $\omega^\#(f,f)$; on the other hand, it uses the fact that at most one such pathcomponent can contribute nontrivially to our selfcoincidence invariant. \begin{Ex} \label{ex:1.22} As an application consider the case where $m=n$ and $N$ is an orientable $n$-manifold with $\pi_1(N)=\mathbb {Z}_2$. Then $$ \mathrm {inv}=(-1)^n\mathrm {id} \quad \mbox{on} \quad \pi_n(S^n\wedge(\Omega N)^+)\cong\mathbb {Z}\oplus\mathbb {Z}. $$ If $n$ is even then \ $\ker(\mathrm {id}+\mathrm {inv})=0$; therefore each pair of maps $f_1,f_2\:S^m\to N$ is loose (since $\omega^\#(f_1,f_2)=0$). \end{Ex} \begin{Ex} [real Grassmann manifolds] \label{ex:1.23} Consider the manifold $G_{r,k}$ (or $\tilde G_{r,k}$, resp.) of nonoriented (or oriented, resp.) $k$-planes in $\mathbb R^r$, $0<k<r$. If $r$ is even, then $G_{r,k}$ is orientable, $\pi_1(G_{r,k})\ne\{0\}$ and hence $\omega^\#(f,f)=0$ for all maps from $S^m$, $m\ge1$, to $G_{r,k}$ or $\tilde G_{r,k}$. This does not seem to follow from~\ref {pro:1.3} or~\ref {pro:1.4}; indeed, the Euler characteristic of $G_{r,k}$ is strictly positive if both $r$ and $k$ are even (see~\ref {fac:4.3} below; compare also our example~\ref {ex:2.2}). \end{Ex} A further vanishing result (quoted from~\cite{K6},1.9; compare also~\cite{K7}, 5.4) is of interest here. \begin{Pro} \label {pro:1.24} Assume $\pi_1(N)\ne0$. Given $[f]\in\pi_m(N)$, $m\ge2$, if $j_*\circ\d([f])$ vanishes then so does $\omega^\#(f,f)$. \end{Pro} Here $j\:S^{n-1}\hookrightarrow N\setminus\{*\}$ denotes the inclusion of the boundary sphere of a small $n$-ball in $N$ centered at a point $*\in N$. According to~\cite{K7}, 5.4, the condition $j_*\circ\d([f])=0$ just means that $f$ is not coincidence producing (cf.~(\ref {eq:1.18})). \mbox{}\hfill$\square$ \medskip Let us summarize: {\it although our selfcoincidence invariant $\omega^\#(f,f)$ is ``only one desuspension short'' of being a complete looseness obstruction, it vanishes for a great number of maps $f$ defined on spheres} (in contrast, in~\cite{K2} $\omega^\#(f,f)$ was shown to be highly nontrivial in many examples where the domain of $f$ is a general closed manifold $M$). 
\begin{Cor} \label {cor:1.25} Assume $\pi_1(N)\ne0$ and $n\ge1$. If $\omega^\#(f,f)\ne0$ for a map $f\:S^m\to N$ then the following restrictions must all be satisfied: {\rm a)} $n$ is even and $m\ge n\ge 4$, or else $m=2$ and $N=\mathbb{RP}(2)$; and {\rm b)} $N$ is closed and nonorientable, \ $\pi_1(N)=\mathbb {Z}_2$, \ $\chi(N)\ne0$; \ moreover the homomorphism $i_*\:\pi_m(N\setminus\{*\},y_0)\to\pi_m(N,y_0)$ (induced by the inclusion of $N$, punctured at some point $*\ne y_0$) is not onto; furthermore, $N$ does not allow a free smooth action by a nontrivial finite group; and {\rm c)} $E\circ\d([f])\ne0$ and $j_*\circ\d([f])\ne0$ (compare~$(\ref {eq:1.14})$ and~$\ref {thm:1.21}$). \end{Cor} Thus an obvious place to look for nontrivial values of $\omega^\#(f,f)$ are even-dimensional real projective spaces (cf.\ already example~\ref {ex:1.17}). \begin{Ex} \label {ex:1.26} If $n=4,8,12,14,16$ or $20$, then there exist infinitely many homotopy classes $[f]\in\pi_{2n-1}(\mathbb{RP}(n))$ such that $\omega^\#(f,f)\ne0$. We see this with the help of the weaker invariant $\omega(f,f)\in\pi_{m-n}^S$ (cf.~(\ref {eq:3.15}) below and~\cite{K2}, 2.1), which stabilizes and simplifies $\omega^\#(f,f)$ and is more easily computable. However, for many maps $f,f_1,f_2$ from $S^m$ into a nonorientable $n$-manifold $N$ this approach does not yield nontrivial $\omega$-values. Indeed, $\omega(f,f)=0$ whenever $2\cdot\pi_{m-n}^S=0$, e.g.\ when $m-n=1,2,4,5,6,8,9,12,14,16$ or $17$ (cf.~\cite{T}). Moreover, for all $m$ and $n$ $$ \refstepcounter {Thm} \label {eq:1.27} (\chi(N)-1)\ \omega(f_1,f_2) = 0 \leqno (\theThm) $$ where $\chi(N)$ stands for the Euler characteristic of $N$. \end{Ex} \begin{Ex} [\rm $\mathbf{N=G_{5,2}}$; compare~\ref {ex:1.23}] \label {ex:1.28} The invariant $\omega(f_1,f_2)$ vanishes for all maps $f_1,f_2\:S^m\to G_{5,2}$, $m\ge1$. In particular, the induced homomorphism $$ \mathrm {coll}_*\:\pi_m(G_{5,2})\to\pi_m(S^6) $$ is trivial for $m\le10$ where \ $\mathrm {coll}$ \ denotes the degree one map which collapses all but an open topdimensional ball into a point. (Note that e.g.\ for $m=6,7$ or $8$, resp., $\pi_m(G_{5,2})\cong\pi_m(V_{5,2})$ is isomorphic to $\mathbb {Z}_2$, $\mathbb {Z}\oplus\mathbb {Z}_2$ and $\mathbb {Z}_2$, resp.; cf.~\cite{P}). \mbox{}\hfill$\square$ \end{Ex} In section~\ref {sec:4} below we will deduce the vanishing theorem~\ref {thm:1.21} and relations such as~(\ref {eq:1.27}) and~\ref{ex:1.28} from a careful analysis of the root case in section~\ref {sec:3}. While our looseness obstructions lie in complicated groups which are usually hard to compute, they give rise to simple numerical invariants (defined in section~\ref {sec:5} below). These generalize the Nielsen numbers which have played such a central r\^ole in topological fixed point theory (cf.\ e.g.~\cite{B} and~\cite{K6}, 1.12~(iv)). In analogy to the classical procedure, our Nielsen numbers $\mathrm {N}^\#(f_1,f_2)\ge\mathrm {N}(f_1,f_2)$ count those ``Reidemeister classes'' $A\in\pi_0(\Omega N)=\pi_1(N)$ which make an essential contribution to $\omega^\#(f_1,f_2)$ and $\tilde\omega(f_1,f_2)$, resp. \begin{Thm} \label {thm:1.29} {\rm (cf.~\cite{K6}, 1.2). } For every pair $(f_1',f_2')$ of maps homotopic to $(f_1,f_2)$ the number of pathcomponents of the coincidence subspace $\mathrm {C}(f_1',f_2')\subset S^m$ is at least $\mathrm {N}^\#(f_1,f_2)$. 
\end{Thm} There are quite a few examples where $\mathrm {N}^\#(f_1,f_2)$ is the best possible such lower bound (see e.g.\ the ``Wecken theorems'' in~\cite{K6}, 1.3, 1.12, 1.13, 1.14$,\ldots$). \begin{Thm} \label {thm:1.30} For all maps $f_1,f_2\:S^m\to N$ we have: $$ \begin{array}{ccc} \mathrm {N}^\#(f_1,f_2)=0 & \mbox{if and only if} & \omega^\#(f_1,f_2)=0\ ; \\ \mathrm {N}(f_1,f_2)=0 & \mbox{if and only if} & \tilde\omega(f_1,f_2)=0. \end{array} $$ \end{Thm} We call this the norm property of our Nielsen numbers: they decide whether elements in the image group of our $\omega$-invariants are zero, just like norms of vectors do. (Recall, in addition, analogues of the triangular inequality, cf.~\cite{K6}, 6.2.) By definition $\mathrm {N}^\#(f_1,f_2)$ is among the integers between $0$ and $\#\pi_1(N)$. But few of these actually occur as Nielsen numbers. \begin{Thm} \label{thm:1.31} Let $k\in\mathbb {Z}\cup\{\infty\}$ be the number of elements in $\pi_1(N)$. Then for each pair of maps $f_1,f_2\:S^m\to N$ the Nielsen numbers $\mathrm {N}^\#(f_1,f_2)$ and $\mathrm {N}(f_1,f_2)$ may assume only the two values $0$ or $k$ or, if $k=2$ and $N$ is an even-dimensional, closed, nonorientable manifold with nontrivial Euler characteristic, also $1$ as a third possible value. \end{Thm} Here again (as in~\cite{K7}) the case $N=\mathbb{RP}(n)$ turns out to be particularly interesting. E.g.\ assume that $m=n$ is even and let $p\:S^n\to\mathbb{RP}(n)$ be the canonical covering map. Then $\mathrm {N}^\#(f_1,f_2)=\mathrm {N}(f_1,f_2)$ equals $0,1$ and $2$, resp., when $(f_1,f_2)=(y_0,y_0)$, $(p,p)$ and $(p,y_0)$, resp.\ (cf.\ the end of section~\ref {sec:4} below). \begin{Rem} \label {rem:1.32} It will be convenient to consider base point preserving maps and homotopies in sections~\ref {sec:3}--\ref {sec:5} below. Recall, however, that looseness and Nielsen numbers depend only on free homotopy classes and so does the vanishing of our $\omega$-invariants (see e.g.~\cite{K6}, 1.2, 2.1 and the appendix there). \end{Rem} \begin{Rem} \label {rem:1.33} Our approach can also be extended to the setting of fibre preserving maps between smooth fibrations. The resulting coincidence invariants are closely related to A.~Dold's fixed point index which was defined and studied in~\cite{D}. \end{Rem} \begin{ConNot} Throughout this paper $N$ denotes a smooth connected manifold without boundary (Hausdorff and having a countable basis). Our notation will often not distinguish between a constant map and its value. \end{ConNot} \subsection*{Acknowledgment} It is a pleasure to thank D.~Gon\c calves and E.~Kudryavtseva for stimulating discussions. \section{Looseness} \label{sec:2} In this section we use rather elementary techniques to establish the looseness results in propositions~\ref {pro:1.3} and~\ref {pro:1.4} as well as example~\ref {ex:1.5}. \begin{Lem} \label {lem:2.1} Let $y_1\ne y_2$ be different points in $N$ and assume that the homomorphism \ $i_*\:\pi_m(N\setminus\{y_1\},y_2)\to\pi_m(N,y_2)$ induced by the inclusion map is onto. Then for all maps $f_1,f_2\:S^m\to N$ the pair $(f_1,f_2)$ is loose. \end{Lem} \Proof Let $S_+^m$ and $S_-^m$ denote the hemispheres defined by $x_1\ge0$ and $x_1\le0$, resp., $x=(x_1,\dots,x_{m+1})\in S^m$. Then $f_2$ is homotopic to a map $f_2'$ such that $f_2'(S_+^m)\subset N\setminus\{y_1\}$ and $f_2'|_{S_-^m}\equiv y_2$. Similarly, $f_1\sim f_1'$ such that $f_1'|_{S_+^m}\equiv y_1$ and $f_1'(S_-^m)\subset N\setminus\{y_2\}$. Clearly the pair $(f_1',f_2')$ is coincidence free. 
\mbox{}\hfill$\square$ \medskip Proposition~\ref {pro:1.3} follows as a corollary since each of the conditions~(i) through~(iv) imply that \ $i_*$ is onto. If $m<n$, this is seen by making a map $f\:S^m\to N$ transverse to $\{y_1\}$. If $N$ is not compact, deform $f$ by an isotopy along a smoothly embedded path which starts in $N\setminus f(S^m)$, ends in $y_1$ and avoids $y_2$. If the universal covering space $p\:\tilde N\to N$ has infinitely many layers, consider a lifting $\tilde f\:S^m\to\tilde N$ of $f$. Apply an isotopy along suitable disjoint paths in $\tilde N$ which start in $\tilde N\setminus\tilde f(S^m)$ and end in the finitely many points of $\tilde f(S^m)\cap p^{-1}(\{y_1\})$. The corresponding homotopy in $N$ deforms $f$ into $N\setminus\{y_1\}$. Finally, in case~(iv) of proposition~\ref {pro:1.3} the exact homotopy sequence of the fibration splits. Thus $f$ can be deformed into the union of the fibre and the image of the section. \mbox{}\hfill$\square$ \begin{Ex} \label {ex:2.2} For $r=2r'>2$ and $m\ge1$ every pair of maps $f_1,f_2\:S^m\to G_{r,2}$ (into the Grassmann manifold of $2$-planes in $\mathbb R^r$) is loose. Indeed, due to the complex structure on $\mathbb R^r=\mathbb C^{r'}$, the fibration $S^{r-2}\hookrightarrow V_{r,2}\to S^{r-1}$ (of the Stiefel manifold of $2$-frames in $\mathbb R^r$) has a canonical section; hence $$ \pi_m(V_{r,2})\cong\pi_m(S^{r-2})\oplus\pi_m(S^{r-1}). $$ For $m\ge3$ this group is also isomorphic to $\pi_m(G_{r,2})$, and the summands correspond to the subspaces $$ A:=\{\mbox{$2$-planes containing the base vector $e_r$}\} $$ and $$ B:=\{\mbox{complex lines in $\mathbb C^{r'}$}\}. $$ Since $A\cup B\subsetneqq G_{r,2}$, our claim follows from lemma~\ref {lem:2.1}. \mbox{}\hfill$\square$ \end{Ex} \proof{of proposition~\ref {pro:1.4}} A homotopy lifting argument shows that $(f,f)$ is loose by small deformation if and only if the pulled back bundle $f^*(TN)$ has a nowhere vanishing section over $S^m$ (compare~\cite{DG}). If $N$ is noncompact or $\chi(N)=0$, then the tangent bundle $TN$ itself has a nonzero section over $f(S^m)$ which we can pull back. In any case every $n$-plane bundle over $S^m$ allows a section with a single zero; it can be removed if its ``index map'' $q\:S^{m-1}\to S^{n-1}$ (compare~\cite{Wy} or also~\cite{K6}, (28)) is nullhomotopic. \mbox{}\hfill$\square$ \medskip The claim in example~\ref {ex:1.5} follows since $\pi_{m-1}(S^1)=0$ except when $m=2$. Furthermore $\pi_2(N)=0$ for every surface other than $N=S^2$ or $\mathbb{RP}(2)$. \section{Looseness obstructions} \label{sec:3} In this section we recall the definitions of the various versions of the $\omega$-invariants. (For more details we refer to~\cite{K6}; see also~\cite{K3} and~\cite{K2}.) Moreover we compare the Nielsen components of $\omega^\#$ in the root case (in~\ref {pro:3.12}). This will be needed to establish our main results. Unless specified otherwise we will assume that $m,n\ge2$. Fix basepoints $x_0\in S^m$ and $y_0\in N$, and a local orientation of $N$ at $y_0$. Throughout the remainder of this paper we consider (continuous) maps $f_1,f_2,f,\ldots\:(S^m,x_0)\to(N,y_0)$. Our notation will not distinguish between a constant map and its value. After a suitable approximation we may assume that $(f_1,f_2)\:S^m\to N\times N$ is smooth and transverse to the diagonal $$ \Delta = \{(y,y)\in N\times N\mid y\in N\}. 
$$ Then the coincidence set $$ \refstepcounter {Thm} \label {eq:3.1} C=\mathrm {C}(f_1,f_2)=(f_1,f_2)^{-1}(\Delta)=\{x\in S^m\mid f_1(x)=f_2(x)\} \leqno (\theThm) $$ is a smooth closed $n$-codimensional submanifold of $S^m$. The map $(f_1,f_2)$ induces an isomorphism of (normal) vector bundles $$ \refstepcounter {Thm} \label {eq:3.2} \nu(C,S^m)\cong(f_1,f_2)^*(\nu(\Delta,N\times N))\cong f_1^*(TN)|_C. \leqno (\theThm) $$ Now pick a homotopy $G\:C\times I\to S^m$ which deforms the inclusion map $g\:C\hookrightarrow S^m$ to the constant map with value $x_0$. (Such a contraction exists and is unique up to homotopy rel $(0,1)$ since $n\ge2$ and the complement of any point in $S^m$ allows ``linear'' homotopies.) Then $G$ induces a vector bundle isomorphism from $f_1^*(TN)|_C=g^*(f_1^*(TN))$ to \ $C\times T_{y_0}(N)$. Composing this with~(\ref {eq:3.2}) and using our choice of a local orientation of $N$ at $y_0$, we get a framing $$ \refstepcounter {Thm} \label {eq:3.3} \bar g^\#\:\nu(C,S^m)\stackrel{\cong}{\longrightarrow} C\times\mathbb R^n. \leqno (\theThm) $$ Furthermore we obtain the map $$ \refstepcounter {Thm} \label {eq:3.4} \tilde g\:C \to \Omega(N,y_0)=:\Omega N \leqno (\theThm) $$ which assigns the (concatenated) loop $$ \refstepcounter {Thm} \label {eq:3.5} y_0=f_1(x_0) \stackrel{(f_1\circ G(x,-))^{-1}}{\longrightsquigarrow} f_1(x)=f_2(x) \stackrel{f_2\circ G(x,-)}{\longrightsquigarrow} f_2(x_0) = y_0 \leqno (\theThm) $$ to $x\in C$. The resulting bordism class $$ \refstepcounter {Thm} \label {eq:3.6} \omega^\#(f_1,f_2)=[C,\bar g^\#,\tilde g] \leqno (\theThm) $$ of the framed compact submanifold $C\subset S^m$ (cf.~(\ref {eq:3.1});~(\ref {eq:3.3})) together with the map $\tilde g$ (cf.~(\ref {eq:3.4})) depends only on the homotopy classes $[f_i]\in\pi_m(N,y_0)$, $i=1,2$. Via the Pontryagin-Thom procedure, $\omega^\#(f_1,f_2)$ can also be interpreted as an element in the $m$-th homotopy group of the Thom space $S^n\wedge(\Omega N)^+$ of the trivial $n$-plane bundle over the loop space $\Omega N$. (Here ``$+$'' stands for a disjointly added point. Note that the bordism theories of submanifolds in $S^n$ and $\mathbb R^n$ are equivalent in codimensions $n\ge2$; thus it is not necessary that $f_1(x_0)\ne f_2(x_0)$, as was assumed in~\cite{K6}.) If we ignore the map $\tilde g$ we obtain the simpler invariant $$ \refstepcounter {Thm} \label {eq:3.7} \underline{\omega}^\#(f_1,f_2)=[C,\bar g^\#] \in \pi_m(S^n). \leqno (\theThm) $$ However, often this means a considerable loss of information. Indeed, in general the loop space $\Omega N$ has a very rich topology and, in particular, can be highly disconnected. Already its decomposition into pathcomponents leads to the important ``Nielsen decomposition'' of coincidence sets, as follows. Given a pathcomponent $A\in\pi_0(\Omega N)=\pi_1(N,y_0)$, restrict your attention to the corresponding partial coincidence manifold $$ C_A:=\mathrm {C}_A(f_1,f_2):=\tilde g^{-1}(A) \ \ \subset \ \ \mathrm {C}(f_1,f_2) $$ which is again a closed $n$-codimensional submanifold of $S^m$ and endowed with the restricted framing $\bar g_A^\#=\bar g^\#|$ and the map $\tilde g_A=\tilde g|\:C_A\to A\subset\Omega N$. This leads to the invariants $$ \refstepcounter {Thm} \label {eq:3.8} \omega_A^\#(f_1,f_2)=[C_A,\bar g_A^\#,\tilde g_A] \in \pi_m(S^n\wedge A^+) \leqno (\theThm) $$ and $$ \refstepcounter {Thm} \label {eq:3.9} \underline{\omega}_A^\#(f_1,f_2)=[C_A,\bar g_A^\#] \in \pi_m(S^n).
\leqno (\theThm) $$ \begin{Ex} \label {ex:3.10} For all $[f]\in\pi_m(N,y_0)$ we have $\underline{\omega}^\#(y_0,f)=\mathrm {coll}_*([f])$. Here $$ \mathrm {coll}\: N\to N/(N\setminus\Int{B^n}) = B^n/\d B^n = S^n $$ is the map which collapses the complement of a small ball $B^n$ near $y_0\in\d B^n$ to a point and preserves the local orientation of $N$. \mbox{}\hfill$\square$ \end{Ex} In the root case we will now compare the various Nielsen components of $\omega^\#$. Note that there are two canonical isomorphisms $$ \refstepcounter {Thm} \label {eq:3.11} \rho_{A*}, \lambda_{A*}\: \pi_m(S^n\wedge A_0^+) \stackrel{\cong}{\longrightarrow} \pi_m(S^n\wedge A^+) \leqno (\theThm) $$ (compare~(\ref {eq:3.8})) where $A_0$ denotes the pathcomponent of $\Omega N$ containing the constant loop. They are induced by the homotopy equivalences $\rho_A,\lambda_A\:A_0\to A$ which compose each loop $\ell\in A_0$ to the right (or left, resp.) with a fixed loop $\ell_A\in A$; e.g.\ $\rho_A(\ell)$ travels first along $\ell$ and then along $\ell_A$. To simplify notation we will write $\omega_0^\#$ for $\omega_{A_0}^\#$. \begin{Pro} \label {pro:3.12} Given a map $f\:(S^m,x_0)\to(N,y_0)$ and $A\in\pi_1(N)$, we have $$ \omega_A^\#(f,y_0)=\rho_{A*}\left(\omega_0^\#(f,y_0)\right) \quad \mbox{and} \quad \underline{\omega}_A^\#(f,y_0)=\underline{\omega}_0^\#(f,y_0) $$ as well as $$ \omega_A^\#(y_0,f)=\iota_{A*}\circ\lambda_{A*}\left(\omega_0^\#(f,y_0)\right) \quad \mbox{and} \quad \underline{\omega}_A^\#(y_0,f)=\iota_{A*}\left(\underline{\omega}_0^\#(y_0,f)\right) $$ where $\iota_{A*}$ composes the framing with an orientation preserving (or reversing) automorphism of $\mathbb R^n$ according as $A$ is (or is not) orientation preserving, i.e.\ the tangent bundle of $N$, when pulled by $\ell_A\:S^1\to N$, $[\ell_A]=A$, becomes trivial or, equivalently, $A$ lies in the kernel of the composed homomorphism $$ w_1(N)\:\pi_1(N,y_0) \to H_1(N) \to \mathbb {Z}_2 $$ which evaluates the first Stiefel-Whitney class of $N$. \end{Pro} \pproof{for some of the following arguments compare also the proof of theorem~4.3 in~\cite{K6}} In view of proposition~\ref {pro:1.3}~(iii) we need to consider only the case when $\pi_1(N)$ is finite. Thus the fibre $p^{-1}(\{y_0\})$ in the universal covering space $p\:\tilde N\to N$ consists of points $\{\tilde y_1,\dots,\tilde y_k\}$ which all lie in a suitable embedded $n$-ball $V$ in $\tilde N$. Now lift $f$ to a map $\tilde f\:S^m\to\tilde N$. After a suitable homotopy we may assume that $\tilde f$ is smooth, with regular value $\tilde y_1$, and agrees on a tubular neighbourhood $U\cong\tilde C\times V\subset S^m$ of $\tilde C:=\tilde f^{-1}(\{\tilde y_1\})$ with the projection to $V\subset\tilde N$. Then $$ \mathrm {C}(f,y_0)=\mathrm {C}(y_0,f) = f^{-1}(\{y_0\}) = \tilde f^{-1}(\{\tilde y_1,\dots,\tilde y_k\}) $$ consists of the ``parallel'' copies $$ \tilde C_i:=\tilde C\times \{\tilde y_i\} \subset U, \quad i=1,\dots,k. $$ Note that the straight path $\gamma_{ij}$ joining $\tilde y_i$ to $\tilde y_j$ in the ball $V$ yields an isotopy which moves $\tilde C_i$ to $\tilde C_j$ in $U\subset S^m$, $1\le i,j\le k$. We can compose it with a contraction $G|_{\tilde C_j\times I}$ of $\tilde C_j$ in order to get the required contraction of $\tilde C_i$ in $S^m$. This extra part of the contraction induces a concatenation of the path in~(\ref {eq:3.5}) with the loop $p\circ\gamma_{ij}^{\mp1}$. 
On the other hand the isotopy is compatible with the framings of $\tilde C_i$ and $\tilde C_j$ if $p\circ\gamma_{ij}$ preserves the local orientation of $N$ or in case we are dealing with $\omega^\#(f,y_0)$ since then $$ \nu(\tilde C_t,S^m)\cong\tilde C_t\times TV\cong f^*(TN)|_{\tilde C_t}, $$ $0\le t\le 1$, throughout the isotopy. Thus let us consider $\omega^\#(y_0,f)$. Here $\tilde C_i$ and $\tilde C_j$ are framed via the orientations of $T_{\tilde y_i}\tilde N$ and $T_{\tilde y_j}\tilde N$, resp., induced by $p$ from the given orientation of $N$ at $y_0$. If $p\circ\gamma_{ij}$ reverses it, then the two framings correspond to opposite orientations of $V$, and the isotopy does not change this. \mbox{}\hfill$\square$ \medskip Next let us recall how the involution \ $\mathrm {inv}$ \ in~(\ref {eq:1.9}) is defined (cf.~\cite{K6}, p.~632). Given $[C,\bar g^\#,\tilde g]\in\pi_m(S^n\wedge(\Omega N)^+)$, \ $\mathrm {inv}$ \ retains the submanifold $C\subset S^m$ but changes its framing by $(-1)^n\cdot\alpha$ where the vector bundle automorphism $\alpha$ of the trivial bundle $C\times\mathbb R^n$ is determined by $TN$ and the homotopy $C\times I\to N$ which evaluates $\tilde g$ (cf.~\cite{K3},~3.1). In addition, we have to compose $\tilde g$ with the selfmap of $\Omega N$ which inverts the direction of loops. \begin {Ex} [$\mathbf{m=n\equiv0(2)}$, $N$ orientable] \label {ex:3.13} Here the framings (= coorientations) of the (isolated) points $x\in C$ remain unchanged, but we have to travel backwards along each loop $\tilde g(x)$. \end{Ex} \begin{War} In general the symmetry relation~(\ref {eq:1.9}) does not necessarily extend to $\underline{\omega}^\#$: the weaker invariant $\underline{\omega}^\#(f_2,f_1)$ may depend on more than just its weak counterpart $\underline{\omega}^\#(f_1,f_2)$. \end{War} A weaker version of our looseness obstruction $\omega^\#(f_1,f_2)$ is often much easier to handle and to compute. If we forget about {\it embeddings} of coincidence manifolds into $S^m$ and if we keep track only of stabilized normal bundles we obtain the invariants $\tilde\omega_A(f_1,f_2)$, $A\in\pi_1(N)$, and $$ \refstepcounter {Thm} \label {eq:3.14} \tilde\omega(f_1,f_2)\ \ \ \in\ \ \ \Omega_{m-n}^\mathrm {fr}(\Omega N)\ \cong\ \pi_{m+q}(S^{n+q}\wedge(\Omega N)^+), \quad q\gg0, \leqno (\theThm) $$ which lie in the $(m-n)$-dimensional framed bordism group of the loop space $\Omega N$ of $N$ (cf.~\cite{K3}). Similarly $$ \refstepcounter {Thm} \label {eq:3.15} \omega(f_1,f_2):=\tilde{\underline\omega}(f_1,f_2), \ \ \tilde{\underline\omega}_A(f_1,f_2)\ \ \ \in\ \ \ \Omega_{m-n}^\mathrm {fr} = \pi_{m-n}^S \leqno (\theThm) $$ (compare~\cite{K2}, 1.4) are the stabilized versions of $\underline{\omega}^\#(f_1,f_2)$ and $\underline{\omega}_A^\#(f_1,f_2)$. In general this stabilization procedure leads to a loss of information; not so, however, in the dimension range $m<2n-2$ \ where $\tilde\omega(f_1,f_2)$ is just as strong as $\omega^\#(f_1,f_2)$ and, in fact, is the {\it complete} looseness obstruction for the pair $(f_1,f_2)$ (cf.~\cite{K6}, (16) and~\cite{K3}, theorem~1.10). \section{Selfcoincidences} \label{sec:4} In this section we prove theorem~\ref {thm:1.21} and further vanishing results and discuss their consequences. Given any map $f\:(S^m,x_0)\to(N,y_0)$ let us look at the coincidence data of the pair $(f,f)$. Clearly the map $\tilde g$ (cf.~(\ref {eq:3.4})) is canonically nullhomotopic since $f_1=f_2=f$ in~(\ref {eq:3.5}).
For small generic approximations of $(f,f)$ the partial coincidence manifolds $C_A$ are empty (and hence nullbordant) whenever $A\in\pi_1(N)$ is nontrivial. We conclude that $$ \refstepcounter {Thm} \label {eq:4.1} \omega^\#(f,f)=s_*(\underline\omega_0^\#(f,f)) \leqno (\theThm) $$ where $$ s_*\:\pi_m(S^n) \to \pi_m(S^n\wedge(\Omega N)^+) $$ is induced by the inclusion of the constant loop into $\Omega N$ (recall also the notation of~\ref {pro:3.12}). \medskip \proof{of theorem~\ref {thm:1.21}} We base our argument on the identity $$ \refstepcounter {Thm} \label {eq:4.2} \underline\omega^\#_A(f,f)=\underline\omega_A^\#(f,y_0) + \underline\omega_A^\#(y_0,f) \leqno (\theThm) $$ (valid for all $A\in\pi_1(N)$; cf.~(\ref {eq:1.13})) and on proposition~\ref {pro:3.12}. The assumptions in~\ref {thm:1.21} just mean that the homomorphism $w_1\:\pi_1(N)\to\mathbb {Z}_2$ (cf.~\ref {pro:3.12}) is not injective. In other words, there exists an element $A$ of $\pi_1(N)$ which is both nontrivial and orientation preserving. Then on the one hand $\underline\omega^\#_A(f,f)=0$. On the other hand~\ref {pro:3.12} and~(\ref {eq:4.2}) combine to show that $\underline\omega^\#_A(f,f)=\underline\omega^\#_0(f,f)$. Thus in view of~(\ref {eq:4.1}) the full $\omega^\#$-invariant of the pair $(f,f)$ -- and of every pair freely homotopic to it (cf.\ remark~\ref {rem:1.32}) -- vanishes. To complete the proof of theorem~\ref {thm:1.21} note that $$ \omega^\#(f,f) = \omega^\#(f,y_0) + \omega^\#(y_0,f) = (\mathrm {id} + \mathrm {inv})(\deg^\#(f)) $$ (cf.~(\ref {eq:1.8}),~(\ref {eq:1.9}) and~(\ref {eq:1.10})). Recall also corollary~\ref {cor:1.16}. \mbox{}\hfill$\square$ \medskip Corollary~\ref {cor:1.25} follows now from~\ref {pro:1.3}, \ref {pro:1.4}, (\ref {eq:1.11}), (\ref {eq:1.13}), \ref {thm:1.15} (to be discussed below), \ref {thm:1.21}, \ref{pro:1.24} and the compatibility of $\underline\omega(f,f)$ with covering maps (compare~(\ref {eq:4.1})). As for the conclusion of example~\ref {ex:1.22} see also~\ref {ex:3.13}. Similarly, the statement in example~\ref {ex:1.23} is a consequence of theorem~\ref {thm:1.21} and the following wellknown \begin{Fac} [real Grassmann manifolds] \label {fac:4.3} Given integers $0<k<r$, the manifold $G_{r,k}$ of all $k$-dimensional linear subspaces of $\mathbb R^r$ enjoys the following properties: \begin{enumerate} \item $\pi_1(G_{r,k})\cong\mathbb {Z}_2$ if $r>2$; \item $G_{r,k}$ is orientable if and only if $r$ is even; \item $\dim(G_{r,k})=k(r-k)$; \item the Euler characteristic vanishes if $k\not\equiv r\equiv0(2)$ and equals the binomial coefficient $$ \chi(G_{r,k})={[r/2]\choose[k/2]}>0 $$ in all other cases. \end{enumerate} \end{Fac} \begin{Que} \label {que:4.4} What about $\omega^\#(f,f)$ for maps into $G_{r,k}$ or $\tilde G_{r,k}$ when $r$ is odd and $k>2$? \end{Que} \proof{of theorem~\ref {thm:1.15}} A homotopy lifting argument shows that $(f,f)$ is loose by small deformation precisely if the pulled back vector bundle $f^*(TN)$ has a nowhere vanishing section over $S^m$ (cf.~\cite{DG} or~\cite{K7},~5.3) or, equivalently, if $f$ lifts to $ST(N)$ (cf.~(\ref {eq:1.14})). Thus (i)$\iff$(ii). Recall also from~\cite{K7},~5.7 that $\pm E(\d[f])$ equals the invariant $\underline\omega^\#(f,f)$ which is just as strong as $\omega^\#(f,f)$ (cf.~(\ref {eq:4.1})). Next compare the fibre homotopy sequence of $ST(N)$ and of the configuration space $\tilde C_2(N)=N\times N\setminus\Delta$ (cf.~\cite{K7},~5.4). 
We see that (iii)$\implies$(i) provided the induced homomorphism $j_*$ (cf.~\ref {pro:1.24}) is injective. This is the case e.g.\ when $N=\mathbb{RP}(n)$. Finally recall that (iv) implies (iii) when $N=S^n$; this is a special case of~\cite{K6},~1.12. \mbox{}\hfill$\square$ \medskip Next we prove corollary~\ref {cor:1.16}. Clearly $E$ (cf.~(\ref {eq:1.14})) is injective if $m=n$ or $m\le2n-3$ or $n=2$ (since $\pi_{m-1}(S^1)=0$ for $m>2$). Moreover $\d(\pi_m(N))=0$ if $n$ is odd since then the fibration $ST(N)\to N$ allows a section over each compactum (compare~(\ref {eq:1.14})). Thus our corollary follows from \begin {Pro} \label {pro:4.5} Consider the suspension homomorphism $$ E\:\pi_{m-1}(S^{n-1}) \to \pi_m(S^n) $$ and $$ E^\infty\:\pi_{m-1}(S^{n-1}) \to \pi_{m-n}^S $$ and assume that $n$ is even. If $m\le n+3$ or if $m=n+4\ne8,10$ then $E$ and $E^\infty$ are injective. If $m=8$ and $n=4$, then $E$ is injective, but $E^\infty$ is not. \end{Pro} \Proof We will use Toda's tables~\cite{T}. Since $E^\infty\:\pi_5(S^3)\cong\mathbb {Z}_2\to\pi_2^S\cong\mathbb {Z}_2$ is onto and hence bijective, it remains only to study the (nonstable) cases where $n=4$ and $m=7$ or $8$. The groups in the exact EHP-sequence $$ \pi_7(S^3) \stackrel{E}{\longrightarrow} \pi_8(S^4) \stackrel{H}{\longrightarrow} \pi_8(S^7) \stackrel{P}{\longrightarrow} \pi_6(S^3) \stackrel{E}{\longrightarrow} \pi_7(S^4) \longrightarrow \ldots $$ (cf.~\cite{W}, XII, 2.3) have order $2,4,2$ and $12$, resp.; hence both homomorphisms $E$ are injective here. Moreover the kernel of $$ E^\infty\:\pi_7(S^4)\cong\mathbb {Z}\oplus\mathbb {Z}_{12} \to \pi_3^S\cong\mathbb {Z}_{24} $$ is generated by $[\iota_4,\iota_4]$ and has a trivial intersection with $E(\pi_6(S^3))=\{0\}\oplus\mathbb {Z}_{12}$. Our claim follows for $(m,n)=(7,4)$ and similarly for $(m,n)=(8,4)$ (since $\pi_7(S^3)\ne0=\pi_4^S$). \mbox{}\hfill$\square$ \begin{Cor} \label {cor:4.6} There exists a map $f\:S^8\to\mathbb{RP}(4)$ such that $\omega^\#(f,f)\ne0$ but $\tilde\omega(f,f)=0$. Of course the corresponding liftings $\tilde f\:S^8\to S^4$ have the same property. \end{Cor} \Proof The groups in the exact sequence $$ \pi_8(ST(S^4)) \to \pi_8(S^4) \stackrel{\d}{\longrightarrow} \pi_7(S^3) $$ (cf.~(\ref {eq:1.14})) have order 2 (cf.~\cite{P}), 4 and 2, resp.; thus there is an element $[\tilde f]\in\pi_8(S^4)$ such that $\d([\tilde f])$ and hence $\underline\omega^\#(\tilde f,\tilde f)=E(\d[\tilde f])$ is nontrivial, but $\omega(\tilde f,\tilde f)=\tilde{\underline\omega}(\tilde f,\tilde f)\in\pi_4^S=0$ (compare also~(\ref {eq:4.1})). \mbox{}\hfill$\square$ \medskip Next we want to explore the fact that it is often easier to compute the stabilized versions of our $\omega$-invariants. Indeed they just sum up the contributions of the partial coincidence manifolds $C_A$ (not registering their linkings); also we just multiply a given bordism class by $-1$ if we compose the framing with a reflection of $\mathbb R^n$. \begin {Pro} \label {pro:4.8} Given $[f]\in\pi_m(N)$, the stabilized invariant $\tilde\omega(f,f)$ is determined by $\tilde{\underline\omega}(f,f)$, i.e.\ by $$ \omega(f,f) = \chi(N) \, \omega (f,y_0) = \omega (f,y_0) + E^\infty (\mathrm {coll}_*([f])) = \pm E^\infty \circ \d ([f]). $$ (Here $\chi(N)$ denotes the Euler characteristic of $N$; for \ $\mathrm {coll}_*$ see~$\ref {ex:3.10}$.) 
If the stable suspension homomorphism $E^\infty\:\pi_{m-1}(S^{n-1})\to\pi_{m-n}^S$ is injective (or if $m\le n+3$) then the conditions {\rm (i)--(v)} in theorem~$\ref {thm:1.15}$ are all equivalent to \smallskip {\rm (vi)} $\omega(f,f)=0$. \end{Pro} \Proof The first identity was already established in~\cite{K2},~2.2; the corresponding claim for $\omega^\#(f,f)$ (see~\cite{K6},~5.1) is more complicated and not so easy to use in calculations. The second identity follows from~(\ref {eq:1.13}) and~\ref {ex:3.10}. Furthermore note that $\omega(f,f)=\underline{\tilde\omega}(f,f)$ is the stable suspension of $\underline\omega^\#(f,f)=\pm E(\d([f]))$ (cf.~\cite{K7},~5.7). If $m\le n+3$ and $n$ is even, then $E^\infty$ is injective (cf.~\ref {pro:4.5}). If $n$ is odd, all conditions (i)--(vi) hold automatically. \mbox{}\hfill$\square$ \begin{Pro} \label {pro:4.9} Assume that $k:=\#\pi_1(N)\ge2$. Consider arbitrary maps\\ $f,f_1,f_2\:(S^m,x_0)\to(N,y_0)$. If $N$ is orientable, then $$ \chi(N) \cdot \omega(f_1,f_2) \ \ \ \ \ \ \ \ =\ 0 \eqno\mbox{\rm (cf.~(\ref {eq:3.14}))} $$ and in particular $$ \chi(N) \cdot E^\infty(\mathrm {coll}_*([f]))\ =\ 0 \eqno\mbox{\rm (cf.~\ref {ex:3.10}).} $$ \medskip If $N$ is not orientable then \; $E^\infty(\mathrm {coll}_*([f]))\ =\ 0$ \; and $$ \ \ (\chi(N)-1) \ \omega(f_1,f_2)\ \ =\ 0\ ; $$ if in addition $k>2$, then \; $\omega(f_1,f_2)=0$. \end{Pro} \Proof Note that $\omega(y_0,f)=E^\infty(\mathrm {coll}_*([f]))$ (cf.~\ref {ex:3.10}). If $N$ is orientable, then $$ \refstepcounter {Thm} \label {eq:4.10} 0=\omega(f,f)=\omega(f,y_0) + \omega(y_0,f) = \chi(N) \cdot \omega(f,y_0) \leqno (\theThm) $$ (cf.~\ref {thm:1.21}, (\ref {eq:1.13}) and~\ref {pro:4.8}). Thus multiplication by the Euler characteristic annihilates also $\omega(y_0,f)=-\omega(f,y_0)$ and hence $\omega(f_1,f_2)$ (by~(\ref {eq:1.13})). If $N$ is not orientable, then it follows from proposition~\ref {pro:3.12} that $$ \refstepcounter {Thm} \label {eq:4.11} \tilde{\underline\omega}_A(f,y_0)=\tilde{\underline\omega}_0(f,y_0) \quad \mbox {and} \quad \tilde{\underline\omega}_A(y_0,f)=\varepsilon_A\cdot\tilde{\underline\omega}_0(y_0,f) \leqno (\theThm) $$ for every $A\in\pi_1(N)$ where $\varepsilon_A=+1$ or $-1$ according as $A$ is orientation preserving or reversing. Thus $$ \omega(y_0,f)=\sum_A \tilde{\underline\omega}_A(y_0,f)=0. $$ Moreover $$ \tilde{\underline\omega}_A(f,f) = \tilde{\underline\omega}_A(f,y_0) + \tilde{\underline\omega}_A(y_0,f) =0 $$ whenever $A\ne0$ (cf.~(\ref {eq:1.13}) and~(\ref {eq:4.1})). If in addition $k>2$ then there exist both orientation preserving and reversing $A\ne0$ so that $\tilde{\underline\omega}_0(f,y_0)=-\varepsilon_A\cdot\tilde{\underline\omega}_0(y_0,f)$ both for $\varepsilon_A=+1$ and $\varepsilon_A=-1$; hence $\omega(f,y_0)$, being an even multiple of an element of order 2, vanishes, and so does $\omega(f_1,f_2)$, again by~(\ref {eq:1.13}). If $\pi_1(N)$ consists only of $0$ and $A\ne0$, then $$ \refstepcounter {Thm} \label {eq:4.12} \tilde{\underline\omega}_0(f,y_0)=\tilde{\underline\omega}_A(f,y_0) =-\tilde{\underline\omega}_A(y_0,f)=\tilde{\underline\omega}_0(y_0,f) \leqno (\theThm) $$ and $$ \refstepcounter {Thm} \label {eq:4.13} \omega(f,y_0)=2\tilde{\underline\omega}_0(f,y_0) =\omega(f,f)=\chi(N)\,\omega(f,y_0) \leqno (\theThm) $$ (cf.~\ref {pro:4.8}), so that multiplication by $\chi(N)-1$ annihilates $\omega(f,y_0)$ and hence also $\omega(f_1,f_2)=\omega(f_1,y_0)$ (cf.~(\ref {eq:4.11})). 
\mbox{}\hfill$\square$ \medskip Given a map $f\:(S^m,x_0)\to(N,y_0)$, consider a lifting $\tilde f\:(S^m,x_0)\to(\tilde N,\tilde y_0)$ to the universal covering space $p\:\tilde N\to N$. \begin{Lem} \label {lem:4.14} We have $$ \tilde{\underline\omega}_0(f,y_0) = \omega(\tilde f,\tilde y_0) \quad \mbox {and} \quad \tilde{\underline\omega}_0(y_0,f) = \omega(\tilde y_0,\tilde f). $$ \end{Lem} \Proof In the Nielsen decomposition $$ f^{-1}(\{y_0\}) = \tilde f^{-1}(p^{-1}(\{y_0\})) = \tilde f^{-1}(\{A\tilde y_0\mid A\in\pi_1(N)\}) $$ of the relevant coincidence manifold, $\tilde f^{-1}(\{\tilde y_0\})$ is the component indexed by $A=0$. \mbox{}\hfill$\square$ \medskip Let us apply this to the special case $\tilde N=S^n$ and $N=\mathbb{RP}(n)$, $n$ even. Then $$ \omega(f,f) = 2 \tilde{\underline\omega}_0(y_0,f) = 2 \omega(\tilde y_0,\tilde f) = 2E^\infty([\tilde f]) $$ (cf.~(\ref {eq:4.13}), \ref {lem:4.14}, \ref {ex:3.10} and~(\ref {eq:3.15})). This establishes the claim which follows theorem~\ref {thm:1.31}. Also if $m=2n-1$ and e.g.\ $n=4,8,12,14,16$ or $20$, then $\ker E^\infty\cong\mathbb {Z}$ and $E^\infty$ maps $\pi_m(S^n)$ onto the stable stem $\pi_{n-1}^S$ which contains elements of order greater than $2$ (cf.~\cite{T}). This proves the statement in example~\ref {ex:1.26}. Similarly, (\ref {eq:4.13}) shows also (together with theorem~\ref {thm:1.21}) that $\omega(f,f)\in2\cdot\pi_{m-n}^S$ whenever $N$ is not simply connected. This (together with~\ref {pro:4.9}) establishes~(\ref {eq:1.27}). Finally let us discuss example~\ref {ex:1.28}. According to fact~\ref {fac:4.3}, the Grassmann manifold $G_{5,2}$ is not orientable, $6$-dimensional and its Euler characteristic equals~$2$. Our claim follows from proposition~\ref {pro:4.9} and the Freudenthal suspension theorem. \section{Nielsen numbers} \label{sec:5} \begin{Def} \label {def:5.1} Given $f_1,f_2\:S^m\to N$, the {\it ``strong'' Nielsen number} $\mathrm {N}^\#(f_1,f_2)$ (and its stabilized analogue \ $\mathrm {N}(f_1,f_2)$, resp.) is the number of elements $A\in\pi_1(N)$ such that $\omega^\#_A(f_1,f_2)$ (and $\tilde\omega_A(f_1,f_2)$, resp.) does not vanish (cf.~(\ref {eq:3.8}) and~(\ref {eq:3.14})). \end{Def} Since generic coincidence manifolds are compact these Nielsen numbers are always finite. \medskip \proof{of theorem~\ref {thm:1.30}} If $\omega^\#(f_1,f_2)$ (or $\tilde\omega(f_1,f_2)$, resp.) vanishes, then so do all the partial invariants $\omega^\#_A(f_1,f_2)$ (or $\tilde\omega_A(f_1,f_2)$, resp.), $A\in\pi_1(N)$, as well as the corresponding Nielsen number. The converse is also obvious in the stabilized setting. However, $\omega^\#(f_1,f_2)$ keeps track also of embeddings and of possible linking phenomena among the partial coincidence submanifolds $\mathrm {C}_A(f_1,f_2)$ in $S^m$. Assume that $\mathrm {N}^\#(f_1,f_2)=0$. Then all triples $(\mathrm {C}_A(f_1,f_2),\bar g_A^\#,\tilde g_A)$ (cf.~(\ref {eq:3.8})) admit individual nullbordisms in $S^m\times I$. Conceivably these can not be fitted together disjointly to yield the full {\it embedded} nullbordism required to show that $\omega^\#(f_1,f_2)$ vanishes. In fact, it is an open question whether the first claim in~\ref {thm:1.30} still holds when the common domain of $f_1$ and $f_2$ is not a sphere. But here we can use the additive structure of homotopy groups. As in~(\ref {eq:1.8}) we have for all $A\in\pi_1(N)$ $$ \refstepcounter {Thm} \label {eq:5.2} 0=\omega^\#_A(f_1,f_2)=\omega^\#_A(f_1,f_1) + \omega^\#_A(y_0,f_2-f_1). 
\leqno (\theThm) $$ The claim of our theorem is obvious when $\pi_1(N)=0$ or in the selfcoincidence case since only $A=0$ plays a r\^ole there (cf.~(\ref {eq:4.1})). If $A\ne0$ then $\omega^\#_A(f_1,f_1)=0$ and hence $\omega^\#_A(y_0,f_2-f_1)=0$ (cf.~(\ref {eq:5.2})). This implies -- in this special root case -- that the full invariant $\omega^\#(y_0,f_2-f_1)$ vanishes; indeed, a nullbordism of $\mathrm {C}_A(y_0,f_2-f_1)$ gives rise to {\it disjoint} ``parallel'' nullbordisms of all the other partial coincidence manifolds $\mathrm {C}_{A'}(y_0,f_2-f_1)$, $A'\in\pi_1(N)$ (compare the proof of~\ref {pro:3.12} or also~\cite{K6},~4.3). Thus in turn $\omega^\#(f_1,f_2)=\omega^\#(f_1,f_1)$ (cf.~(\ref {eq:1.8})) and we are back in the selfcoincidence case. \mbox{}\hfill$\square$ \medskip \proof{of theorem~\ref {thm:1.31}} If $k=\#\pi_1(N)$ is infinite, then all pairs $(f_1,f_2)$ are loose and hence all Nielsen numbers vanish (cf.~(\ref {eq:3.1})). If $k>2$ or if $k=2$ and $N$ is orientable, then always $\omega^\#(f_1,f_1)=0$ (cf.~\ref {thm:1.21}); therefore $\omega^\#(f_1,f_2)=\omega^\#(y_0,f_2-f_1)$ (cf.~(\ref {eq:5.2})) and $\tilde\omega(f_1,f_2)=\tilde\omega(y_0,f_2-f_1)$. Again~\ref {pro:3.12} implies that the corresponding Nielsen numbers can assume only the values $0$ and $k$ in this root case. For the remaining cases compare corollary~\ref {cor:1.25}. \mbox{}\hfill$\square$ \begin{Rem} \label {rem:5.3} Consider the split exact sequence $$ 0\to\ker(\mathrm {forg})\hookrightarrow\pi_m(S^n\wedge(\Omega N)^+) \stackrel{\mathrm {forg}}{\longrightarrow} \bigoplus_{A\in\pi_1(N)}\pi_m(S^n\wedge A^+)\to0\ \ ; $$ here, given any element $\omega^\#=[C,\bar g^\#,\tilde g]\in\pi_m(S^n\wedge(\Omega N)^+)$, $\mathrm {forg}(\omega^\#)$ keeps track only of the individual $A$-components $[C_A=\tilde g^{-1}(A),\bar g^\#|,\tilde g|]$ and forgets about possible linkings. As we will see below, the kernel of \ $\mathrm {forg}$ \ can be highly nontrivial if $\#\pi_1(N)>1$. \end{Rem} Consider also the function $\mathrm {N}^\#$ which counts the nontrivial $A$-components. Clearly it vanishes on $\ker(\mathrm {forg})$ and, in addition, can often assume all integer values between $0$ and $\#\pi_1(N)$. This shows that theorems~\ref {thm:1.30} and~\ref {thm:1.31} impose strong restrictions on those $\omega^\#$-values which can actually be realized by pairs $(f_1,f_2)$ of maps. \begin{Ex} [real projective spaces] \label {ex:5.4} Consider $N=\mathbb{RP}(n)$ and its double cover $\tilde N=S^n$. There is a wellknown isomorphism $$ \pi_m(S^n\wedge(\Omega N)^+)\cong\pi_m(S_1^n\vee S_2^n\vee\tilde N,\tilde N) $$ (cf.\ e.g.~\cite{K6},~(61)), and the forgetful map \ $\mathrm {forg}$ \ (cf.~\ref {rem:5.3}) corresponds to the homomorphism $$ \pi_m(\bigvee^2S^n\vee\tilde N) \to \bigoplus_{i=1}^2\pi_m(S_i^n\vee\tilde N) $$ which is induced by the obvious projections. Let $\iota_1,\iota_2$ and $\iota_0$ be represented by the inclusions of the three spheres into their wedge. Then the summands of $$ \ker(\mathrm {forg}) \cong \pi_m(S^{2n-1})\oplus(\pi_m(S^{3n-2}))^4\oplus\ldots $$ in the Hilton decomposition of $\pi_m(S_1^n\vee S_2^n\vee\tilde N)$ (cf.~\cite{W},~XI,~6) corresponds precisely to those basic products which involve both $\iota_1$ and $\iota_2$. \end{Ex}
\section{Introduction} \label{sec:intro} Quantum algorithms~\cite{nielsen00} are typically described in terms of the evolution of the state of a quantum system under a prescribed sequence of unitary transformations, followed by the extraction of a problem solution from the outcome of a measurement performed on the quantum system. While the unitary transformations involved are clearly crucial, they are not the primary reference point of the analysis in such a circuit formulation of quantum algorithms; this belongs to the state of the system. However, in the circuit formulation of any particular quantum algorithm, the ingredient which varies from one application of the algorithm to another (i.e.\ the input) is typically a unitary transformation. Therefore, it seems appropriate to focus on the unitaries in a quantum algorithm and to regard the algorithm as a tool for discrimination of unitary transformations. This approach has been suggested before~\cite{childs99,vedral07} and subject to analysis in selected scenarios~\cite{bergou05,chefles07}. The purpose of the latter articles was to extend the Deutsch-Jozsa algorithm by investigating the possibility of discriminating amongst a larger class of unitary transformations than that encountered in the original Deutsch-Jozsa algorithm. Our purpose is to further promote unitary (and state) discrimination as tools for analyzing quantum algorithms. Specifically we consider the Deutsch-Jozsa problem, which is to be solved with the aid of an oracle unitary of a specific form. We ask whether the notions of unitary discrimination can be used to reach the standard quantum algorithm for solving the problem, whether they yield alternative algorithms for solving the problem and what restrictions they impose on quantum algorithms which are implemented on quantum systems initially in noisy mixed states. The remainder of this paper is organized as follows. In Sec.~\ref{sec:oraclediscrimination} we describe how oracle-assisted quantum algorithms can be viewed as unitary discrimination tools. This is applied to the Deutsch-Jozsa problem, using a particular oracle unitary, in two distinct ways in the following sections. In Sec.~\ref{sec:djalg} we use unitary and state discrimination to arrive at the set of all algorithms which solve the Deutsch-Jozsa problem with certainty. In Sec.~\ref{sec:djmixed} we assume a restricted set of possible initial states, which are mixed, for solving the Deutsch-Jozsa problem. We determine a lower bound on the problem size, beneath which a classical algorithm will succeed in solving the Deutsch-Jozsa problem with greater certainty than any quantum algorithm. \section{Discrimination of quantum operations in oracle algorithms} \label{sec:oraclediscrimination} Certain computational problems, such as searching, Simon's problem~\cite{simon94} and the Deutsch-Jozsa problem~\cite{deutsch92}, are oracle-assisted, meaning that they are to be solved using a binary oracle function $f:\{0,1\}^n \mapsto \{0,1\}^m,$ whose form depends on the nature (and the specific instance that is invoked) of the computational problem. The task is to solve the problem with the fewest oracle invocations; the efficiency of the solution is quantified by the number of oracle invocations used. Different instances of a given problem correspond to different oracle functions. 
The associated quantum algorithms~\cite{deutsch92,simon94,shor97,grover97} require a well-determined number, depending on $n,$ of qubits prepared in a suitable initial state $\ket{\Psi_\textsf{0}}$. These qubits are made to evolve collectively in a way described by a specific sequence of unitary transformations. Ultimately a measurement yields an outcome from which the problem solution can be extracted with high probability. In oracle-based quantum algorithms the oracle is invoked via evolution described by a unitary transformation $\hat{U}_f,$ whose form depends on the problem as well as the particular oracle function, $f$, in use, i.e.\ the particular instance of the problem. In the simplest cases, to which we restrict our consideration, the target of the oracle is a single binary variable, i.e.\ $m=1$. Examples include the Deutsch-Jozsa~\cite{deutsch92} and Grover's search algorithm~\cite{grover97}. For a given type of problem, there can be different possibilities for the number of qubits required and the structure of the oracle unitary. We consider instances where the number of qubits required is $n.$ In terms of computational basis states, $\ket{x} := \ket{x_n} \otimes \ldots \otimes \ket{x_1}$ with $x_i\in \left\{0,1\right\}$, we assume that the oracle unitary operates via \begin{equation} \hat{U}_f\ket{x} = \left( -1\right)^{f(x)}\, \ket{x} \label{eq:oracleunitary} \end{equation} and this is extended linearly to all linear combinations of computational basis states. Applications of the oracle unitary may be interspersed with other unitaries, $\hat{V}_0, \ldots, \hat{V}_M$, which are \emph{oracle independent}, i.e.\ these remain fixed for all choices of oracle function $f.$ The resulting algorithm has the structure illustrated in Fig.~\ref{fig:generalalg} and the \emph{algorithm unitary} is $\hat{U}_\mathrm{alg} = \hat{V}_M \hat{U}_f \ldots \hat{U}_f \hat{V}_1 \hat{U}_f \hat{V}_0.$ \begin{figure}[h] \includegraphics[scale=.70]{fig1.eps} \caption{General structure for an oracle-based quantum algorithm where the oracle output is a single bit. The oracle is invoked $M$ times. The initial and final unitaries, $\hat{V}_0$ and $\hat{V}_M$, are not generally necessary but enable initialization and measurement in the computational basis. \label{fig:generalalg}} \end{figure} This is the most general structure for $m=1$ oracle-assisted algorithms since two successive applications of the oracle unitary provide a trivial identity unitary. The final state $\ket{\Psi_f} = \hat{U}_\mathrm{alg} \ket{\Psi_\textsf{0}}$ depends on the particular oracle function~$f$. In such scenarios the \emph{input to the algorithm is the oracle function} and not the initial state. The algorithm output then identifies or classifies the input oracle function. For example, in the Deutsch-Jozsa problem the algorithm determines whether the oracle function belongs to the class of constant or balanced functions (these terms are described below). In the algorithm for searching a database with one marked item, located at $s,$ the oracle is defined as $f(x) =0$ if $x\neq s$ and $f(s)=1.$ Determining $s$ is equivalent to identifying which of the possible oracle functions was used. Analogously, the associated quantum algorithms amount to tools for classifying or discriminating between the possible oracle unitaries.
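To make Eq.~\eqref{eq:oracleunitary} concrete, the following minimal Python sketch (the helper name is purely illustrative) builds the phase oracle as a diagonal matrix for a small register and checks the remark that two successive oracle applications give the identity:
\begin{verbatim}
import numpy as np

def oracle_unitary(f, n):
    # Diagonal 2^n x 2^n matrix with entries (-1)^f(x),
    # i.e. |x> -> (-1)^f(x) |x> on the computational basis.
    return np.diag([(-1.0) ** f(x) for x in range(2 ** n)])

n = 3
f_balanced = lambda x: x & 1               # an example balanced function
U = oracle_unitary(f_balanced, n)
assert np.allclose(U @ U, np.eye(2 ** n))  # U_f U_f is the identity
\end{verbatim}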
The framework for unitary discrimination requires a set of known unitaries $\left\{ \hat{U}_1, \hat{U}_2, \ldots \right\}$ and associated probabilities $\left\{ p_1, p_2, \ldots \right\}.$ One party chooses one of these unitaries, $\hat{U}_j,$ with probability $p_j$ and another party must determine which unitary was selected by applying the unitary one or more times, together with other quantum operations, to a quantum system. Ultimately a measurement is performed on the quantum system and the choice of unitary is inferred from the measurement outcome. Unitary discrimination can be reduced to a quantum state discrimination problem~\cite{acin01,dariano01,sacchi05,chefles07} by applying the unitary to a standard initial state, most generally described by a density operator $\hat{\rho}_\mathrm{0},$ and attempting to discriminate between the possible resulting output states. In this article we consider cases where \emph{the unitary is applied once only.} Thus the possible output states are $\hat{\rho}_{j} = \hat{U}_j \hat{\rho}_\mathrm{0} \hat{U}_j^\dagger$, occurring with probability $p_j,$ for $j=1,2,\ldots.$ The general framework for discriminating between states~\cite{hellstrom76,barnett09} requires a POVM with positive operator elements $\left\{ \hat{\pi}_1, \hat{\pi}_2, \ldots \right\}$ that satisfy $\sum_j \hat{\pi}_j = \hat{I}$ and a rule for associating states with measurement outcomes. In the \emph{minimum error discrimination} scenario, which we consider here, we are required to select one state for each outcome and the ``undecided'' inference of the unambiguous discrimination~\cite{barnett09} scenario is not permitted. Here it is possible to make an incorrect inference and the task is to choose a measurement that minimizes the probability with which such an error occurs. For unitary discrimination, both the measurement and the initial state must be chosen so as to minimize the error probability. \section{Application to the Deutsch-Jozsa algorithm on arbitrary initial states} \label{sec:djalg} The Deutsch-Jozsa problem considers oracle functions $f: \{0,1\}^n \mapsto \{0,1\}$ that are required to fall into one of two classes: \emph{constant}, meaning that $f$ returns the same value for all possible arguments, or \emph{balanced}, meaning that $f$ returns $0$ for exactly half of the arguments and $1$ for the other half. The task is to determine the function class with the minimum number of oracle invocations. A classical algorithm proceeds by evaluating $f$ at randomly chosen distinct arguments. This will succeed with certainty in all cases after $N/2+1$ oracle invocations~\cite{deutsch92,cleve98}, where $N=2^n$ is the number of possible argument values. A quantum algorithm exists~\cite{deutsch92,cleve98} and, in its modified form~\cite{collins98}, uses an oracle unitary of the form given in Eq.~\eqref{eq:oracleunitary} exactly once to solve the problem with certainty, giving an exponential speed-up in terms of $n$. We aim to use the notions of unitary discrimination to arrive at all quantum algorithms which can determine the function class with certainty while using $n$ qubits and an oracle of the form of Eq.~\eqref{eq:oracleunitary}. Since each function class is represented by many unitaries, this requires discrimination between two quantum operations. 
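The minimum error probability used throughout this section can be evaluated numerically from the trace norm; the short Python sketch below (with two illustrative states only) is a direct transcription of the expression quoted later in Eq.~\eqref{eq:errorprob}.
\begin{verbatim}
import numpy as np

def trace_norm(A):
    """Trace norm: sum of the singular values of A."""
    return np.linalg.svd(A, compute_uv=False).sum()

def min_error_probability(p1, rho1, p2, rho2):
    """Minimum error probability (1 - ||p1*rho1 - p2*rho2||)/2."""
    return 0.5 * (1.0 - trace_norm(p1 * rho1 - p2 * rho2))

# Two orthogonal pure states are perfectly distinguishable:
rho_a = np.diag([1.0, 0.0])
rho_b = np.diag([0.0, 1.0])
print(min_error_probability(0.5, rho_a, 0.5, rho_b))   # 0.0
\end{verbatim}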
For each given class of functions, the quantum operation is \begin{equation} \hat{\rho}_\mathrm{0} \mapsto \hat{\rho}_\mathrm{f} :=\sum_{f\, \textrm{in class}} p_f \hat{U}_f \hat{\rho}_\mathrm{0} \hat{U}_f^\dagger \end{equation} where the sum is over all possible functions in the given class and $p_f$ is the probability with which each function could be chosen given that the particular class is chosen. For \emph{constant functions,} Eq.~\eqref{eq:oracleunitary} implies that $\hat{U}_f =\hat{I}$ and thus \begin{equation} \hat{\rho}_\mathrm{0} \mapsto \hat{\rho}_{\mathrm{f} \, \mathrm{const}} = \hat{\rho}_\mathrm{0}. \label{eq:rhoconst} \end{equation} For balanced functions, expanding in the computational basis, $\hat{\rho}_\mathrm{0}= \sum_{x,y=0}^{N-1} \rho_{\mathrm{0}\; xy}\ket{x}\bra{y},$ gives \begin{equation} \hat{\rho}_\mathrm{0} \mapsto \hat{\rho}_{\mathrm{f} \, \mathrm{bal}} = \sum_{f\, \textrm{balanced}} p_f \sum_{x,y=0}^{N-1} \left( -1\right)^{f(x) + f(y)} \rho_{\mathrm{0}\, xy} \ket{x}\bra{y}, \label{eq:rhobalone} \end{equation} which gives the density matrix elements after the balanced function operation $\rho_{\mathrm{f} \, \mathrm{bal}\, xy} := \bra{x} \hat{\rho}_{\mathrm{f} \, \mathrm{bal}} \ket{y}$ as \begin{equation} \rho_{\mathrm{f} \, \mathrm{bal}\, xy} = \sum_{f\, \textrm{balanced}} p_f \left( -1\right)^{f(x) + f(y)} \rho_{\mathrm{0}\; xy}. \label{eq:rhobaltwo} \end{equation} Thus $\rho_{\mathrm{f} \, \mathrm{bal}\, xx} = \rho_{\mathrm{0}\; xx}$ for $x = 0, \ldots, N-1$. For non-diagonal density matrix entries, the summation will be complicated by the possibly different probabilities for each balanced function. We shall assume that the probabilities with which each balanced function is selected are identical. Thus $p_f = 1/B$ where $B$ is the number of balanced functions. Enumerating the balanced functions amounts to counting the number of ways in which the $N/2$ arguments that return $0$ can be selected from the $N$ possible arguments. Thus $B = \binom{N}{N/2}$ and \begin{equation} \rho_{\mathrm{f} \, \mathrm{bal}\, xy} = \frac{1}{B}\; \rho_{\mathrm{0}\; xy} \sum_{f\, \textrm{balanced}} \left( -1\right)^{f(x) + f(y)}. \label{eq:rhobalthree} \end{equation} The sum $\sum_{f\, \textrm{balanced}} \left( -1\right)^{f(x) + f(y)}$ can be evaluated for given values of $x$ and $y$ by determining for how many balanced functions $f(x) = f(y)$ and for how many $f(x) \neq f(y).$ If $x \neq y,$ this is independent of the choices of $x$ and $y$. This can be established by noting that the collection of all balanced functions can be listed by assigning $0$ to $N/2$ of the $N$ possible argument ``slots'' and $1$ to the remaining slots. The argument values $x=0,1,2,\ldots, N-1$ in this process merely serve as labels and interchanging two of them will not affect the sum. Thus it suffices to compute this for $x=0$ and $y=1.$ The four possibilities are tabulated in Table~\ref{tab:balanced}. \begin{table}[h] \begin{ruledtabular} \begin{tabular}{cccc} $f(x=0)$ & $f(y=1)$ & $\left( -1\right)^{f(x) + f(y)}$ & Number of Instances \\ \hline $0$ & $0$ & $1$ & $\binom{N-2}{N/2}$\\ $0$ & $1$ & $-1$ & $\binom{N-2}{N/2 -1}$\\ $1$ & $0$ & $-1$ & $\binom{N-2}{N/2 -1}$\\ $1$ & $1$ & $1$ & $\binom{N-2}{N/2}$\\ \end{tabular} \end{ruledtabular} \caption{Four possibilities for $\left( -1\right)^{f(x) + f(y)}$ for $x=0$ and $y=1.$ The two leftmost columns provide the possible combinations of values returned by $f.$ The last column lists the number of times that each possibility occurs. 
For example, the balanced function for which $f(0) = f(1) = 0$ must return $1$ in $N/2$ of the remaining $N-2$ arguments. The number of ways in which this arises is $\binom{N-2}{N/2}.$ Similar arguments apply to the other cases. \label{tab:balanced} } \end{table} Thus, if $x\neq y$ then \begin{align} \sum_{f\, \textrm{balanced}} \left( -1\right)^{f(x) + f(y)} & = 2 \left[ \binom{N-2}{N/2} - \binom{N-2}{N/2-1} \right] \nonumber \\ & = -\binom{N-2}{N/2}\; \frac{2}{N/2-1} \nonumber \\ & = - \frac{B}{N-1} \label{eq:faddition} \end{align} where $B$ is the number of balanced functions and the last two lines follow from algebraic manipulations of combinatorials. Eqs.~\eqref{eq:rhobalthree} and \eqref{eq:faddition} imply that, if $x\neq y$, \begin{equation} \rho_{\mathrm{f} \, \mathrm{bal}\, xy} = - \frac{1}{N-1}\; \rho_{\mathrm{0}\; xy}. \label{eq:rhobalfour} \end{equation} The cases of all values of $x$ and $y$ are then summarized as \begin{equation} \hat{\rho}_{\mathrm{f} \, \mathrm{bal}} = \frac{1}{N-1}\, \left( -\hat{\rho}_\mathrm{0} + N \sum_{x=0}^{N-1} \hat{P}_x \hat{\rho}_\mathrm{0} \hat{P}_x \right) \label{eq:rhobalfive} \end{equation} where $\hat{P}_x := \ket{x}\bra{x}.$ Minimum error discrimination between the two density operators $\hat{\rho}_{\mathrm{f} \, \mathrm{const}}$ and $\hat{\rho}_{\mathrm{f} \, \mathrm{bal}}$ requires a POVM with two outcomes and two positive operator elements $\hat{\pi}_\mathrm{const}$ and $\hat{\pi}_\mathrm{bal},$ where $\hat{\pi}_\mathrm{const} + \hat{\pi}_\mathrm{bal} = \hat{I}.$ The probability with which an incorrect inference is made is \begin{equation} p_\mathrm{error} = p_\mathrm{const} \Trace{\left[\hat{\rho}_{\mathrm{f} \, \mathrm{bal}}\hat{\pi}_\mathrm{const}\right]} + p_\mathrm{bal} \Trace{\left[\hat{\rho}_{\mathrm{f} \, \mathrm{const}}\hat{\pi}_\mathrm{bal}\right]} \label{eq:errordensity} \end{equation} where $p_\mathrm{const}$ ($p_\mathrm{bal}$) is the probability of selecting a function from the constant (balanced) class. A standard derivation~\cite{sacchi05} gives \begin{equation} p_\mathrm{error} = \frac{1}{2}\; \left( 1 - \lVert p_\mathrm{const} \hat{\rho}_{\mathrm{f} \, \mathrm{const}} - p_\mathrm{bal} \hat{\rho}_{\mathrm{f} \, \mathrm{bal}} \rVert \right) \label{eq:errorprob} \end{equation} where the trace norm satisfies \begin{equation} \lVert \hat{A} \rVert := \Trace{\left[ \sqrt{\hat{A}^\dagger \hat{A}}\; \right]} = \sum_i \sigma_i(\hat{A}) \end{equation} with $\left\{ \sigma_i(A) \right\}$ being the singular values of $\hat{A}.$ The trace norm is clearly invariant under unitary transformations in the sense that, if $\hat{V}$ is any unitary then $\lVert \hat{A} \rVert = \lVert \hat{V} \hat{A}\hat{V}^\dagger \rVert.$ Although Eqs.~\eqref{eq:errordensity} and~\eqref{eq:errorprob} are equivalent, they have distinct uses in terms of determining conditions under which the algorithm will succeed. As we shall show, Eq.~\eqref{eq:errorprob} yields the optimal initial state and Eq.~\eqref{eq:errordensity}, the optimal measurement for success. 
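Equation~\eqref{eq:rhobalfive} can be checked by brute force for small $n$, averaging $\hat{U}_f \hat{\rho}_\mathrm{0} \hat{U}_f^\dagger$ over all balanced functions with equal weights; the following Python sketch is purely a numerical consistency check and uses an arbitrary random initial state.
\begin{verbatim}
import numpy as np
from itertools import combinations

def rho_balanced_average(rho0):
    """Average U_f rho0 U_f^dagger over all balanced f, equal weights."""
    N = rho0.shape[0]
    acc = np.zeros_like(rho0)
    count = 0
    for ones in combinations(range(N), N // 2):   # arguments where f = 1
        signs = np.ones(N)
        signs[list(ones)] = -1.0                   # (-1)^f(x)
        U = np.diag(signs)
        acc += U @ rho0 @ U
        count += 1
    return acc / count

def rho_balanced_formula(rho0):
    """Right-hand side of Eq. (rhobalfive)."""
    N = rho0.shape[0]
    return (-rho0 + N * np.diag(np.diag(rho0))) / (N - 1)

rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho0 = M @ M.conj().T
rho0 /= np.trace(rho0).real
print(np.allclose(rho_balanced_average(rho0), rho_balanced_formula(rho0)))  # True
\end{verbatim}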
Eq.~\eqref{eq:errorprob} implies that the quantum algorithm for solving the Deutsch-Jozsa problem will succeed with certainty when $\lVert p_\mathrm{const} \hat{\rho}_{\mathrm{f} \, \mathrm{const}} - p_\mathrm{bal} \hat{\rho}_{\mathrm{f} \, \mathrm{bal}} \rVert=1.$ Note that $\lVert p_\mathrm{const} \hat{\rho}_{\mathrm{f} \, \mathrm{const}} \rVert =p_\mathrm{const}$ and $\lVert p_\mathrm{bal} \hat{\rho}_{\mathrm{f} \, \mathrm{bal}} \rVert =p_\mathrm{bal}$, giving $\lVert p_\mathrm{const} \hat{\rho}_{\mathrm{f} \, \mathrm{const}} \rVert + \lVert p_\mathrm{bal} \hat{\rho}_{\mathrm{f} \, \mathrm{bal}} \rVert = p_\mathrm{bal} + p_\mathrm{const} =1.$ Thus the quantum algorithm will succeed with certainty if and only if \begin{equation} \lVert p_\mathrm{const} \hat{\rho}_{\mathrm{f} \, \mathrm{const}} - p_\mathrm{bal} \hat{\rho}_{\mathrm{f} \, \mathrm{bal}} \rVert = \lVert p_\mathrm{const} \hat{\rho}_{\mathrm{f} \, \mathrm{const}} \rVert + \lVert p_\mathrm{bal} \hat{\rho}_{\mathrm{f} \, \mathrm{bal}} \rVert. \end{equation} In this context, an important general result~\cite{qiu08} is that if $\hat{A}$ and $\hat{B}$ are positive semidefinite operators then $\lVert \hat{A} - \hat{B}\rVert = \lVert \hat{A} \rVert + \lVert \hat{B} \rVert$ if and only if $\hat{A}$ and $\hat{B}$ have orthogonal support. The support of a positive semidefinite operator is the subspace spanned by its eigenstates which correspond to non-zero eigenvalues. Thus the quantum algorithm will succeed with certainty if and only if the operators $\hat{\rho}_{\mathrm{f} \, \mathrm{const}}$ and $\hat{\rho}_{\mathrm{f} \, \mathrm{bal}}$ have orthogonal support, or equivalently \begin{equation} \hat{\rho}_{\mathrm{f} \, \mathrm{const}}\hat{\rho}_{\mathrm{f} \, \mathrm{bal}} = \hat{\rho}_{\mathrm{f} \, \mathrm{bal}}\hat{\rho}_{\mathrm{f} \, \mathrm{const}} =0. \label{eq:orthogrho} \end{equation} Defining $\hat{\Lambda} : = \sum_{x=0}^{N-1} \hat{P}_x \hat{\rho}_\mathrm{0} \hat{P}_x$, which is easily shown to be a positive operator, and using Eqs~\eqref{eq:rhoconst}, \eqref{eq:rhobalfive} and~\eqref{eq:orthogrho} gives \begin{align} \commut{\hat{\rho}_\mathrm{0}}{\hat{\Lambda}}& = 0 \quad \textrm{and} \label{eq:condone}\\ N \hat{\Lambda} \hat{\rho}_\mathrm{0} & = \hat{\rho}_\mathrm{0}^2. \label{eq:condtwo} \end{align} Eq~\eqref{eq:condone} implies that $\hat{\rho}_\mathrm{0}$ and $\hat{\Lambda}$ can be simultaneously diagonalized. Denote the associated basis of eigenstates by $\{ \ket{\phi_j} \; | \; j = 1,\ldots N \}.$ Thus \begin{equation} \hat{\rho}_\mathrm{0} = \sum_{j=1}^L r_j \ket{\phi_j}\bra{\phi_j} \end{equation} where $L>0$ is the number of non-zero eigenvalues of $\hat{\rho}_\mathrm{0}$ and $0< r_j \leqslant 1$ satisfy $\sum_{j=1}^L r_j = 1.$ Likewise \begin{equation} \hat{\Lambda} = \sum_{j=1}^N \lambda_j \ket{\phi_j}\bra{\phi_j} \label{eq:lambdadiag} \end{equation} where $\lambda_j \geqslant 0$. Eq.~\eqref{eq:condtwo} implies that $N \lambda_j r_j = r_j^2$ for $j=1, \ldots, L$ and this gives \begin{equation} \lambda_j = \frac{r_j}{N} \label{eq:lambdar} \end{equation} for $j=1, \ldots, L$. 
Additionally, \begin{align} \hat{\Lambda} & = \sum_{x=0}^{N-1} \hat{P}_x \sum_{j=1}^L r_j \ket{\phi_j}\bra{\phi_j} \hat{P}_x \\ & = \sum_{x=0}^{N-1} \hat{P}_x \sum_{j=1}^L r_j \lvert \phi_j(x) \rvert^2 \end{align} where $\phi_j(x):= \innerprod{x}{\phi_j}.$ Thus Eq.~\eqref{eq:lambdadiag} gives \begin{equation} \lambda_k = \sum_{x=0}^{N-1} \sum_{j=1}^L r_j \lvert \phi_k(x) \rvert^2 \lvert \phi_j(x) \rvert^2 \end{equation} and, when combined with Eq.~\eqref{eq:lambdar}, \begin{equation} \frac{r_k}{N} = \sum_{x=0}^{N-1} \sum_{j=1}^L r_j \lvert \phi_k(x) \rvert^2 \lvert \phi_j(x) \rvert^2 \end{equation} for $k\leqslant L.$ The fact that $L>0$ implies that $r_1 \neq 0.$ Thus \begin{equation} \frac{r_1}{N} = r_1\sum_{x=0}^{N-1} \lvert \phi_1(x) \rvert^4 +\sum_{x=0}^{N-1} \sum_{j=2}^L r_j \lvert \phi_1(x) \rvert^2 \lvert \phi_j(x) \rvert^2 \label{eq:firsteigenval} \end{equation} where the second term on the right is defined to mean $0$ when $L=1.$ The second term on the right of Eq.~\eqref{eq:firsteigenval} is non-negative. The first term contains $\sum_{x=0}^{N-1} \lvert \phi_1(x) \rvert^4$ which is subject to the constraint that $\sum_{x=0}^{N-1} \lvert \phi_1(x) \rvert^2 = 1.$ A Lagrange multiplier approach shows that $\sum_{x=0}^{N-1} \lvert \phi_1(x) \rvert^4 \geqslant 1/N$, provided that $0 \leqslant \lvert \phi_1(x) \rvert^2 \leqslant 1,$ which is always satisfied. This minimum is attained when $\lvert \phi_1(x) \rvert^2 = 1/N$ (or equivalently $\phi_1(x) = e^{i\theta_x}/\sqrt{N}$ where $\theta_x$ is real) for $x = 0, \ldots, N-1$. Since the left-hand side of Eq.~\eqref{eq:firsteigenval} equals $r_1/N$ and the first term alone is at least $r_1/N$, the non-negative second term must vanish and the minimum must be attained; because $\lvert \phi_1(x) \rvert^2 = 1/N > 0$ and $r_j > 0$, the second term can only vanish if $L=1.$ Thus the quantum algorithm will succeed with certainty if and only if the input is a pure state, i.e.\ (dropping the subscript and changing notation to be consistent with Fig.~\ref{fig:generalalg}) \begin{equation} \hat{\rho}_\mathrm{0} = \ket{\Psi_0}\bra{\Psi_0}. \end{equation} The pure state must be such that it produces a minimum for the first term on the right of Eq.~\eqref{eq:firsteigenval} and thus \begin{equation} \ket{\Psi_0} = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} e^{i\theta_x} \ket{x}. \label{eq:optimalstate} \end{equation} Eq.~\eqref{eq:errordensity} and the positivity of the POVM elements and density operators require that, for the algorithm to succeed with certainty, the measurement operators must satisfy $\Trace{\left[\hat{\rho}_{\mathrm{f} \, \mathrm{bal}}\hat{\pi}_\mathrm{const}\right]} = \Trace{\left[\hat{\rho}_{\mathrm{f} \, \mathrm{const}}\hat{\pi}_\mathrm{bal}\right]}=0.$ But $\hat{\pi}_\mathrm{const} = \hat{I} - \hat{\pi}_\mathrm{bal}$ implies that this requirement is equivalent to $\Trace{\left[\hat{\rho}_{\mathrm{f} \, \mathrm{const}}\hat{\pi}_\mathrm{const}\right]} = \Trace{\left[\hat{\rho}_{\mathrm{f} \, \mathrm{bal}}\hat{\pi}_\mathrm{bal}\right]} = 1.$ The fact that the density operators and the measurement operators are positive semidefinite and that their eigenvalues fall in the range $\left[ 0,1\right]$ then implies that $\hat{\pi}_\mathrm{const}$ must be the projector onto the support of $\hat{\rho}_{\mathrm{f} \, \mathrm{const}}.$ The associated POVM elements are \begin{subequations} \begin{align} \hat{\pi}_\mathrm{const} & = \ket{\Psi_0}\bra{\Psi_0} \quad \textrm{and}\\ \hat{\pi}_\mathrm{bal} & = \hat{I} - \ket{\Psi_0}\bra{\Psi_0}. 
\end{align} \label{eq:optimalPOVM} \end{subequations} Eqs.~\eqref{eq:optimalstate} and~\eqref{eq:optimalPOVM} give the initial states and measurements for a general quantum algorithm which solves the Deutsch-Jozsa problem using a single invocation of an oracle unitary of Eq.~\eqref{eq:oracleunitary} and no ancillary qubits. The standard algorithm, for which $\ket{\Psi_0} = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} \ket{x},$ is one example of this~\cite{cleve98,collins98}. Also the general algorithm will successfully identify the function class regardless of the probabilities with which the various admissible functions are selected. This can be verified by applying the algorithm unitary transformations for the various admissible functions to the initial state of Eq.~\eqref{eq:optimalstate}, computing the final state and using this to determine the POVM operators of Eq.~\eqref{eq:optimalPOVM} to determine probabilities of the two possible outcomes. \section{Application to the Deutsch-Jozsa algorithm on mixed initial states} \label{sec:djmixed} In some proposed implementations of quantum computing, such as solution state nuclear magnetic resonance (NMR)~\cite{chuang98a,cory98,cory97,gershenfeld97,marx00,vdsypen01,vdsypen00,vdsypen04,negrevergne05,negrevergne06}, the initial state of the quantum system is mixed and therefore the problem cannot be solved with certainty by using the scheme of Fig.~\ref{fig:generalalg}. The notion of minimum error discrimination between quantum operations can be applied to bound the success probability of any algorithm on such mixed input states. For a general mixed state, Eqs.~\eqref{eq:rhoconst}, \eqref{eq:rhobalfive} and \eqref{eq:errorprob} imply \begin{equation} p_\mathrm{error} = \frac{1}{2}\; \left( 1 - \left\lVert \left( p_\mathrm{const} + \frac{p_\mathrm{bal}}{N-1} \right) \hat{\rho}_\mathrm{0} - \frac{p_\mathrm{bal} N}{N-1} \hat{\Lambda} \right\rVert \right). \end{equation} We consider the special case where $p_\mathrm{const} = p_\mathrm{bal} = 1/2$ and thus \begin{equation} p_\mathrm{error} = \frac{1}{2}\; \left[ 1 - \frac{N}{2(N-1)}\; \left\lVert \hat{\rho}_\mathrm{0} - \hat{\Lambda} \right\rVert \right]. \end{equation} Attempting to apply the triangle inequality to the trace norm and using the facts that $\left\lVert \hat{\rho}_\mathrm{0} \right\rVert = \left\lVert \hat{\Lambda} \right\rVert =1$ results in an inequality which is always satisfied. However, in some circumstances a similar approach can yield meaningful bounds. Specifically we consider situations in which an ensemble of quantum systems is first allowed to reach thermal equilibrium followed by application of a preparatory, oracle-independent unitary. The resulting state constitutes the initial state for the system. We shall assume that the system consists of $n$ qubits and has a Hamiltonian \begin{equation} \hat{H} = \sum_{i=1}^n \frac{\hbar \omega_i }{2} \hat{\sigma}_z^{(i)} + \sum_{i<j}^n \frac{\hbar \pi J_{ij}}{2} \hat{\sigma}_z^{(i)} \otimes \hat{\sigma}_z^{(j)} \end{equation} where $\omega_i$ is the precession frequency of the $i^\mathrm{th}$ qubit and $J_{ij}$ is the coupling between the $i^\mathrm{th}$ and $j^\mathrm{th}$ qubits (this is typical for solution state NMR~\cite{cory98,chuang98a}). 
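A minimal Python sketch of this (diagonal) Hamiltonian for a few qubits is given below; the frequencies and couplings used are placeholder values chosen only for illustration, not data from any particular experiment.
\begin{verbatim}
import numpy as np
from functools import reduce

def sigma_z_on(i, n):
    """Pauli sigma_z on qubit i (1-based) of an n-qubit register,
    with qubit n leftmost as in |x> = |x_n>...|x_1>."""
    ops = [np.diag([1.0, -1.0]) if q == i else np.eye(2)
           for q in range(n, 0, -1)]
    return reduce(np.kron, ops)

def nmr_hamiltonian(omega, J, hbar=1.0):
    """H = sum_i hbar*omega_i/2 sz_i + sum_{i<j} hbar*pi*J_ij/2 sz_i sz_j."""
    n = len(omega)
    H = sum(hbar * omega[i - 1] / 2.0 * sigma_z_on(i, n)
            for i in range(1, n + 1))
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            H = H + (hbar * np.pi * J[i - 1][j - 1] / 2.0
                     * sigma_z_on(i, n) @ sigma_z_on(j, n))
    return H

omega = [1.00, 0.95, 0.90]                       # illustrative only
J = [[0.0, 0.01, 0.02], [0.0, 0.0, 0.015], [0.0, 0.0, 0.0]]
H = nmr_hamiltonian(omega, J)
print(np.allclose(H, np.diag(np.diag(H))))       # True: H is diagonal
\end{verbatim}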
The thermal equilibrium density operator is $\hat{\rho}_\mathrm{th} =e^{-\beta \hat{H}}/Z$ where $\beta = 1/kT$ and $Z = \Trace{\left[ e^{-\beta \hat{H}} \right]}.$ We shall assume that $J_{ij} \ll \omega_k$ and we shall ignore the coupling terms in the density operator, which is then constructed from \begin{equation} e^{-\beta \hat{H}} \approx e^{-\sum_{i=1}^n \alpha_i\hat{\sigma}_z^{(i)} } \end{equation} where $\alpha_i:= \hbar \omega_i /2kT.$ Our final assumption is that $\alpha_i \ll 1$ (in typical solution state NMR scenarios, $\omega_i \sim 500\unit{MHz}$ and $T \sim 300\unit{K}$ so that $\alpha_i \sim 10^{-5}$). Thus \begin{equation} \hat{\rho}_\mathrm{th} \approx \frac{1}{N} \hat{I} - \hat{\rho}_\mathrm{dev} \end{equation} where the deviation density operator is $\hat{\rho}_\mathrm{dev} := \sum_{i=1}^n \alpha_i \hat{\sigma}_z^{(i)}/N.$ Note that $\Trace{\hat{\rho}_\mathrm{dev}}=0.$ Given the thermal equilibrium state as an initial state of the system, there is a range of possible quantum operations that can be applied prior to the first invocation of the oracle. These could contain non-unitary operations; examples include pseudopure state preparation~\cite{chuang98a,cory98} or algorithmic cooling~\cite{boykin02, *schulman05}. Our aim is to focus on scenarios in which such non-unitary operations are avoided; we only consider cases where \emph{operations applied prior to the first oracle invocation are unitary without involving any ancillary systems.} Here the most general initial state of the $n$ qubit system is represented by $\hat{\rho}_\mathrm{0} = \hat{V} \hat{\rho}_\mathrm{th} \hat{V}^\dagger$ where $\hat{V}$ is the preparatory unitary operation. Then \begin{equation} \hat{\rho}_\mathrm{0} = \frac{1}{N} \hat{I} - \hat{\rho}_\mathrm{dev}^\prime \end{equation} where $\hat{\rho}_\mathrm{dev}^\prime = \hat{V}\hat{\rho}_\mathrm{dev} \hat{V}^\dagger$ and \begin{equation} \hat{\Lambda} = \frac{1}{N} \hat{I} - \sum_{x=0}^{N-1} \hat{P}_x \hat{\rho}_\mathrm{dev}^\prime \hat{P}_x. \end{equation} Define $\hat{\Lambda}_\mathrm{dev}^\prime:= \sum_{x=0}^{N-1} \hat{P}_x \hat{\rho}_\mathrm{dev}^\prime \hat{P}_x.$ Thus \begin{equation} p_\mathrm{error} = \frac{1}{2}\; \left[ 1 -\frac{N}{2(N-1)}\; \left\lVert \hat{\rho}_\mathrm{dev}^\prime - \hat{\Lambda}_\mathrm{dev}^\prime \right\rVert \right]. \end{equation} The triangle inequality yields \begin{equation} p_\mathrm{error} \geqslant \frac{1}{2}\; \left[ 1 -\frac{N}{2(N-1)}\; \left( \left\lVert \hat{\rho}_\mathrm{dev}^\prime \right\rVert + \left\lVert \hat{\Lambda}_\mathrm{dev}^\prime \right\rVert \right) \right]. \end{equation} The diagonal nature of $\hat{\Lambda}_\mathrm{dev}^\prime$ gives \begin{equation} \left\lVert \hat{\Lambda}_\mathrm{dev}^\prime \right\rVert = \sum_{x=0}^{N-1} \left\lvert \bra{x}\hat{\rho}_\mathrm{dev}^\prime \ket{x} \right\rvert. \end{equation} Denote the eigenstates and eigenvalues of $\hat{\rho}_\mathrm{dev}^\prime$ by $\left\{ \ket{\chi_i} \; | \; i=1,\ldots,N \right\}$ and $\left\{ c_i \; | \; i=1,\ldots,N \right\}$ respectively. Then \begin{align} \left\lVert \hat{\Lambda}_\mathrm{dev}^\prime \right\rVert & \leqslant \sum_{x=0}^{N-1} \sum_{i=1}^{N} \left\lvert c_i \right\rvert \left\lvert \innerprod{x}{\chi_i} \right\rvert^2 \nonumber \\ & = \sum_{i=1}^{N} \left\lvert c_i \right\rvert \sum_{x=0}^{N-1} \left\lvert \innerprod{x}{\chi_i} \right\rvert^2 \nonumber \\ & = \sum_{i=1}^{N} \lvert c_i \rvert = \left\lVert \hat{\rho}_\mathrm{dev}^\prime \right\rVert. 
\end{align} Thus \begin{equation} p_\mathrm{error} \geqslant \frac{1}{2}\; \left[ 1 -\frac{N}{(N-1)}\; \left\lVert \hat{\rho}_\mathrm{dev}^\prime \right\rVert \right]. \end{equation} The unitary invariance of the trace norm implies that $\left\lVert \hat{\rho}_\mathrm{dev}^\prime \right\rVert = \left\lVert \hat{\rho}_\mathrm{dev} \right\rVert.$ The singular values of $\hat{\rho}_\mathrm{dev}$ are $\left\{ \lvert \alpha_1 + \alpha_2 + \ldots + \alpha_n\rvert/N, \right.$ $\left. \lvert -\alpha_1 + \alpha_2 + \ldots + \alpha_n\rvert/N, \lvert \alpha_1 - \alpha_2 + \ldots + \alpha_n\rvert/N, \ldots \right\}.$ Without loss of generality assume that $\alpha_1\geqslant \alpha_2 \geqslant \alpha_3 \geqslant \ldots \geqslant 0.$ Then each singular value is bounded from above by $n \alpha_1/N$ and \begin{equation} \left\lVert \hat{\rho}_\mathrm{dev}^\prime \right\rVert \leqslant n \alpha_1. \end{equation} Thus \begin{equation} p_\mathrm{error} \geqslant \frac{1}{2}\; \left( 1 -\frac{Nn \alpha_1}{N-1}\; \right). \label{eq:errorthermal} \end{equation} Defining $\eps:= Nn\alpha_1 /(N-1)$ gives $p_\mathrm{error} \geqslant (1-\eps)/2.$ This gives the failure probability for an application of the algorithm to a single ensemble member and it indicates a non-deterministic output. The algorithm may be run repeatedly using the same unitary on a large number of independent quantum systems, all initially described by the same density operator, and this will ultimately increase the chances of successfully identifying the oracle class. However, this must be compared to a classical probabilistic algorithm. Such an analysis has been done in the case of a pseudopure-state input~\cite{anderson05}. Here the quantum algorithm is reconfigured, using a single ancillary qubit to which the function class is written whenever a correct pure state input is used. For mixed input states, it is possible that an erroneous function class may be written to this ancillary qubit. Over an entire ensemble, the function class is inferred by effectively taking a ``majority-vote'' of computational basis measurement outcomes on the ancillary qubits for individual ensemble members. The probability of successful inference only depends on the ensemble size and the probabilities with which each of the two possible measurement outcomes occur; the initial state merely determines these probabilities in terms of $\eps$. This is to be compared with a classical probabilistic algorithm. The results are that, for large ensemble size (i.e.\ in the limit as this becomes infinite), the quantum algorithm succeeds with larger probability than the classical probabilistic algorithm if $\eps > \sqrt{3/4}.$ For the quantum algorithm to succeed with greater probability than the classical algorithm, this implies \begin{equation} \alpha_1 > \sqrt{3/4}\; \frac{N-1}{N}\; \frac{1}{n} \end{equation} or, alternatively, \begin{equation} n > \sqrt{3/4}\; \frac{N-1}{N}\; \frac{1}{\alpha_1}. \end{equation} In typical solution-state NMR situations $\alpha_1 \sim 10^{-5}$ and thus \begin{equation} n > \sqrt{3/4}\; \frac{N-1}{N}\; 10^5. \end{equation} Thus with current solution state NMR technology and an implementation of the algorithm which starts with the thermal equilibrium state, uses no ancillary qubits and only uses the oracle unitary of Eq.~\eqref{eq:oracleunitary} plus any other unitaries, the minimum number of qubits required for the quantum algorithm to succeed with greater probability than any classical algorithm is approximately $10^5$. 
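For reference, the threshold implied by the last inequality can be evaluated directly; the factor $(N-1)/N$ is essentially $1$ for any $n$ of this size.
\begin{verbatim}
import numpy as np

alpha_1 = 1e-5                        # typical solution-state NMR value
n_min = np.sqrt(3.0 / 4.0) / alpha_1  # (N-1)/N omitted, it is ~1 here
print(f"quantum beats classical only for n > {n_min:.3g}")   # about 8.66e+04
\end{verbatim}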
Current implementations of the Deutsch-Jozsa algorithm have not even exceeded $n=10.$\\ \section{Conclusion} In conclusion we have demonstrated the usefulness of the notion that quantum algorithms are tools for discriminating between oracle unitary transformations. For the Deutsch-Jozsa problem, applying techniques associated with unitary discrimination results in the complete set of quantum algorithms which can solve the problem with certainty; these have a structure similar to that of the standard version of the algorithm. We have also explored the issue of solving the Deutsch-Jozsa problem when the initial state of the quantum system is selected from a restricted set. For the case of a thermal equilibrium state followed by an arbitrary oracle-independent unitary, this yields a lower bound on the number of function arguments such that the quantum algorithm succeeds with greater probability than any classical algorithm. This is several orders of magnitude larger than that which has been implemented experimentally to date. We should point out that our analysis assumed certain constraints. First we assumed the oracle unitary of Eq.~\eqref{eq:oracleunitary}, which operates on $n$ qubits. However, some versions of the algorithm require $n$ argument qubits plus one ancilla qubit and use an oracle unitary defined by $\hat{U}_f\ket{x}\ket{y} := \ket{x}\ket{y\oplus f(x)}$ (the rightmost system in this notation is the single ancilla qubit, the leftmost the $n$ argument qubits)~\cite{deutsch92}. The unitary considered in our analysis can be reached from this by fixing the ancilla qubit in a special state, converting the problem to one of phase estimation~\cite{cleve98}. Whether this yields a larger set of quantum algorithms which solves the problem with certainty or reduces the lower bound in the thermal equilibrium state scenario is open to investigation. Second, it is known that using ancilla qubits entangled with ``system'' qubits enhances the possibility of successful discrimination between unitaries~\cite{dariano01} and it is conceivable that this could also yield a larger set of quantum algorithms and improved lower bounds in the thermal equilibrium case. Finally, in the thermal equilibrium scenario, relaxation to the thermal equilibrium state could be followed by a non-unitary quantum operation~\cite{fahmy08} and we have not assessed bounds in any such cases. \acknowledgments The author would like to thank Michael Frey for many useful discussions and is grateful for support from the Mesa State Faculty Professional Development Fund.
\section{Introduction}\label{intro} Hamiltonian problems are of great interest in many fields of application, ranging from the macro-scale of celestial mechanics to the micro-scale of molecular dynamics. They have been deeply studied, from the point of view of mathematical analysis, for two centuries. Their numerical solution is a more recent field of investigation, which has led to the definition of symplectic methods, i.e., methods whose discrete map is symplectic, considering that the continuous flow is itself symplectic and conserves $H(y)$. However, the conservation of the Hamiltonian and the symplecticity of the discrete map cannot be satisfied at the same time unless the integrator produces the exact solution (see \cite[page\,379]{HLW}). More recently, the conservation of energy has been approached by means of the concept of the {\em discrete line integral}, in a series of papers \cite{IP1,IP2,IT1,IT2,IT3}, leading to the definition of {\em Hamiltonian Boundary Value Methods (HBVMs)} \cite{BIS,BIT2,BIT,BIT1}, a class of methods able to preserve, for the discrete solution, polynomial Hamiltonians of arbitrarily high degree (and hence to provide a {\em practical} conservation of any sufficiently differentiable Hamiltonian). In more detail, in \cite{BIT} HBVMs based on Lobatto nodes have been analysed, whereas in \cite{BIT1} HBVMs based on Gauss-Legendre abscissae have been considered. In the latter reference, it has actually been shown that both formulae are essentially equivalent to each other, since the order and stability properties of the method turn out to be independent of the abscissae distribution, and both methods are equivalent when the number of the so-called {\em silent stages} tends to infinity. In this paper this conclusion is further supported, since we prove that HBVMs, when cast as Runge-Kutta methods, are such that the corresponding matrix of the tableau has its nonzero eigenvalues coincident with those of the corresponding Gauss-Legendre formula (isospectral property of HBVMs). This property can also be used to further analyse the existing connections between HBVMs and Runge-Kutta collocation methods. With this premise, the structure of the paper is the following: in Section~\ref{hbvms} the basic facts about HBVMs are recalled; in Section~\ref{iso} we state the main result of this paper, concerning the isospectral property; in Section~\ref{coll} such property is further generalized to study the existing connections between HBVMs and Runge-Kutta collocation methods; finally, in Section~\ref{fine} a few concluding remarks are given. \section{Hamiltonian Boundary Value Methods}\label{hbvms} The arguments in this section are worked out starting from those used in \cite{BIT,BIT1} to introduce and analyse HBVMs. We consider canonical Hamiltonian problems in the form \begin{equation}\label{hamilode}\dot y = J\nabla H(y), \qquad y(t_0) = y_0\in\mathbb R^{2m}, \end{equation} \noindent where $J$ is a skew-symmetric constant matrix, and the Hamiltonian $H(y)$ is assumed to be sufficiently differentiable. The key formula which HBVMs rely on is the {\em line integral} and the related property of conservative vector fields: \begin{equation}\label{Hy} H(y_1) - H(y_0) = h\int_0^1 {\dot\sigma}(t_0+\tau h)^T\nabla H(\sigma(t_0+\tau h))\mathrm{d}\tau, \end{equation} \noindent for any $y_1 \in \mathbb R^{2m}$, where $\sigma$ is any smooth function such that \begin{equation} \label{sigma}\sigma(t_0) = y_0, \qquad\sigma(t_0+h) = y_1. 
\end{equation} \noindent Here we consider the case where $\sigma(t)$ is a polynomial of degree $s$, yielding an approximation to the true solution $y(t)$ in the time interval $[t_0,t_0+h]$. The numerical approximation for the subsequent time-step, $y_1$, is then defined by (\ref{sigma}). After introducing a set of $s$ distinct abscissae, \begin{equation}\label{ci}0<c_{1},\ldots ,c_{s}\le1,\end{equation} \noindent we set \begin{equation}\label{Yi}Y_i=\sigma(t_0+c_i h), \qquad i=1,\dots,s,\end{equation} \noindent so that $\sigma(t)$ may be thought of as an interpolation polynomial, interpolating the {\em fundamental stages} $Y_i$, $i=1,\dots,s$, at the abscissae (\ref{ci}). We observe that, due to (\ref{sigma}), $\sigma(t)$ also interpolates the initial condition $y_0$. \begin{rem}\label{c0} Sometimes, the interpolation at $t_0$ is explicitly required. In such a case, the extra abscissa $c_0=0$ is formally added to (\ref{ci}). This is the case, for example, of a Lobatto distribution of the abscissae \cite{BIT}.\end{rem} Let us consider the following expansions of $\dot \sigma(t)$ and $\sigma(t)$ for $t\in [t_0,t_0+h]$: \begin{equation} \label{expan} \dot \sigma(t_0+\tau h) = \sum_{j=1}^{s} \gamma_j P_j(\tau), \qquad \sigma(t_0+\tau h) = y_0 + h\sum_{j=1}^{s} \gamma_j \int_{0}^\tau P_j(x)\,\mathrm{d} x, \end{equation} \noindent where $\{P_j(t)\}$ is a suitable basis of the vector space of polynomials of degree at most $s-1$ and the (vector) coefficients $\{\gamma_j\}$ are to be determined. We shall consider an {\em orthonormal basis} of polynomials on the interval $[0,1]$\footnote{The use of an arbitrary polynomial basis is also permitted and has been considered in the past (see for example \cite{IT3,IP2}), however as was shown in \cite{BIT1}, among all possible choices, the Legendre basis turns out to be the optimal one.}, i.e.: \begin{equation}\label{orto}\int_0^1 P_i(t)P_j(t)\mathrm{d} t = \delta_{ij}, \qquad i,j=1,\dots,s,\end{equation} \noindent where $\delta_{ij}$ is the Kronecker symbol, and $P_i(t)$ has degree $i-1$. Such a basis can be readily obtained as \begin{equation}\label{orto1}P_i(t) = \sqrt{2i-1}\,\hat P_{i-1}(t), \qquad i=1,\dots,s,\end{equation} \noindent with $\hat P_{i-1}(t)$ the shifted Legendre polynomial, of degree $i-1$, on the interval $[0,1]$. \begin{rem}\label{recur} From the properties of shifted Legendre polynomials (see, e.g., \cite{AS} or the Appendix in \cite{BIT}), one readily obtains that the polynomials $\{P_j(t)\}$ satisfy the three-terms recurrence relation: \begin{eqnarray*} P_1(t)&\equiv& 1, \qquad P_2(t) = \sqrt{3}(2t-1),\\ P_{j+2}(t) &=& (2t-1)\frac{2j+1}{j+1} \sqrt{\frac{2j+3}{2j+1}} P_{j+1}(t) -\frac{j}{j+1}\sqrt{\frac{2j+3}{2j-1}} P_j(t), \quad j\ge1. \end{eqnarray*} \end{rem} We shall also assume that $H(y)$ is a polynomial, which implies that the integrand in \eqref{Hy} is also a polynomial so that the line integral can be exactly computed by means of a suitable quadrature formula. 
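For reference, the recurrence of Remark~\ref{recur} and the orthonormality conditions (\ref{orto}) can be checked numerically; in the Python sketch below the Gauss quadrature used for the check is merely a convenient tool of the sketch and is not part of the method itself.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss

def legendre_basis(s, t):
    """Orthonormal shifted Legendre polynomials P_1,...,P_s of the
    recurrence above, evaluated at the points t in [0,1]."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    P = np.zeros((t.size, s))
    P[:, 0] = 1.0
    if s > 1:
        P[:, 1] = np.sqrt(3.0) * (2.0 * t - 1.0)
    for j in range(1, s - 1):        # computes P_{j+2} in column j+1
        P[:, j + 1] = ((2.0 * t - 1.0) * (2 * j + 1) / (j + 1)
                       * np.sqrt((2 * j + 3) / (2 * j + 1)) * P[:, j]
                       - j / (j + 1)
                       * np.sqrt((2 * j + 3) / (2 * j - 1)) * P[:, j - 1])
    return P

s, k = 4, 8
x, w = leggauss(k)                       # Gauss nodes/weights on [-1,1]
tau, omega = (x + 1.0) / 2.0, w / 2.0    # mapped to [0,1]
P = legendre_basis(s, tau)
print(np.allclose(P.T @ np.diag(omega) @ P, np.eye(s)))   # True
\end{verbatim}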
It is easy to observe that in general, due to the high degree of the integrand function, such quadrature formula cannot be solely based upon the available abscissae $\{c_i\}$: one needs to introduce an additional set of abscissae $\{\hat c_1, \dots,\hat c_r\}$, distinct from the nodes $\{c_i\}$, in order to make the quadrature formula exact: \begin{eqnarray} \label{discr_lin} \displaystyle \lefteqn{\int_0^1 {\dot\sigma}(t_0+\tau h)^T\nabla H(\sigma(t_0+\tau h))\mathrm{d}\tau =}\\ && \sum_{i=1}^s \beta_i {\dot\sigma}(t_0+c_i h)^T\nabla H(\sigma(t_0+c_i h)) + \sum_{i=1}^r \hat \beta_i {\dot\sigma}(t_0+\hat c_i h)^T\nabla H(\sigma(t_0+\hat c_i h)), \nonumber \end{eqnarray} \noindent where $\beta_i$, $i=1,\dots,s$, and $\hat \beta_i$, $i=1,\dots,r$, denote the weights of the quadrature formula corresponding to the abscissae $\{c_i\}\cup\{\hat c_i\}$, i.e., \begin{eqnarray}\nonumber \beta_i &=& \int_0^1\left(\prod_{ j=1,j\ne i}^s \frac{t-c_j}{c_i-c_j}\right)\left(\prod_{j=1}^r \frac{t-\hat c_j}{c_i-\hat c_j}\right)\mathrm{d}t, \qquad i = 1,\dots,s,\\ \label{betai}\\ \nonumber \hat\beta_i &=& \int_0^1\left(\prod_{ j=1}^s \frac{t-c_j}{\hat c_i-c_j}\right)\left(\prod_{ j=1,j\ne i}^r \frac{t-\hat c_j}{\hat c_i-\hat c_j}\right)\mathrm{d}t, \qquad i = 1,\dots,r. \end{eqnarray} \begin{rem}\label{c01} In the case considered in the previous Remark~\ref{c0}, i.e. when $c_0=0$ is formally added to the abscissae (\ref{ci}), the first product in each formula in (\ref{betai}) ranges from $j=0$ to $s$. Moreover, also the range of $\{\beta_i\}$ becomes $i=0,1,\dots,s$. However, for sake of simplicity, we shall not consider this case further. \end{rem} \begin{defn}\label{defhbvmks} The method defined by the polynomial $\sigma(t)$, determined by substituting the quantities in \eqref{expan} into the right-hand side of \eqref{discr_lin}, and by choosing the unknown coefficient $\{\gamma_j\}$ in order that the resulting expression vanishes, is called {\em Hamiltonian Boundary Value Method with $k$ steps and degree $s$}, in short {\em HBVM($k$,$s$)}, where $k=s+r$ \, \cite{BIT}.\end{defn} According to \cite{IT2}, the right-hand side of \eqref{discr_lin} is called \textit{discrete line integral} associated with the map defined by the HBVM($k$,$s$), while the vectors \begin{equation}\label{hYi} \hat Y_i \equiv \sigma(t_0+\hat c_i h), \qquad i=1,\dots,r, \end{equation} \noindent are called \textit{silent stages}: they just serve to increase, as much as one likes, the degree of precision of the quadrature formula, but they are not to be regarded as unknowns since, from \eqref{expan} and (\ref{hYi}), they can be expressed in terms of linear combinations of the fundamental stages (\ref{Yi}). In the sequel, we shall see that HBVMs may be expressed through different, though equivalent, formulations: some of them can be directly implemented in a computer program, the others being of more theoretical interest. Because of the equality \eqref{discr_lin}, we can apply the procedure described in Definition \ref{defhbvmks} directly to the original line integral appearing in the left-hand side. 
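The weights (\ref{betai}) are simply the weights of the interpolatory quadrature formula built on the union of the two sets of abscissae; the small Python sketch below illustrates their computation (the abscissae used are arbitrary and serve only as an example).
\begin{verbatim}
import numpy as np

def interpolatory_weight(nodes, i):
    """Integral over [0,1] of the i-th Lagrange polynomial on the
    given distinct nodes; reproduces beta_i and beta_hat_i."""
    others = np.delete(nodes, i)
    lag = np.poly(others) / np.prod(nodes[i] - others)  # Lagrange poly coeffs
    lag_int = np.polyint(lag)
    return np.polyval(lag_int, 1.0) - np.polyval(lag_int, 0.0)

c = np.array([0.25, 0.75])            # fundamental abscissae (illustrative)
c_hat = np.array([0.1, 0.5, 0.9])     # silent abscissae (illustrative)
nodes = np.concatenate([c, c_hat])
weights = [interpolatory_weight(nodes, i) for i in range(nodes.size)]
print(sum(weights))                   # 1.0: the weights sum to one
\end{verbatim}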
With this premise, by considering the first expansion in \eqref{expan}, the conservation property reads $$\sum_{j=1}^{s} \gamma_j^T \int_0^1 P_j(\tau) \nabla H(\sigma(t_0+\tau h))\mathrm{d}\tau=0,$$ \noindent which, as is easily checked, is certainly satisfied if we impose the following set of orthogonality conditions: \begin{equation} \label{orth} \gamma_j = \int_0^1 P_j(\tau) J \nabla H(\sigma(t_0+\tau h))\mathrm{d}\tau, \qquad j=1,\dots,s. \end{equation} \noindent Then, from the second relation of \eqref{expan} we obtain, by introducing the operator \begin{eqnarray}\label{Lf}\lefteqn{L(f;h)\sigma(t_0+ch) =}\\ \nonumber && \sigma(t_0)+h\sum_{j=1}^s \int_0^c P_j(x) \mathrm{d} x \, \int_0^1 P_j(\tau)f(\sigma(t_0+\tau h))\mathrm{d}\tau,\qquad c\in[0,1],\end{eqnarray} \noindent that $\sigma$ is the eigenfunction of $L(J\nabla H;h)$ relative to the eigenvalue $\lambda=1$: \begin{equation}\label{L}\sigma = L(J\nabla H;h)\sigma.\end{equation} \begin{defn} Equation (\ref{L}) is the {\em Master Functional Equation} defining $\sigma$ ~\cite{BIT1}.\end{defn} \begin{rem}\label{MFE} From the previous arguments, one readily obtains that the Master Functional Equation (\ref{L}) characterizes HBVM$(k,s)$ methods, for all $k\ge s$. Indeed, such methods are uniquely defined by the polynomial $\sigma$, of degree $s$, the number of steps $k$ being only required to obtain an exact quadrature formula (see (\ref{discr_lin})).\end{rem} To practically compute $\sigma$, we set (see (\ref{Yi}) and (\ref{expan})) \begin{equation} \label{y_i} Y_i= \sigma(t_0+c_i h) = y_0+ h\sum_{j=1}^{s} a_{ij} \gamma_j, \qquad i=1,\dots,s, \end{equation} \noindent where $$a_{ij}=\int_{0}^{c_i} P_j(x) \mathrm{d}x, \qquad i,j=1,\dots,s.$$% \noindent Inserting \eqref{orth} into \eqref{y_i} yields the final formulae which define the HBVMs class based upon the orthonormal basis $\{P_j\}$: \begin{equation} \label{hbvm_int} Y_i=y_0+h \sum_{j=1}^s a_{ij}\int_0^1 P_j(\tau) J \nabla H(\sigma(t_0+\tau h))\,\mathrm{d}\tau, \qquad i=1,\dots,s. \end{equation} For sake of completeness, we report the nonlinear system associated with the HBVM$(k,s)$ method, in terms of the fundamental stages $\{Y_i\}$ and the silent stages $\{\hat Y_i\}$ (see (\ref{hYi})), by using the notation \begin{equation}\label{fy} f(y) = J \nabla H(y). \end{equation} \noindent In this context, it represents the discrete counterpart of \eqref{hbvm_int}, and may be directly retrieved by evaluating, for example, the integrals in \eqref{hbvm_int} by means of the (exact) quadrature formula introduced in \eqref{discr_lin}: \begin{eqnarray}\label{hbvm_sys} \lefteqn{ Y_i =}\\ &=& y_0+h\sum_{j=1}^s a_{ij}\left( \sum_{l=1}^s \beta_l P_j(c_l)f(Y_l) + \sum_{l=1}^r\hat \beta_l P_j(\hat c_l) f(\widehat Y_l) \right),\quad i=1,\dots,s.\nonumber \end{eqnarray} \noindent From the above discussion it is clear that, in the non-polynomial case, supposing to choose the abscissae $\{\hat c_i\}$ so that the sums in (\ref{hbvm_sys}) converge to an integral as $r\equiv k-s\rightarrow\infty$, the resulting formula is \eqref{hbvm_int}. \begin{defn}\label{infh} Formula (\ref{hbvm_int}) is named {\em $\infty$-HBVM of degree $s$} or {\em HBVM$(\infty,s)$} \, \cite{BIT1}. \end{defn} This implies that HBVMs may be as well applied in the non-polynomial case since, in finite precision arithmetic, HBVMs are undistinguishable from their limit formulae \eqref{hbvm_int}, when a sufficient number of silent stages is introduced. 
The aspect of having a {\em practical} exact integral, for $k$ large enough, was already stressed in \cite{BIS,BIT,BIT1,IP1,IT2}. On the other hand, we emphasize that, in the non-polynomial case, \eqref{hbvm_int} becomes an {\em operative method} only after that a suitable strategy to approximate the integrals appearing in it is taken into account. In the present case, if one discretizes the {\em Master Functional Equation} (\ref{Lf})--(\ref{L}), HBVM$(k,s)$ are then obtained, essentially by extending the discrete problem (\ref{hbvm_sys}) also to the silent stages (\ref{hYi}). In order to simplify the exposition, we shall use (\ref{fy}) and introduce the following notation: \begin{eqnarray*} \{\tau_i\} = \{c_i\} \cup \{\hat{c}_i\}, && \{\omega_i\}=\{\beta_i\}\cup\{\hat\beta_i\},\\[2mm] y_i = \sigma(t_0+\tau_ih), && f_i = f(\sigma(t_0+\tau_ih)), \qquad i=1,\dots,k. \end{eqnarray*} \noindent The discrete problem defining the HBVM$(k,s)$ then becomes, \begin{equation}\label{hbvmks} y_i = y_0 + h\sum_{j=1}^s \int_0^{\tau_i} P_j(x)\mathrm{d} x \sum_{\ell=1}^k \omega_\ell P_j(\tau_\ell)f_\ell, \qquad i=1,\dots,k. \end{equation} By introducing the vectors $${\bf y} = (y_1^T,\dots,y_k^T)^T, \qquad e=(1,\dots,1)^T\in\mathbb R^k,$$ and the matrices \begin{equation}\label{OIP}\Omega={\rm diag}(\omega_1,\dots,\omega_k), \qquad {\cal I}_s,~{\cal P}_s\in\mathbb R^{k\times s},\end{equation} whose $(i,j)$th entry are given by \begin{equation}\label{IDPO} ({\cal I}_s)_{ij} = \int_0^{\tau_i} P_j(x)\mathrm{d}x, \qquad ({\cal P}_s)_{ij}=P_j(\tau_i), \end{equation} \noindent we can cast the set of equations (\ref{hbvmks}) in vector form as $${\bf y} = e\otimes y_0 + h({\cal I}_s {\cal P}_s^T\Omega)\otimes I_{2m}\, f({\bf y}),$$ \noindent with an obvious meaning of $f({\bf y})$. Consequently, the method can be regarded as a Runge-Kutta method with the following Butcher tableau: \begin{equation}\label{rk} \begin{array}{c|c}\begin{array}{c} \tau_1\\ \vdots\\ \tau_k\end{array} & {\cal I}_s {\cal P}_s^T\Omega\\ \hline &\omega_1\, \dots~ \omega_k \end{array}\end{equation} \begin{rem}\label{ascisse} We observe that, because of the use of an orthonormal basis, the role of the abscissae $\{c_i\}$ and of the silent abscissae $\{\hat c_i\}$ is interchangeable, within the set $\{\tau_i\}$. This is due to the fact that all the matrices ${\cal I}_s$, ${\cal P}_s$, and $\Omega$ depend on all the abscissae $\{\tau_i\}$, and not on a subset of them, and they are invariant with respect to the choice of the fundamental abscissae $\{c_i\}$. \end{rem} In particular, when a Gauss distribution of the abscissae $\{\tau_1,\dots,\tau_k\}$ is considered, it can be proved that the resulting HBVM$(k,s)$ method \cite{BIT1}: \begin{itemize} \item has order $2s$ for all $k\ge s$; \item is symmetric and perfectly $A$-stable (i.e., its stability region coincides with the left-half complex plane, $\mathbb C^-$ \cite{BT}); \item reduces to the Gauss-Legendre method of order $2s$, when $k=s$; \item exactly preserves polynomial Hamiltonian functions of degree $\nu$, provided that \begin{equation}\label{knu}k\ge \frac{\nu s}2.\end{equation} \end{itemize} Additional results and references on HBVMs can be found at the {\em HBVMs Homepage} \cite{HBVMsHome}. 
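The following Python sketch constructs the Butcher matrix of the tableau (\ref{rk}) at the Gauss nodes and checks two of the properties recalled above (rank $s$ and consistency of the abscissae); it is meant only as a numerical illustration, not as an efficient implementation of the methods.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def hbvm_tableau(k, s):
    """A = I_s P_s^T Omega at the k Gauss nodes on [0,1], using the
    orthonormal shifted Legendre basis."""
    x, w = leggauss(k)
    tau, omega = (x + 1.0) / 2.0, w / 2.0
    P = np.zeros((k, s))              # (P_s)_{ij} = P_j(tau_i)
    I = np.zeros((k, s))              # (I_s)_{ij} = int_0^{tau_i} P_j
    for j in range(1, s + 1):
        c = np.zeros(j)
        c[-1] = 1.0
        Pj = Legendre(c, domain=[0, 1])              # shifted, degree j-1
        P[:, j - 1] = np.sqrt(2 * j - 1) * Pj(tau)
        I[:, j - 1] = np.sqrt(2 * j - 1) * Pj.integ(lbnd=0.0)(tau)
    return I @ P.T @ np.diag(omega), tau, omega

A, tau, omega = hbvm_tableau(k=6, s=3)
print(np.linalg.matrix_rank(A))            # 3, i.e. s
print(np.allclose(A.sum(axis=1), tau))     # True: row sums equal tau_i
\end{verbatim}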
\section{The Isospectral Property}\label{iso} We are now going to prove a further additional result, related to the matrix appearing in the Butcher tableau (\ref{rk}), corresponding to HBVM$(k,s)$, i.e., the matrix \begin{equation}\label{AMAT}A = {\cal I}_s {\cal P}_s^T\Omega\in\mathbb R^{k\times k}, \qquad k\ge s,\end{equation} \noindent whose rank is $s$. Consequently it has a $(k-s)$-fold zero eigenvalue. In this section, we are going to discuss the location of the remaining $s$ eigenvalues of that matrix. Before that, we state the following preliminary result, whose proof can be found in \cite[Theorem\,5.6 on page\,83]{HW}. \begin{lem}\label{gauss} The eigenvalues of the matrix \begin{equation}\label{Xs} X_s = \left( \begin{array} {cccc} \frac{1}2 & -\xi_1 &&\\ \xi_1 &0 &\ddots&\\ &\ddots &\ddots &-\xi_{s-1}\\ & &\xi_{s-1} &0\\ \end{array} \right) , \end{equation} with \begin{equation}\label{xij}\xi_j=\frac{1}{2\sqrt{(2j+1)(2j-1)}}, \qquad j\ge1,\end{equation} coincide with those of the matrix in the Butcher tableau of the Gauss-Legendre method of order $2s$.\end{lem} \medskip We also need the following preliminary result, whose proof derives from the properties of shifted-Legendre polynomials (see, e.g., \cite{AS} or the Appendix in \cite{BIT}). \begin{lem}\label{intleg} With reference to the matrices in (\ref{OIP})--(\ref{IDPO}), one has $${\cal I}_s = {\cal P}_{s+1}\hat{X}_s,$$ \noindent where $$\hat{X}_s = \left( \begin{array} {cccc} \frac{1}2 & -\xi_1 &&\\ \xi_1 &0 &\ddots&\\ &\ddots &\ddots &-\xi_{s-1}\\ & &\xi_{s-1} &0\\ \hline &&&\xi_s \end{array} \right) ,$$ \noindent with the $\xi_j$ defined by (\ref{xij}). \end{lem} \medskip The following result then holds true. \begin{theo}[Isospectral Property of HBVMs]\label{mainres} For all $k\ge s$ and for any choice of the abscissae $\{\tau_i\}$ such that the quadrature defined by the weights $\{\omega_i\}$ is exact for polynomials of degree $2s-1$, the nonzero eigenvalues of the matrix $A$ in (\ref{AMAT}) coincide with those of the matrix of the Gauss-Legendre method of order $2s$. \end{theo} \begin{proof} For $k=s$, the abscissae $\{\tau_i\}$ have to be the $s$ Gauss-Legendre nodes, so that HBVM$(s,s)$ reduces to the Gauss Legendre method of order $2s$, as outlined at the end of Section~\ref{hbvms}. When $k>s$, from the orthonormality of the basis, see (\ref{orto}), and considering that the quadrature with weights $\{\omega_i\}$ is exact for polynomials of degree (at least) $2s-1$, one easily obtains that (see (\ref{OIP})--(\ref{IDPO})) $${\cal P}_s^T\Omega{\cal P}_{s+1} = \left( I_s ~ {\bf 0}\right),$$ \noindent since, for all ~$i=1,\dots,s$,~ and ~$j=1,\dots,s+1$: $$\left({\cal P}_s^T\Omega{\cal P}_{s+1}\right)_{ij} = \sum_{\ell=1}^k \omega_\ell P_i(t_\ell)P_j(t_\ell)=\int_0^1 P_i(t)P_j(t)\mathrm{d} t = \delta_{ij}.$$ \noindent By taking into account the result of Lemma~\ref{intleg}, one then obtains: \begin{eqnarray}\nonumber A{\cal P}_{s+1} &=& {\cal I}_s {\cal P}_s^T\Omega{\cal P}_{s+1} = {\cal I}_s \left(I_s~{\bf 0}\right) ={\cal P}_{s+1} \hat{X}_s \left(I_s~{\bf 0}\right) = {\cal P}_{s+1}\left(\hat{X}_s~{\bf 0}\right)\\ &=& {\cal P}_{s+1} \left( \begin{array} {cccc|c} \frac{1}2 & -\xi_1 && &0\\ \xi_1 &0 &\ddots& &\vdots\\ &\ddots &\ddots &-\xi_{s-1}&\vdots\\ & &\xi_{s-1} &0&0\\ \hline &&&\xi_s&0 \end{array} \right) ~\equiv~ {\cal P}_{s+1}\widetilde X_s, \label{tXs} \end{eqnarray} \noindent with the $\{\xi_j\}$ defined according to (\ref{xij}). 
Consequently, one obtains that the columns of ${\cal P}_{s+1}$ constitute a basis of an invariant (right) subspace of matrix $A$, so that the eigenvalues of $\widetilde X_s$ are eigenvalues of $A$. In more detail, the eigenvalues of $\widetilde X_s$ are those of $X_s$ (see (\ref{Xs})) and the zero eigenvalue. Then, also in this case, the nonzero eigenvalues of $A$ coincide with those of $X_s$, i.e., with the eigenvalues of the matrix defining the Gauss-Legendre method of order $2s$.~\mbox{$\Box$}\end{proof} \section{HBVMs and Runge-Kutta collocation methods}\label{coll} By using the previous results and notations, now we further elucidate the existing connections between HBVMs and Runge-Kutta collocation methods. We shall continue to use an orthonormal basis $\{P_j\}$, along which the underlying {\em extended collocation} polynomial $\sigma(t)$ is expanded, even though the arguments could be generalized to more general bases, as sketched below. On the other hand, the distribution of the internal abscissae can be arbitrary. Our starting point is a generic collocation method with $k$ stages, defined by the tableau \begin{equation} \label{collocation_rk} \begin{array}{c|c}\begin{array}{c} \tau_1\\ \vdots\\ \tau_k\end{array} & \mathcal A \\ \hline &\omega_1\, \ldots ~ \omega_k \end{array} \end{equation} \noindent where, for $i,j=1,\dots,k$: $$\mathcal A= (\alpha_{ij})\equiv\left(\int_0^{\tau_i} \ell_j(x) \mathrm{d}x \right), \qquad \omega_j=\int_0^{1} \ell_j(x) \mathrm{d}x,$$ \noindent $\ell_j(\tau)$ being the $j$th Lagrange polynomial of degree $k-1$ defined on the set of abscissae $\{\tau_i\}$. Given a positive integer $s\le k$, we can consider a basis $\{p_1(\tau), \dots, p_s(\tau)\}$ of the vector space of polynomials of degree at most $s-1$, and we set \begin{equation} \label{P} \hat{\cal P}_s = \left( \begin{array} {cccc} p_1(\tau_1) & p_2(\tau_1) & \cdots & p_s(\tau_1) \\ p_1(\tau_2) & p_2(\tau_2) & \cdots & p_s(\tau_2) \\ \vdots & \vdots & & \vdots \\ p_1(\tau_k) & p_2(\tau_k) & \cdots & p_s(\tau_k) \end{array} \right) _{k \times s} \end{equation} \noindent (note that $\hat{\cal P}_s$ is full rank since the nodes are distinct). The class of Runge-Kutta methods we are interested in is defined by the tableau \begin{equation} \label{hbvm_rk} \begin{array}{c|c}\begin{array}{c} \tau_1\\ \vdots\\ \tau_k\end{array} & A \equiv \mathcal A \hat{\cal P}_s \Lambda_s \hat{\cal P}_s^T \Omega\\ \hline &\omega_1\, \ldots \ldots ~ \omega_k \end{array} \end{equation} \noindent where $\Omega={\rm diag}(\omega_1,\dots,\omega_k)$ (see (\ref{OIP})) and $\Lambda_s={\rm diag}(\eta_1,\dots,\eta_s)$; the coefficients $\eta_j$, $j=1,\dots,s$, have to be selected by imposing suitable consistency conditions on the stages $\{y_i\}$ (see, e.g., \cite{BIT1}). In particular, when the basis is orthonormal, as we shall assume hereafter, then matrix $\hat{\cal P}_s$ reduces to matrix ${\cal P}_s$ in (\ref{OIP})--(\ref{IDPO}), $\Lambda_s = I_s$, and consequently (\ref{hbvm_rk}) becomes \begin{equation} \label{hbvm_rk1} \begin{array}{c|c}\begin{array}{c} \tau_1\\ \vdots\\ \tau_k\end{array} & A \equiv \mathcal A {\cal P}_s {\cal P}_s^T \Omega\\ \hline &\omega_1\, \ldots \ldots ~ \omega_k \end{array} \end{equation} We note that the Butcher array $A$ has rank which cannot exceed $s$, because it is defined by {\em filtering} $\mathcal A$ by the rank $s$ matrix ${\cal P}_s {\cal P}_s^T \Omega$. 
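This construction can be checked numerically: the Python sketch below builds the tableau (\ref{hbvm_rk1}) at the Gauss nodes and compares its nonzero eigenvalues with those of the matrix (\ref{Xs}), in agreement with Theorem~\ref{mainres}; the code is purely illustrative and assumes NumPy.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def gauss_collocation(k):
    """Collocation matrix of (collocation_rk) at the k Gauss nodes on [0,1]."""
    x, w = leggauss(k)
    tau, omega = (x + 1.0) / 2.0, w / 2.0
    Acoll = np.zeros((k, k))
    for j in range(k):
        others = np.delete(tau, j)
        lag = np.poly(others) / np.prod(tau[j] - others)  # j-th Lagrange poly
        lag_int = np.polyint(lag)
        Acoll[:, j] = np.polyval(lag_int, tau) - np.polyval(lag_int, 0.0)
    return Acoll, tau, omega

def legendre_matrix(tau, s):
    """Matrix P_s of (OIP)-(IDPO) at the given abscissae."""
    P = np.zeros((tau.size, s))
    for j in range(1, s + 1):
        c = np.zeros(j)
        c[-1] = 1.0
        P[:, j - 1] = np.sqrt(2 * j - 1) * Legendre(c, domain=[0, 1])(tau)
    return P

def X_s(s):
    """Matrix (Xs), with the eigenvalues of the s-stage Gauss method."""
    X = np.zeros((s, s))
    X[0, 0] = 0.5
    for j in range(1, s):
        xi = 1.0 / (2.0 * np.sqrt((2 * j + 1) * (2 * j - 1)))
        X[j, j - 1], X[j - 1, j] = xi, -xi
    return X

k, s = 8, 3
Acoll, tau, omega = gauss_collocation(k)
P = legendre_matrix(tau, s)
A = Acoll @ P @ P.T @ np.diag(omega)               # tableau (hbvm_rk1)
eig = np.linalg.eigvals(A)
nonzero = np.sort_complex(eig[np.abs(eig) > 1e-8])
print(np.allclose(nonzero, np.sort_complex(np.linalg.eigvals(X_s(s)))))  # True
\end{verbatim}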
The following result then holds true, which clarifies the existing connections between classical Runge-Kutta collocation methods and HBVMs. \begin{theo}\label{collhbvm} Provided that the quadrature formula defined by the weights $\{\omega_i\}$ is exact for polynomials of degree at least $2s-1$ (i.e., the Runge-Kutta method defined by the tableau (\ref{hbvm_rk1}) satisfies the usual simplifying assumption $B(2s)$), the tableau (\ref{hbvm_rk1}) defines a HBVM$(k,s)$ method based at the abscissae $\{\tau_i\}$. \end{theo} \underline{Proof}\quad Let us expand the basis $\{P_1(\tau),\dots,P_s(\tau)\}$ along the Lagrange basis $\{\ell_j(\tau)\}$, $j=1,\dots,k$, defined over the nodes $\tau_i$, $i=1,\dots,k$: $$ P_j(\tau)=\sum_{r=1}^k P_j(\tau_r) \ell_r(\tau), \qquad j=1,\dots,s.$$ \noindent It follows that, for $i=1,\dots,k$ and $j=1,\dots,s$: $$\int_0^{\tau_i} P_j(x) \mathrm{d}x = \sum_{r=1}^k P_j(\tau_r) \int_0^{\tau_i} \ell_r(x) \mathrm{d}x = \sum_{r=1}^k P_j(\tau_r) \alpha_{ir},$$ \noindent that is (see (\ref{OIP})--(\ref{IDPO}) and (\ref{collocation_rk})), \begin{equation}\label{APeqI} {\cal I}_s = \mathcal A {\cal P}_s. \end{equation} \noindent By substituting (\ref{APeqI}) into (\ref{hbvm_rk1}), one retrieves the tableau (\ref{rk}), which defines the method HBVM$(k,s)$. This completes the proof.~\mbox{$\Box$} \medskip The resulting Runge-Kutta method \eqref{hbvm_rk1} is then energy conserving if applied to polynomial Hamiltonian systems \eqref{hamilode} when the degree of $H(y)$ is lower than or equal to a quantity, say $\nu$, depending on $k$ and $s$. As an example, when a Gaussian distribution of the nodes $\{\tau_i\}$ is considered, one obtains (\ref{knu}). \begin{rem}[{\bf About Symplecticity}]\label{symplectic} The choice of the abscissae $\{\tau_1,\dots,\tau_k\}$ at the Gaussian points in $[0,1]$ also has another important consequence since, in such a case, the collocation method (\ref{collocation_rk}) is the Gauss method of order $2k$ which, as is well known, is a {\em symplectic method}. The result of Theorem~\ref{collhbvm} then states that, for any $s\le k$, the HBVM$(k,s)$ method is related to the Gauss method of order $2k$ by the relation: $$A = {\cal A} ({\cal P}_s{\cal P}_s^T\Omega),$$ \noindent where the {\em filtering matrix} $({\cal P}_s{\cal P}_s^T\Omega)$ essentially makes the Gauss method of order $2k$ ``work'' in a suitable subspace. \end{rem} It may seem that the price paid to achieve such a conservation property is a lowering of the order of the new method with respect to the original one \eqref{collocation_rk}. Actually, this is not the case, because a fair comparison would be to relate method \eqref{rk}--\eqref{hbvm_rk1} to a collocation method constructed on $s$ rather than on $k$ stages, since the resulting nonlinear system turns out to have dimension $s$, as shown in \cite{BIT}. This computational aspect is fully elucidated in a companion paper \cite{blend}, devoted to the efficient implementation of HBVMs, where the Isospectral Property of the methods is fully exploited for this purpose. 
\subsection{An alternative proof for the order of HBVMs} We conclude this section by observing that the order $2s$ of an HBVM$(k,s)$ method, under the hypothesis that \eqref{collocation_rk} satisfies the usual simplifying assumption $B(2s)$, i.e., the quadrature defined by the weights $\{\omega_i\}$ is exact for polynomials of degree at least $2s-1$, may be stated by using an alternative, though equivalent, procedure to that used in the proof of \cite[Corollary\,2]{BIT} (see also \cite[Theorem\,2]{BIT1}). Let us then define the $k \times k$ matrix ${\cal P}\equiv {\cal P}_k$ (see (\ref{OIP})--(\ref{IDPO})) obtained by ``enlarging'' the matrix ${\cal P}_s$ with $k-s$ columns defined by the normalized shifted Legendre polynomials $P_j(\tau)$, $j=s+1,\dots,k$, evaluated at $\{\tau_i\}$, i.e., $${\cal P}= \left( \begin{array} {ccc} P_1(\tau_1) & \dots &P_k(\tau_1)\\ \vdots & &\vdots\\ P_1(\tau_k) & \dots &P_k(\tau_k) \end{array} \right) .$$ \noindent By virtue of property $B(2s)$ for the quadrature formula defined by the weights $\{\omega_i\}$, it satisfies $$ {\cal P}^T \Omega {\cal P} = \left( \begin{array} {ll} I_s & O \\ O & R \end{array} \right) , \qquad R\in\mathbb R^{k-s\times k-s}. $$ \noindent This implies that ${\cal P}$ satisfies the property $T(s,s)$ in \cite[Definition\,5.10 on page 86]{HW}, for the quadrature formula $(\omega_i,\tau_i)_{i=1}^k$. Therefore, for the matrix $A$ appearing in \eqref{hbvm_rk1} (i.e., (\ref{rk}), by virtue of Theorem~\ref{collhbvm}), one obtains: \begin{equation} \label{rk_leg1} {\cal P}^{-1} A {\cal P} = {\cal P}^{-1} \mathcal A {\cal P} \left( \begin{array} {ll} I_s \\ & O \end{array} \right) = \left( \begin{array} {ll} \widetilde{X}_s \\ & O \end{array} \right) , \end{equation} \noindent where $\widetilde X_s$ is the matrix defined in (\ref{tXs}). Relation \eqref{rk_leg1} and \cite[Theorem\,5.11 on page 86]{HW} prove that method (\ref{hbvm_rk1}) (i.e., HBVM$(k,s)$) satisfies $C(s)$ and $D(s-1)$ and, hence, its order is $2s$. \section{Conclusions}\label{fine} In this paper, we have shown that the recently introduced class of HBVMs$(k,s)$, when recast as Runge-Kutta method, have the matrix of the corresponding Butcher tableau with the same nonzero eigenvalues which, in turn, coincides with those of the matrix of the Butcher tableau of the Gauss method of order $2s$, for all $k\ge s$ such that $B(2s)$ holds. Moreover, HBVM$(k,s)$ defined at the Gaussian nodes $\{\tau_1,\dots,\tau_k\}$ on the interval $[0,1]$ are closely related to the Gauss method of order $2k$ which, as is well known, is a symplectic method. An alternative proof of the order of convergence of HBVMs is also provided.
2024-02-18T23:40:22.988Z
2010-02-23T20:30:22.000Z
algebraic_stack_train_0000
2,220
4,943
proofpile-arXiv_065-10855
\section{Introduction} The search for variable stars in IC\,1613 started at the beginning of the 1930's with the seminal work of Baade, Hubble, and Mayall, although it was published only later in \citet{san71}. The discovery of 37 definite Cepheids allowed Baade to calculate the first distance to IC\,1613 based on the period-luminosity (PL) relation. The analysis of the faint Cepheids in the same photographic plates was completed by \citet{car90} and the PL relation extended to periods as short as about two days. Candidate RR~Lyrae stars were also observed in IC\,1613 from ground- \citep{sah92} and space-based imaging \citep{dol01}, confirming the presence of a bona fide old population. More recent surveys by \citet[and references therein]{man01} and the OGLE collaboration \citep{uda01} of a central $\sim$14$\arcmin$x14$\arcmin$ field led to the discovery of 128 and 138 Cepheids, respectively, although the small size of the telescopes used by both groups limited the sample to relatively bright, long-period Cepheids. While all the Cepheids with periods longer than about 10~days in IC\,1613 have most likely been discovered, very few have been found at the faint end of the PL relation \citep[e.g.,][]{dol01}. Here we present the analysis of very deep {\it Hubble Space Telescope} ({\it HST}) ACS data with the aim of searching for short-period variable stars. These data are part of a larger set obtained in the context of the LCID project\footnotemark[9] (C. Gallart et al. 2010, in preparation). \footnotetext[9]{Local Cosmology from Isolated Dwarfs: http://www.iac.es/project/LCID/.} The depth and quality of the data also allow the reconstruction of the full star formation history (SFH) of this field located just outside of the core radius. The study of the stellar populations and SFH from the color-magnitude diagram (CMD) fitting technique will be presented in a companion paper (E. Skillman et al. 2010, in preparation). The structure of the present paper follows closely that of \citet[hereafter Paper~I]{ber09} in which we analysed the properties of the variables in the dSph galaxies Cetus and Tucana. A summary of the observations and data reduction is presented in \S~\ref{sec:2}, while \S~\ref{sec:3} and \S~\ref{sec:4} deal with the identification of variable stars and the sample completeness, respectively. In \S~\ref{sec:5} we describe the sample of RR~Lyrae stars and the Cepheids are presented in \S~\ref{sec:6}. The eclipsing binaries and other candidate variables are presented in \S~\ref{sec:7} to \ref{sec:9}. Our list of variables is cross-identified with the previous catalogs covering this field in \S~\ref{sec:10}. In \S~\ref{sec:11} we use the properties of the RR~Lyrae stars and Cepheids to estimate the distance to IC\,1613 and compare it with the values of the literature. In the last section we discuss the results and present our conclusions. \defcitealias{ber09}{Paper~I} \begin{figure* \epsscale{1.1} \plotone{f01_lo.eps} \figcaption{Color-magnitude diagrams of IC\,1613 for the ACS ({\it left}) and WFPC2 ({\it right}) fields, where the confirmed and candidate variables have been overplotted, as labeled in the inset. An isochrone from the BaSTI library \citep[Z=0.0003, 13\,Gyr,][]{pie04} is overlaid in the right panel to show the location of the old main-sequence turn-off. 
\label{fig:1}} \end{figure*} \input{tab1} \section{Observations and Data Reduction}\label{sec:2} \subsection{Primary ACS Imaging} This work is based on observations obtained with the ACS onboard the {\it HST} of a field located about 5$\arcmin$ west of the center of IC\,1613. The field location was chosen to be far enough out in radius to minimize crowding effects for high quality photometry, yet close enough in to maximize the total number of stars observed. As the goal of these observations was to reach the oldest main sequence turn-offs with good photometric accuracy, 24 {\it HST} orbits were devoted to this galaxy. These were collected over about 2.4 consecutive days between 2006 August 28 and 30. Each orbit was split into two $\sim$1200~second exposures in F475W and F814W for an optimal sampling of the light curves. The complete observing log is presented in Table~\ref{tab1}. The DAOPHOT/ALLFRAME suite of programs \citep{ste94} was used to obtain the instrumental photometry of the stars on the individual, non-drizzled images provided by the {\it HST} pipeline. Additionally, we used the pixel area maps and data quality masks to correct for the variations of the pixel areas on the sky around the field and to flag bad pixels. Standard calibration was carried out as described in \citet{sir05}, taking into account the updated zero-points of \citet{mac07} following the lowering of the Wide Field Channel temperature setpoint in July 2006. We refer the reader to \citet{mon10} for a detailed description of the data reduction and calibration. The final CMD is presented in Fig.~\ref{fig:1} ({\it left panel}), where the $(F475W+F814W)/2 \sim V$ filter combination was chosen for the ordinate axis so that the horizontal-branch (HB) appears approximately horizontal. The light-curves of the Cepheids and RR~Lyrae stars were converted to Johnson BVI magnitudes using the transformations given in \citetalias{ber09}. Since 12 of our Cepheids were also observed by the OGLE team in V and I (see \S~\ref{sec:10}), we could check the accuracy of our transformations. Ten out of the 12 stars in common have well-sampled light-curves and accurately determined periods, although 6 of these only have V observations in the OGLE database because of their relatively low luminosities. We find excellent agreement at all phases between the two photometry sets for 7 of the 10 usable stars. The remaining three present a shift in magnitude which ranges from $\sim$0.1 at maximum light to $\sim$0.5 for the faintest points. The comparison of the finding charts shows that this difference is due to contamination by nearby bright stars that are not resolved in the ground-based data. To illustrate this, in Fig.~\ref{fig:2} we compare the finding-charts and photometry of the OGLE database with ours for two Cepheids (V106 and V154). For each Cepheid, we show 15$\arcsec$x15$\arcsec$ stamps from the OGLE I-band and our F814W images, as well as the V and I light-curves from both photometries. While both Cepheids are located in rather crowded fields, the stars close to V154 are relatively faint compared to the Cepheid. The ground-based and {\it HST} light-curves therefore overlap perfectly. On the other hand, in the case of V106 the neighbours have a luminosity similar to that of the Cepheid, thus affecting the mean magnitude and amplitude of the OGLE light-curve.
This test shows that our photometry in the Johnson bands, although transformed from a very different photometric system, is very reliable and can be safely used to analyse the properties of the variable stars, both in terms of magnitude and amplitude. This is emphasized in \S~\ref{sec:11} where the distance to IC\,1613 is calculated from Johnson luminosities and for which we find excellent agreement with previous measurements based on other methods. \input{tab2_lo} \subsection{Parallel WFPC2 Imaging} \begin{deluxetable*}{ccc|ccc|ccc} \input{tab3_lo} \end{deluxetable*} \begin{deluxetable*}{cccccccccccccccccc} \input{tab4_lo} \end{deluxetable*} IC\,1613 was also observed with the WFPC2 in the F450W and F814W bands as parallel exposures to the primary ACS observations. This provided the opportunity to analyze a second field with the same number of observations and a similar exposure time. The orientation of the ACS field was chosen such that the parallel WFPC2 field would sample a region further from the center of the galaxy ($\sim$11$\arcmin$) in order to study population gradients. The images from the Wide Field chips were reduced individually as described in \citet{tur97}, and calibrated following the instructions from the {\it HST} Data Handbook for WFPC2.\footnotemark[10] Given the small number of stars on the Planetary Camera, the photometry of this chip was performed on the stacked images only. \footnotetext[10]{http://www.stsci.edu/instruments/wfpc2/Wfpc2\_dhb/wfpc2\_ ch52.html} The resulting CMD for the whole WFPC2 field-of-view is shown in the right panel of Fig.~\ref{fig:1}, where a 13~Gyr old isochrone from the BaSTI library \citep{pie04} has been overplotted assuming $A_B$=0.123 and $A_I$=0.056 \citep{sch98} for the F450W and F814W bands, respectively, and a dereddened distance modulus of 24.40 (see \S~\ref{sec:11.3}). Unfortunately, due to the lower sensitivity of the instrument with respect to the ACS, especially in the F450W band, the CMD does not reach the oldest main-sequence turn-offs. However, the lack of bright main-sequence stars in this outer field reveals the presence of a blue horizontal-branch component, typical of old, low metallicity stellar populations. \section{Identification of Variable Stars}\label{sec:3} The candidate variables were extracted from the photometric catalogs using the variability index of \citet{wel93}. This process yielded $\sim$780 candidates in the primary field. A preliminary check of the light-curve and position on the CMD, together with a careful inspection of the stacked image, allowed us to discard false detections due to cosmic-ray hits, chip defects or stars located under the wings of bright stars. This left us with 259 candidates; these are shown in Fig.~\ref{fig:1} overplotted on the CMD using their intensity-averaged magnitudes when available (see below). The individual F475W and F814W measurements for all of the candidate variables are listed in Table~\ref{tab2}, while the transformed BVI magnitudes are given in Table~\ref{tab3}. The period search was performed on the suspected variables through Fourier analysis \citep{hor86} taking into account the information from both bands simultaneously, then refined interactively by modifying the period until a tighter light curve was obtained. For each variable, datapoints with error bars larger than 3-$\sigma$ above the mean error bar size were rejected through sigma clipping with five iterations. 
As the period-finding program is interactive, it was possible to selectively reject more or fewer datapoints depending on the light curve quality before recalculating the periodogram. However, except in a few particular cases, we found that the period-search was not affected by a few bad points. Given the short timebase of the observations ($<$3~days), the periods are given with three significant figures only. The accuracy of the period determination was estimated from the degradation of the quality of the light curves when applying small period offsets. It mainly depends on the period itself and on the time interval covered by the observations, and ranges from about 0.001 day for the shorter period RR~Lyrae stars to a few hundredths of a day for the longest period Cepheids. The classification of the candidates was based on their light-curve morphology and position in the CMD. We found 90 RR~Lyrae stars, 49 Cepheids, 38 eclipsing binaries, and five candidate $\delta$-Scuti stars. A significant number of variable candidates were also found along the main sequence. For most of these, however, we could not find a period that would produce a smooth light-curve. In the following, we will simply refer to these variables as main-sequence variables (MSV), although most of them are probably eclipsing binaries. To obtain the amplitudes and intensity-averaged magnitudes of the monoperiodic variables in the instability strip, we fitted the light-curves with a set of templates based on those of \citet[][see \citetalias{ber09}]{lay99}. The amplitudes of the double-mode RR Lyrae stars were measured from a low-order Fourier fit to the light-curve phased with the primary period after prewhitening of the secondary period. The mean magnitude and color of the variables for which we could not find a period are weighted averages from the ALLFRAME photometry and are therefore only approximate. Table~\ref{tab4} summarizes the properties of the confirmed variable stars in the ACS field. The first two columns give the identification number and type of variability, while the next two list the equatorial coordinates (J2000.0). Columns (5) and (6) give the primary period in days, i.e., the first-overtone period in the case of the RR$d$, and the logarithm of this period. The intensity-averaged magnitudes $\langle F475W \rangle$ and $\langle F814W \rangle$, and color $\langle F475W \rangle- \langle F814W \rangle$ are given in columns (7), (9), and (11). The amplitudes in the F475W and F814W bands measured from the template fits are listed in the eighth and tenth columns. The last six columns alternately list the intensity-averaged magnitudes and amplitudes in the Johnson B, V, and I bands. Approximate values are listed in italics. A list of the remaining candidates, providing the coordinates and approximate magnitudes, is given in Table~\ref{tab5}. They are labeled as MSV, long-period variables (LPV), classical instability strip variables (ISV), and RGB variables (RGV) based on their location on the CMD. The same procedure was followed with the WFPC2 photometry, in which we found 11 variables: nine RR~Lyrae stars and two Cepheids. However, the low signal-to-noise of the individual measurements produced rather noisy light-curves, and some low amplitude variables might have been missed. Given the small number of variables in the parallel field and the generally lower quality of their photometry and inferred parameters, we only give their coordinates and approximate parameters in Table~\ref{tab6} for completeness.
These variables will not be taken into account when calculating average properties of the IC\,1613 variable star population. \begin{deluxetable}{lcccccccccc} \input{tab5} \end{deluxetable} \begin{deluxetable*}{lcccccccccc} \input{tab6} \end{deluxetable*} \begin{figure \epsscale{1.0} \plotone{f02a_lo.eps} \plotone{f02b_rgb.eps} \plotone{f02c_lo.eps} \plotone{f02d_rgb.eps} \figcaption{Comparison of the finding charts and Johnson photometry from the OGLE database with ours for V154 ({\it top}) and V106 ({\it bottom}). The finding charts are 15$\arcsec$ on a side. The OGLE light curves are represented as open symbols, while our photometry is shown as filled symbols. For clarity, the V-band datapoints have been shifted downward by 0.2 mag. Note the good agreement of the ground-based and {\it HST} light-curve in the first case. The second is an example of ground-based photometry affected by unresolved bright companions, which affects the light-curve morphology and mean-magnitude. \label{fig:2}} \end{figure} \section{Completeness and Areal Coverage}\label{sec:4} In this section, we analyze the effects of stellar crowding and signal-to-noise (SNR) limitations, temporal sampling, and spatial coverage on the completeness of our Cepheids and RR~Lyrae stars samples. The high spatial resolution of the ACS and the depth of our data imply that incompleteness will become noticeable well below the HB. Artificial-star tests (E. Skillman et al. 2010, in preparation) indicate that the completeness is higher than 98\% at (F475W+F814W)/2 $\sim$ 25.0. Therefore, down to these magnitudes only variables with amplitude of the order of the error bars at this magnitude ($\sim$0.1) might have been missed. However, variables fainter than the HB (e.g., SX Phoenicis and/or $\delta$-Scuti) have probably been missed due to crowding and low SNR. In addition, even though these variables are present in this galaxy (see \S~\ref{sec:8}), the relatively long exposure time smoothing out the variations in luminosity and the rather slow temporal sampling hampered the detection of these short-period variables. We also estimate the completeness due to temporal sampling by carrying out numerical simulations in the same way as described in \citetalias{ber09}. The detection probability is presented in Fig.~\ref{fig:3}. In the upper panel, the maximum period that is displayed is limited by the observational timebase ($\sim$3 days). Note that even though it is possible to detect variable stars with periods longer than the observation time-base, it is not feasible to determine their periods. The Figure also shows that the probability to detect a variable star in the period range of RR~Lyrae stars ($\sim$ 0.2--0.8 day) is basically one. On the other hand, a significant dip is present at P$\sim$1 day, due to the observations being gathered at the same time of the day each day. While it might not prevent detecting variables at this period (most likely Cepheids) if their amplitude is larger than about $\sim$0.1, it seriously limits our capacity to find an accurate period (e.g., V150 in Fig.~\ref{fig:8}, V030 in \S~\ref{sec:6.4}). \begin{figure \epsscale{1.1} \plotone{f03.eps} \figcaption{Probability of detecting variable stars in IC\,1613 as a function of period, for periods between about 1~hr and 5~days ({\it top}), and close-up view for periods between about 1~hr and 1 day ({\it bottom}). 
\label{fig:3}} \end{figure} \begin{figure* \epsscale{1.1} \plotone{f04_lo.eps} \figcaption{Spatial distribution of stars in the ACS and WFPC2 fields, where the variable stars have been overplotted as labeled in the inset. Solid- and dashed-line ellipses represent the core radius \citep[$r_c=4.5\arcmin \pm 0.9$,][]{bat07} and twice the core radius. \label{fig:4}} \end{figure*} \begin{figure* \epsscale{0.9} \plotone{f05a.eps} \figcaption{Light-curves of the RR~Lyrae variables of the ACS field in the F475W ({\it black}) and F814W ({\it grey}) bands, phased with the period in days shown in the lower right corner of each panel. Photometric error bars are shown. The open circles show bad data points, i.e., with errors larger than 3-$\sigma$ above the mean error of a given star, which were not used in the calculation of the period and mean magnitudes. {\it [Figure~\ref{fig:5} is presented in its entirety in the electronic edition of the Astrophysical Journal]}. \label{fig:5}} \end{figure*} \begin{figure \epsscale{1.15} \plotone{f06.eps} \figcaption{{\it Top}: Period-amplitude diagram for the RR Lyrae stars. Circles, triangles and squares represent RR$ab$, RR$c$, and RR$d$ (plotted with their overtone periods) respectively. The dashed line is a fit to the period-amplitude of the RR$ab$ after rejecting the points further than 1.5-$\sigma$ ({\it dotted lines}). The light and dark greyed areas represent the $\pm$1.5-$\sigma$ limits of Cetus and Tucana, respectively, from \citetalias{ber09}. {\it Bottom}: Period histogram for the RR Lyrae stars of the ACS field. RR$ab$ and RR$c$ are shown as the light and dark gray histograms, respectively, while the thick black histogram represent the {\it fundamentalized} RR Lyrae stars (RR$ab$, RR$c$, and RR$d$). The solid gray line is a double-Gaussian fit to the {\it fundamentalized} histogram. \label{fig:6}} \end{figure} As expected from the large extent of the galaxy compared to the field-of-view of the ACS, the main factor preventing us from obtaining a complete census of RR~Lyrae stars and Cepheids in IC\,1613 is the spatial coverage. Figure~\ref{fig:4} shows the distribution of stars from our ACS and WFPC2 fields on top of ellipses representing the core radius \citep[$r_c=4.5\arcmin \pm 0.9$,][]{bat07} and twice the core radius. \citet{bat07} estimated a tidal radius of $24.4\arcmin \pm 0.3$, ellipticity $\varepsilon = 1-b/a=0.19 \pm 0.02$, and position angle of 87$\degr$, in good agreement with the values found by \citet[][$PA=80\degr$, $\varepsilon=0.15$]{ber07}. Thus, the ACS field covers only about 1/160th of the area within the tidal radius. If we assume that the distribution of variable stars follows that of the overall population, we can estimate the total number of RR~Lyrae stars and Cepheids within the tidal radius as described in \citetalias{ber09}. Basically, we adopt the shape and orientation of the isodensity contours from \citet{bat07} to divide our sample of variable stars into elliptical annuli. The area of these annuli was obtained through Monte-Carlo sampling, from which we could calculate the density profile of the variable stars. Using this profile with the core and tidal radii of \citet{bat07} as input to equation~(21) of \citet{kin62}, we estimate that there should be about 2100 RR~Lyrae stars and 1400 Cepheids within the tidal radius of IC\,1613. Therefore, our ACS sample represents about 4\% of the total number of RR~Lyrae stars and Cepheids. 
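The Monte-Carlo estimate of the annulus areas used above can be sketched as follows. This is only an illustrative sketch in Python, not the code used for the analysis: the ellipse parameters match the \citet{bat07} values quoted above, but the field footprint, variable-star coordinates, annulus edges, and all function names are placeholders.
\begin{verbatim}
import numpy as np

def elliptical_radius(x, y, ellipticity=0.19, pa_deg=87.0):
    """Semi-major-axis radius of the ellipse (same center, PA, and
    ellipticity as the isodensity contours) passing through (x, y)."""
    pa = np.radians(pa_deg)
    # rotate into the frame aligned with the major axis
    xm = x * np.cos(pa) + y * np.sin(pa)
    ym = -x * np.sin(pa) + y * np.cos(pa)
    return np.hypot(xm, ym / (1.0 - ellipticity))

def annulus_area_in_footprint(a_in, a_out, footprint, n_mc=200_000, seed=0):
    """Monte-Carlo area (arcmin^2) of the elliptical annulus
    a_in <= a < a_out falling inside the instrument footprint.
    `footprint` is a function (x, y) -> boolean array (placeholder)."""
    rng = np.random.default_rng(seed)
    half = a_out  # bounding box large enough to contain the outer ellipse
    x = rng.uniform(-half, half, n_mc)
    y = rng.uniform(-half, half, n_mc)
    a = elliptical_radius(x, y)
    inside = (a >= a_in) & (a < a_out) & footprint(x, y)
    return (2 * half) ** 2 * inside.mean()

# Illustrative use: surface density of variables in elliptical annuli,
# with hypothetical coordinates (arcmin from the galaxy center).
x_var, y_var = np.array([3.6, 4.5, 5.8]), np.array([-0.8, 0.7, 1.1])
acs_footprint = lambda x, y: (np.abs(x - 5.0) < 1.7) & (np.abs(y) < 1.7)  # toy box
edges = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
a_var = elliptical_radius(x_var, y_var)
for a_in, a_out in zip(edges[:-1], edges[1:]):
    n = np.sum((a_var >= a_in) & (a_var < a_out))
    area = annulus_area_in_footprint(a_in, a_out, acs_footprint)
    if area > 0:
        print(f"{a_in:.1f}-{a_out:.1f} arcmin: {n / area:.3f} variables/arcmin^2")
\end{verbatim}
The resulting density profile, extrapolated with a King profile of the quoted core and tidal radii, is what yields the total numbers of variables estimated above.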
\section{RR Lyrae stars}\label{sec:5} \subsection{ACS sample} In our ACS field we detected 90 RR Lyrae stars, all of them new discoveries (see \S~\ref{sec:5.3}). From the periods and light-curve shapes, we identified 61 of them pulsating in the fundamental mode (RR$ab$), 24 in the first-overtone mode (RR$c$), and five in both modes simultaneously (RR$d$). The light curves of all the RR~Lyrae stars are shown in Fig.~\ref{fig:5}. We find the mean period of the RR$ab$ and RR$c$ to be 0.611 and 0.334 day, respectively. These values, as well as the fraction of first-overtone $f_c=N_c/(N_{ab}+N_c)=0.28$, place IC\,1613 in the gap between the Oosterhoff type~I and type~II groups (Oo-I and Oo-II). IC\,1613 therefore joins the majority of dwarf galaxies in the Local Group as an Oosterhoff intermediate galaxy, the only real exceptions among the dSphs with well-sampled HB to date being Ursa Minor \citep{nem88} and Bo\"otes \citep{dal06}. \begin{figure \epsscale{1.2} \plotone{f07_rgb_lo.eps} \figcaption{Zoom-in on the HB of IC\,1613, where the RR Lyrae variables have been overplotted. Open circles, open triangles, and filled squares represent RR$ab$, RR$c$, and RR$d$, respectively. The long-dashed, thick lines represent, from top to bottom, the Z=0.0001, 0.0003, and 0.001 ZAHB, while the light gray, dark gray, and black lines are the corresponding evolutionary tracks with masses as labeled. Overplotted on them, the crosses indicate intervals of 10~Myr. The vertical dashed lines roughly delimit the instability strip. \label{fig:7}} \end{figure} In Fig.~\ref{fig:6} we show the period-amplitude diagram ({\it top}) and the period distribution ({\it bottom}) of the RR~Lyrae stars. The top panel shows that the distribution of the RR$ab$ of IC\,1613 in period-amplitude space is very similar to that of Tucana, both in terms of slope and dispersion around the fit. The slight shift towards shorter periods of the RR$ab$ stars of IC\,1613, compared to those of Tucana, might be explained by the higher metallicity of IC\,1613. \begin{figure* \epsscale{0.9} \plotone{f08a.eps} \figcaption{Light-curves of all the Cepheid variables in the F475W ({\it black}) and F814W ({\it grey}) bands. Photometric error bars are shown. The open circles show the data points with errors larger than 3-$\sigma$ above the mean error of a given star. {\it [Figure~\ref{fig:8} is presented in its entirety in the electronic edition of the Astrophysical Journal]}. \label{fig:8}} \end{figure*} However, the distribution of {\it fundamentalized} periods appears to be bimodal, as shown by the fit of two Gaussian profiles ({\it solid grey line}) in the bottom panel, with peaks at P=0.478 and 0.614 day. Given the separation between the two peaks and the small uncertainty on the periods ($\la$0.001~day), it seems unlikely to be caused by stochastic effects only. To check the reliability of the observed bimodality, we applied the KMM test of \citet{ash94} to the sample of RR$ab$ periods. The algorithm estimates the improvement of a two-group fit over a single Gaussian, and returned a $p$-value of 0.075. This can be interpreted as the rejection of the single Gaussian model at a confidence level of 92.5\%, and therefore indicates that a bimodal distribution is a statistically significant improvement over the single Gaussian. Note that the distribution of periods of the RR$ab$ alone is also bimodal, although the small number of RR$ab$ in the short-period peak does not exclude a result due to small number statistics. 
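As an aside, not part of the original analysis (which relied on the KMM algorithm of \citet{ash94}), an equivalent two-component mixture-model comparison can be sketched in a few lines of Python; the period values below are synthetic placeholders rather than the measured ones.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical fundamentalized periods (days); the real test used the
# measured periods and the KMM algorithm of Ashman et al. (1994).
periods = np.random.default_rng(1).normal(
    loc=np.repeat([0.48, 0.61], [30, 60]), scale=0.04).reshape(-1, 1)

fits = {k: GaussianMixture(n_components=k, n_init=10, random_state=0).fit(periods)
        for k in (1, 2)}

# Likelihood-ratio statistic between the one- and two-Gaussian models
# (KMM is based on the same quantity, with a chi^2 approximation for the p-value).
lr = 2.0 * (fits[2].score(periods) - fits[1].score(periods)) * len(periods)
print("peaks:", np.sort(fits[2].means_.ravel()))
print("likelihood ratio:", lr, " BIC(1)-BIC(2):",
      fits[1].bic(periods) - fits[2].bic(periods))
\end{verbatim}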
In addition, the location of the peaks (at P=0.494 and 0.615 day) does not seem to correspond to the superposition of Oo-I and Oo-II samples, as is the case in NGC\,1835 \citep[P$\sim$0.54 and 0.65 day;][]{sos03}. The bimodality might nonetheless be due to the presence of two distinct old populations, as was already observed in the case of Tucana \citep{ber08}. Figure~\ref{fig:7} presents a zoom-in of the CMD centered on the HB. Zero-age horizontal-branch (ZAHB) loci and tracks from the scaled-solar models of the BaSTI library \citep{pie04} are also shown, plotted assuming a reddening of E(B$-$V)=0.025 from \citet{sch98}, and a distance modulus of 24.49. This distance takes into account the true distance modulus (m$-$M)$_0$=24.44 determined in \S~\ref{sec:11.3} based on these data and a shift of $+$0.05 to correct for the updated electron-conduction opacities \citep{cas07}. This Figure shows that more than half of the RR~Lyrae stars are concentrated in the faintest $\sim$0.1 mag of the instability strip (IS), while the other half is spread over $\sim$0.3 mag above. From the evolutionary tracks, it appears that most of the variables belong to the fainter, more metal-rich sample; these stars tend to have shorter periods. The brighter variables, on the other hand, appear to be related either to the Z=0.0001 population, or to stars with slightly higher metallicity (Z$\sim$0.0003) that have evolved off the blue horizontal-branch visible on the WFPC2 CMD in Fig.~\ref{fig:1}. In any case, the range of luminosity covered by the HB stars within the IS cannot be explained by a monometallic population, even taking into account evolution off the ZAHB. Given that RR~Lyrae stars are old stars ($\ga$10~Gyr), this implies a relatively quick chemical enrichment in the first few billion years of the formation of the galaxy. \subsection{WFPC2 sample} In the parallel WFPC2 field we discovered nine RR Lyrae stars (4 RR$ab$, 4 RR$c$ and 1 RR$d$), even though the low signal-to-noise ratio of the observations did not allow us to measure their periods as accurately as for the variables in the ACS field-of-view. Based on the small samples of RR~Lyrae stars, we find $\langle$P$_{ab}\rangle$=0.59$\pm$0.03 and $\langle$P$_c\rangle$=0.35$\pm$0.01 for the fundamental and first-overtone pulsators, respectively, which is in rough agreement with the values found for the RR~Lyrae stars in the ACS field-of-view. We provide their coordinates and approximate properties for completeness in Table~\ref{tab6}. \begin{figure \epsscale{1.17} \plotone{f09.eps} \figcaption{Period-Wesenheit diagram for Cepheids in IC\,1613 ({\it see text for details}). The dash-dotted and dashed lines are linear fits to the fundamental and first-overtone Cepheids, respectively. The individual Cepheids mentioned in \S~\ref{sec:6.1}, \ref{sec:6.2}, \ref{sec:6.3}, and \ref{sec:11.1.1} are labeled. \label{fig:9}} \end{figure} \begin{figure \epsscale{1.28} \plotone{f10_rgb_lo.eps} \figcaption{Period-luminosity diagrams for Cepheids in IC\,1613. Our Cepheids are shown as open circles, while the symbols used for the OGLE Cepheids are as labeled in the inset. \label{fig:10}} \end{figure} \subsection{Comparison with known RR~Lyrae stars in IC\,1613}\label{sec:5.3} Prior to our study, two surveys for variable stars found candidate RR~Lyrae stars in IC\,1613, although none of them in common with those presented here as the pointings did not overlap.
Using the 4-shooter on the Hale-5m telescope, \citet{sah92} found 15 candidate RR~Lyrae stars in a 8$\arcmin \times$8$\arcmin$ field of view located 12$\arcmin$ to the west of the galaxy center. From their Table~3, we calculated a mean period for the RR$ab$ of 0.60$\pm$0.02 day, which is in good agreement with our value. We note, however, that all their RR~Lyrae stars are of the $ab$ type and on average appear brighter by about 0.3 mag than expected from the distance and metallicity of IC\,1613 (see also \S~\ref{sec:11}). While some RR$c$ stars could have been missed due to their low amplitude and the relatively noisy light-curves, some of them might have been misidentified as RR$ab$ because of the small number of datapoints. Alternatively, given that crowding seems to be insignificant in this field---according to the finding charts---and that their RR$ab$ span a range of luminosity of over a magnitude, we suspect that some of their RR~Lyrae candidates could actually be short-period Cepheids. \begin{figure \epsscale{1.2} \plotone{f11.eps} \figcaption{The peculiar Cepheids V054 ({\it top}) and V037 ({\it bottom}), plotted as a function of Julian date ({\it left}) and phase ({\it right}). \label{fig:11}} \end{figure} The analysis of WFPC2 data by \citet{dol01} in a field 7.4$\arcmin$ southwest from the center, on the other hand, returned 13 RR~Lyrae candidates. Assuming that the likely and possible overtone pulsators are actual RR$c$, their field contains 9 RR$ab$ and 4 RR$c$ with mean periods of $\sim$0.58$\pm$0.02 and $\sim$0.37$\pm$0.03 day, respectively. This is very similar to the values we found for our WFPC2 field. The slightly shorter mean period for the RR$ab$, when compared to our ACS sample, may be due to the difficulty of detecting the low amplitude, longer period variables because of the lower quality of the data. In any case, these samples at larger radii are not large enough for a robust comparison with the ACS sample in order to reveal potential gradients in the properties of the RR~Lyrae stars as was done for the Tucana dwarf in \citet{ber08}. In the case of Tucana, the ACS field of view covered a larger fraction of the radius of the galaxy, allowing a radial gradient study within the ACS field alone. \section{Cepheids}\label{sec:6} In the search for variable stars presented here we found 44 variables with properties of Cepheids: about 2 mag brighter than the HB, in or close to the IS, and P$\ga$0.6 days. Their light-curves are shown in Fig.~\ref{fig:8}. Another five were found for which we could not determine their period and/or intensity-averaged mean magnitude due to inadequate temporal sampling. These are discussed in \S~\ref{sec:6.4}. Figure~\ref{fig:9} shows the period$-W_{814}$ diagram for the Cepheids in IC\,1613. Because Cepheids from the different modes of pulsation have a different magnitude at a given period, their location in the period-luminosity (PL) diagrams can be used to differentiate their type. These PL relations are fundamental in that they represent the most robust method to calculate the distance to galaxies within the nearby universe. To reduce the scatter due to interstellar extinction and the intrinsic dispersion due to the finite width of the instability-strip, and to obtain a more secure classification, we calculated the Wesenheit, or reddening-free, magnitude introduced by \citet{van68}. 
In the {\it HST} bands used here, the relation is of the form: \begin{equation} W_{814} = F814W - 0.945 (F475W-F814W) \end{equation} where the coefficient 0.945 comes from the standard interstellar extinction curve dependence of the F814W magnitude on E(F475W$-$F814W), from \citet{sch98}. Figure~\ref{fig:9} shows two almost parallel sequences, which correspond to the fundamental (F, {\it open circles}) and first-overtone (FO, {\it filled circles}) Cepheids, respectively. A few outliers are marked as open triangles (V006 and V091) and a filled square (V126), and are discussed in \S~\ref{sec:6.2} and \ref{sec:6.3}, respectively. From this plot, we find that 25 Cepheids are pulsating in the fundamental mode, while 16 fall on the first-overtone sequence. Figure~\ref{fig:10} shows the PL diagram for the Johnson VI bands obtained as described in \citetalias{ber09}, as well as in the W$_I$ band as defined in \citet[$W_I = I - 1.55 (V-I)$]{uda99c}. The new Cepheids of IC\,1613 ({\it open circles}) are shown overplotted on the classical Cepheids of the Large and Small Magellanic Clouds (LMC \& SMC) and of IC\,1613 from the OGLE collaboration \citep[and references therein]{uda01} as labeled in the inset. The apparent magnitudes were converted to absolute magnitude assuming a distance modulus of 18.515$\pm$0.085 to the LMC \citep{cle03}, and a distance offset of 0.51 of the SMC relative to the LMC \citep{uda99c}. The Cepheids of IC\,1613 were shifted according to the distance determined in \S~\ref{sec:11}. This figure shows that the properties of the Cepheids discovered in our field are in excellent agreement with the properties of OGLE Cepheids, therefore confirming their classical Cepheid nature. However, one can see that the Cepheids in IC\,1613 (both ours and OGLE) are mainly located toward the short-period extremity of the fundamental and overtone sequences when compared to the Cepheids in the Magellanic Clouds. Studies of the variable stars in the very metal-poor dwarf irregular galaxies Leo~A \citep{dol02} and Sextans~A \citep{dol03} also revealed large numbers of short-period Cepheids. This agrees with the suggestion by \citet{bon03} that ``the minimum mass that performs the blue loop [decreases with decreasing metallicity, which] means that metal-poor stellar systems such as IC\,1613 should produce a substantial fraction of short-period classical Cepheids.'' Interestingly, the IC\,1613 fundamental-mode Cepheids perfectly follow the various PL relationships defined by the Magellanic Clouds Cepheids over the whole range of periods, from $\sim$1 to $\sim$40 days (see Fig.~\ref{fig:10}). This suggests that the metallicity dependence of these relations, if any, must be very small at such low metallicities. \subsection{The peculiar Cepheids V037 \& V054}\label{sec:6.1} Two of the Cepheids falling on the first-overtone sequence in Fig.~\ref{fig:9} present very peculiar light-curves. These are shown again in Fig.~\ref{fig:11}, plotted as a function of Julian date ({\it left}) and phase ({\it right}). In the top left panel, the amplitude of V054 seems to be decreasing at each consecutive maximum. The dispersion at maximum light on the phased light-curves is reminiscent of the light-curves of RR$d$ stars, although in this particular case the periodogram does not present the double-peak features characteristic of double-mode stars. However, the rather long main period of this variable (0.633 day vs.
$\sim$0.39 for RR$d$) limited the number of observed cycles to $\la$4 and might explain the lack of resolution in the periodogram. In addition, the position of V054 on the PL diagrams, at the shorter period end, is in excellent agreement with the double-mode Cepheids pulsating simultaneously in the fundamental and first-overtone mode presented in \citet{sos08a}. In the case of V037, we found that the only period giving a smooth light-curve is P=1.44 days. However, as shown in the bottom right panel of Fig.~\ref{fig:11}, this produces a light-curve with a secondary maximum at phase $\sim$0.6. If this period is confirmed, this would be the second Cepheid with such a light-curve in IC\,1613 after V39 discovered by Baade \citep[see][]{san71}. Note, however, that more recent observations and analysis of V39 \citep[e.g.,][]{man01} favored a period only half that given by \citet[P=14.350 vs. 28.72 days]{san71} associated with a long-term modulation of $\sim$1100 days, which could correspond to an extremely distant W~Vir star in the outer halo of the MW blended with a long-period variable in IC\,1613. In our case, the short timebase of our observations definitely rules out the possibility of long-term modulation. In addition, a shorter period for V037 cannot phase the light-curve properly unless a strong modulation of the period itself is taken into account, which is very unlikely on such short timescales. Given that its position in the CMD and PL diagrams is exactly that expected from a Cepheid, in the following we will assume that V037 is a bona fide classical Cepheid. \begin{figure \epsscale{1.29} \plotone{f12.eps} \figcaption{Sample light-curves for eclipsing binaries in IC\,1613. The three brightest ({\it top}), three intermediate magnitude ({\it middle}), and the three faintest ({\it bottom}) binaries are shown. \label{fig:12}} \end{figure} \subsection{Second-Overtone candidates V006 \& V091}\label{sec:6.2} In the diagrams shown in Figs.~\ref{fig:9} \& \ref{fig:10}, two Cepheids lie above the first-overtone sequence in all panels. In particular in Fig.~\ref{fig:10}, they follow the sequence representing the second-overtone pulsators found by the OGLE collaboration in the SMC. Their nearly sinusoidal light-curves with very low amplitudes in both bands ($\la$0.14 in F475W, $\sim$0.07 in F814W), as well as their location well within the IS, make them very strong second-overtone candidates. These are the first ones detected beyond the Magellanic Clouds. Second-overtone classical Cepheids are very rare objects, as witnessed by the few firm candidates known to date: 13 out of 2049 classical Cepheids---0.63\%---in the SMC \citep{uda99a}, and 14 out of 3361---0.42\%---in the LMC \citep{sos08a}. However, theoretical models by \citet{ant97} showed that these Cepheids are more likely to be found in low-metallicity systems, which is in qualitative agreement with the fraction of second-overtone Cepheids in our IC\,1613 sample, about 4\%. \subsection{V126: a blended Cepheid?}\label{sec:6.3} The light-curve morphology, luminosity and period of V126 are all characteristic of a classical Cepheid. On the other hand, its color is 0.5 mag bluer than the blue edge of the IS, and it therefore appears below the fundamental mode pulsator sequence in the period-Wesenheit diagram shown in Fig.~\ref{fig:9}. Although it appears isolated on the stacked images, the most likely hypothesis is that it is a bona fide Cepheid blended with an unresolved main-sequence star.
The low amplitude of its luminosity variation in both bands relative to the other Cepheids with similar periods seems to support this conclusion. \subsection{Other candidates}\label{sec:6.4} Five Cepheids were observed for which we could not accurately determine their period and/or mean magnitude due to inadequate temporal sampling. We use the weighted mean magnitude given by ALLFRAME as their approximate magnitudes. They are shown as open stars in Fig.~\ref{fig:1}. V030 has a period $\sim$0.98 day, and was therefore observed at the same phase each day, at minimum light. Most of the rising phase and peak are missing so it was not possible to fit a template light-curve and measure accurately its mean magnitude. From its approximate magnitude, its location on the PL diagram is consistent with the fundamental mode Cepheids. V090 has relatively low-amplitude (A$_{475}$$\sim$0.15, A$_{814}$$\sim$0.11) and period 2$\la$P$\la$4 days. From its location on the PL diagram it is a probable first-overtone. V124 is very bright and was previously known from ground-based observations (see \S~\ref{sec:10}). Given its very long period ($\sim$26 days), our observations cover less than 10\% of a cycle so we did not try to obtain the mean magnitude from our data. It is shown in Fig.~\ref{fig:10} as the fundamental mode OGLE Cepheid at log P$=$1.41. V137 was also present in the OGLE field, although their quoted period of 1.376~days does not provide a satisfactory fit to our data. Even though we could not obtain an accurate period for this variable, the 1.4-mag decrease in the F475W band over the first two days and the beginning of the third, and the subsequent increase implies a period in the range 2.7$\la$P$\la$3.3 days, which would make it a fundamental mode Cepheid. V151 is one of the brightest Cepheids of our sample, and thus has a period longer than our observational timebase. Unfortunately, it is missing from the OGLE catalog so we can only constrain the period to the range 2.5$\la$P$\la$3.1 days from the light-curve morphology. It places it on the PL relation of fundamental mode Cepheids. \section{Eclipsing Binary Stars}\label{sec:7} A large number of variable star candidates were found in the main-sequence, as can be expected from the fact that it is well populated from the turn-offs to the B giants. Most of the variables for which we could find a period are eclipsing binary stars. The remaining ones have properties of high-amplitude $\delta$-Scuti stars (HADS) and are discussed in the following section, while the candidates for which the light-curves could not be phased are presented in \S~\ref{sec:9}. Note that all of these stars are newly discovered variables. In the ACS field-of-view, we found 38 eclipsing binaries between `V'$\sim$20.5 and $\sim$26.5. At fainter magnitudes, the signal-to-noise ratio of the individual datapoints was too low to estimate the period and/or confirm variability. The light-curves of the three brightest, three intermediate magnitude, and the three faintest binaries are shown in Fig.~\ref{fig:12}. Given the small number of datapoints and relatively low SNR of most of the candidates, some periods might actually be multiples of the true periods. We thus did not try to assign them a particular type of binary (detached or contact)---although the variety of light-curve morphologies indicate that members of both kinds appear to be present---nor check for their membership in IC\,1613. 
Bright eclipsing binaries have been used to determine distances to LG galaxies with claimed accuracy better than about 5\% (e.g., M31: \citealt{rib05}; M33: \citealt{bon06}). Unfortunately, the stars need to be bright enough (V$_{max}<$20) to accomplish this in a reasonable time with current observing facilities. Even though some of our eclipsing binaries have accurate periods and deep eclipses, none of them is brighter than V$\sim$20.4. \begin{figure \epsscale{1.1} \plotone{f13_lo.eps} \figcaption{Selection criteria for the $\delta$-Scuti candidates. {\it Top:} Detail of the CMD, where the possible candidates are overplotted with larger symbols. The dash-dotted lines indicate the approximate location of the classical instability strip fitted on these data. {\it Bottom:} F814W versus F475W amplitudes, where the pulsating variables and binary stars have a different slope. Pluses, open circles, and open stars are for the RR$ab$, RR$c$, and Cepheids, respectively, while the open diamonds represent the binaries with good light-curves. The dotted and dash-dotted lines are linear fits to the pulsating variables and binaries, respectively. In both panels the $\delta$-Scuti candidates satisfying both requirements are shown as filled stars, and the rejected candidates as filled diamonds. \label{fig:13}} \end{figure} \section{Candidate $\delta$-Scuti stars}\label{sec:8} Among the candidate variables fainter than the HB that appeared to be periodic variables, most of them were found to be binaries and are described in the previous section. For some of these variables, however, it was possible to obtain a relatively smooth, pulsation-like light-curve (i.e., with a single minimum) with a short period. This was the case for 13 variables, which were flagged as possible $\delta$-Scuti stars. To further constrain their nature, additional parameters were taken into account. First, we required that their color be within the classical IS, defined approximately by the position of the Cepheids and RR~Lyrae stars. The top panel of Fig.~\ref{fig:13} shows a zoom-in of the CMD showing the location of the IS, where the 13 possible $\delta$-Scuti are shown as filled symbols. Eight of these fall within or close to the boundaries of the IS. Given that the pulsations of the $\delta$-Scuti stars, like the Cepheids and RR~Lyrae stars are driven by the $\kappa$-mechanism, the second criteria was that the amplitude ratio had to be in agreement with that of the other pulsating variables. The bottom panel of Fig.~\ref{fig:13} shows the amplitude in F814W versus the amplitude in F475W for Cepheids, RR~Lyrae stars, eclipsing binaries with good light-curves, and the possible $\delta$-Scuti stars. Only 5 satisfy this requirement, and are shown as filled stars in both panels. They also present relatively smooth light-curves with very short periods ($\la$0.2 days). Their pulsational properties are summarized in Table~\ref{tab4}. \begin{figure \epsscale{1.28} \plotone{f14.eps} \figcaption{Sample light-curves for MSVs in IC\,1613, as a function of HJD. As in Fig.~\ref{fig:12}, the three brightest ({\it top}), three intermediate magnitude ({\it middle}), and the three faintest ({\it bottom}) MSV candidates are shown. Note the different vertical scale for the bottom panels. 
\label{fig:14}} \end{figure} \section{Other Variables}\label{sec:9} In addition to the classical IS variables and eclipsing binaries presented above, we detected another 77 candidate variables throughout the CMD, for which we could not determine the period because of inadequate temporal sampling. Most of these candidate variables are located on the MS, although five LPV candidates are also present at or close to the TRGB. While the majority of the MS candidates are most likely eclipsing binaries, at least in the brightest part of the MS there might be pulsating variables of the $\beta$~Cephei or Slowly Pulsating B star types \citep[see, e.g.,][]{dia08}. A sample of the brightest, intermediate, and faintest light-curves is shown in Fig.~\ref{fig:14}. \section{Previously Known Variables}\label{sec:10} The variable stars in our ACS field in IC\,1613 that were already known prior to this work are presented in Table~\ref{tab7}. We give their index and period as measured from our data, as well as their identifications and periods in previous catalogs; most of them were discovered during the wide-field survey by the OGLE collaboration. Because of the relatively shallow magnitude limits of the other surveys, all are luminous Cepheids with relatively large amplitude. Two of them, V124 and V172, were already included in Baade's catalog \citep{san71} as V11 and V35, respectively. These are the two brightest Cepheids of our sample, although the very long period of the former prevented us from measuring its intensity-averaged magnitude and period. Another candidate variable, V41, which was flagged as a possible Cepheid in \citet{san71} and judged to be an irregular variable in \citet{car90}, is constant within $\sim$0.02 and 0.05 mag in our F475W and F814W data and was not included in our catalog. Two other Cepheids appeared in \citet{man01} with periods similar to the ones found here to within 0.01 day, while three LPV/Irr variables of their catalog appear constant in our data (V0762C, V1598C, and V1667C). However, all have very long periods ($\ga$70 days) and/or low amplitude (A$_{Wh}$=0.20) and their variation could have easily been missed in our data due to the short observing run. The remaining variables were previously observed by OGLE \citep{uda01}. For most of them, the periods determined from our data agree well with theirs. However, the 1.376 day period given by OGLE for V137 cannot phase our data. Our sampling over 3 consecutive days clearly covers only about one cycle, so the period must be very close to P=3 days. By comparing the number of Cepheids observed by us and the OGLE team in the same field, we can obtain a rough estimate of the completeness of their survey. We found 49 Cepheids over the area of the ACS camera, while their sensitivity limited them to 12 detections. Assuming that the density of Cepheids is uniform in the central region of IC\,1613, this means that about 500 Cepheids should be present in the 14$\arcmin$x14$\arcmin$ field observed by the OGLE collaboration. Given the extent of the galaxy, this value is roughly in agreement with the total number estimated in \S~\ref{sec:4}. \input{tab7} \section{Distance}\label{sec:11} In \citetalias{ber09}, we used the photometric and pulsational properties of the RR~Lyrae stars to measure the distances to the dSphs Cetus and Tucana, and showed that the values we obtained are in excellent agreement with the distances obtained previously with independent methods.
We therefore obtain here an RR~Lyrae-based distance for IC\,1613 in the same way. Additionally, given the significant number of Cepheids we found in IC\,1613 and the high quality of their light-curves, together with their firm classification as fundamental, {first-,} or second-overtone pulsators, it is possible to calculate accurate distances from their properties as well. In this section, we use several methods adopted in the literature to calculate the distance based on the photometric and pulsational properties of the Cepheids and RR~Lyrae stars. In all cases, the intensity-averaged mean magnitudes in the Johnson bands are used. \subsection{Cepheids}\label{sec:11.1} \subsubsection{Period-Wesenheit relation for Fundamental and First-Overtone Cepheids}\label{sec:11.1.1} The main distance indicator for star-forming galaxies in the nearby universe is the relation between the period and the luminosity of their Cepheids, even though the slope and zero-point of this relation might be slightly dependent on metallicity \citep[e.g.,][]{ken98,sak04,san08}. However, from the analysis of nonlinear convective pulsation models, \citet{fio07} showed that the I-band period-luminosity relation is not expected to show metallicity dependence at metallicities lower than that of the LMC (Z=0.008), and that the effect of metal content on the P--M$_V$ relation is negligible below Z=0.004. Given the low metallicity of the young stars in IC\,1613 (Z$\sim$0.003), we will therefore assume that it is safe to estimate its distance by comparison with the properties of the Cepheids of the Magellanic Clouds. Here we chose to derive a new P--$W_I$ (see \S~\ref{sec:6}) relation based on the Cepheids of both MCs instead of using the relations available in the literature for the following reasons: i) the domain of validity for the PL relations of the literature starts at about P=2.5 days \citep[log P = 0.4; see e.g.,][]{uda99c,fou07}, while all our well-measured Cepheids have periods shorter than this value; and ii) the period distributions of the Cepheids in the LMC and SMC are very different from each other, with the peaks in the distribution at P$\sim$3.2 and 2.1 days for the fundamental and first-overtone Cepheids in the LMC, and at P$\sim$1.6 and 1.3 days, respectively, in the SMC. While the period distribution of the Cepheids in IC\,1613 (with peaks at P$\sim$1.6 and 1.1 days) is very similar to that in the SMC, the distances to other galaxies are usually provided relative to a given LMC distance. In addition, the PL relation of the SMC presents a rather large dispersion due to the inclination of the galaxy with respect to the line of sight. We thus combined the Cepheids of both Magellanic Clouds assuming $\Delta$(m$-$M)$_0$=0.51$\pm$0.03 \citep{uda99c} to improve the period coverage. This also has the advantage of cancelling any small difference in slope that might be present between the PL relations of the LMC and SMC. In addition, most of the Cepheids we observed in IC\,1613 are located at the short-period end of the distribution, so any non-linearity in the PL relation would lead to a different distance than if the Cepheids had longer periods. \citet{nge08} showed that the PL relation in the Wesenheit magnitude $W_I$ is linear over the period range covered by the LMC Cepheids, while it is not in the V\&I bands. Finally, the use of $W_I$ also limits the effect of interstellar reddening and thus reduces the scatter in the relation.
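The construction of such a relation, and its application to derive a distance modulus, can be sketched as follows. This is an illustrative Python sketch only: the samples and Cepheid magnitudes are synthetic placeholders, the helper names are ours, and the fit actually adopted, based on the OGLE-II samples, is given next.
\begin{verbatim}
import numpy as np

def wesenheit_I(I, V):
    """Reddening-free Wesenheit index W_I = I - 1.55 (V - I) (Udalski et al. 1999)."""
    return I - 1.55 * (V - I)

def sigma_clipped_line(logP, W, nsig=2.5, n_iter=5):
    """Linear fit W = a*logP + b with iterative sigma clipping of outliers."""
    keep = np.ones_like(W, dtype=bool)
    for _ in range(n_iter):
        a, b = np.polyfit(logP[keep], W[keep], 1)
        resid = W - (a * logP + b)
        keep = np.abs(resid) < nsig * np.std(resid[keep])
    return a, b

# Hypothetical combined Magellanic Cloud sample, already shifted to absolute
# magnitudes with (m-M)_0,LMC = 18.515 and Delta(m-M)_0 = 0.51 for the SMC.
logP_mc = np.random.default_rng(2).uniform(0.0, 1.5, 500)
W_mc = -3.4 * logP_mc - 2.5 + np.random.default_rng(3).normal(0, 0.1, 500)
a, b = sigma_clipped_line(logP_mc, W_mc)

# Distance modulus of a target galaxy: mean offset of its apparent W_I
# values from the calibrating relation (hypothetical Cepheids).
logP_gal = np.array([0.10, 0.20, 0.35])
W_gal = np.array([21.6, 21.3, 20.8])
mu0 = np.mean(W_gal - (a * logP_gal + b))
print(f"slope={a:.3f}  zero-point={b:.3f}  (m-M)_0={mu0:.2f}")
\end{verbatim}
Because the Wesenheit index is reddening-free by construction, the offset obtained in this way is directly a dereddened distance modulus.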
We calculate the P--$W_I$ relation from linear regression fits to the combined MC Cepheids from OGLE-II observations \citep{uda99b,uda99d} over the whole range of periods after rejecting the outliers through sigma-clipping (2.5-$\sigma$, 5 iterations). Assuming an LMC distance of (m$-$M)$_0$=18.515$\pm$0.085 \citep{cle03}, we found: W$_{I,F}$=$-$3.435($\pm$0.007) log $P_F$ $-$ 2.540($\pm$0.006), and W$_{I,FO}$=$-$3.544($\pm$0.013) log $P_{FO}$ $-$ 3.067($\pm$0.007) \noindent for the fundamental and first-overtone Cepheids, respectively, with a standard deviation of 0.096 in both cases. The slope of our fundamental P--$W_I$ relation is marginally different from the slope found by other authors (e.g., W$_{I,F}$/log\,$P_F$=3.313$\pm$0.008, \citealt{nge08}; W$_{I,F}$/log\,$P_F$=3.320$\pm$0.011, \citealt{fou07}) based on the OGLE Cepheids of the LMC only, and may be an indication that this relation deviates from linearity at periods shorter than about 0.3 days. On the other hand, it is intermediate between the slopes found for the LMC and the values obtained for the Cepheids of the MW \citep[e.g.,][W$_{I,F}$/log\,$P_F$=3.477$\pm$0.074]{fou07}. \begin{figure \epsscale{1.25} \plotone{f15.eps} \figcaption{Distribution of RR Lyrae in the $\langle V \rangle$-log P plane, where the predicted edges of the instability strip of \citet{cap00} have been overplotted ({\it see text for details}). Filled circles and triangles represent RR$ab$ and RR$c$, respectively. \label{fig:15}} \end{figure} We applied the P--$W_I$ relations derived above to the fundamental and first-overtone Cepheids of IC\,1613 shown in Fig.~\ref{fig:9} (excluding the two Cepheids located at the midpoint between the dash-dotted and dashed lines, V008 and V093), and found (m$-$M)$_0$=24.50$\pm$0.11 and 24.47$\pm$0.12, respectively. The good agreement between these two values indicates that first-overtone Cepheids can be safely used as distance indicators in the case where the pulsating mode of the Cepheids can be unambiguously identified. We also calculated the distance to IC\,1613 using the PL relation for fundamental-mode Cepheids given by \citet{nge08} to check if the small differences in slope and zero-point described above would have a significant effect, but found a very similar value ((m$-$M)$_0$=24.55$\pm$0.12) within the uncertainties. \input{tab8} \subsubsection{Second-Overtone Cepheids and the Period-Luminosity-Color relation} From their theoretical work on second-overtone (SO) Cepheids, \citet{bon01} noted that these variables follow a period-luminosity-color relation of the form: \begin{displaymath} M_V = - 3.961 - 3.905 log P + 3.250 (V - I). \end{displaymath} Given the low amplitude of the luminosity variations and the very small temperature width of the SO IS, the standard deviation of this relation is only 0.004. It is thus possible to use it to determine distances with good accuracy. From the SO Cepheids observed in the SMC by \citet{uda99a} and assuming E(B$-$V)=0.054, the authors found a distance of (m$-$M)$_{0,SMC}$=19.11$\pm$0.08 to the SMC through this relation. Applying this relation to the two SO Cepheids described in \S~\ref{sec:6.2}, we find a distance modulus of (m$-$M)$_0$=24.54$\pm$0.11. Note, however, that the distance to the SMC used in the previous section to obtain the period-W$_I$ relationship was 19.03, assuming an LMC distance of 18.515$\pm$0.085 and a distance moduli difference of 0.51$\pm$0.03. 
Taking into account this offset in zero-point, we find that the true distance modulus to IC\,1613 from SO Cepheids is 24.46$\pm$0.11, in better agreement with the mean value determined below. \begin{figure \epsscale{1.2} \plotone{f16.eps} \figcaption{Summary of the distance moduli from the present work and the literature. The reddened moduli as given in the referenced articles are shown as pluses, while the moduli corrected for a common reddening of E(B$-$V)=0.025 and LMC distance modulus (m$-$M)$_0$=18.515$\pm$0.085 are shown as filled circles. The vertical lines indicate the mean and standard deviation ($\sigma$=0.071) of these measurements (excluding the \citet{sah92} outlier) at (m$-$M)$_0$=24.400$\pm$0.014 (statistical). \label{fig:16}} \end{figure} \subsection{RR~Lyrae stars}\label{sec:11.2} Similar to what we did in \citetalias{ber09}, we also calculate the distance to IC\,1613 using two methods based on the properties of the RR~Lyrae stars, namely the luminosity-metallicity relation, which arises from the knowledge that the intrinsic luminosity of HB stars mainly depends on their metallicity, and the period-luminosity-metallicity (PLM) relation, based on the theoretical location of the instability strip in the period-luminosity plane. The luminosity-metallicity relation we used in \citetalias{ber09} has the form: \begin{displaymath} M_V = 0.866(\pm0.085) + 0.214(\pm0.047) [Fe/H]. \end{displaymath} To calculate the mean magnitude of the RR~Lyrae stars, we used only the stars for which we could determine accurate intensity-averaged mean magnitudes---RR$ab$ and RR$c$---and obtained $\langle V \rangle$=24.99$\pm$0.01. Assuming a mean metallicity of Z=0.0005$\pm$0.0002 (i.e., [Fe/H]=$-$1.6$\pm$0.2 assuming Z$_\sun$=0.0198 \citep{gre93} and [Fe/H]=log Z+1.70 $-$ log(0.638 f + 0.362) \citep{sal93} with the $\alpha-$enhancement factor f set to zero) for the old population from the results of the SFH derived in Skillman et al.\ (2010, in preparation), we find a luminosity for the HB of M$_V$=0.52$\pm$0.12. The uncertainty was quantified through Monte-Carlo simulations. This gives a distance modulus of 24.47$\pm$0.12, or (m$-$M)$_0$=24.39$\pm$0.12 after correcting for reddening. The second method we used to determine the distance modulus consists of matching the PLM relation at the first-overtone blue edge (FOBE) of the IS from \citet[their eq. 3]{cap00} to the observed RR$c$. Basically, the theoretical limits of the IS are shifted in magnitude until the FOBE coincides with the observed distribution of first-overtone RR~Lyrae stars. Figure~\ref{fig:15} shows the position of the IS ({\it dashed lines}) overplotted on the distribution of observed RR~Lyrae stars in the $\langle V \rangle$--log~P plane. Adopting the mean metallicity given above (Z=0.0005), we derive a dereddened distance modulus of (m$-$M)$_0$=24.36, for which \citet{cap00} give an intrinsic dispersion of $\sigma _V$=0.07 mag due to uncertainties associated with the various ingredients of the model. Combined with the errors in metallicity, mean magnitude, and period, we estimate a total uncertainty in this measure of the distance of $\sim$0.1. \subsection{Results}\label{sec:11.3} Since the distances derived in \S~\ref{sec:11.1} and \ref{sec:11.2} are consistent with each other and have similar uncertainties, we simply averaged them to obtain our best distance estimate and obtained a dereddened distance modulus of 24.44 ($\sigma$=0.054), corresponding to a distance of 770~kpc. 
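For reference, the standard conversion from distance modulus to distance, applied to this value, reads $$ d = 10^{\,0.2\,(m-M)_0+1}~{\rm pc} = 10^{\,0.2\times24.44+1}~{\rm pc} \simeq 7.7\times10^{5}~{\rm pc} \simeq 770~{\rm kpc}. $$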
A summary of the previous distance determination from the literature is given in Table~\ref{tab8}. The columns give the reddened distance modulus, the assumed or calculated reddening and LMC distance modulus, the observation band, and the method used to estimate the distance. We compare these values in Fig.~\ref{fig:16}, where the reddened moduli as given in the referenced articles are shown as pluses. To make this comparison more meaningful, we also show the distance moduli corrected for a common reddening of E(B$-$V)=0.025 \citep{sch98} and LMC distance modulus (m$-$M)$_0$=18.515$\pm$0.085 \citep{cle03}. The reddening correction was done following the extinction law of \citet{car89} with R$_V$=3.1. The figure shows that all the measurements, independent of the method, are in excellent agreement with each other once set on the same scale. From these values (excluding the \citet{sah92} outlier), we find a mean dereddened distance of (m$-$M)$_0$=24.400$\pm$0.014, where the uncertainty only includes the standard deviation of the different measurements. The vertical lines in Fig.~\ref{fig:16} indicate the mean and standard deviation ($\sigma$=0.071) of these values. Therefore, we believe that the largest source of uncertainty on the distance to IC\,1613 is systematic rather than statistical, and mostly lies in the calibration of the distance to the LMC. \section{Discussion and Conclusions}\label{sec:12} We have presented the results of a new search for variable stars in IC\,1613 based on high quality {\it HST} images. In the ACS field, we found 259 candidate variables, including 90 RR~Lyrae stars, 49 Cepheids, and 38 eclipsing binaries, as well as nine RR~Lyrae stars and two additional Cepheids in the parallel WFPC2 field. Only thirteen of these variables were known prior to this study, all of them Cepheids. We find that the mean periods of the RR$ab$ and RR$c$ stars, as well as the fraction of overtone pulsators, place this galaxy in the intermediate regime between the Oosterhoff types, as was already observed for the vast majority of Local Group dwarf galaxies. From the comparison with Magellanic Clouds Cepheids through the period-luminosity diagram, we find that all our Cepheids are bona fide classical Cepheids, while the Cepheids in the dSph Cetus and Tucana discussed in Paper~I were classified as anomalous Cepheids. The lack of classical Cepheids in the latter dwarfs is not surprising given the absence of significant star formation more recent than about 8-10~Gyr \citep{mon10}. On the other hand, the reason for the apparent lack of anomalous Cepheids in IC\,1613 is not clear. The star formation histories calculated from deep CMDs \citep[2010, in preparation]{ski03} indicate that it formed stars fairly constantly over the past 13~Gyr, so unless the formation of anomalous Cepheids is extremely sensitive to the metal content, anomalous Cepheids should also be present in IC\,1613. \citet{gal04} showed that both types of Cepheids were present in comparable numbers in the transition type galaxy Phoenix, and possibly also in Leo A and Sextans A, all of them having very low metallicity (Z$\la$0.0008). These authors suggest that the only requirements for the presence of both types of Cepheids in a galaxy are low enough metallicity (Z$\sim$0.0004) and star formation activity at all ages. 
At the other end of the dwarf galaxy metallicity spectrum, 83 anomalous Cepheids were discovered in the LMC \citep{sos08b}, compared to the 3361 classical Cepheids that are present in the same catalog \citep{sos08a}. This implies about 2.4 anomalous Cepheids per 100 classical Cepheids at the metallicity of the LMC (Z$\sim$0.008). If this fraction is representative of low metallicity galaxies, about one anomalous Cepheid could be expected in our ACS field-of-view of IC\,1613. However, \citet{dol02} showed that low metallicity galaxies contain many more short-period classical Cepheids than higher metallicity galaxies at a given star formation rate due to the morphology of the blue loops: at low metallicity, the blue loops extend further to the blue and thus cross the instability strip at fainter magnitudes. As a result of the stellar initial mass function, the fainter Cepheids are also more numerous. Therefore, the apparent lack of anomalous Cepheids in our field might simply be due to small number statistics. Given the low density of anomalous Cepheids expected in IC\,1613, more data covering a larger field-of-view are needed to check if anomalous Cepheids are actually present in this galaxy.

Finally, we used the properties of the RR~Lyrae stars and Cepheids to estimate the distance to IC\,1613, and found excellent agreement with the values previously determined. Combining all the measurements after correction for a common reddening and reference LMC distance, we find a true distance modulus to this galaxy of (m$-$M)$_0$=24.400$\pm$0.014 (statistical), corresponding to 760~kpc.

\acknowledgments

The authors would like to thank the anonymous referee for useful comments. Support for this work was provided by a Marie Curie Early Stage Research Training Fellowship of the European Community's Sixth Framework Programme (contract MEST-CT-2004-504604), the IAC (grant 310394), the Education and Science Ministry of Spain (grants AYA2004-06343 and AYA2007-3E3507), and NASA through grant GO-10505 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

{\it Facility:} \facility{HST (ACS, WFPC2)}
\section{Introduction} \label{sec:Introduction} Given the existing evidence for the triviality of the Higgs sector~\cite{Aizenman:1981du,Frohlich:1982tw,Luscher:1988uq, Hasenfratz:1987eh, Kuti:1987nr, Hasenfratz:1988kr,Gockeler:1992zj} of the Standard Model, the latter theory can only be considered as an effective description of Nature valid at most up to some cutoff scale $\Lambda$. The Higgs sector is thus intrinsically connected with a finite, but unknown cutoff parameter $\Lambda$ that cannot be removed. Beyond that threshold an extension of the theory will finally be required. Apriori, the size of this scale $\Lambda$, at which the Standard Model would need such an extension, is unspecified. However, the potential discovery of the Higgs boson at the LHC (as well as its non-discovery together with corresponding exclusion limits) can shed light on this open question. This can, for instance, be achieved by comparing the experimentally revealed Higgs boson mass or its exclusion limits, respectively, with the cutoff-dependent upper and lower Higgs boson mass bounds arising in the Higgs sector of the Standard Model. Besides the obvious interest in narrowing the interval of possible Higgs boson masses consistent with phenomenology, the latter observation was the main motivation for the great efforts spent on the determination of cutoff-dependent upper and lower Higgs boson mass bounds. In perturbation theory such bounds have been derived from the criterion of the Landau pole being situated beyond the cutoff of the theory~(see e.g.~\cite{Cabibbo:1979ay, Dashen:1983ts, Lindner:1985uk}), from unitarity requirements~(see e.g.~\cite{Dicus:1992vj, Lee:1977eg, Marciano:1989ns}) and from vacuum stability considerations~(see e.g.~\cite{Cabibbo:1979ay, Linde:1975sw,Weinberg:1976pe,Linde:1977mm, Sher:1988mj, Lindner:1988ww}), as reviewed in \Ref{Hagiwara:2002fs}. However, the validity of the perturbatively obtained upper Higgs boson mass bounds is unclear, since the corresponding perturbative calculations had to be performed at rather large values of the renormalized quartic coupling constant. The latter remark thus makes the upper Higgs boson mass bound determination an interesting subject for non-perturbative investigations, such as the lattice approach. The main objective of lattice studies of the pure Higgs and Higgs-Yukawa sector of the electroweak Standard Model has therefore been the non-perturbative determination of the cutoff dependence of the upper Higgs boson mass bounds~\cite{Hasenfratz:1987eh, Kuti:1987nr, Hasenfratz:1988kr,Bhanot:1990ai,Holland:2003jr,Holland:2004sd}. There are two main developments that warrant the reconsideration of these questions. First, with the advent of the LHC, we are to expect that the mass of the Standard Model Higgs boson, if it exists, will be revealed experimentally. Second, there is, in contrast to the situation of earlier investigations of lattice Higgs-Yukawa models~\cite{Smit:1989tz,Shigemitsu:1991tc,Golterman:1990nx,book:Jersak,book:Montvay,Golterman:1992ye,Jansen:1994ym}, which suffered from their inability to restore chiral symmetry in the continuum limit while lifting the unwanted fermion doublers at the same time, a consistent formulation of a Higgs-Yukawa model with an exact lattice chiral symmetry~\cite{Luscher:1998pq} based on the Ginsparg-Wilson relation~\cite{Ginsparg:1981bj}. 
This new development makes it possible to maintain the chiral character of the Higgs-fermion coupling structure of the Standard Model on the lattice while simultaneously lifting the fermion doublers, thus eliminating manifestly the main objection to the earlier investigations. The interest in lattice Higgs-Yukawa models has therefore recently been renewed~\cite{Bhattacharya:2006dc,Giedt:2007qg,Poppitz:2007tu,Gerhold:2007yb,Gerhold:2007gx,Fodor:2007fn,Gerhold:2009ub,Gerhold:2010wy}. In particular, the phase diagram of the new, chirally invariant Higgs-Yukawa model has been discussed analytically by means of a large $N_f$ calculation~\cite{Fodor:2007fn, Gerhold:2007yb} as well as numerically by direct Monte-Carlo computations~\cite{Gerhold:2007gx}. Moreover, the lower Higgs boson mass bounds derived in this lattice model have been presented in \Ref{Gerhold:2009ub}. A comprehensive review of these results can be found in \Ref{Gerhold:2010wy}.

In the present paper we intend to determine the dependence of the upper Higgs boson mass bound on the cutoff parameter $\Lambda$ by direct Monte-Carlo calculations. In \sects{sec:modelDefinition}{sec:SimStratAndObs} we begin this venture by introducing the considered chirally invariant lattice Higgs-Yukawa model and discussing the actual simulation strategy, respectively. Details about the determination of the properties of the Goldstone and Higgs boson, in particular their renormalized masses, are then given in \sects{sec:GoldPropAnalysisUpperBound}{sec:HiggsPropAnalysisUpperBound}. As a crucial step towards the final determination of the upper mass bound we confirm in \sect{sec:DepOfHiggsMassonLargeLam} that the largest Higgs boson masses are indeed obtained at infinite bare quartic coupling, as expected from perturbation theory. We then present our results on the cutoff dependence of the upper Higgs boson mass bound in \sect{chap:ResOnUpperBound} and examine also the encountered finite volume effects. Eventually, the lattice data on the Higgs boson mass bounds are extrapolated to the infinite volume limit, yielding our final result, which was already presented in \fig{fig:UpperMassBoundFinalResult}.

\section{The $\mbox{SU(2)}_L\times\mbox{U(1)}_Y\mbox{ }$ lattice Higgs-Yukawa model}
\label{sec:modelDefinition}

The model considered in the following is a four-dimensional, chirally invariant $SU(2)_L \times U(1)_Y$ lattice Higgs-Yukawa model based on the Neuberger overlap operator~\cite{Neuberger:1997fp,Neuberger:1998wv}, aiming at the implementation of the chiral Higgs-fermion coupling structure of the pure Higgs-Yukawa sector of the Standard Model, which reads \begin{equation} \label{eq:StandardModelYuakwaCouplingStructure} L_Y = y_b \left(\bar t, \bar b \right)_L \varphi b_R +y_t \left(\bar t, \bar b \right)_L \tilde\varphi t_R + c.c., \end{equation} with $\tilde \varphi = i\tau_2\varphi^*$, $\tau_i$ being the Pauli matrices, and $y_{t,b}$ denoting the bare top and bottom Yukawa coupling constants. In this model the consideration is restricted to the top-bottom doublet $(t,b)$ interacting with the complex Higgs doublet $\varphi$, which is a reasonable simplification, since the Higgs dynamics is dominated by the coupling to the heaviest fermions (apart from its self-coupling).
The fields contained within the lattice model are thus the scalar field $\varphi$, encoded here however in terms of the four-component, real scalar field $\Phi$ for the purpose of a convenient lattice notation, as well as $N_f$ top-bottom doublets represented by eight-component spinors $\psi^{(i)}\equiv (t^{(i)}, b^{(i)})$, $i=1,...,N_f$. In this approach the chiral character of the targeted coupling structure in \eq{eq:StandardModelYuakwaCouplingStructure} will be preserved on the lattice by constructing the fermionic action $S_F$ from the Neuberger overlap operator $\Dov$ acting on the aforementioned fermion doublets. The overlap operator is given as \begin{eqnarray} \label{eq:DefOfNeuberDiracOp} {\mathcal D}^{(ov)} &=& \rho \left\{1+\frac{ A}{\sqrt{ A^\dagger A}} \right\}, \quad A = {\mathcal D}^{(W)} - \rho, \quad 0 < \rho < 2r \end{eqnarray} where $\rho$ is a free, dimensionless parameter within the specified constraints that determines the radius of the circle formed by the entirety of all eigenvalues of $\Dov$ in the complex plane. The operator ${\mathcal D}^{(W)}$ denotes here the Wilson Dirac operator defined as \begin{equation} \label{eq:DefOfWilsonOperator} {\mathcal D}^{(W)} = \sum\limits_\mu \gamma_\mu \nabla^s_\mu - \frac{r}{2} \nabla^b_\mu\nabla^f_\mu, \end{equation} where $\nabla^{f,b,s}_\mu$ are the forward, backward and symmetrized lattice nearest neighbor difference operators in direction $\mu$, while the so-called Wilson parameter $r$ is chosen here to be $r=1$ as usual. The overlap operator was proven to be local in a field theoretical sense also in the presence of QCD gauge fields at least if the latter fields obey certain smoothness conditions~\cite{Hernandez:1998et,Neuberger:1999pz}. The locality properties were found to depend on the parameter $\rho$ and the strength of the gauge coupling constant. At vanishing gauge coupling the most local operator was shown to be obtained at $\rho=1$. Here, the notion 'most local' has to be understood in the sense of the most rapid exponential decrease with the distance $|x-y|$ of the coupling strength induced by the matrix elements ${\mathcal D}^{(ov)}_{x,y}$ between the field variables at two remote space-time points $x$ and $y$. For that reason the setting $\rho=1$ will be adopted for the rest of this work. Exploiting the Ginsparg-Wilson relation~\cite{Ginsparg:1981bj} as proposed in \Ref{Luscher:1998pq} one can then write down a chirally invariant $\mbox{SU(2)}_L\times\mbox{U(1)}_Y\mbox{ }$ lattice Higgs-Yukawa model according to \begin{eqnarray} \label{eq:DefOfModelPartitionFunctionOriginal} Z &=& \int \intD{\Phi} \intD{\psi} \intD{\bar\psi} e^{-S_\Phi[\Phi] - S_F[\Phi,\psi,\bar\psi]} \quad \mbox{with} \\ \label{eq:DefYukawaCouplingTerm} S_F[\Phi,\psi,\bar\psi] &=& \sum\limits_{i=1}^{N_f}\, \bar\psi^{(i)} \underbrace{\left[\Dov + P_+ \phi^\dagger \fhs{1mm}\mbox{diag}\left(\hat y_t,\hat y_b\right) \hat P_+ + P_- \fhs{1mm}\mbox{diag}\left(\hat y_t,\hat y_b\right) \phi \hat P_- \right]}_{{\mathcal M}} \psi^{(i)}, \quad\quad\quad \end{eqnarray} where the particular form of the $O(4)$-symmetric purely bosonic action $S_\Phi[\Phi]$ will be given later. 
It is further remarked that the four-component scalar field $\Phi_x$, defined at the Euclidean site indices $x=(t,\vec x)$ of a $L_s^3\times L_t$-lattice, has been rewritten here as a quaternionic, $2 \times 2$ matrix $\phi_x = \Phi_x^\mu \theta_\mu,\, \theta_0=\mathds{1},\, \theta_j = -i\tau_j$ with $\vec\tau$ denoting the vector of Pauli matrices, acting on the flavour index of the fermion doublets. The so far unspecified left- and right-handed projection operators $P_\pm$ and their lattice modified counterparts $\hat P_{\pm}$ associated to the Neuberger Dirac operator are given as \begin{eqnarray} P_\pm = \frac{1 \pm \gamma_5}{2}, \quad & \hat P_\pm = \frac{1 \pm \hat \gamma_5}{2}, \quad & \hat\gamma_5 = \gamma_5 \left(\mathds{1} - \frac{1}{\rho} \Dov \right). \end{eqnarray} The action in \eq{eq:DefYukawaCouplingTerm} now obeys an exact global $\mbox{SU}(2)_L\times \mbox{U}(1)_Y$ lattice chiral symmetry. For $\Omega_L\in \mbox{SU}(2)$ and $\epsilon\in \relax{\textrm{I\kern-.18em R}}$ the action is invariant under the transformation \begin{eqnarray} \label{eq:ChiralSymmetryTrafo1} \psi\rightarrow U_R \hat P_+ \psi + U_L\Omega_L \hat P_- \psi, &\quad& \bar\psi\rightarrow \bar\psi P_+ \Omega_L^\dagger U_{L}^\dagger + \bar\psi P_- U^\dagger_{R}, \\ \label{eq:ChiralSymmetryTrafo2} \phi \rightarrow U_\phi \phi \Omega_L^\dagger, &\quad& \phi^\dagger \rightarrow \Omega_L \phi^\dagger U_\phi^\dagger \end{eqnarray} with $U_{L,R,\phi} \equiv \exp(i\epsilon Y)$ denoting the respective representations of the global $U(1)_Y$ symmetry group. Employing the explicit form of the hypercharge $Y$ being related to the isospin component $I_3$ and the electric charge $Q$ according to $Y = Q-I_3$, the above $U(1)_Y$ matrices can explicitly be parametrized as \begin{eqnarray} U_L = \left( \begin{array}{*{2}{c}} e^{+i\epsilon/6}&\\ &e^{+i\epsilon/6}\\ \end{array} \right), & U_R = \left( \begin{array}{*{2}{c}} e^{2i\epsilon/3}&\\ &e^{-i\epsilon/3}\\ \end{array} \right), & U_{\phi} = \left( \begin{array}{*{2}{c}} e^{+i\epsilon/2}&\\ &e^{-i\epsilon/2}\\ \end{array} \right),\quad \quad \end{eqnarray} for the case of the considered top-bottom doublet. For clarification it is remarked that the right-handed fields are isospin singlets and have only been written here in form of doublets for the sake of a shorter notation. Note also that in the mass-degenerate case, \textit{i.e.}$\;$ $\hat y_t=\hat y_b$, the above global symmetry is extended to $\mbox{SU}(2)_L\times \mbox{SU}(2)_R$. In the continuum limit the modified projectors $\hat P_\pm$ converge to $P_\pm$ and the symmetry in \eqs{eq:ChiralSymmetryTrafo1}{eq:ChiralSymmetryTrafo2} thus recovers the continuum $\mbox{SU}(2)_L\times \mbox{U}(1)_Y$ global chiral symmetry such that the lattice Higgs-Yukawa coupling becomes equivalent to \eq{eq:StandardModelYuakwaCouplingStructure} when identifying \begin{eqnarray} \varphi_x = C\cdot \left( \begin{array}{*{1}{c}} \Phi_x^2 + i\Phi_x^1\\ \Phi_x^0-i\Phi_x^3\\ \end{array} \right),\quad & \tilde\varphi_x = i\tau_2\varphi^*_x = C\cdot \left( \begin{array}{*{1}{c}} \Phi_x^0 + i\Phi_x^3\\ -\Phi_x^2+i\Phi_x^1\\ \end{array} \right), \quad & y_{t,b} = \frac{\hat y_{t,b}}{C} \quad\quad\quad \end{eqnarray} for some real, non-zero constant $C$. 
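To make the quaternionic representation of the scalar field fully explicit, it can be transcribed directly into a few lines of code. The following minimal Python sketch (the function and variable names are ours and purely illustrative) constructs $\phi_x$ from a given four-component value of $\Phi_x$ and verifies numerically that $\phi_x^\dagger \phi_x = (\Phi_x^\dagger\Phi_x)\,\mathds{1}$, which is the algebraic property underlying the transformation behaviour quoted above.
\begin{verbatim}
import numpy as np

# Pauli matrices and the quaternionic basis theta_mu = (1, -i tau_1, -i tau_2, -i tau_3)
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
theta = [np.eye(2, dtype=complex)] + [-1j * t for t in tau]

def phi_matrix(Phi):
    """Map a real four-component field value Phi^mu to the 2x2 matrix phi = Phi^mu theta_mu."""
    return sum(Phi[mu] * theta[mu] for mu in range(4))

Phi = np.random.randn(4)          # one (assumed) field value Phi_x
phi = phi_matrix(Phi)
print(np.allclose(phi.conj().T @ phi, np.dot(Phi, Phi) * np.eye(2)))   # True
\end{verbatim}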
The so far unspecified purely bosonic action $S_{\Phi}$ is chosen here to be the lattice version of the $\Phi^4$-action parametrized in terms of the hopping parameter $\kappa$ and the lattice quartic coupling constant $\hat\lambda$ according to \begin{equation} \label{eq:LatticePhiAction} S_\Phi = -\kappa\sum_{x,\mu} \Phi_x^{\dagger} \left[\Phi_{x+\mu} + \Phi_{x-\mu}\right] + \sum_{x} \Phi^{\dagger}_x\Phi_x + \hat\lambda \sum_{x} \left(\Phi^{\dagger}_x\Phi_x - N_f \right)^2, \end{equation} which is a convenient parametrization for the actual numerical computations. However, this form of the lattice action is fully equivalent to the lattice action in continuum notation \begin{eqnarray} \label{eq:BosonicLatticeHiggsActionContNot} S_{\varphi}[\varphi] &=& \sum\limits_{x,\mu} \frac{1}{2} \nabla^f_\mu \varphi^\dagger_x \nabla^f_\mu \varphi_x + \sum\limits_{x} \frac{1}{2} m_0^2 \varphi^\dagger_x\varphi_x +\sum\limits_{x} \lambda\left(\varphi_x^\dagger \varphi_x \right)^2, \end{eqnarray} given in terms of the bare mass $m_0$, the bare quartic coupling constant $\lambda$, and the lattice derivative operator $\nabla^f_\mu$. The aforementioned connection can be established through a rescaling of the scalar field $\Phi$ and the involved coupling constants according to \begin{equation} \label{eq:RelationBetweenHiggsActions} \varphi_x = \sqrt{2\kappa} \left( \begin{array}{*{1}{c}} \Phi_x^2 + i\Phi_x^1\\ \Phi_x^0-i\Phi_x^3\\ \end{array} \right), \quad \lambda=\frac{\hat\lambda}{4\kappa^2}, \quad m_0^2 = \frac{1 - 2N_f\hat\lambda-8\kappa}{\kappa}, \quad y_{t,b} = \frac{\hat y_{t,b}}{\sqrt{2\kappa}}. \end{equation} Finally, the potential appearance of a sign problem in the framework of the introduced Higgs-Yukawa model shall be briefly addressed. In the mass-degenerate case, \textit{i.e.}$\;$ for $y_t=y_b$, one finds that $\det({\mathcal M})\in \relax{\textrm{I\kern-.18em R}}$, since all eigenvalues of ${\mathcal M}$ come in complex conjugate pairs according to \begin{equation} V{\mathcal M} V^\dagger = {\mathcal M}^*, \quad \mbox{with} \quad V = \gamma_0\gamma_2\gamma_5\tau_2. \end{equation} This is in contrast to the general case with $y_t\neq y_b$, where the above relation no longer holds. Throughout this work we will therefore only consider the aforementioned mass-degenerate scenario, where the top and bottom quarks are assumed to have equal masses, to certainly exclude any complex-valued phase of the fermion determinant. This, however, still leaves open the possibility of an alternating sign of $\det({\mathcal M})$. We have therefore explicitly monitored the sign of $\det({\mathcal M})$ but did never encounter any sign alteration in our actually performed Monte-Carlo computations, meaning that the numerical calculations in the mass-degenerate case are perfectly sane. A more detailed discussion of the phase of the fermion determinant in the non-degenerate case can be found in \Ref{Gerhold:2010wy}. \section{Simulation strategy and considered observables} \label{sec:SimStratAndObs} The eventual aim of this work is the non-perturbative determination of the cutoff-dependent upper bound of the Higgs boson mass. The general strategy that will be applied for that purpose is to scan through the whole space of bare model parameters searching for the largest Higgs boson mass attainable within the pure Higgs-Yukawa sector at a fixed value of the cutoff, while being in consistency with phenomenology. 
This will be done by numerically evaluating the finite lattice model of the Higgs-Yukawa sector introduced in the preceding section and extrapolating the obtained results to the infinite volume limit. The crucial idea is that the aforementioned requirement of reproducing phenomenology restricts the freedom in the choice of the bare model parameters $m_0^2, y_{t,b}, \lambda$. For that purpose we exploit here the phenomenological knowledge of the renormalized quark masses and the renormalized vacuum expectation value of the scalar field (vev). For the reasons given in the previous section, however, the top and bottom quarks will be considered to be mass-degenerate. Throughout this work $m_{t}/a \equiv m_{b}/a =\GEV{175}$ and $v_r/a = \GEV{246}$ will be assumed. Here $m_t$, $m_b$, and $v_r$ are the renormalized top and bottom quark masses as well as the renormalized vev in dimensionless lattice units, while $a$ denotes the lattice spacing.

The aforementioned three conditions leave open a one-dimensional freedom in the bare parameters, which can be parametrized in terms of the bare quartic self-coupling constant $\lambda$. However, aiming at the upper Higgs boson mass bounds, this remaining freedom can be fixed, since it is expected from perturbation theory that the lightest Higgs boson masses are obtained at vanishing self-coupling constant $\lambda=0$, while the heaviest masses are attained at infinite coupling constant $\lambda=\infty$. That this conjecture actually holds also in the non-perturbative regime of the model, \textit{i.e.}$\;$ at large values of $\lambda$, is explicitly demonstrated in \sect{sec:DepOfHiggsMassonLargeLam}, which then allows us to restrict the search for the upper mass bound to the setting $\lambda=\infty$.

Furthermore, the model has to be evaluated in the broken phase, \textit{i.e.}$\;$ at $\langle \varphi\rangle \neq 0$, to respect the observation of spontaneous symmetry breaking, albeit close to a second-order phase transition to a symmetric phase, in order to allow for arbitrarily large correlation lengths, as required in any attempt to push the cutoff parameter to arbitrarily large values. However, in the given lattice model the expectation value $\langle \varphi \rangle$ would always be identical to zero due to the symmetries in \eqs{eq:ChiralSymmetryTrafo1}{eq:ChiralSymmetryTrafo2}. The problem is that the lattice averages over {\it all} ground states of the theory, not only over the one which Nature has selected in the broken phase. To study the mechanism of spontaneous symmetry breaking nevertheless, one usually introduces an external current $J$, selecting then only one particular ground state. This current is finally removed after taking the thermodynamic limit, leading then to the existence of symmetric and broken phases with respect to the order parameter $\langle \varphi\rangle$ as desired.
An alternative approach, which was shown to be equivalent in the thermodynamic limit~\cite{Hasenfratz:1989ux,Hasenfratz:1990fu,Gockeler:1991ty}, is to define the vacuum expectation value (vev) $v$ as the expectation value of the {\textit{rotated}} field $\varphi^{rot}$ given by a global transformation of the original field $\varphi$ according to \begin{equation} \label{eq:GaugeRotation} \varphi^{rot}_x = U[\varphi] \varphi_x \end{equation} with the $\mbox{SU}(2)$ matrix $U[\varphi]$ selected for each configuration of field variables $\{\varphi_x\}$ such that \begin{equation} \label{eq:GaugeRotationRequirement} \sum\limits_x \varphi_x^{rot} = \left( \begin{array}{*{1}{c}} 0\\ \left|\sum\limits_x \varphi_x \right|\\ \end{array} \right). \end{equation} Here we use this second approach. According to the notation in \eq{eq:BosonicLatticeHiggsActionContNot}, which already includes a factor $1/2$, the relation between the vev $v$ and the expectation value of $\varphi^{rot}$ is then given as \begin{equation} \label{eq:DefOfVEV} \langle \varphi^{rot} \rangle = \left( \begin{array}{*{1}{c}} 0\\ v\\ \end{array} \right). \end{equation} In this setup the unrenormalized Higgs mode $h_x$ and the Goldstone modes $g^1_x,g^2_x,g^3_x,$ can then directly be read out of the rotated field according to \begin{equation} \label{eq:DefOfHiggsAndGoldstoneModes} \varphi_x^{rot} = \left( \begin{array}{*{1}{c}} g_x^2 + ig_x^1\\ v + h_x - i g_x^3\\ \end{array} \right). \end{equation} The great advantage of this approach is that no limit procedure $J\rightarrow 0$ has to be performed, which simplifies the numerical evaluation of the model tremendously. The physical scale of the lattice computation, \textit{i.e.}$\;$ the inverse lattice spacing $a^{-1}$, can then be determined by comparing the renormalized vev $v_r=v/\sqrt{Z_G}$ measured on the lattice with its phenomenologically known value according to \begin{eqnarray} \label{eq:FixationOfPhysScale} 246\, \mbox{GeV} &=& \frac{v_r}{a} \equiv \frac{v}{\sqrt{Z_G}\cdot a}, \end{eqnarray} where $Z_G$ denotes the Goldstone renormalization constant. The cutoff parameter $\Lambda$ of the underlying lattice regularization, which is directly associated to the lattice spacing $a$, can then be defined as \begin{equation} \label{eq:DefOfCutoffLambda} \Lambda = a^{-1}. \end{equation} Of course, this definition is not unique and other authors use different definitions, for instance $\Lambda=\pi/a$ motivated by the value of the momenta at the edge of the Brillouin zone. However, since the quantities that actually enter any lattice calculation are rather the lattice momenta $\tilde p_\mu=sin(p_\mu)$ instead of the momenta $p_\mu$, which are connected through the application of a sine function, it seems natural to choose the definition of the cutoff $\Lambda$ given in \eq{eq:DefOfCutoffLambda}. Next, the extraction technique for the Goldstone renormalization constant entering \eq{eq:FixationOfPhysScale} needs to be determined. In the Euclidean continuum the Goldstone and Higgs renormalization constants, more precisely their inverse values $Z^{-1}_G$ and $Z^{-1}_H$, are usually defined as the real part of the derivative of the inverse Goldstone and Higgs propagators in momentum space with respect to the continuous squared momentum $p_c^2$ at some scale $p_c^2=-\mu_G^2$ and $p_c^2=-\mu_H^2$, respectively. 
The restriction to the real part is introduced to make this definition applicable also in the case of an unstable Higgs boson, where the massless Goldstone modes induce a branch cut with discontinuous complex contributions to the propagator at negative values of $p_c^2$. This is the targeted definition that shall also be adopted to the later lattice calculations. On the lattice, however, the propagators are only defined at the discrete lattice momenta $p_\mu=2\pi n_\mu/L_{s,t}$, $n_\mu = 0, \hdots, L_{s,t}-1$ according to \begin{eqnarray} \tilde G_H(p) &=& \langle \tilde h_p \tilde h_{-p}\rangle, \\ \tilde G_G(p) &=& \frac{1}{3}\sum\limits_{\alpha=1}^3 \langle \tilde g^\alpha_p \tilde g^\alpha_{-p}\rangle, \end{eqnarray} where the Higgs and Goldstone fields in momentum representation read \begin{eqnarray} \tilde h_p = \frac{1}{\sqrt{V}}\sum\limits_x e^{-ipx} h_x &\mbox{ and }& \tilde g^\alpha_p = \frac{1}{\sqrt{V}}\sum\limits_x e^{-ipx} g^\alpha_x \end{eqnarray} with $V=L_s^3\cdot L_t$ denoting the lattice volume. Computing the derivative of the lattice propagators is thus not a well-defined operation. Moreover, the lattice propagators are not even functions of $p^2$, since rotational invariance is explicitly broken by the discrete lattice structure. To adopt the above described concept to the lattice nevertheless, some lattice scheme has to be introduced that converges to the continuum definitions of $Z_G$ and $Z_H$ in the limit $a\rightarrow 0$. Here, the idea is to use some analytical fit formulas $f_{G}(p)$, $f_{H}(p)$ derived from renormalized perturbation theory in the Euclidean continuum to approximate the measured lattice propagators $\tilde G_G(p)$ and $\tilde G_H(p)$ at small momenta $\hat p^2<\gamma$ (with $\hat p^2_\mu \equiv 4\sin^2(p_\mu/2)$) for some appropriate value of $\gamma$ such that the discretization errors are acceptable. The details of this fit procedure are discussed in \sects{sec:GoldPropAnalysisUpperBound}{sec:HiggsPropAnalysisUpperBound}. One can then define the analytically continued lattice propagators as \begin{eqnarray} \tilde G_G^{c}(p_c) = f_{G}(p_c) &\mbox{ and }& \tilde G_H^{c}(p_c) = f_{H}(p_c). \end{eqnarray} In the on-shell scheme the targeted Goldstone and Higgs renormalization constants $Z_G$ and $Z_H$ can then be defined (implicitly assuming an appropriate mapping $p_c\leftrightarrow p_c^2$) as \begin{eqnarray} Z^{-1}_G(\mu^2_G) &=& \derive{}{p_c^2} \mbox{Re}\left(\left[\tilde G^{c}_G(p_c^2)\right]^{-1}\right)\Big|_{p_c^2 = -\mu^2_G},\\ Z^{-1}_H(\mu^2_H) &=& \derive{}{p_c^2} \mbox{Re}\left(\left[\tilde G^{c}_H(p_c^2)\right]^{-1}\right)\Big|_{p_c^2 = -\mu^2_H}, \end{eqnarray} with $\mu^2_G=m^2_G$ and $\mu^2_H=m^2_H$, where the underlying physical masses $m_G$, $m_H$ are given by the poles of the respective propagators on the second Riemann sheet. To adopt this definition to the introduced lattice scheme we define the Higgs boson mass $m_H$, its decay width $\Gamma_H$, and the mass $m_G$ of the stable Goldstone bosons through \begin{eqnarray} \label{eq:DefOfHiggsAndGoldstoneMassByPole} \left[\tilde G^{c}_{H,II}(im_H+\Gamma_H/2,0,0,0)\right]^{-1} = 0, &\mbox{ and }& \left[\tilde G^{c}_G(im_G,0,0,0)\right]^{-1} = 0, \end{eqnarray} where $\tilde G^{c}_{H,II}(p_c)$ denotes the analytical continuation of $\tilde G^{c}_{H}(p_c)$ onto the second Riemann sheet. 
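It may be instructive to sketch how the latter lattice propagators are estimated in practice from the field configurations. The following minimal numpy sketch is purely illustrative (array layout and function name are our choice); it computes $\tilde G_H(p)$ from an ensemble of rotated Higgs field configurations, and the Goldstone propagator follows analogously with an additional average over the three Goldstone components.
\begin{verbatim}
import numpy as np

def higgs_propagator(h_configs):
    """Estimate G_H(p) = < h_p h_{-p} > from an ensemble of real field
    configurations h_configs with shape (n_cfg, L_t, L_s, L_s, L_s).
    Since h_x is real, h_{-p} is the complex conjugate of h_p."""
    V = np.prod(h_configs.shape[1:])
    # h_p = V^{-1/2} sum_x exp(-i p x) h_x  for  p_mu = 2 pi n_mu / L_mu
    h_p = np.fft.fftn(h_configs, axes=(1, 2, 3, 4)) / np.sqrt(V)
    return np.mean(np.abs(h_p) ** 2, axis=0)     # ensemble average

# usage with fake data (assumed ensemble, for illustration only):
fake = np.random.randn(10, 8, 4, 4, 4)
G_H = higgs_propagator(fake)
print(G_H.shape)        # (8, 4, 4, 4): one value per lattice momentum
\end{verbatim}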
Extracting the Higgs boson mass $m_H$ and its decay width $\Gamma_H$ from simulation data according to this definition would, however, require an explicit analytical continuation of the Higgs propagator onto the second Riemann sheet, which is beyond our ambitions in this study. Following the proposal in \Ref{Luscher:1988uq} the Goldstone and Higgs renormalization factors are rather determined at the scales $\mu_{G}^2 = m^2_{Gp}$ and $\mu_{H}^2 = m^2_{Hp}$ given by the masses $m_{Hp}$ and $m_{Gp}$, which will be referred to in the following as propagator masses in contrast to the pole masses $m_H$ and $m_G$. We thus define
\begin{eqnarray}
\label{eq:DefOfRenormalFactors}
Z_G \equiv Z_G(m^2_{Gp}) &\mbox{ and }& Z_H \equiv Z_H(m^2_{Hp}),
\end{eqnarray}
where the propagator masses $m_{Hp}$, $m_{Gp}$ are defined through a vanishing real part of the inverse propagators according to
\begin{eqnarray}
\label{eq:DefOfPropagatorMinkMass}
\mbox{Re}\left(\left[\tilde G_G^{c}(p_c^2)\right]^{-1}\right)\Big|_{p_c^2 = -m^2_{Gp}} = 0 &\mbox{ and }& \mbox{Re}\left(\left[\tilde G_H^{c}(p_c^2)\right]^{-1}\right)\Big|_{p_c^2 = -m^2_{Hp}} = 0.
\end{eqnarray}
The reasoning for selecting these latter definitions of the Higgs and Goldstone masses is that the required analytical continuation in the case of the Higgs propagator is much more robust, since it only needs to extend the measured lattice propagator to purely negative values of $p_c^2$ in contrast to the situation resulting from the definition in \eq{eq:DefOfHiggsAndGoldstoneMassByPole}. It is remarked here that the Goldstone propagator mass $m_{Gp}$ was only introduced for the sake of a uniform notation, since $m_{G}$ is identical to $m_{Gp}$, due to the Goldstone bosons being stable particles. As for the unstable Higgs boson, however, one finds that the discrepancy between the pole mass $m_H$ and the propagator mass $m_{Hp}$ is directly related to the size of the decay width $\Gamma_H$. In the weak coupling regime of the theory the two mass definitions $m_H$ and $m_{Hp}$ can thus be considered to coincide up to small perturbative corrections, due to a vanishing decay width in that limit. For the pure $\Phi^4$-theory the deviation between $m_{Hp}$ and $m_H$ has explicitly been worked out in renormalized perturbation theory~\cite{Luscher:1988uq}. In infinite volume the finding is
\begin{equation}
\label{eq:ConnectionMhMhp}
m_H = m_{Hp} \cdot \left(1+\frac{\pi^2}{288} (n-1)^2 \left[\frac{4!\cdot\lambda_r}{16\pi^2}\right]^2 + O(\lambda_r^3) \right),
\end{equation}
where $\lambda_r$ denotes the renormalized quartic self-coupling constant and $n$ is the number of components of the scalar field $\Phi$, \textit{i.e.}$\;$ $n=4$ for the case considered here. This calculation was performed in the pure $\Phi^4$-theory, thus neglecting any fermionic degrees of freedom, and for exactly massless Goldstone particles. However, one learns from this result that the definition of $m_{Hp}$ in \eq{eq:DefOfPropagatorMinkMass} as the Higgs boson mass is very reasonable at least for sufficiently small values of the renormalized coupling constants. The actual discrepancy between $m_H$ and $m_{Hp}$ as obtained by direct lattice computations of their respective definitions in \eq{eq:DefOfHiggsAndGoldstoneMassByPole} and \eq{eq:DefOfPropagatorMinkMass} will explicitly be examined in \sect{sec:HiggsPropAnalysisUpperBound} for some physically relevant parameter setups. It will then indeed be found to be negligible with respect to the reachable statistical accuracy.
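To illustrate the size of this correction, the bracket in \eq{eq:ConnectionMhMhp} can be evaluated numerically for $n=4$; the short Python fragment below does this for a few values of $\lambda_r$, which are chosen purely for illustration and are not results of our simulations.
\begin{verbatim}
import numpy as np

def mH_over_mHp(lambda_r, n=4):
    """Relative shift between pole and propagator mass according to
    the perturbative relation quoted above (higher orders neglected)."""
    return 1.0 + np.pi**2 / 288.0 * (n - 1)**2 \
               * (24.0 * lambda_r / (16.0 * np.pi**2))**2

for lam in (0.1, 0.5, 1.0):      # assumed illustrative values of lambda_r
    print(lam, mH_over_mHp(lam) - 1.0)
# the shift stays below one per cent even for lambda_r of order one
\end{verbatim}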
The definition of the renormalized quartic self-coupling constant $\lambda_r$ that was used in the derivation of \eq{eq:ConnectionMhMhp} is
\begin{equation}
\label{eq:DefOfRenQuartCoupling}
\lambda_r = \frac{m^2_{Hp}-m^2_{Gp}}{8v_r^2},
\end{equation}
which shall also be taken over to the considered Higgs-Yukawa model. In principle, it would also be possible to determine the renormalized quartic coupling constant $\lambda_r$ through the evaluation of the amputated, connected, one-particle-irreducible four-point function at a specified momentum configuration as it is usually done in perturbation theory. However, the signal-to-noise ratio of the corresponding lattice observable is suppressed by the lattice volume. It is thus extremely hard to measure the renormalized quartic coupling constant in lattice calculations by means of the direct evaluation of such four-point functions~\cite{Jansen:1988cw}. Instead, the alternative definition of $\lambda_r$ given in \eq{eq:DefOfRenQuartCoupling} will be adopted here. It is further remarked that this definition was shown~\cite{Luscher:1988uq} to coincide with the bare coupling parameter $\lambda$ to lowest order in the pure $\Phi^4$-theory.

Regarding the top and bottom quark fields, we are here only interested in the corresponding masses $m_t, m_b$. These can directly be obtained by studying the fermionic time correlation functions $C_f(\Delta t)$ at large Euclidean time separations $\Delta t$, where $f=t,b$ denotes the quark flavour here. On the lattice the latter time correlation functions can be defined as
\begin{eqnarray}
\label{eq:DefOfFermionTimeSliceCorr}
C_f(\Delta t) &=& \frac{1}{L_t\cdot L_s^6} \sum\limits_{t=0}^{L_t-1} \sum\limits_{\vec x, \vec y} \Big\langle 2\,\mbox{Re}\,\mbox{Tr}\,\left(f_{L,t+\Delta t, \vec x}\cdot \bar f_{R,t,\vec y}\right) \Big\rangle,
\end{eqnarray}
where the left- and right-handed spinors are given through the projection operators according to
\begin{eqnarray}
\label{eq:DefOfLeftHandedSpinors}
\left( \begin{array}{*{1}{c}} t\\ b\\ \end{array} \right)_L = \hat P_- \left( \begin{array}{*{1}{c}} t\\ b\\ \end{array} \right) &\mbox{ and }& (\bar t, \bar b)_R = (\bar t, \bar b) P_-.
\end{eqnarray}
It is remarked that the given fermionic correlation function would be identical to zero due to the exact lattice chiral symmetry obeyed by the considered Higgs-Yukawa model, if the scalar field $\varphi$ were not rotated according to \eq{eq:GaugeRotation} as discussed above. This rotation is implicitly assumed in the following. Furthermore, it is pointed out that the full {\textit{all-to-all}} correlator as defined in \eq{eq:DefOfFermionTimeSliceCorr} can be trivially computed. This all-to-all correlator yields very clean signals for the top and bottom quark mass determination. The still missing definition of the renormalized Yukawa coupling constants can now be provided as
\begin{eqnarray}
\label{eq:DefOfRenYukawaConst}
y_{t,r} = \frac{m_t}{v_r} &\mbox{ and }& y_{b,r} = \frac{m_b}{v_r},
\end{eqnarray}
reproducing the bare Yukawa coupling constants $y_{t,b}$ at lowest order. According to the presented simulation strategy the aim would thus be to tune the above renormalized Yukawa coupling constants such that their physically known values would be reproduced in the actual lattice computations.
However, in order to have an initial guess for the latter adjustment at hand, the tree-level relation
\begin{equation}
\label{eq:treeLevelTopMass}
y_{t,b} = \frac{m_{t,b}}{v_r}
\end{equation}
will be used throughout this work to set the bare Yukawa coupling constants in the lattice computations. Comparing the physical fermion masses actually generated in these lattice calculations with the targeted ones would then allow us to fine-tune the Yukawa coupling constants in an iterative refinement approach. However, it turns out that this tree-level fixation ansatz already yields quite satisfactory results regarding the discrepancy between the targeted and the actually observed quark masses with respect to the reached statistical accuracy.

\section{Analysis of the Goldstone propagator}
\label{sec:GoldPropAnalysisUpperBound}

The Goldstone renormalization constant $Z_G$ is required for determining the renormalized vacuum expectation value $v_r$ of the scalar field. It is thus needed for the fixation of the physical scale $a^{-1}$ of a given Monte-Carlo run according to \eq{eq:FixationOfPhysScale}. This renormalization constant has been defined in \eq{eq:DefOfRenormalFactors} through a derivative of the inverse Goldstone propagator. As already pointed out in \sect{sec:SimStratAndObs}, computing this derivative requires an analytical continuation $\tilde G_G^{c}(p_c)$ of the discrete lattice propagator, which was proposed to be obtained via a fit of the discrete lattice data.

\includeFigSingleMedium{lptdiagramsforpurephi4goldstonepropagator} {fig:GoldstonePropDiagrams} {Illustration of the diagrams that contribute to the continuous space-time Goldstone propagator $\tilde G_G(p_c)$ in the Euclidean pure $\Phi^4$-theory at one-loop order. } {Diagrams contributing to the continuous space-time Goldstone propagator in the pure $\Phi^4$-theory at one-loop order.}

The idea here is to construct an appropriate fit function $f_G(p)$ based on a perturbative calculation of the Goldstone propagator $\tilde G_G(p_c)$ in continuous Euclidean space-time. In this study the aforementioned fit function will only play the role of an effective description of the numerical data to allow for the necessary analytical continuation. For its construction we can therefore impose a set of simplifications. In particular, we restrict the consideration here to the pure $\Phi^4$-theory. The reasoning behind this simplification is that the purely bosonic four-point interaction is expected to yield the dominant contributions to the Goldstone propagator in the targeted strong coupling regime with infinite bare $\lambda$ but only moderate values of the bare Yukawa coupling constants. To one-loop order the only momentum-dependent contribution to the Goldstone propagator is thus given by the mixed Higgs-Goldstone loop illustrated on the right-hand side of \fig{fig:GoldstonePropDiagrams}, where the system has been assumed to be in the broken phase, as desired.
At one-loop order the result for the renormalized Goldstone propagator then reads \begin{eqnarray} \label{eq:GoldstonePropRenPT} \tilde G^{-1}_G(p_c) &=& p_c^2 + m^2_{G} + 8\pi^{-2}\lambda_r^2 v_r^2 \cdot \left[ {\mathcal I}(p_c^2,m_H^2,m_G^2) - {\mathcal I}(-m_{G}^2,m_H^2,m_G^2)\right] \end{eqnarray} where the one-loop contribution ${\mathcal I}(p_c^2, m_H^2, m_G^2)$ is given as \begin{eqnarray} 2{\mathcal I}(p_c^2,m_H^2,m_G^2) &=& \frac{\sqrt{q}}{p_c^2} \cdot \mbox{arctanh}\left( \frac{p_c^2+m^2_{G}-m^2_{H}}{\sqrt{q}} \right) + \frac{m^2_{G} - m^2_{H}}{2p_c^2}\cdot \log\left( \frac{m^2_{H}}{m^2_{G}}\right)\quad \quad \\ &+&\frac{\sqrt{q}}{p_c^2} \cdot \mbox{arctanh}\left(\frac{p_c^2+m^2_{H}-m^2_{G}}{\sqrt{q}}\right) \quad \mbox{with}\nonumber\\ q &=& \left(m^2_{G} - m^2_{H} + p_c^2 \right)^2 + 4m^2_{H} p_c^2. \end{eqnarray} Concerning the singularities of this expression it is noteworthy to add that the given formula can be shown to be finite at $p_c=0$ for $m_G\neq 0$, as desired. In principle, one can directly employ the expression in \eq{eq:GoldstonePropRenPT} as the sought-after fit function $f^{-1}_G(p)$. For clarification it is remarked at this point that instead of fitting the lattice propagator $\tilde G_G(p)$ itself with $f_G(p)$, it is always the inverse propagator $\tilde G_G^{-1}(p)$ that is fitted with $f_G^{-1}(p)\equiv 1/f_G(p)$ in the following. However, for the actual fit procedure of the lattice data a modified version of \eq{eq:GoldstonePropRenPT} is used given as \begin{equation} \label{eq:GoldstoneFitAnsatz} f^{-1}_G(p^2) = \frac{p^2 + \bar m^2_{G} + A\cdot \left[ {\mathcal I}(p^2,\bar m_H^2, \bar m_G^2) - {\mathcal I}(0,\bar m_H^2, \bar m_G^2)\right]}{Z_0}, \end{equation} where an appropriate mapping $p^2\leftrightarrow p$ is implicit and $A$, $Z_0$, $\bar m_G$, $\bar m_H$ are the free fit parameters. Two modifications have been applied here to the original result. Firstly, the constant term ${\mathcal I}(-\bar m_{G}^2,\bar m_H^2,\bar m_G^2)$ in \eq{eq:GoldstonePropRenPT} has been replaced by ${\mathcal I}(0,\bar m_H^2,\bar m_G^2)$ simply for convenience. Since the Goldstone mass is close to zero anyhow, this simplification is insignificant for a practical fit procedure. For clarification it is recalled that in the presented approach the Goldstone mass $m_G$ is actually not determined through the nominal value of the latter fit parameter $\bar m_G$ itself, but through the pole of the resulting analytical continuation $\tilde G_G^{c}(p_c)$ according to \eq{eq:DefOfHiggsAndGoldstoneMassByPole}. This is also indicated by the chosen notation introducing the symbol $\bar m_G$ in addition to the actual Goldstone mass $m_G$. More interestingly, however, a global factor $Z_0$ has been included in the denominator of \eq{eq:GoldstoneFitAnsatz} in the spirit of a renormalization constant. This modification is purely heuristic and its sole purpose is to provide an effective description of the so-far neglected fermionic contributions, which is all we need at this point. Of course, it would be more appropriate to construct a fit ansatz from the renormalized result of the Goldstone propagator derived in the full Higgs-Yukawa sector. This would indeed place the fit procedure on an even better conceptual footing. However, it will turn out, that the given ansatz already works satisfactorily well for our purpose, which is not too surprising, due to the aforementioned dominance of the quartic coupling term in that model parameter space being of physical interest here. 
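For the reader's convenience, a minimal Python transcription of this fit ansatz is given below. It is meant purely as an illustration of \eq{eq:GoldstoneFitAnsatz} and not as the analysis code actually employed in this work; the function names are ours, and the $p^2=0$ subtraction term is approximated by evaluating ${\mathcal I}$ at a small momentum, exploiting the finiteness of the expression in the limit $p^2\rightarrow 0$ mentioned above.
\begin{verbatim}
import numpy as np

def curly_I(psq, mH2, mG2):
    """One-loop function I(p^2, m_H^2, m_G^2) of the Goldstone propagator
    in the Euclidean pure Phi^4 theory, as given in the text."""
    q  = (mG2 - mH2 + psq) ** 2 + 4.0 * mH2 * psq
    sq = np.sqrt(q)
    val = (sq / psq * np.arctanh((psq + mG2 - mH2) / sq)
           + (mG2 - mH2) / (2.0 * psq) * np.log(mH2 / mG2)
           + sq / psq * np.arctanh((psq + mH2 - mG2) / sq))
    return 0.5 * val

def fG_inv(psq, mbarG2, mbarH2, A, Z0, eps=1.0e-6):
    """Fit ansatz f_G^{-1}(p^2); I(0,...) is replaced by I(eps,...),
    since the one-loop function has a finite p^2 -> 0 limit."""
    return (psq + mbarG2 + A * (curly_I(psq, mbarH2, mbarG2)
                                - curly_I(eps, mbarH2, mbarG2))) / Z0

# fG_inv can, e.g., be handed to a standard least-squares routine such as
# scipy.optimize.curve_fit with (mbarG2, mbarH2, A, Z0) as free parameters.
\end{verbatim}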
More important seems to be the question what part of the lattice Goldstone propagator $\tilde G_G(p)$ one should actually include into the fit procedure. It was already pointed out in \sect{sec:SimStratAndObs} that the consideration of the lattice propagator has to be restricted to small lattice momenta in order to suppress contaminations arising from discretization effects. For that purpose the constant $\gamma$ has been introduced specifying the set of momenta underlying the fit approach according to $\hat p^2 \le \gamma$. In principle, one would want to choose $\gamma$ as small as possible. In practice, however, the fit procedure becomes increasingly unstable when lowering the value of $\gamma$, because less and less data are then included within the fit. In the following example lattice computations, demonstrating the evaluation approach for $Z_G$ and $m_G$, we will consider the settings $\gamma=1$, $\gamma=2$, and $\gamma=4$. To make the discretization effects associated to these not particularly small values of $\gamma$ less prominent in the intended fit procedure, the inverse lattice propagator $\tilde G^{-1}_G(p)$ is actually fitted with $f^{-1}_G(\hat p^2)$ instead of $f^{-1}_G(p^2)$, being a function of the squared lattice momentum $\hat p^2$, which is completely justified in the limit $\gamma\rightarrow 0$. \includeTab{|cccccccc|} { \latVolExp{L_s}{L_t} & $N_f$ & $\kappa$ & $\hat \lambda$ & $\hat y_t$ & $\hat y_b/\hat y_t$ & $v$ & $\Lambda$ \\ \hline \latVol{32}{32} & $1$ & $0.30039$ & $\infty$ & $0.55139$ & $1$ & $ 0.1008(3)\,\,$ & $\GEV{ 2373.0\pm 6.4}$\\ \latVol{32}{32} & $1$ & $0.30400$ & $\infty$ & $0.55038$ & $1$ & $ 0.1547(1)\,\,$ & $\GEV{ 1548.1\pm 1.8}$\\ } {tab:Chap71EmployedRuns} {The model parameters of the Monte-Carlo runs constituting the testbed for the subsequently discussed computation schemes are presented together with the obtained values of the vacuum expectation value $v$ and the cutoff $\Lambda$ determined by \eq{eq:DefOfCutoffLambda}. The degenerate Yukawa coupling constants have been chosen here according to the tree-level relation in \eq{eq:treeLevelTopMass} aiming at the reproduction of the phenomenologically known top quark mass. } {Model parameters of the Monte-Carlo runs constituting the testbed for the considered computation schemes at large quartic coupling constants.} The Goldstone propagators obtained in the lattice calculations specified in \tab{tab:Chap71EmployedRuns} are presented in \fig{fig:GoldstonePropExampleArStrongCoup}. These numerical data of the inverse Goldstone propagator $\tilde G_G^{-1}(p)$ have been fitted with the fit formula $f^{-1}_G(\hat p^2)$ given in \eq{eq:GoldstoneFitAnsatz}. One can observe already from the graphical presentation in \fig{fig:GoldstonePropExampleArStrongCoup} that the considered fit ansatz $f_G(\hat p^2)$ describes the numerical data significantly better than the simple linear fit formula \begin{equation} \label{eq:GoldstoneFitAnsatzLinear} l_G^{-1}(\hat p^2) = \frac{\hat p^2 + m_G^2}{Z_G}, \end{equation} which is additionally considered here for the only purpose of demonstrating the quality of the applied fit ansatz $f_G(\hat p^2)$. 
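For definiteness, the momentum selection underlying the fit can be summarized by the following short numpy sketch; it is purely illustrative and the function name is our choice.
\begin{verbatim}
import numpy as np

def momenta_below_cut(L_s, L_t, gamma):
    """Boolean mask of lattice momenta with hat-p^2 <= gamma, where
    hat-p^2 = sum_mu 4 sin^2(p_mu/2) and p_mu = 2 pi n_mu / L_mu."""
    dims = (L_t, L_s, L_s, L_s)
    grids = np.meshgrid(*[2.0 * np.pi * np.arange(L) / L for L in dims],
                        indexing="ij")
    phat2 = sum(4.0 * np.sin(g / 2.0) ** 2 for g in grids)
    return phat2 <= gamma, phat2

mask1, _ = momenta_below_cut(32, 32, 1.0)
mask4, _ = momenta_below_cut(32, 32, 4.0)
print(mask1.sum(), mask4.sum())   # gamma = 4 keeps far more momenta than gamma = 1
\end{verbatim}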
To find an optimal setting for the threshold value $\gamma$, the dependence of the fit results on the latter parameter is listed in \tab{tab:GoldstonePropExampleResultsAtStrongCoup}, where the presented Goldstone mass $m_G$ and the renormalization factor $Z_G$ have been obtained according to \eq{eq:DefOfHiggsAndGoldstoneMassByPole} and \eq{eq:DefOfRenormalFactors} from the analytical continuation of the lattice Goldstone propagator given by $\tilde G^{c}_G(p_c) = f_G(p_c)$ and $\tilde G^{c}_G(p_c) = l_G(p_c)$, respectively. At first glance one notices that the linear ansatz $l_G(\hat p^2)$ yields more stable results than $f_G(\hat p^2)$. These results are, however, inconsistent with themselves when varying the parameter $\gamma$. One can also observe in \tab{tab:GoldstonePropExampleResultsAtStrongCoup} that the associated average squared residual per degree of freedom $\chi^2/dof$ significantly differs from one at the selected values of $\gamma$, making apparent that the simple linear fit ansatz is not suited for the reliable determination of the Goldstone propagator properties. \includeFigTrippleDouble {goldstonepropagatorkap030039l32pmax16}{goldstonepropagatorkap030039l32pmax1}{goldstonepropagatorkap030039l32pmaxsmallest} {goldstonepropagatorkap030400l32pmax16}{goldstonepropagatorkap030400l32pmax1}{goldstonepropagatorkap030400l32pmaxsmallest} {fig:GoldstonePropExampleArStrongCoup} {The inverse lattice Goldstone propagators calculated in the Monte-Carlo runs specified in \tab{tab:Chap71EmployedRuns} are presented versus the squared lattice momenta $\hat p^2$ together with the respective fits obtained from the fit approaches $f^{-1}_G(\hat p^2)$ in \eq{eq:GoldstoneFitAnsatz} (red solid line) and $l^{-1}_G(\hat p^2)$ in \eq{eq:GoldstoneFitAnsatzLinear} (blue dashed line) with $\gamma=4.0$. From left to right the three panel columns display the same data zooming in, however, on the vicinity of the origin at $\hat p^2 = 0$. } {Examples of Goldstone propagators at large quartic coupling constants.} \includeTabNoHLines{|c|c|c|c|c|c|c|c|}{ \cline{3-8} \multicolumn{2}{c|}{}& \multicolumn{3}{c|}{fit ansatz $f_G(\hat p^2)$} & \multicolumn{3}{c|}{linear fit ansatz $l_G(\hat p^2)$}\\ \hline $\kappa$ & $\gamma$ & $Z_{G}$ & $m_{G}$ & $\chi^2/dof$ & $Z_{G}$ & $m_{G}$ & $\chi^2/dof$ \\ \hline $0.30039$ & $1.0$ & 0.9380(107) & 0.027(15) & 1.00 & 0.9422(5) & 0.067(2) & 2.61 \\ $0.30039$ & $2.0$ & 0.9431(52) & 0.028(11) & 0.81 & 0.9507(3) & 0.089(2) & 4.79 \\ $0.30039$ & $4.0$ & 0.9457(27) & 0.033(8) & 0.94 & 0.9585(2) & 0.114(2) & 6.19 \\\hline $0.30400$ & $1.0$ & 0.9400(90) & 0.029(10) & 1.41 & 0.9403(4) & 0.066(1) & 4.40 \\ $0.30400$ & $2.0$ & 0.9426(36) & 0.032(7) & 1.07 & 0.9476(2) & 0.084(1) & 6.53 \\ $0.30400$ & $4.0$ & 0.9478(18) & 0.038(4) & 1.06 & 0.9559(1) & 0.111(1) & 9.67 \\\hline } {tab:GoldstonePropExampleResultsAtStrongCoup} {The results on the Goldstone renormalization factor $Z_G$ and the Goldstone mass $m_G$, obtained from the fit approaches $f_G(\hat p^2)$ and $l_G(\hat p^2)$ as defined in \eq{eq:GoldstoneFitAnsatz} and \eq{eq:GoldstoneFitAnsatzLinear}, are listed for several settings of the parameter $\gamma$ together with the corresponding average squared residual per degree of freedom $\chi^2/dof$ associated to the respective fit. The underlying Goldstone lattice propagators have been calculated in the Monte-Carlo runs specified in \tab{tab:Chap71EmployedRuns}. 
} {Comparison of the Goldstone propagator properties obtained from different extraction schemes at large quartic coupling constants.} In contrast to that the more elaborate fit ansatz $f_G(\hat p^2)$ yields much better values of $\chi^2/dof$ being close to the expected value of one as can be seen in \tab{tab:GoldstonePropExampleResultsAtStrongCoup}. Moreover, the results on the renormalization constant $Z_G$ and the Goldstone mass $m_G$ obtained from this ansatz remain consistent with respect to the specified errors when varying the constant $\gamma$. In the following the aforementioned quantities $Z_G$ and $m_G$ will therefore always be determined by means of the here presented method based on the fit ansatz $f_G(\hat p^2)$ with a threshold value of $\gamma=4$, since this setting yields the most stable results, while still being consistent with the findings obtained at smaller values of $\gamma$. \section{Analysis of the Higgs propagator} \label{sec:HiggsPropAnalysisUpperBound} Concerning the analysis of the Higgs propagator we will follow the same strategy as in the previous section. Examples of the lattice Higgs propagator as obtained in the Monte-Carlo runs specified in \tab{tab:Chap71EmployedRuns} are presented in \fig{fig:HiggsPropExampleAtStrongCoup}. These numerical data have been fitted with the ansatz \begin{eqnarray} \label{eq:HiggsPropFitAnsatz} f^{-1}_H(\hat p^2) &=& \frac{\hat p^2 + \bar m_H^2 + A\cdot \left[ 36 \left( {\mathcal J}(\hat p^2, \bar m_H^2) - D_{H0}\right) + 12 \left( {\mathcal J}(\hat p^2, \bar m_G^2) - D_{G0}\right) \right] }{Z_0}, \quad\quad \end{eqnarray} derived from renormalized perturbation theory in the Euclidean pure $\Phi^4$-theory at one-loop order. The restriction to the pure $\Phi^4$-theory is again motivated by the same arguments already discussed in the preceding section. In the above formula the one-loop contribution ${\mathcal J}(p^2, \bar m^2)$ is defined as \begin{eqnarray} \label{eq:DefOfContEuc1LoopBosContrib} {\mathcal J}(p^2, \bar m^2) &=& \frac{\mbox{arctanh}\left( q \right)}{q}, \quad q = \sqrt{\frac{p^2}{4 \bar m^2 + p^2}}, \end{eqnarray} the constants $D_{H0}$, $D_{G0}$ are given as \begin{eqnarray} D_{H0} &=& {\mathcal J}(0, \bar m_H^2) = 1, \\ D_{G0} &=& {\mathcal J}(0, \bar m_G^2) = 1, \end{eqnarray} and $\bar m_H^2$, $A$, $Z_0$ are the free fit parameters. The parameter $\bar m_G^2$ is not treated as a free parameter here. Instead it is fixed to the value of $m_G$ resulting from the analysis of the Goldstone propagator by the method described in the previous section. The sole purpose of this approach is to achieve higher stability in the considered fit procedure, which otherwise would yield here only unsatisfactory results with respect to the associated statistical uncertainties. Again, one can observe, however less clearly as compared to the previously discussed examples of the case of the Goldstone propagator, that the more elaborate fit ansatz $f_H(\hat p^2)$ describes the lattice data more accurately than the simple linear fit approach \begin{equation} \label{eq:HiggsPropFitAnsatzLinear} l^{-1}_H(\hat p^2) = \frac{\hat p^2 + m_H^2}{Z_H}. \end{equation} This is better observable in the lower row of \fig{fig:HiggsPropExampleAtStrongCoup} than in the upper row, where the differences tend to be rather negligible. 
The reason why the observed differences between the two fit approaches are less pronounced here, as compared to the situation in the preceding section, simply is, that the threshold value $\gamma$ was chosen here to be $\gamma=1$ which will be motivated below. This setting of $\gamma$ is much smaller than the value underlying the previously discussed examples of the Goldstone propagators and causes the linear fit to come closer to the more elaborate ansatz $f_H(\hat p^2)$. \includeFigTrippleDouble {higgspropagatorkap030039l32pmax16}{higgspropagatorkap030039l32pmax1}{higgspropagatorkap030039l32pmaxsmallest} {higgspropagatorkap030400l32pmax16}{higgspropagatorkap030400l32pmax1}{higgspropagatorkap030400l32pmaxsmallest} {fig:HiggsPropExampleAtStrongCoup} {The inverse lattice Higgs propagators calculated in the Monte-Carlo runs specified in \tab{tab:Chap71EmployedRuns} are presented versus the squared lattice momenta $\hat p^2$ together with the respective fits obtained from the fit approaches $f^{-1}_H(\hat p^2)$ in \eq{eq:HiggsPropFitAnsatz} (red solid line) and $l^{-1}_H(\hat p^2)$ in \eq{eq:HiggsPropFitAnsatzLinear} (blue dashed line) with $\gamma=1.0$. From left to right the three panel columns display the same data zooming in, however, on the vicinity of the origin at $\hat p^2 = 0$. } {Examples of Higgs propagators at large quartic coupling constants.} The Higgs propagator mass $m_{Hp}$ defined in \eq{eq:DefOfPropagatorMinkMass} and the Higgs pole mass $m_H$ together with its associated decay width $\Gamma_H$ given by the pole of the propagator on the second Riemann sheet according to \eq{eq:DefOfHiggsAndGoldstoneMassByPole} can then be obtained by defining the analytical continuation of the lattice propagator as $\tilde G_H^{c}(p_c) = f_H(p_c)$ and $\tilde G_H^{c}(p_c) = l_H(p_c)$, respectively. The results arising from the considered fit procedures are listed in \tab{tab:HiggsPropExampleResultsAtStrongCoup} for several values of the threshold value $\gamma$. However, since the linear function $l_H(p_c)$ can not exhibit a branch cut structure, the pole mass equals the propagator mass and the decay width is identical to zero when applying the linear fit approach. That is the reason why only the Higgs boson mass $m_H$ is presented in the latter scenario. We further remark that the values of $\Gamma_H$ arising along with the determination of $m_H$ are -- as expected -- rather unstable due to the required analytical continuation of the propagator onto the second Riemann sheet. We therefore do not present these numbers here. One observes in \tab{tab:HiggsPropExampleResultsAtStrongCoup} that the Higgs boson masses obtained from the linear fit ansatz $l_H(\hat p^2)$ are again inconsistent with the respective results obtained at varying values of the threshold parameter $\gamma$, thus rendering this latter approach unsuitable for the description of the Higgs propagator. This becomes also manifest through the presented values of the average squared residual per degree of freedom $\chi^2/dof$ associated to the linear ansatz, which are clearly off the expected value of one (with the exception of the case of $\gamma=0.5$). 
\includeTabNoHLines{|c|c|c|c|c|c|c|}{ \cline{3-7} \multicolumn{2}{c|}{} & \multicolumn{3}{c|}{Fit ansatz $f_H(\hat p^2)$} & \multicolumn{2}{c|}{Fit ansatz $l_H(\hat p^2)$} \\ \hline $\kappa$ & $\gamma$ & $m_{Hp}$ & $m_{H}$ & $\chi^2/dof$ & $m_{H}$ & $\chi^2/dof$ \\ \hline 0.30039 & 0.5 & 0.253(2) & 0.296(83) & 1.17 & 0.253(2) & 1.13\\ 0.30039 & 1.0 & 0.252(2) & 0.253(2) & 1.20 & 0.261(2) & 1.62\\ 0.30039 & 2.0 & 0.246(2) & 0.249(2) & 1.09 & 0.273(1) & 2.58\\ \hline 0.30400 & 0.5 & 0.405(3) & 0.406(3) & 1.43 & 0.399(2) & 1.75\\ 0.30400 & 1.0 & 0.409(1) & 0.410(1) & 1.16 & 0.409(1) & 2.23\\ 0.30400 & 2.0 & 0.409(1) & 0.412(1) & 1.27 & 0.423(1) & 4.63 \\ \hline } {tab:HiggsPropExampleResultsAtStrongCoup} {The results on the Higgs propagator mass $m_{Hp}$ and the Higgs pole mass $m_H$ obtained from the fit approaches $f_H(\hat p^2)$ in \eq{eq:HiggsPropFitAnsatz} and $l_H(\hat p^2)$ in \eq{eq:HiggsPropFitAnsatzLinear} are listed for several settings of the parameter $\gamma$ together with the corresponding average squared residual per degree of freedom $\chi^2/dof$ associated to the respective fit. For the linear fit ansatz $l_H(\hat p^2)$ only the pole mass is presented, since one finds $m_{Hp}\equiv m_H$ when constructing the analytical continuation $\tilde G_H^c(p_c)$ through $l_H(p_c)$. The underlying Higgs lattice propagators have been calculated in the Monte-Carlo runs specified in \tab{tab:Chap71EmployedRuns}. } {Comparison of the Higgs propagator properties obtained from different extraction schemes at large quartic coupling constants.} Again the situation is very different in case of the more elaborate fit ansatz $f_H(\hat p^2)$ yielding significantly smaller values of $\chi^2/dof$. The presented results on the propagator mass $m_{Hp}$ as well as the pole mass $m_H$ are also in much better agreement with the corresponding values obtained at varied threshold parameter $\gamma$. Moreover, the values of $m_{Hp}$ and $m_H$ are consistent with each other with respect to the given errors, finally justifying the identification of the Higgs boson mass with the propagator mass $m_{Hp}$. From the findings presented in \tab{tab:HiggsPropExampleResultsAtStrongCoup} one can conclude that selecting the threshold value to be $\gamma=1$ for the analysis of the Higgs propagator is a very reasonable choice, which leads to consistent and satisfactory results. This is the setting that will be used for the subsequent investigation of the upper Higgs boson mass bounds to determine the propagator mass $m_{Hp}$. It is further remarked that the here chosen value of $\gamma$ is much smaller than the value $\gamma=4$ selected in the preceding section for the analysis of the Goldstone propagator. While this large setting worked well in the latter scenario, it leads to less consistent results in the here considered case and has therefore been excluded from the given presentation. \section{Dependence of the Higgs boson mass on the bare coupling constant $\lambda$} \label{sec:DepOfHiggsMassonLargeLam} We now turn to the question whether the largest Higgs boson mass is indeed obtained at infinite bare quartic coupling constant for a given set of quark masses and a given cutoff $\Lambda$ as one would expect from perturbation theory. 
Since perturbation theory may not be trustworthy in the regime of large bare coupling constants, the actual dependence of the Higgs boson mass on the bare quartic coupling constant $\lambda$ in the scenario of strong interactions is explicitly checked here by means of direct Monte-Carlo calculations. The final answer of what bare coupling constant produces the largest Higgs boson mass will then be taken as input for the investigation of the upper mass bound in the subsequent section. For this purpose some numerical results on the Higgs propagator mass $m_{Hp}$ are plotted versus the bare quartic coupling constant $\lambda$ in \fig{fig:StrongCoulingHiggsMassDepOnQuartCoup}a. The presented data have been obtained for a cutoff that was intended to be kept constant at approximately $\Lambda\approx \GEV{1540}$ by an appropriate tuning of the hopping parameter, while the degenerate Yukawa coupling constants were fixed according to the tree-level relation in \eq{eq:treeLevelTopMass} aiming at the reproduction of the top quark mass. One clearly observes that the Higgs boson mass monotonically rises with increasing values of the bare coupling constant $\lambda$ until it finally converges to the $\lambda=\infty$ result, which is depicted by the horizontal line in the presented plot. From this result one can conclude that the largest Higgs boson mass is indeed obtained at infinite bare quartic coupling constant, as expected. The forthcoming search for the upper mass bound will therefore be restricted to the scenario of $\lambda=\infty$. \includeFigDouble{lambdascanhiggsmassatlambdalarge}{lambdascanrenlamcoupatlambdalarge} {fig:StrongCoulingHiggsMassDepOnQuartCoup} {The Higgs boson mass $m_{Hp}$ and the renormalized quartic coupling constant $\lambda_r$ are shown versus the bare coupling constant $\lambda$ in panels (a) and (b), respectively. These results have been obtained in direct Monte-Carlo calculations on a \lattice{16}{32} with the degenerate Yukawa coupling constants fixed according to the tree-level relation in \eq{eq:treeLevelTopMass} aiming at the reproduction of the top quark mass. The hopping parameter was tuned with the intention to hold the cutoff constant, while the actually obtained values of $\Lambda$ fluctuate here between $\GEV{1504}$ and $\GEV{1549}$. The horizontal lines depict the corresponding results at infinite bare coupling constant $\lambda=\infty$, and the highlighted bands mark the associated statistical uncertainties. } {Dependence of the Higgs boson mass $m_{Hp}$ and the renormalized quartic coupling constant $\lambda_r$ on the bare quartic coupling constant $\lambda$.} Furthermore, the corresponding behaviour of the renormalized quartic coupling constant $\lambda_r$ as defined in \eq{eq:DefOfRenQuartCoupling} is presented in \fig{fig:StrongCoulingHiggsMassDepOnQuartCoup}b. As expected one observes a monotonically rising dependence of $\lambda_r$ on the bare coupling constant $\lambda$, eventually converging to the $\lambda=\infty$ result depicted by the horizontal line. \section{Results on the upper Higgs boson mass bound} \label{chap:ResOnUpperBound} We now turn to the actually intended non-perturbative determination of the cutoff-dependent upper Higgs boson mass bound $m_H^{up}(\Lambda)$. 
Given the knowledge about the dependence of the Higgs boson mass on the bare quartic self-coupling constant $\lambda$ the search for the desired upper mass bound can safely be restricted to the scenario of an infinite bare quartic coupling constant, \textit{i.e.}$\;$ $\lambda=\infty$. Moreover, we will restrict the investigation here to the mass degenerate case with $y_t=y_b$, since the fermion determinant $\det({\mathcal M})$ can be proven to be real in this scenario as discussed in \sect{sec:modelDefinition}. Concerning the cutoff parameters $\Lambda$ that are reachable with the intended lattice calculations, a couple of restrictions limit the range of the accessible energy scales. On the one hand all particle masses have to be small compared to $\Lambda$ to avoid unacceptably large cutoff effects, on the other hand all masses have to be large compared to the inverse lattice side lengths to bring the finite volume effects to a tolerable level. As a minimal requirement we demand here that all particle masses $\hat m\in\{m_t, m_b, m_H \}$ in lattice units fulfill \begin{eqnarray} \label{eq:RequirementsForLatMass} \hat m < 0.5& \quad \mbox{and} \quad & \hat m\cdot L_{s,t}>2, \end{eqnarray} which already is a rather loose condition in comparison with the common situation in QCD, where one usually demands at least $\hat m \cdot L_{s,t}>3$. In this model, however, the presence of massless Goldstone modes is known to induce {\it algebraic} finite size effects, which is why it is not meaningful to impose a much stronger constraint in \eq{eq:RequirementsForLatMass}, since the quantity $\hat m\cdot L_{s,t}$ only controls the strength of the exponentially suppressed finite size effects caused by the massive particles. Employing a top mass of $\GEV{175}$ and a Higgs boson mass of roughly $\GEV{700}$, which will turn out to be justified after the upper mass bound has eventually been established, it should therefore be feasible to reach energy scales between $\GEV{1400}$ and $\GEV{2800}$ on a \latticeX{32}{32}{.} For the purpose of investigating the cutoff dependence of the upper mass bound a series of direct Monte-Carlo calculations has been performed with varying hopping parameters $\kappa$ associated to cutoffs covering approximately the given range of reachable energy scales. At each value of $\kappa$ the Monte-Carlo computation has been rerun on several lattice sizes to examine the respective strength of the finite volume effects, ultimately allowing for the infinite volume extrapolation of the obtained lattice results. In addition, a corresponding series of Monte-Carlo calculations has been performed in the pure $\Phi^4$-theory, which will finally allow to address the question for the fermionic contributions to the upper Higgs boson mass bound. The model parameters underlying these two series of lattice calculations are presented in \tab{tab:SummaryOfParametersForUpperHiggsMassBoundRuns}. 
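For orientation, the quoted window of accessible energy scales follows directly from the two requirements in \eq{eq:RequirementsForLatMass}. A minimal numerical check is sketched below (Python; the particle masses and the lattice extent are just the illustrative values quoted above):
\begin{verbatim}
# hat m = m / Lambda on the lattice, so the two conditions of
# eq. (RequirementsForLatMass) translate into bounds on Lambda = 1/a.
m_t, m_H, L = 175.0, 700.0, 32        # GeV, GeV, lattice extent

Lambda_min = 2.0 * m_H                # hat m_H < 0.5   =>  Lambda > 2 m_H
Lambda_max = m_t * L / 2.0            # hat m_t * L > 2 =>  Lambda < m_t L / 2
print(Lambda_min, Lambda_max)         # -> 1400.0 2800.0
\end{verbatim}
The lower end of the window is thus dictated by the heaviest particle (the Higgs boson), the upper end by the lightest massive particle (the top quark).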
\includeTab{|ccccccccc|} { $\kappa$ & $L_s$ & $L_t$ & $N_f$ & $\hat \lambda$ & $\hat y_t$ & $\hat y_b/\hat y_t$ & $1/v$ & $\Lambda$ \\ \hline 0.30039 & 12,16,20,24,32 & 32 & 1 & $\infty$ & 0.55139 & 1 & $\approx 7.7$ & $\GEV{\approx 2370 }$\\ 0.30148 & 12,16,20,24,32 & 32 & 1 & $\infty$ & 0.55239 & 1 & $\approx 6.5$ & $\GEV{\approx 1990 }$\\ 0.30274 & 12,16,20,24,32 & 32 & 1 & $\infty$ & 0.55354 & 1 & $\approx 5.6$ & $\GEV{\approx 1730 }$\\ 0.30400 & 12,16,20,24,32 & 32 & 1 & $\infty$ & 0.55470 & 1 & $\approx 5.0$ & $\GEV{\approx 1550 }$\\ \hline 0.30570 & 12,16,20,24,32 & 32 & 1 & $\infty$ & 0 & -- & $\approx 9.0$ & $\GEV{\approx 2810 }$\\ 0.30680 & 12,16,20,24,32 & 32 & 1 & $\infty$ & 0 & -- & $\approx 7.1$ & $\GEV{\approx 2220 }$\\ 0.30780 & 12,16,20,24,32 & 32 & 1 & $\infty$ & 0 & -- & $\approx 6.2$ & $\GEV{\approx 1910 }$\\ 0.30890 & 12,16,20,24,32 & 32 & 1 & $\infty$ & 0 & -- & $\approx 5.5$ & $\GEV{\approx 1700 }$\\ 0.31040 & 12,16,20,24,32 & 32 & 1 & $\infty$ & 0 & -- & $\approx 4.9$ & $\GEV{\approx 1500 }$\\ } {tab:SummaryOfParametersForUpperHiggsMassBoundRuns} {The model parameters of the Monte-Carlo runs underlying the subsequent lattice calculation of the upper Higgs boson mass bound are presented. In total, a number of 45 Monte-Carlo runs have been performed for that purpose. The available statistics of generated field configurations $N_{Conf}$ varies depending on the respective lattice volume. In detail we have $N_{Conf}\approx 20,000$ for $12\le L_s\le 16$, $N_{Conf}\approx10,000-15,000$ for $L_s = 20$, $N_{Conf}\approx8,000-16,000$ for $L_s=24$, $N_{Conf}\approx 3,000-5,000$ for $L_s=32$. The numerically determined values of $1/v$ and $\Lambda$ are also approximately given. These numbers vary, of course, depending on the respective lattice volumes and serve here only for the purpose of a rough orientation. The degenerate Yukawa coupling constants in the upper four rows have been chosen according to the tree-level relation in \eq{eq:treeLevelTopMass} aiming at the reproduction of the phenomenologically known top quark mass. In the other cases it is exactly set to zero recovering the pure $\Phi^4$-theory.} {Model parameters of the Monte-Carlo runs underlying the lattice calculation of the upper Higgs boson mass bound.} However, before discussing the obtained lattice results, it is worthwhile to recall what behaviour of the considered observables is to be expected from the knowledge of earlier lattice investigations. For the pure $\Phi^4$-theory and neglecting any double-logarithmic contributions the cutoff dependence of the Higgs boson mass as well as the renormalized quartic coupling constant has been found in \Refs{Luscher:1987ay,Luscher:1987ek,Luscher:1988uq} to be of the form \begin{eqnarray} \label{eq:StrongCouplingLambdaScalingBeaviourMass} \frac{m_{Hp}}{a} &=& A_m \cdot \left[\log(\Lambda^2/\mu^2) + B_m \right]^{-1/2}, \\ \label{eq:StrongCouplingLambdaScalingBeaviourLamCoupling} \lambda_r &=& A_\lambda \cdot \left[\log(\Lambda^2/\mu^2) + B_\lambda \right]^{-1}, \end{eqnarray} where $\mu$ denotes some unspecified scale, and $A_{m,\lambda}\equiv A_{m,\lambda}(\mu)$, $B_{m,\lambda}\equiv B_{m,\lambda}(\mu)$ are constants. Since this result has been established in the pure $\Phi^4$-theory, it is thus worthwhile to ask whether these scaling laws still hold in the considered Higgs-Yukawa model including the coupling to the fermions. 
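A minimal sketch of how such a scaling fit can be set up is given below (Python with NumPy/SciPy; the data points are synthetic placeholders generated from the ansatz itself rather than measured values, and the unspecified scale $\mu$ is absorbed into the constant $B_m$):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def mH_scaling(Lam, A_m, B_m):
    # m_H(Lambda) = A_m * [log(Lambda^2/mu^2) + B_m]^(-1/2), mu absorbed in B_m.
    return A_m * (np.log(Lam ** 2) + B_m) ** (-0.5)

# Synthetic stand-in data in GeV (illustration only, not lattice results).
rng = np.random.default_rng(1)
Lam = np.linspace(1500.0, 2800.0, 6)
mH = mH_scaling(Lam, 2500.0, 0.5) + rng.normal(0.0, 5.0, Lam.size)

popt, pcov = curve_fit(mH_scaling, Lam, mH, p0=[2000.0, 0.0])
print(popt)   # recovered (A_m, B_m)
\end{verbatim}
The same structure applies to fits of the renormalized couplings discussed later, with the exponent $-1/2$ replaced by $-1$ in the case of $\lambda_r$.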
In that respect it is remarked that the same functional dependence has also been observed in other analytical studies, for instance in \Ref{Fodor:2007fn} examining a Higgs-Yukawa model in continuous Euclidean space-time based, however, on an one-component Higgs field. In that study the running of the renormalized coupling constants with varying cutoff has been investigated by means of renormalized perturbation theory in the large $N_f$-limit. Furthermore, the scaling behaviour of the renormalized Yukawa coupling constant has also been derived. It was found to be \begin{eqnarray} \label{eq:StrongCouplingLambdaScalingBeaviourYCoupling} y_r &=& A_y \cdot \left[\log(\Lambda^2/\mu^2) + B_y \right]^{-1/2}, \end{eqnarray} where $A_y\equiv A_y(\mu)$ and $B_y\equiv B_y(\mu)$ are again so far unspecified constants and $y_r$ stands here for the renormalized top and bottom Yukawa coupling constants $y_{t,r}$ and $y_{b,r}$, respectively, as defined in \eq{eq:DefOfRenYukawaConst}. Now, the numerically obtained Higgs boson masses $m_{Hp}$ resulting in the above specified lattice calculations are finally presented in \fig{fig:UpperHiggsCorrelatorBoundFiniteVol}, where panel (a) refers to the full Higgs-Yukawa model while panel (b) displays the corresponding results in the pure $\Phi^4$-theory. To illustrate the influence of the finite lattice volume those results, belonging to the same parameter sets, differing only in the underlying lattice size, are connected by dotted lines to guide the eye. From these findings one learns that the model indeed exhibits strong finite volume effects when approaching the upper limit of the defined interval of reachable cutoffs, as expected. \includeFigDouble{higgsmassvscutoffatinfinitecouplinglatunits}{higgsmassvscutoffatinfinitecouplinglatunitspurephi4} {fig:UpperHiggsCorrelatorBoundFiniteVol} {The Higgs propagator mass $m_{Hp}$ is presented in units of the vacuum expectation value $v$ versus $1/v$. These results have been determined in the direct Monte-Carlo calculations specified in \tab{tab:SummaryOfParametersForUpperHiggsMassBoundRuns}. Those runs with identical parameter sets differing only in the underlying lattice volume are connected via dotted lines to illustrate the effects of the finite volume. The dashed curves depict the fits of the lattice results according to the finite size expectation in \eq{eq:RenormHiggsMassFitFormula} as explained in the main text. Panel (a) refers to the full Higgs-Yukawa model, while panel (b) shows the corresponding results of the pure $\Phi^4$-theory. } {Dependence of the Higgs propagator mass on $1/v$ at infinite bare quartic coupling constant.} In \fig{fig:UpperHiggsCorrelatorBoundFiniteVol}a one sees that the Higgs boson mass seems to increase with the cutoff $\Lambda$ on the smaller lattice sizes. This, however, is only a finite volume effect. On the larger lattices the Higgs boson mass decreases with growing $\Lambda$ as expected from the triviality property of the Higgs sector. In comparison to the results obtained in the pure $\Phi^4$-theory shown in \fig{fig:UpperHiggsCorrelatorBoundFiniteVol}b the aforementioned finite size effects, being of order $\proz{10}$ here, are much stronger and can thus be ascribed to the influence of the coupling to the fermions. This effect directly arises from the top quark being the lightest physical particle in the here considered scenario. At this point it is worthwhile to ask whether the observed finite volume effects can also be understood by some quantitative consideration. 
For the weakly interacting regime this could be achieved, for instance, by computing the constraint effective potential~\cite{Fukuda:1974ey,O'Raifeartaigh:1986hi} (CEP) in terms of the bare model parameters for a given finite volume as discussed in \Ref{Gerhold:2009ub}, which then allowed to predict the numerical lattice data for given bare model parameters. In contrast to that scenario the same calculation is not directly useful in the present situation, since the underlying (bare) perturbative expansion would break down due to the bare quartic coupling constant being infinite here. This problem can be cured by parametrizing the four-point interaction in terms of the renormalized quartic coupling constant. Starting from the definition of $\lambda_r$ in \eq{eq:DefOfRenQuartCoupling} one can directly derive an estimate for the Higgs boson mass in terms of $\lambda_r$ according to \begin{eqnarray} m_{He}^2 &=& 8\lambda_r v^2 -\frac{1}{v} \derive{}{\breve v} U_F[\breve v]\Bigg|_{\breve v = v} + \derive{^2}{\breve v^2} U_F[\breve v]\Bigg|_{\breve v = v} \\ \label{eq:DefOfFermionicContToEffPot} U_{F}[\breve v] &=& \frac{-2N_f}{L_s^3\cdot L_t}\cdot \sum\limits_{p} \log\left|\nu^+(p) + y_t \breve v \left(1-\frac{1}{2\rho}\nu^+(p)\right) \right|^2\nonumber\\ &+& \frac{-2N_f}{L_s^3\cdot L_t}\cdot \sum\limits_{p} \log\left|\nu^+(p) + y_b \breve v \left(1-\frac{1}{2\rho}\nu^+(p)\right) \right|^2. \label{eq:FermionEffectivePot} \end{eqnarray} which respects all contributions of order $O(\lambda_r)$. It is remarked that the above contribution $U_F[\breve v]$ contains all fermionic loops in the background of a constant scalar field and has already been discussed in \Ref{Gerhold:2009ub}, while the underlying definition of the eigenvalues $\nu^\pm(p)$ of the free overlap operator with $\pm \mbox{Im}(\nu^\pm(p))\ge 0$ has been taken from \Ref{Gerhold:2007yb}. Combining the above result with the expected scaling law given in \eq{eq:StrongCouplingLambdaScalingBeaviourLamCoupling} a crude estimate for the observed behaviour of the Higgs boson mass presented in \fig{fig:UpperHiggsCorrelatorBoundFiniteVol} can be established according to \begin{eqnarray} \label{eq:RenormHiggsMassFitFormula} m_{He}^2 &=& \frac{8v^2 A'_\lambda}{\log(v^{-2}) + B'_\lambda} -\frac{1}{v} \derive{}{\breve v} U_F[\breve v]\Bigg|_{\breve v = v} + \derive{^2}{\breve v^2} U_F[\breve v]\Bigg|_{\breve v = v}, \end{eqnarray} where double-logarithmic terms have been neglected and $A'_\lambda$, $B'_\lambda$ are so far unspecified parameters. Since the value of the renormalized quartic coupling constant $\lambda_r$ is not known apriori, the idea is here to use the result in \eq{eq:RenormHiggsMassFitFormula} as a fit ansatz with the free fit parameters $A'_\lambda$ and $B'_\lambda$ to fit the observed finite volume behaviour of the Higgs boson mass presented in \fig{fig:UpperHiggsCorrelatorBoundFiniteVol}. These lattice data have been given in units of the vacuum expectation value $v$, plotted versus $1/v$, to allow for the intended direct comparison with the analytical finite volume expression in \eq{eq:RenormHiggsMassFitFormula}. The resulting fits are depicted by the dashed curves in \fig{fig:UpperHiggsCorrelatorBoundFiniteVol}, where the free parameters $A'_\lambda$ and $B'_\lambda$ have independently been adjusted for every presented series of constant lattice volume in the full Higgs-Yukawa model and the pure $\Phi^4$-theory, respectively. 
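For completeness, a schematic implementation of the fermionic contribution $U_F[\breve v]$ and of the derivative terms entering \eq{eq:RenormHiggsMassFitFormula} might look as follows. This is only a sketch (Python): it assumes that the eigenvalues $\nu^+(p)$ of the free overlap operator have been tabulated beforehand in a complex array, and it evaluates the derivatives by symmetric finite differences rather than analytically.
\begin{verbatim}
import numpy as np

def U_F(vev, nu_plus, y_t, y_b, rho, Nf, Ls, Lt):
    # Fermionic effective potential of eq. (FermionEffectivePot) for a constant
    # background field vev; nu_plus holds the eigenvalues nu^+(p) for all momenta.
    w = 1.0 - nu_plus / (2.0 * rho)
    s = np.sum(np.log(np.abs(nu_plus + y_t * vev * w) ** 2))
    s += np.sum(np.log(np.abs(nu_plus + y_b * vev * w) ** 2))
    return -2.0 * Nf / (Ls ** 3 * Lt) * s

def mHe_squared(vev, lam_r, eps, *args):
    # Estimate m_He^2 = 8 lam_r v^2 - U_F'(v)/v + U_F''(v), with the derivatives
    # with respect to the background field taken by finite differences (step eps).
    f = lambda v: U_F(v, *args)
    d1 = (f(vev + eps) - f(vev - eps)) / (2.0 * eps)
    d2 = (f(vev + eps) - 2.0 * f(vev) + f(vev - eps)) / eps ** 2
    return 8.0 * lam_r * vev ** 2 - d1 / vev + d2
\end{verbatim}
Replacing $8\lambda_r v^2$ by the parametrization $8 v^2 A'_\lambda/[\log(v^{-2})+B'_\lambda]$ then yields the fit ansatz of \eq{eq:RenormHiggsMassFitFormula}.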
Applying the above fit ansatz simultaneously to all available data does not lead to satisfactory results, since the renormalized quartic coupling constant itself also depends significantly on the underlying lattice volume, as will be seen later in this section. One can then observe in \fig{fig:UpperHiggsCorrelatorBoundFiniteVol} that this fit approach can describe the actual finite volume cutoff dependence of the presented Higgs boson mass satisfactorily well, unless the vacuum expectation value $v$ becomes too small. In that case the model does no longer exhibit the expected (infinite volume) critical behaviour in \eqs{eq:StrongCouplingLambdaScalingBeaviourMass}{eq:StrongCouplingLambdaScalingBeaviourLamCoupling} which the derivation of the above fit ansatz was built upon. Staying away from that regime, however, the observed finite volume behaviour of the Higgs boson mass can be well understood by means of the analytical expression in \eq{eq:RenormHiggsMassFitFormula}. To obtain the desired upper Higgs boson mass bounds $m_H^{up}(\Lambda)$ these finite volume results have to be extrapolated to the infinite volume limit and the renormalization factor $Z_G$ has to be properly considered. For that purpose the finite volume dependence of the Monte-Carlo results on the renormalized vev $v_r=v/\sqrt{Z_G}$ and the Higgs boson mass $m_{Hp}$, as obtained for the two scenarios of the full Higgs-Yukawa model and the pure $\Phi^4$-theory, is explicitly shown in \fig{fig:FiniteVolumeEffectsOfUpperHiggsMassBoundVEVandMH}. One sees in these plots that the finite volume effects are rather mild at the largest investigated hopping parameters $\kappa$ corresponding to the lowest considered values of the cutoff $\Lambda$, while the renormalized vev as well as the Higgs boson mass itself vary strongly with increasing lattice size $L_s$ at the smaller presented hopping parameters, as expected. It is well known from lattice investigations of the pure $\Phi^4$-theory~\cite{Hasenfratz:1989ux,Hasenfratz:1990fu,Gockeler:1991ty} that the vev as well as the mass receive strong contributions from the Goldstone modes, inducing finite volume effects of algebraic form starting at order $O(L_s^{-2})$. The next non-trivial finite volume contribution was shown to be of order $O(L_s^{-4})$. In \fig{fig:FiniteVolumeEffectsOfUpperHiggsMassBoundVEVandMH} the obtained data are therefore plotted versus $1/L_s^2$. Moreover, the aforementioned observation justifies to apply the linear fit ansatz \begin{equation} \label{eq:LinFit} f^{(l)}_{v,m}(L_s^{-2}) = A^{(l)}_{v,m} + B^{(l)}_{v,m}\cdot L_s^{-2} \end{equation} to extrapolate these data to the infinite volume limit, where the free fitting parameters $A^{(l)}_{v,m}$ and $B^{(l)}_{v,m}$ with the subscripts $v$ and $m$ refer to the renormalized vev $v_r$ and the Higgs boson mass $m_{Hp}$, respectively. \includeFigDoubleDoubleHere{higgsmassvscutoffatinfinitecouplingfinitevolumeeffectsvev}{higgsmassvscutoffatinfinitecouplingfinitevolumeeffectsvevpurephi4} {higgsmassvscutoffatinfinitecouplingfinitevolumeeffectshiggsmass}{higgsmassvscutoffatinfinitecouplingfinitevolumeeffectshiggsmasspurephi4} {fig:FiniteVolumeEffectsOfUpperHiggsMassBoundVEVandMH} {The dependence of the renormalized vev $v_r = v/\sqrt{Z_G}$ and the Higgs propagator mass $m_{Hp}$ on the squared inverse lattice side length $1/L^2_s$ is presented in the upper and the lower panel rows, respectively, as determined in the direct Monte-Carlo calculations specified in \tab{tab:SummaryOfParametersForUpperHiggsMassBoundRuns}. 
Panels (a) and (c) show the results for the full Higgs-Yukawa model, while panels (b) and (d) refer to the pure $\Phi^4$-theory. In all plots the dashed curves display the parabolic fits according to the fit ansatz in \eq{eq:ParaFit}, while the solid lines depict the linear fits resulting from \eq{eq:LinFit} for the two lower threshold values $L_s'=16$ (red) and $L_s'=20$ (black). } {Dependence of the renormalized vev $v_r = v/\sqrt{Z_G}$ and the Higgs propagator mass $m_{Hp}$ on the squared inverse lattice side length $1/L^2_s$ at infinite bare quartic coupling constant.} \includeTab{|c|c|c|c|c|}{ \multicolumn{5}{|c|}{Vacuum expectation value $v$} \\ \hline $\kappa$ & $A^{(l)}_v$, $L'_s=16$ & $A^{(l)}_v$, $L'_s=20$ & $A^{(p)}_v$ & $v_r$ \\ \hline $\,0.30039\,$ & $\, 0.1004(3) \, $ & $\, 0.1003(6) \, $ & $\, 0.1004(5) \, $ & $\, 0.1004(5)(1)$ \\ $\,0.30148\,$ & $\, 0.1215(5) \, $ & $\, 0.1209(6) \, $ & $\, 0.1216(8) \, $ & $\, 0.1213(6)(4)$ \\ $\,0.30274\,$ & $\, 0.1410(1) \, $ & $\, 0.1408(1) \, $ & $\, 0.1408(1) \, $ & $\, 0.1409(1)(1)$ \\ $\,0.30400\,$ & $\, 0.1579(2) \, $ & $\, 0.1575(1) \, $ & $\, 0.1576(2) \, $ & $\, 0.1577(2)(2)$ \\ $\,0.30570\,$ & $\, 0.0857(4) \, $ & $\, 0.0852(4) \, $ & $\, 0.0848(2) \, $ & $\, 0.0852(3)(5)$ \\ $\,0.30680\,$ & $\, 0.1099(4) \, $ & $\, 0.1094(2) \, $ & $\, 0.1089(1) \, $ & $\, 0.1097(3)(5)$ \\ $\,0.30780\,$ & $\, 0.1282(3) \, $ & $\, 0.1278(1) \, $ & $\, 0.1277(2) \, $ & $\, 0.1279(2)(3)$ \\ $\,0.30890\,$ & $\, 0.1443(5) \, $ & $\, 0.1438(5) \, $ & $\, 0.1436(4) \, $ & $\, 0.1439(5)(4)$ \\ $\,0.31040\,$ & $\, 0.1634(2) \, $ & $\, 0.1630(1) \, $ & $\, 0.1625(2) \, $ & $\, 0.1630(2)(5)$ \\ \hline \multicolumn{5}{|c|}{Higgs propagator mass $m_{Hp}$} \\ \hline $\kappa$ & $A^{(l)}_m$, $L'_s=16$ & $A^{(l)}_m$, $L'_s=20$ & $A^{(p)}_m$ & $m_{Hp}$ \\ \hline $\,0.30039\,$ & $\, 0.2356(41)\, $ & $\, 0.2382(70)\, $ & $\, 0.2344(67)\, $ & $\, 0.2361(61)(19)$ \\ $\,0.30148\,$ & $\, 0.2943(29)\, $ & $\, 0.2908(39)\, $ & $\, 0.2928(40)\, $ & $\, 0.2926(36)(18)$ \\ $\,0.30274\,$ & $\, 0.3524(20)\, $ & $\, 0.3510(38)\, $ & $\, 0.3489(23)\, $ & $\, 0.3508(28)(18)$ \\ $\,0.30400\,$ & $\, 0.4042(14)\, $ & $\, 0.4030(25)\, $ & $\, 0.4018(15)\, $ & $\, 0.4030(19)(12)$ \\ $\,0.30570\,$ & $\, 0.1964(10)\, $ & $\, 0.1971(16)\, $ & $\, 0.1940(25)\, $ & $\, 0.1958(18)(16)$ \\ $\,0.30680\,$ & $\, 0.2633(42)\, $ & $\, 0.2568(20)\, $ & $\, 0.2552(30)\, $ & $\, 0.2584(32)(41)$ \\ $\,0.30780\,$ & $\, 0.3130(17)\, $ & $\, 0.3110(14)\, $ & $\, 0.3087(7) \, $ & $\, 0.3109(13)(22)$ \\ $\,0.30890\,$ & $\, 0.3589(17)\, $ & $\, 0.3568(3) \, $ & $\, 0.3552(10)\, $ & $\, 0.3570(12)(19)$ \\ $\,0.31040\,$ & $\, 0.4145(8) \, $ & $\, 0.4139(14)\, $ & $\, 0.4105(15)\, $ & $\, 0.4130(13)(20)$ \\ } {tab:ResultOfUpperHiggsMassFiniteVolExtrapolation1} {The results of the infinite volume extrapolations of the Monte-Carlo data of the renormalized vev $v_r$ and the Higgs boson mass $m_{Hp}$ are presented as obtained from the parabolic ansatz in \eq{eq:ParaFit} and the linear approach in \eq{eq:LinFit} for the considered lower threshold values $L_s'=16$ and $L_s'=20$. The final results on $v_r$ and $m_{Hp}$, displayed in the very right column, are determined here by averaging over the parabolic and the two linear fit approaches. 
An additional, systematic uncertainty of these final results is specified in the second pair of brackets taken from the largest observed deviation among all respective fit results.} {Infinite volume extrapolation of the Monte-Carlo data for the renormalized vev $v_r$ and the Higgs boson mass $m_{Hp}$ at infinite bare quartic coupling constant.} To take the presence of higher order terms in $1/L_{s}^{2}$ into account only the largest lattice sizes are included into this linear fit. Here, we select all lattice volumes with $L_s\ge L'_s$. As a consistency check, testing the dependence of the resulting infinite volume extrapolations on the choice of the fit procedure, the lower threshold value $L'_s$ is varied. The respective results are listed in \tab{tab:ResultOfUpperHiggsMassFiniteVolExtrapolation1}. Moreover, the parabolic fit ansatz \begin{equation} \label{eq:ParaFit} f^{(p)}_{v,m}(L_s^{-2}) = A^{(p)}_{v,m} + B^{(p)}_{v,m}\cdot L_s^{-2} + C^{(p)}_{v,m}\cdot L_s^{-4} \end{equation} is additionally considered. It is applied to the whole range of available lattice sizes. The deviations between the various fitting procedures with respect to the resulting infinite volume extrapolations of the considered observables can then be considered as an additional, systematic uncertainty of the obtained values. The respective fit curves are displayed in \fig{fig:FiniteVolumeEffectsOfUpperHiggsMassBoundVEVandMH} and the corresponding infinite volume extrapolations of the renormalized vev and the Higgs boson mass, which have been obtained as the average over all presented fit results, are listed in \tab{tab:ResultOfUpperHiggsMassFiniteVolExtrapolation1}. The sought-after cutoff-dependent upper Higgs boson mass bound, and thus the main result of this paper, already presented in \fig{fig:UpperMassBoundFinalResult}, can then directly be obtained from the latter infinite volume extrapolation. The bounds arising in the full Higgs-Yukawa model and the pure $\Phi^4$-theory are jointly presented in \fig{fig:UpperMassBoundFinalResult}a. In both cases one clearly observes the expected decrease of the upper Higgs boson mass bound with rising cutoff $\Lambda$. Moreover, the obtained results can very well be fitted with the expected cutoff dependence given in \eq{eq:StrongCouplingLambdaScalingBeaviourMass}, as depicted by the dashed and solid curves in \fig{fig:UpperMassBoundFinalResult}a, where $A_m$, $B_m$ are the respective free fit parameters. Concerning the effect of the fermion dynamics on the upper Higgs boson mass bound one finds in \fig{fig:UpperMassBoundFinalResult}a that the individual results on the Higgs boson mass in the full Higgs-Yukawa model and the pure $\Phi^4$-theory at single cutoff values $\Lambda$ are not clearly distinguishable from each other with respect to the associated uncertainties. Respecting all presented data simultaneously by considering the aforementioned fit curves also does not lead to a much clearer picture, as can be observed in \fig{fig:UpperMassBoundFinalResult}a, where the uncertainties of the respective fit curves are indicated by the highlighted bands. At most, one can infer a mild indication from the presented results, being that the inclusion of the fermion dynamics causes a somewhat steeper descent of the upper Higgs boson mass bound with increasing cutoff $\Lambda$. A definite answer regarding the latter effect, however, remains missing here due to the size of the statistical uncertainties. 
The clarification of this issue would require the consideration of higher statistics as well as the evaluation of more lattice volumes to improve the reliability of the above infinite volume extrapolations. On the basis of the latter fit results one can extrapolate the presented fit curves to very large values of the cutoff $\Lambda$ as illustrated in \fig{fig:UpperMassBoundFinalResult}b. It is intriguing to compare these large cutoff extrapolations to the results arising from the perturbative consideration of the Landau pole presented, for instance, in \Ref{Hagiwara:2002fs}. One observes good agreement with that perturbatively obtained upper mass bound even though the data presented here have been calculated in the mass degenerate case and for $N_f=1$. This, however, is not too surprising according to the observed relatively mild dependence of the upper mass bound on the fermion dynamics. For clarification it is remarked that a direct quantitative comparison between the aforementioned perturbative and numerical results has been avoided here due to the different underlying regularization schemes. With growing values of $\Lambda$, however, the cutoff dependence becomes less prominent, thus rendering such a direct comparison increasingly reasonable in that limit. \includeFigTriple{higgsmassvscutoffatinfinitecouplingfinitevolumeeffectsrenlambdacoup}{higgsmassvscutoffatinfinitecouplingfinitevolumeeffectsrenlambdacouppurephi4}{higgsmassvscutoffatinfinitecouplingfinitevolumeeffectstopmass} {fig:FiniteVolumeEffectsOfUpperHiggsMassBoundLamR} {The dependence of the renormalized quartic coupling constant $\lambda_r$ as well as the top quark mass $m_t$ on the squared inverse lattice side length $1/L^2_s$ is presented as calculated in the direct Monte-Carlo calculations specified in \tab{tab:SummaryOfParametersForUpperHiggsMassBoundRuns}. Panels (a) and (c) show the results for the full Higgs-Yukawa model, while panel (b) refers to the pure $\Phi^4$-theory. In all plots the dashed curves display the parabolic fits according to the fit ansatz in \eq{eq:ParaFit}, while the solid lines depict the linear fits resulting from \eq{eq:LinFit} for the two lower threshold values $L_s'=16$ (red) and $L_s'=20$ (black). } {Dependence of the renormalized quartic coupling constant $\lambda_r$ and the top quark mass $m_t$ on the squared inverse lattice side length $1/L^2_s$ at infinite bare quartic coupling constant.} Furthermore, the question for the cutoff dependence of the renormalized quartic coupling constant $\lambda_r$ and -- in the case of the full Higgs-Yukawa model -- the top quark mass with its associated value of the renormalized Yukawa coupling constant $y_{t,r}$ shall be addressed. For that purpose we follow exactly the same steps as above. The underlying finite volume lattice results on the renormalized quartic coupling constant and the top quark mass are fitted again with the parabolic and the linear fit approaches in \eq{eq:LinFit} and \eq{eq:ParaFit} as presented in \fig{fig:FiniteVolumeEffectsOfUpperHiggsMassBoundLamR}. 
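A compact sketch of this combined extrapolation procedure is given below (Python with NumPy; for brevity the statistical errors of the individual data points are ignored in the fits, which is a simplification compared to the actual analysis, and the systematic error is taken as the largest deviation of the individual extrapolations from their average):
\begin{verbatim}
import numpy as np

def extrapolate(Ls, obs):
    # Infinite-volume extrapolation of an observable measured on several
    # lattice sizes Ls: linear fits in 1/Ls^2 (eq. LinFit) restricted to
    # Ls >= Ls' for Ls' = 16, 20, plus a parabolic fit in 1/Ls^2 on all
    # volumes (eq. ParaFit).  The constant term is the L -> infinity value.
    Ls = np.asarray(Ls, dtype=float)
    obs = np.asarray(obs, dtype=float)
    x = 1.0 / Ls ** 2
    estimates = []
    for Lp in (16, 20):
        sel = Ls >= Lp
        estimates.append(np.polyfit(x[sel], obs[sel], 1)[-1])
    estimates.append(np.polyfit(x, obs, 2)[-1])
    estimates = np.array(estimates)
    central = estimates.mean()
    return central, np.abs(estimates - central).max()

# Example call with hypothetical numbers (not data from the tables):
# v_inf, v_sys = extrapolate([12, 16, 20, 24, 32],
#                            [0.160, 0.159, 0.158, 0.158, 0.157])
\end{verbatim}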
\includeTabHERE{|c|c|c|c|c|}{ \multicolumn{5}{|c|}{Renormalized quartic coupling constant $\lambda_r$} \\ \hline $\kappa$ & $A^{(l)}_\lambda$, $L'_s=16$ & $A^{(l)}_\lambda$, $L'_s=20$ & $A^{(p)}_\lambda$ & $\lambda_r$ \\ \hline $\,0.30039\,$ & $\,0.6827(280)\, $ & $\,0.7043(460)\, $ & $\,0.6775(452)\, $ & $\, 0.6882(406)(134)$ \\ $\,0.30148\,$ & $\,0.7291(118)\, $ & $\,0.7116(66) \, $ & $\,0.7166(134)\, $ & $\, 0.7191(110)(88)$ \\ $\,0.30274\,$ & $\,0.7791(79) \, $ & $\,0.7731(139)\, $ & $\,0.7638(81) \, $ & $\, 0.7720(103)(77)$ \\ $\,0.30400\,$ & $\,0.8164(71) \, $ & $\,0.8074(97) \, $ & $\,0.8047(67) \, $ & $\, 0.8095(79)(59)$ \\ $\,0.30570\,$ & $\,0.6609(182)\, $ & $\,0.6760(288)\, $ & $\,0.6590(288)\, $ & $\, 0.6653(258)(85)$ \\ $\,0.30680\,$ & $\,0.7171(201)\, $ & $\,0.6882(149)\, $ & $\,0.6862(182)\, $ & $\, 0.6972(179)(155)$ \\ $\,0.30780\,$ & $\,0.7482(56) \, $ & $\,0.7414(37) \, $ & $\,0.7346(24) \, $ & $\, 0.7414(41)(68)$ \\ $\,0.30890\,$ & $\,0.7716(47) \, $ & $\,0.7660(17) \, $ & $\,0.7612(34) \, $ & $\, 0.7663(35)(52)$ \\ $\,0.31040\,$ & $\,0.8051(23) \, $ & $\,0.8061(45) \, $ & $\,0.7919(88) \, $ & $\, 0.8010(59)(71)$ \\ \hline \multicolumn{5}{|c|}{Top quark mass $m_{t}$} \\ \hline $\kappa$ & $A^{(l)}_t$, $L'_s=16$ & $A^{(l)}_t$, $L'_s=20$ & $A^{(p)}_t$ & $m_{t}$ \\ \hline $\,0.30039\,$ & $\, 0.0701(2) \, $ & $\, 0.0704(4) \, $ & $\, 0.0704(3) \, $ & $\, 0.0703(3)(2)$ \\ $\,0.30148\,$ & $\, 0.0844(3) \, $ & $\, 0.0843(6) \, $ & $\, 0.0845(4) \, $ & $\, 0.0844(5)(1)$ \\ $\,0.30274\,$ & $\, 0.0983(1) \, $ & $\, 0.0984(2) \, $ & $\, 0.0984(1) \, $ & $\, 0.0984(1)(1)$ \\ $\,0.30400\,$ & $\, 0.1104(1) \, $ & $\, 0.1106(1) \, $ & $\, 0.1105(2) \, $ & $\, 0.1105(1)(1)$ \\ } {tab:ResultOfUpperHiggsMassFiniteVolExtrapolation2} {The results of the infinite volume extrapolations of the Monte-Carlo data of the renormalized quartic coupling constant $\lambda_r$ and the top quark mass $m_{t}$ are presented as obtained from the parabolic ansatz in \eq{eq:ParaFit} and the linear approach in \eq{eq:LinFit} for the considered lower threshold values $L_s'=16$ and $L_s'=20$. The final results on $\lambda_r$ and $m_{t}$, displayed in the very right column, are determined here by averaging over the parabolic and the two linear fit approaches. An additional, systematic uncertainty of these final results is specified in the second pair of brackets taken from the largest observed deviation among all respective fit results.} {Infinite volume extrapolation of the Monte-Carlo data for the renormalized quartic coupling constant $\lambda_r$ and the top quark mass $m_{t}$ at infinite quartic coupling constant.} The corresponding infinite volume extrapolations are listed in \tab{tab:ResultOfUpperHiggsMassFiniteVolExtrapolation2}, where the final extrapolation result is obtained by averaging over all performed fit approaches. An additional systematic error is again estimated from the deviations between the various fit procedures. The sought-after cutoff dependence of the aforementioned renormalized coupling constants can then directly be obtained from the latter infinite volume extrapolations. The respective results are presented in \fig{fig:FinalResultsOnRenCoupAtStrongCoup} and within the achieved accuracy one observes the renormalized coupling parameters to be consistent with the expected decline when increasing the cutoff $\Lambda$ as expected in a trivial theory. 
Again, the obtained numerical results are fitted with the analytically expected scaling behaviour given in \eq{eq:StrongCouplingLambdaScalingBeaviourLamCoupling} and \eq{eq:StrongCouplingLambdaScalingBeaviourYCoupling}. As already discussed for the case of the Higgs boson mass determination, the individual measurements of $\lambda_r$ in the two considered models at single cutoff values $\Lambda$ are not clearly distinguishable. Respecting the available data simultaneously by means of the aforementioned fit procedures also leads at most to the mild indication that the inclusion of the fermion dynamics results in a somewhat steeper descent of the renormalized quartic coupling constant with rising cutoff $\Lambda$ as compared to the pure $\Phi^4$-theory. A definite conclusion in this matter, however, cannot be drawn at this point due to the statistical uncertainties encountered in \fig{fig:FinalResultsOnRenCoupAtStrongCoup}. Finally, the renormalized Yukawa coupling constant is compared to its bare counterpart depicted by the horizontal line in \fig{fig:FinalResultsOnRenCoupAtStrongCoup}b. Since the latter bare quantity was chosen according to the tree-level relation in \eq{eq:treeLevelTopMass} aiming at the reproduction of the physical top quark mass, one can directly infer from this presentation how much the actually measured top quark mass differs from its targeted value of $\GEV{175}$. Here, one observes a significant discrepancy of up to $\proz{2}$, which can in principle be fixed in follow-up lattice calculations, if desired. According to the observed rather weak dependence of the upper Higgs boson mass bound on the Yukawa coupling constants, however, such an adjustment would not even be resolvable with the here achieved accuracy. \includeFigDouble{infinitevolumeextrapolationupperboundlamren}{infinitevolumeextrapolationupperboundreny} {fig:FinalResultsOnRenCoupAtStrongCoup} {The cutoff dependence of the renormalized quartic and Yukawa coupling constants is presented in panels (a) and (b), respectively, as obtained from the infinite volume extrapolation results in \tab{tab:ResultOfUpperHiggsMassFiniteVolExtrapolation2}. The dashed and solid curves are fits with the respective analytically expected cutoff dependence in \eq{eq:StrongCouplingLambdaScalingBeaviourLamCoupling} and \eq{eq:StrongCouplingLambdaScalingBeaviourYCoupling}. The horizontal line in panel (b) indicates the bare degenerate Yukawa coupling constant underlying the performed lattice calculations. } {Cutoff dependence of the renormalized quartic constant and the renormalized Yukawa coupling constant at infinite bare quartic coupling constant.} \section{Summary and Conclusions} \label{sec:conclusions} The aim of the present work has been the non-perturbative determination of the cutoff-dependent upper mass bound of the Standard Model Higgs boson based on first principle computations, in particular not relying on additional information such as the triviality property of the Higgs-Yukawa sector. The motivation for the consideration of the aforementioned mass bound finally lies in the ability of drawing conclusions on the energy scale $\Lambda$ at which a new, so far unspecified theory of elementary particles definitely has to substitute the Standard Model, once the Higgs boson and its mass $m_H$ will have been discovered experimentally. 
In that case the latter scale $\Lambda$ can be deduced by requiring consistency between the observed mass $m_H$ and the upper and lower mass bounds $m_{H}^{up}(\Lambda)$ and $m_{H}^{low}(\Lambda)$, which intrinsically arise from the Standard Model under the assumption of its being valid up to the cutoff scale $\Lambda$. The Higgs boson might, however, very well not exist at all, especially since the Higgs sector can only be considered an effective theory of some so far undiscovered, extended theory, owing to its triviality property. In such a scenario a conclusion about the validity of the Standard Model can nevertheless be drawn, since the non-observation of the Higgs boson at the LHC would eventually exclude its existence at energies below, let us say, $\TEV{1}$, thanks to the large energy scales accessible at the LHC. An even heavier Higgs boson, however, is definitely excluded unless the Standard Model becomes inconsistent with itself, according to the results in \sect{chap:ResOnUpperBound} and the requirement that the cutoff $\Lambda$ be clearly larger than the mass spectrum described by that theory. If the Higgs boson is not observed at the LHC after its whole energy range has been explored, one can thus conclude on the basis of the latter results that new physics must set in already at the TeV-scale. For the purpose of establishing the aforementioned cutoff-dependent mass bound, the lattice approach has been employed to allow for a non-perturbative investigation of a Higgs-Yukawa model serving as a reasonable simplification of the full Standard Model, containing only those fields and interactions which are most essential for the Higgs boson mass determination. This model has been constructed on the basis of L\"uscher's proposals in \Ref{Luscher:1998pq} for the construction of chirally invariant lattice Higgs-Yukawa models, adapted, however, to the situation of the actual Standard Model Higgs-fermion coupling structure, \textit{i.e.}$\;$ with $\varphi$ being a complex doublet equivalent to one Higgs and three Goldstone modes. The resulting chirally invariant lattice Higgs-Yukawa model, constructed here on the basis of the Neuberger overlap operator, then obeys a global ${SU(2)_\mathrm{L}\times U(1)_\mathrm{Y}}$ symmetry, as desired. The fundamental strategy underlying the determination of the cutoff-dependent upper Higgs boson mass bounds has then been the numerical evaluation of the maximal Higgs boson mass attainable within the considered Higgs-Yukawa model in consistency with phenomenology. The latter condition refers here to the requirement of reproducing the phenomenologically known values of the top and bottom quark masses as well as the renormalized vacuum expectation value $v_r$ of the scalar field, where the latter quantity was used to fix the physical scale of the performed lattice calculations. Owing to the potential existence of a fluctuating complex phase in the non-degenerate case, the top and bottom quark masses have, however, been assumed to be degenerate in this work. Applying this strategy requires the evaluation of the model to be performed in the broken phase, but close to a second order phase transition to a symmetric phase, in order to allow for the adjustment of arbitrarily large cutoff scales, at least from a conceptual point of view. 
As a first step it has explicitly been confirmed by direct lattice calculations that the largest attainable Higgs boson masses are indeed observed in the case of an infinite bare quartic coupling constant, as suggested by perturbation theory. Consequently, the search for the upper Higgs boson mass bound has subsequently been constrained to the bare parameter setting $\lambda=\infty$. The resulting finite volume lattice data on the Higgs boson mass turned out to be sufficiently precise to allow for their reliable infinite volume extrapolation, yielding then a cutoff-dependent upper bound of approximately $m_{H}^{up}(\Lambda)=\GEV{630}$ at a cutoff of $\Lambda=\GEV{1500}$. These results were moreover precise enough to actually resolve their cutoff dependence as demonstrated in \fig{fig:UpperMassBoundFinalResult}, which is in very good agreement with the analytically expected logarithmic decline, and thus with the triviality picture of the Higgs-Yukawa sector. It is remarked here, that this achievement has been numerically demanding, since the latter logarithmic decline of the upper bound $m_{H}^{up}(\Lambda)$ is actually only induced by subleading logarithmic contributions to the scaling behaviour of the considered model close to its phase transition, which had to be resolved with sufficient accuracy. By virtue of the analytically expected functional form of the cutoff-dependent upper mass bound, which was used to fit the obtained numerical data, an extrapolation of the latter results to much higher energy scales could also be established, being in good agreement with the corresponding perturbatively obtained bounds~\cite{Hagiwara:2002fs}. A direct comparison has, however, been avoided due to the different underlying regularization schemes. The interesting question for the fermionic contribution to the observed upper Higgs boson mass bound has then been addressed by explicitly comparing the latter findings to the corresponding results arising in the pure $\Phi^4$-theory. For the considered energy scales this potential effect, however, turned out to be not very well resolvable with the available accuracy of the lattice data. The performed fits with the expected analytical form of the cutoff dependence only mildly indicate the upper mass bound in the full Higgs-Yukawa model to decline somewhat steeper with growing cutoffs than the corresponding results in the pure $\Phi^4$-theory. To obtain a clearer picture in this respect, higher accuracy of the numerical data and thus higher statistics of the underlying field configurations would be needed. \section*{Acknowledgments} We thank J. Kallarackal for discussions and M. M\"uller-Preussker for his continuous support. We are grateful to the "Deutsche Telekom Stiftung" for supporting this study by providing a Ph.D. scholarship for P.G. We further acknowledge the support of the DFG through the DFG-project {\it Mu932/4-1}. The numerical computations have been performed on the {\it HP XC4000 System} at the {\it Scientific Supercomputing Center Karlsruhe} and on the {\it SGI system HLRN-II} at the {\it HLRN Supercomputing Service Berlin-Hannover}. \bibliographystyle{unsrtOWN}
\section{Introduction} The question of solving differential equations in explicit terms has lost some importance with the development of numerical methods, and it is also clear that solvable cases are non-generic in the class of all physical models. Although linear ordinary differential equations are mostly dealt with by series expansions or are even used to define new functions, it is still instructive if an explicit solution can be found. Obviously, exact solutions provide deeper quantitative insight into whole classes of solutions (depending on some parameters); and in the case of a perturbative approach, where the linear equation is just the first approximation, they allow one to shift the numerical treatment to the next order. Examples include Schr\"odinger-like equations \cite{Primitivo,Prim2} and perturbation equations in cosmology \cite{TS}. Solving the Dirac equation usually boils down to solving a second-order linear differential equation -- be it by an appropriate change of variables, or by a particular ansatz. Further assumptions or symmetries lead to special solutions which can be obtained explicitly, most of which are presented in \cite{Cooper}. Depending on the physical theory one starts with, even though the form of the equations stays the same, the results on solvability differ. This happens because of the various definitions of the potential -- for example, in some supersymmetric theories the potential $W = V^2 +V'$ is taken to be a generic polynomial, whereas here we take $V$ to be generic, which leads to restrictions on $U$. For results on integrability of such equations see for instance \cite{Primitivo}. There are also many definitions of ``explicit solutions'', depending on which special functions are considered to be simple enough. Here we will be using Liouvillian extensions to construct solutions. Since the equation will be of the form \begin{equation} f''(x) = r(x)f(x), \end{equation} where $r(x)$ is rational, the sought solutions will lie in some extension of the field of rational functions over $\mathbb{C}$. The extension is called Liouvillian if it is composed of a finite number of the following steps: \begin{enumerate} \item Adjoining an element whose derivative is in the field. \item Adjoining an element whose logarithmic derivative is in the field. \item Adjoining an element algebraic over the field. \end{enumerate} These functions include all the elementary ones (polynomials, the exponential, the logarithm), and also special cases of transcendental functions. The reason for introducing such a class is that it arises naturally from differential Galois theory \cite{Kaplansky} and that, for the above equation, there is an algorithmic approach to checking for solvability \cite{Kovacic}. Generally speaking, when the solutions are Liouvillian, the differential Galois group is solvable, and there exist invariants of the equation which one can look for. This means that, at least in theory, it is possible to check a given class of equations and rule out the existence of any additional exact solutions -- an advantage usually not found in other approaches. Thus, to put it simply, the aim of this letter is to find Liouvillian solutions of the Dirac equation into which a polynomial potential has been incorporated. As we will see, the conditions for such solvability are almost never met. 
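To make the notion of a Liouvillian solution concrete, the short symbolic check below (Python with SymPy; the sample potential is chosen purely for illustration) verifies that an exponential of an integral of a polynomial -- an element obtained by step 2 of the list above -- solves an equation of the form $f''(x)=r(x)f(x)$ with polynomial $r$. Solutions of exactly this type will reappear below.
\begin{verbatim}
import sympy as sp

x, m, lam = sp.symbols('x m lam', real=True)
U = m + lam * x**2                       # sample polynomial
f = sp.exp(sp.integrate(U, x))           # step 2: exponential of an integral

# f solves f'' = r f with the polynomial coefficient r = U' + U^2
r = sp.diff(U, x) + U**2
print(sp.simplify(sp.diff(f, x, 2) - r * f))   # -> 0
\end{verbatim}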
\section{Dirac equation with a potential} We take the one-dimensional form of the Dirac equation \begin{equation} i \partial_t\psi = (\alpha p + \beta m )\psi, \end{equation} where $p$ is the momentum conjugate to the coordinate $x$, i.e. $p = -i\partial_x$, and the Dirac matrices satisfy $\alpha^2=\beta^2=1$, $\{\alpha,\beta\}=0$. This form follows from the standard four-dimensional form when the bispinor $\psi$ depends on $t$ and $x$ only. It can then be taken as a two-component vector $\psi = \left(\psi_1(t,x),\psi_2(t,x)\right)$ for simplicity. The time derivative can then be eliminated by the ansatz $\psi(t,x) = \exp(-iEt)\psi(x)$, so that the equation becomes \begin{equation} E\psi = (-i\alpha\partial_x + \beta m )\psi. \end{equation} The inclusion of the potential, which is specified by a single scalar function, can be performed in two ways. The polynomial potential $V=\lambda x^n+\ldots+\lambda_0$ can be the time component of a four-vector -- like the Coulomb part of the electromagnetic field. The equation would then become \begin{equation} (V+E)\psi = (-i \alpha\partial_x + \beta m )\psi. \label{vcoup} \end{equation} Alternatively, one could introduce scalar coupling, which modifies the $m$ term to \begin{equation} E\psi = \left(-i \alpha\partial_x + \beta (V+m)\right)\psi. \label{scoup} \end{equation} For the scalar coupling \eqref{scoup}, the explicit equations are \begin{equation} \begin{aligned} \psi_1' &= (m+V)\psi_1 - E\psi_2,\\ \psi_2' &= -(m+V)\psi_2 + E\psi_1,\label{1ord} \end{aligned} \end{equation} or, as second order equations, \begin{equation} \begin{aligned} \psi_1'' &= (U' + U^2 - E^2)\psi_1,\\ \psi_2'' &= (-U' + U^2 - E^2)\psi_2, \label{2ord_scal} \end{aligned} \end{equation} where $U=m+V$ and the Dirac matrices were taken to be \begin{equation} \alpha = \left(\begin{matrix} 0 & i\\ -i & 0\end{matrix}\right),\;\; \beta = \left(\begin{matrix} 0 & 1\\ 1 & 0\end{matrix}\right). \end{equation} The vector coupling \eqref{vcoup}, on the other hand, gives \begin{equation} \begin{aligned} \psi_1'' &= (iU' - U^2 + m^2)\psi_1,\\ \psi_2'' &= (-iU' - U^2 + m^2)\psi_2, \end{aligned} \end{equation} provided that \begin{equation} \alpha = \left(\begin{matrix} 1 & 0\\ 0 & -1\end{matrix}\right),\;\; \end{equation} and $U=V+E$. It is obvious that the vector coupling can be transformed into the scalar one with \begin{equation} V \rightarrow -i V,\; E \rightarrow -i m,\; m \rightarrow i E. \label{corr} \end{equation} \section{The solutions} Let us start with the scalar coupling. Since the spinor components are connected through the first order equations \eqref{1ord}, it suffices to check just the first of equations \eqref{2ord_scal}. For polynomial $V$, and thus polynomial $U$, it is straightforward to apply the aforementioned Kovacic algorithm. Because the only coefficient in the equation is an even-degree polynomial, only the first case of the algorithm needs to be considered in depth, and the possible solution will be of the form $P\exp\int\omega\mathrm{d}x$, with a monic polynomial $P$ and a rational function $\omega$. Since the only singular point is the one at infinity, one needs to obtain the expansion $[\sqrt{r}]_{\infty}$ of $r = U'+U^2-E^2$. Because only the polynomial part of the expansion matters, it is fully and uniquely determined by comparing the coefficients of $r$ and $[\sqrt{r}_{\infty}]^2$. Obviously, there are at most $n+1$ terms of the expansion, and the $U^2$ term alone fixes the $n+1$ highest terms of $r$. 
Thus, $U$ itself is the solution and \begin{equation} r - [\sqrt{r}_{\infty}]^2 =r-U^2=n\lambda x^{n-1} + \ldots -E^2,\label{rminus} \end{equation} If $n\neq1$, the algorithm gives $\omega=U$, and $\deg(P)=0$ with the additional condition that $E^2P=0$. In other words, for $E=0$ the solution is \begin{equation} \psi_1 = \exp{\int(m+V)\mathrm{d}x} = \frac{1}{\psi_2}.\label{sol1} \end{equation} If $n=1$, there is only the zeroth order term in \eqref{rminus} and for $E^2/(2\lambda)\in\mathbb{N}$ we have a whole family of solutions. This is the known case of the so-called Dirac oscillator solvable by Hermite polynomials \cite{Cooper}. For non-zero values of $E$ no other (non-constant) polynomial potential gives Liouvillian solutions. Thanks to the relations \eqref{corr}, we can see that the same holds for the vector coupling, with the appropriate interchange of $E$ and $m$. Namely, for a linear potential one has the Hermite solutions as above, and for $m=0$ the solution is \begin{equation} \psi_1 = \exp{\int i(E+V)\mathrm{d}x} = \frac{1}{\psi_2}.\label{sol2} \end{equation} The above results can be summarised as follows: \begin{theorem} The one-dimensional Dirac equation with a polynomial potential of degree $n>1$ only has Liouvillian solutions when $E=0$ (scalar coupling \eqref{scoup}) or $m=0$ (vector coupling \eqref{vcoup}), given by formulae \eqref{sol1} and \eqref{sol2} respectively. For $n=1$ the solutions are expressible by Hermite polynomials given in \cite{Cooper}, and $n=0$ is equivalent to the standard Dirac equation. \end{theorem} Two remarks are in order. First, note that in the special cases $m=0$ ($E=0$), the solutions given above hold for all potentials, not just polynomial ones as can be checked by direct differentiation. Second, the practical consequences of the outcome are that the spinor is not expressible as a polynomial function of position, as is often the case in solvable models of quantum mechanics. This still leaves the possibility of using new transcendental functions or series expansion approach, but precludes the direct (explicit) computation of norms or other general expressions involving $\psi$. \section{Acknowledgements} This paper was supported by grant No. N N202 2126 33 of Ministry of Science and Higher Education of Poland.
\section{Introduction}\label{sec:Intro} Previously, most of the works on quantum error-correcting codes were done with the assumption that the channel is symmetric. That is, the various error types were taken to be equiprobable. To be brief, the term \textit{quantum codes} or QECC is henceforth used to refer to quantum error-correcting codes. Recently, it has been established that, in many quantum mechanical systems, the phase-flip errors happen more frequently than the bit-flip errors or the combined bit-phase flip errors. For more details, ~\cite{SRK09} can be consulted. There is a need to design quantum codes that take advantage of this asymmetry in quantum channels. We call such codes \textit{asymmetric quantum codes}. We require the codes to correct many phase-flip errors but not necessarily the same number of bit-flip errors. In this paper we extend the construction of asymmetric quantum codes in~\cite{WFLX09} to include codes derived from classical additive codes under the trace Hermitian inner product. This work is organized as follows. In Section~\ref{sec:Prelims}, we state some basic definitions and properties of linear and additive codes. Section ~\ref{sec:QuantumCodes} provides an introduction to quantum error-correcting codes in general, differentiating the symmetric and the asymmetric cases. In Section~\ref{sec:AsymQECC}, a construction of asymmetric QECC based on additive codes is presented. The rest of the paper focuses on additive codes over ${\mathbb F}_{4}$. Section~\ref{sec:AdditiveF4} recalls briefly important known facts regarding these codes. A construction of asymmetric QECC from extremal or optimal self-dual additive codes is given in Section~\ref{sec:ExtremalSD}. A construction from Hermitian self-orthogonal ${\mathbb F}_{4}$-linear codes is the topic of Section~\ref{sec:HSelfOrtho}. Sections~\ref{sec:Cyclic} and~\ref{sec:BCHCodes} use nested ${\mathbb F}_{4}$-linear cyclic codes for lengths $n \leq 25$ and nested BCH codes for lengths $27 \leq n \leq 51$, respectively, in the construction. New or better asymmetric quantum codes constructed from nested additive codes over ${\mathbb F}_{4}$ are presented in Section~\ref{sec:nestedadditive}, exhibiting the gain of extending the construction to include additive codes. Section~\ref{sec:Conclusion} provides conclusions and some open problems. \section{Preliminaries}\label{sec:Prelims} Let $p$ be a prime and $q = p^f$ for some positive integer $f$. An $[n,k,d]_q$-linear code $C$ of length $n$, dimension $k$, and minimum distance $d$ is a subspace of dimension $k$ of the vector space ${\mathbb F}_q^n$ over the finite field ${\mathbb F}_q=GF(q)$ with $q$ elements. For a general, not necessarily linear, code $C$, the notation $(n,M=|C|,d)_q$ is commonly used. The \textit{Hamming weight} of a vector or a codeword ${\mathbf{v}}$ in a code $C$, denoted by $\mathop{{\rm wt}}_H({\mathbf{v}})$, is the number of its nonzero entries. Given two elements ${\mathbf{u}},{\mathbf{v}} \in C$, the number of positions where their respective entries disagree, written as $\mathop{{\rm dist}}_H({\mathbf{u}},{\mathbf{v}})$, is called the \textit{Hamming distance} of ${\mathbf{u}}$ and ${\mathbf{v}}$. For any code $C$, the \textit{minimum distance} $d = d(C)$ is given by $d = d(C) = \mathop{{\rm min}}\left\lbrace \mathop{{\rm dist}}_H({\mathbf{u}},{\mathbf{v}}): {\mathbf{u}},{\mathbf{v}} \in C,{\mathbf{u}} \neq {\mathbf{v}}\right\rbrace$. 
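As a small illustration of these definitions, the Hamming weight, the Hamming distance, and a brute-force minimum distance of a (not necessarily linear) code can be computed as follows (Python; the toy binary code $C$ is only an example and is unrelated to the constructions discussed later):
\begin{verbatim}
from itertools import combinations

def wt(v):
    # Hamming weight: number of nonzero entries of v.
    return sum(1 for a in v if a != 0)

def dist(u, v):
    # Hamming distance: number of positions in which u and v differ.
    return sum(1 for a, b in zip(u, v) if a != b)

def min_dist(C):
    # Minimum distance of a (not necessarily linear) code C, by brute force.
    return min(dist(u, v) for u, v in combinations(C, 2))

# Toy example: a binary [4,2] code with minimum distance 2.
C = [(0, 0, 0, 0), (1, 1, 1, 0), (0, 1, 1, 1), (1, 0, 0, 1)]
print(min_dist(C))                        # -> 2
print(min(wt(c) for c in C if any(c)))    # linear case: minimum nonzero weight, also 2
\end{verbatim}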
If $C$ is linear, then its closure property implies that $d(C)$ is given by the minimum Hamming weight of nonzero vectors in $C$. We follow~\cite{NRS06} in defining the following three families of codes according to their duality types. \begin{defn}\label{def1.1} Let $q=r^2=p^f$ be an even power of an arbitrary prime $p$ with $\overline{x}=x^{r}$ for $x \in {\mathbb F}_{q}$. Let $n$ be a positive integer and ${\mathbf{u}} = (u_1,\ldots,u_n), {\mathbf{v}} = (v_1,\ldots,v_n) \in {\mathbb F}_{q}^n$. \begin{enumerate} \item $\mathbf{q^{\mathop{{\rm H}}}}$ is the family of ${\mathbb F}_{q}$-linear codes of length $n$ with the \textit{Hermitian inner product} \begin{equation}\label{eq:1.1} \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm H}}} := \sum_{i=1}^{n} u_i \cdot v_i^{\sqrt{q}} \text{.} \end{equation} \item $\mathbf{q^{\mathop{{\rm H}}+}}$ \textbf{(even)} is the family of trace Hermitian codes over ${\mathbb F}_{q}$ of length $n$ which are ${\mathbb F}_{r}$-linear, where $r^2=q$ is even. The duality is defined according to the \textit{trace Hermitian inner product} \begin{equation}\label{eq:1.2} \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm tr}}} := \sum_{i=1}^{n} (u_i\cdot v_i^{\sqrt{q}} + u_i^{\sqrt{q}} \cdot v_i) \text{.} \end{equation} \item $\mathbf{q^{\mathop{{\rm H}}+}}$ \textbf{(odd)} is the family of trace Hermitian codes over ${\mathbb F}_{q}$ of length $n$ which are ${\mathbb F}_{r}$-linear, where $r^2=q$ is odd. The duality is defined according to the following inner product, which we will still call \textit{trace Hermitian inner product}, \begin{equation}\label{eq:1.2a} \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm tr}}} := \alpha \cdot \sum_{i=1}^{n} (u_i\cdot v_i^{\sqrt{q}} - u_i^{\sqrt{q}} \cdot v_i) \text{,} \end{equation} where $\alpha \in {\mathbb F}_{q} \setminus \left\lbrace 0 \right\rbrace$ with $\alpha^{r}= -\alpha$. \end{enumerate} \end{defn} \begin{defn}\label{def:1.1a} A code $C$ of length $n$ is said to be a \textit{(classical) additive code} if $C$ belongs to either the family $q^{\mathop{{\rm H}}+}$ (even) or to the family $q^{\mathop{{\rm H}}+}$ (odd). \end{defn} Let $C$ be a code. Under a chosen inner product $*$, the \textit{dual code} $C^{\perp_{*}}$ of $C$ is given by \begin{equation*} C^{\perp_{*}} := \left\lbrace {\mathbf{u}} \in {\mathbb F}_q^n : \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{*} = 0 \text{ for all } {\mathbf{v}} \in C \right\rbrace \text{.} \end{equation*} Accordingly, for a code $C$ in the family $(q^{\mathop{{\rm H}}})$, \begin{equation*} C^{\perp_{\mathop{{\rm H}}}} := \left\lbrace {\mathbf{u}} \in {\mathbb F}_q^n : \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm H}}} = 0 \text{ for all } {\mathbf{v}} \in C \right\rbrace \text{,} \end{equation*} and, for a code $C$ in the family $(q^{\mathop{{\rm H}}+})$ (even) or $(q^{\mathop{{\rm H}}+})$ (odd), \begin{equation*} C^{\perp_{\mathop{{\rm tr}}}} := \left\lbrace {\mathbf{u}} \in {\mathbb F}_q^n : \left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm tr}}} = 0 \text{ for all } {\mathbf{v}} \in C \right\rbrace \text{.} \end{equation*} A code is said to be \textit{self-orthogonal} if it is contained in its dual and is said to be \textit{self-dual} if its dual is itself. We say that a family of codes is \textit{closed} if $(C^{\perp_{*}})^{\perp_{*}} = C$ for each $C$ in that family. It has been established~\cite[Ch. 
3]{NRS06} that the three families of codes in Definition~\ref{def1.1} are closed. The weight distribution of a code and that of its dual are important in the studies of their properties. \begin{defn}\label{def1.2} The \textit{weight enumerator} $W_C(X,Y)$ of an $(n,M=|C|,d)_q$-code $C$ is the polynomial \begin{equation}\label{WE} W_C(X,Y)=\sum_{i=0}^n A_{i} X^{n-i}Y^{i} \text{,} \end{equation} where $A_{i}$ is the number of codewords of weight $i$ in the code $C$. \end{defn} The weight enumerator of the Hermitian dual code $C^{\perp_{\mathop{{\rm H}}}}$ of an $[n,k,d]_q$-code $C$ is connected to the weight enumerator of the code $C$ via the MacWilliams Equation \begin{equation}\label{eq:MacW} W_{C^{\perp_{\mathop{{\rm H}}}}} (X,Y)= \frac{1}{|C|} W_C(X+(q-1)Y,X-Y) \text{.} \end{equation} In the case of nonlinear codes, we can define a similar notion called the \textit{distance distribution}. The MacWilliams Equation can be generalized to the nonlinear cases as well (see~\cite[Ch. 5]{MS77}). From~\cite[Sect. 2.3]{NRS06} we know that the families $q^{\mathop{{\rm H}}+}$ (even) and $q^{\mathop{{\rm H}}+}$ (odd) have the same MacWilliams Equation as the family $q^{\mathop{{\rm H}}}$. Thus, Equation (\ref{eq:MacW}) applies to all three families. Classical codes are connected to many other combinatorial structures. One such structure is the orthogonal array. \begin{defn}\label{def1.3} Let $S$ be a set of $q$ symbols or levels. An orthogonal array $A$ with $M$ runs, $n$ factors, $q$ levels and strength $t$ with index $\lambda$, denoted by $OA(M,n,q,t)$, is an $M \times n$ array $A$ with entries from $S$ such that every $M \times t$ subarray of $A$ contains each $t$-tuple of $S^t$ exactly $\lambda = \frac{M}{q^t}$ times as a row. \end{defn} The parameter $\lambda$ is usually not written explicitly in the notation since its value depends on $M,q$ and $t$. The rows of an orthogonal array are distinct since the purpose of its construction is to minimize the number of runs in the experiment while keeping some required conditions satisfied. There is a natural correspondence between codes and orthogonal arrays. The codewords in a code $C$ can be seen as the rows of an orthogonal array $A$ and vice versa. The following proposition due to Delsarte (see~\cite[Th. 4.5]{Del73}) will be useful in the sequel. Note that the code $C$ in the proposition is a general code. No linearity is required. The duality here is defined over any inner product. For more on how the dual distance is defined for nonlinear codes, we refer to~\cite[Sec. 4.4]{HSS99}. \begin{prop}\cite[Th. 4.9]{HSS99}\label{OA} If $C$ is an $(n,M=|C|,d)_q$ code with dual distance $d^{\perp}$, then the corresponding orthogonal array is an $OA(M,n,q,d^{\perp}-1)$. Conversely, the code corresponding to an $OA(M,n,q,t)$ is an $(n,M,d)_q$ code with dual distance $d^{\perp} \geq t+1$. If the orthogonal array has strength $t$ but not $t+1$, then $d^{\perp}$ is precisely $t+1$. \end{prop} \section{Quantum Codes}\label{sec:QuantumCodes} We assume that the reader is familiar with the standard error model in quantum error-correction. The essentials can be found, for instance, in~\cite{AK01} and in~\cite{FLX06}. For convenience, some basic definitions and results are reproduced here. Let $\mathbb{C}$ be the field of complex numbers and $\eta=e^{\frac{2\pi \sqrt{-1}}{p}}\in \mathbb{C}$. We fix an orthonormal basis of $\mathbb{C}^{q}$ $$\left\{|v\rangle:v\in \mathbb{F}_{q}\right\}$$ with respect to the Hermitian inner product. 
For a positive integer $n$, let $V_{n}=(\mathbb{C}^{q})^{\otimes n }$ be the $n$-fold tensor product of $\mathbb{C}^{q}$. Then $V_{n}$ has the following orthonormal basis \begin{equation}\label{basis} \left\{|{\mathbf{c}}\rangle =|c_{1}c_{2}\ldots c_{n}\rangle : {\mathbf{c}}=(c_{1},\ldots,c_{n}) \in \mathbb{F}_{q}^n\right\} \text{,} \end{equation} where $|c_{1}c_{2}\ldots c_{n}\rangle$ abbreviates $|c_{1}\rangle\otimes|c_{2}\rangle\otimes\cdots \otimes |c_{n}\rangle$. For two quantum states $|{\mathbf{u}}\rangle$ and $|{\mathbf{v}}\rangle$ in $V_{n}$ with $$|{\mathbf{u}}\rangle=\sum\limits_{{\mathbf{c}}\in \mathbb{F}_{q}^{n}}\alpha({\mathbf{c}})|{\mathbf{c}}\rangle,\quad |{\mathbf{v}}\rangle =\sum\limits_{{\mathbf{c}}\in \mathbb{F}_{q}^{n}}\beta({\mathbf{c}})|{\mathbf{c}}\rangle \quad (\alpha({\mathbf{c}}),\beta({\mathbf{c}})\in \mathbb{C}),$$ the Hermitian inner product of $|{\mathbf{u}}\rangle$ and $|{\mathbf{v}}\rangle$ is $$\langle {\mathbf{u}}|{\mathbf{v}}\rangle=\sum\limits_{{\mathbf{c}}\in \mathbb{F}_{q}^{n}} \overline{\alpha({\mathbf{c}})}\beta({\mathbf{c}})\in \mathbb{C},$$ where $\overline{\alpha({\mathbf{c}})}$ is the complex conjugate of $\alpha({\mathbf{c}})$. We say $|{\mathbf{u}}\rangle$ and $|{\mathbf{v}}\rangle$ are \textit{orthogonal} if $\langle {\mathbf{u}}|{\mathbf{v}}\rangle=0$. A quantum error acting on $V_{n}$ is a unitary linear operator on $V_{n}$ and has the following form $$e=X({\mathbf{a}})Z({\mathbf{b}})$$ with ${\mathbf{a}}=(a_{1},\ldots,a_{n}),{\mathbf{b}}=(b_{1}, \ldots,b_{n})\in \mathbb{F}_{q}^{n}$. The action of $e$ on the basis (\ref{basis}) of $V_{n}$ is $$e|{\mathbf{c}}\rangle=X(a_{1})Z(b_{1})|c_{1}\rangle \otimes \ldots \otimes X(a_{n})Z(b_{n})|c_{n}\rangle, $$ where $$X(a_{i})|c_{i}\rangle=|a_{i}+c_{i}\rangle, \quad Z(b_{i})|c_{i}\rangle=\eta^{T(b_{i}c_{i})}|c_{i}\rangle$$ with $T:\;\mathbb{F}_{q}\to\mathbb{F}_{p}$ being the trace mapping $$T(\alpha)=\alpha+\alpha^{p}+\alpha^{p^{2}}+\ldots+\alpha^{p^{m-1}},$$ for $q=p^{m}$. Therefore, $$e|{\mathbf{c}}\rangle=\eta^{T({\mathbf{b}}\cdot {\mathbf{c}})}|{\mathbf{a}}+{\mathbf{c}}\rangle,$$ where ${\mathbf{b}}\cdot {\mathbf{c}}=\sum\limits_{i=1}^{n}b_{i}c_{i}\in \mathbb{F}_{q}$ is the usual inner product in $\mathbb{F}_{q}^n$. For $e=X({\mathbf{a}})Z({\mathbf{b}})$ and $e^{'}=X({\mathbf{a}}^{'})Z({\mathbf{b}}^{'})$ with ${\mathbf{a}},{\mathbf{b}}$, and ${\mathbf{a}}^{'},{\mathbf{b}}^{'}\in \mathbb{F}_{q}^{n}$, $$e e^{'}=\eta^{T({\mathbf{a}}\cdot {\mathbf{b}}^{'}-{\mathbf{a}}^{'}\cdot {\mathbf{b}})}e^{'} e.$$ Hence, the set $$E_{n}=\left\{\eta^{\lambda}X({\mathbf{a}})Z({\mathbf{b}}) | 0\leq \lambda\leq p-1, {\mathbf{a}},{\mathbf{b}}\in \mathbb{F}_{q}^{n} \right\}$$ forms a (nonabelian) group, called the \textit{error group} on $V_{n}$. \begin{defn}\label{def2.1} For a quantum error $e=\eta^{\lambda}X({\mathbf{a}})Z({\mathbf{b}})\in E_{n}$, we define the {\it quantum weight} $w_{Q}(e)$, the {\it $X$-weight} $w_{X}(e)$ and the {\it $Z$-weight} $w_{Z}(e)$ of $e$ by \[ \begin{array}{rcl} w_{Q}(e)&=&|\{i: 1\leq i\leq n, (a_{i},b_{i})\neq(0,0)\}| \text{,}\\ w_{X}(e)&=&|\{i: 1\leq i\leq n, a_{i}\neq0\}| \text{,}\\ w_{Z}(e)&=&|\{i: 1\leq i\leq n, b_{i}\neq0\}| \text{.} \end{array} \] \end{defn} Thus, $w_{Q}(e)$ is the number of qudits where the action of $e$ is nontrivial by $X(a_{i})Z(b_{i})\not=I$ (identity) while $w_{X}(e)$ and $w_{Z}(e)$ are, respectively, the numbers of qudits where the $X$-action and the $Z$-action of $e$ are nontrivial. 
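Since an error $e=\eta^{\lambda}X({\mathbf{a}})Z({\mathbf{b}})$ is determined, up to the phase $\eta^{\lambda}$, by the pair $({\mathbf{a}},{\mathbf{b}})$, all three weights can be read off directly from these two vectors. The following Python sketch makes this explicit; it is an illustration only, with ${\mathbf{a}}$ and ${\mathbf{b}}$ given as tuples of field elements and $0$ denoting the zero element.
\begin{verbatim}
# Weights of a quantum error e = X(a)Z(b); the phase does not matter.
# The vectors a and b are tuples of field elements, 0 being the zero.

def error_weights(a, b):
    """Return (w_Q(e), w_X(e), w_Z(e)) for e = X(a)Z(b)."""
    assert len(a) == len(b)
    w_q = sum(1 for ai, bi in zip(a, b) if (ai, bi) != (0, 0))
    w_x = sum(1 for ai in a if ai != 0)
    w_z = sum(1 for bi in b if bi != 0)
    return w_q, w_x, w_z

# Example with n = 4 qudits: X acts on positions 1 and 3, Z acts on
# positions 3 and 4, so w_Q = 3, w_X = 2 and w_Z = 2.
a = (1, 0, 1, 0)
b = (0, 0, 1, 1)
assert error_weights(a, b) == (3, 2, 2)
\end{verbatim}
In particular, $\mathop{{\rm max}}\left\lbrace w_{X}(e),w_{Z}(e)\right\rbrace \leq w_{Q}(e)\leq w_{X}(e)+w_{Z}(e)$, which is essentially the observation behind Remark~\ref{rem2.3} below.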
We are now ready to define the distinction between symmetric and asymmetric quantum codes. \begin{defn}\label{def2.2} A \textit{$q$-ary quantum code} of length $n$ is a subspace $Q$ of $V_{n}$ with dimension $K\geq1$. A quantum code $Q$ of dimension $K\geq2$ is said to detect $d-1$ qudits of errors for $d\geq1$ if, for every orthogonal pair $|{\mathbf{u}}\rangle$, $|{\mathbf{v}}\rangle$ in $Q$ with $\langle {\mathbf{u}}|{\mathbf{v}} \rangle=0$ and every $e \in E_{n}$ with $w_{Q}(e)\leq d-1$, $|{\mathbf{u}}\rangle$ and $e|{\mathbf{v}}\rangle$ are orthogonal. In this case, we call $Q$ a \textit{symmetric} quantum code with parameters $((n,K,d))_{q}$ or $[[n,k,d]]_{q}$, where $k=\log_{q}K$. Such a quantum code is called \textit{pure} if $\langle {\mathbf{u}}|e|{\mathbf{v}} \rangle=0$ for any $|{\mathbf{u}}\rangle$ and $|{\mathbf{v}}\rangle$ in $Q$ and any $e \in E_{n}$ with $1\leq w_{Q}(e)\leq d-1$. A quantum code $Q$ with $K=1$ is assumed to be pure. Let $d_{x}$ and $d_{z}$ be positive integers. A quantum code $Q$ in $V_{n}$ with dimension $K\geq2$ is called an \textit{asymmetric quantum code} with parameters $((n,K,d_{z}/d_{x}))_{q}$ or $[[n,k,d_{z}/d_{x}]]_{q}$, where $k=\log_{q}K$, if $Q$ detects $d_{x}-1$ qudits of $X$-errors and, at the same time, $d_{z}-1$ qudits of $Z$-errors. That is, if $\langle {\mathbf{u}}|{\mathbf{v}} \rangle=0$ for $|{\mathbf{u}}\rangle,|{\mathbf{v}}\rangle\in Q$, then $\langle {\mathbf{u}}|e|{\mathbf{v}} \rangle=0$ for any $e \in E_{n}$ such that $w_{X}(e)\leq d_{x}-1$ and $w_{Z}(e)\leq d_{z}-1$. Such an asymmetric quantum code $Q$ is called \textit{pure} if $\langle {\mathbf{u}}|e|{\mathbf{v}} \rangle=0$ for any $|{\mathbf{u}}\rangle,|{\mathbf{v}}\rangle\in Q$ and $e \in E_{n}$ such that $1 \leq w_{X}(e)\leq d_{x}-1$ and $1 \leq w_{Z}(e)\leq d_{z}-1$. An asymmetric quantum code $Q$ with $K=1$ is assumed to be pure. \end{defn} \begin{rem}\label{rem2.3} An asymmetric quantum code with parameters $((n,K,d/d))_{q}$ is a symmetric quantum code with parameters $((n,K,d))_{q}$, but the converse is not true since, for $e\in E_{n}$ with $w_{X}(e)\leq d-1$ and $w_{Z}(e)\leq d-1$, the weight $w_{Q}(e)$ may be bigger than $d-1$. \end{rem} Given any two codes $C$ and $D$, let the notation $\mathop{{\rm wt}}(C \setminus D)$ denote $\mathop{{\rm min}}\left\lbrace \mathop{{\rm wt}}_{H}({\mathbf{u}} (\neq {\mathbf{0}})) : {\mathbf{u}}\in(C \setminus D)\right\rbrace$. The analogue of the well-known CSS construction (see~\cite{CRSS98}) for the asymmetric case is known. \begin{prop}\cite[Lemma 3.1]{SRK09}\label{prop2.4} Let $C_x,C_z$ be linear codes over ${\mathbb F}_q^n$ with parameters $[n,k_x]_q$, and $[n,k_z]_q$ respectively. Let $C_x^\perp \subseteq C_z$. Then there exists an $[[n,k_x +k_z-n,d_{z} /d_{x}]]_q$ asymmetric quantum code, where $d_x =\mathop{{\rm wt}}(C_x \setminus C_z^\perp)$ and $d_z =\mathop{{\rm wt}}(C_z \setminus C_x^\perp)$. \end{prop} The resulting code is said to be \textit{pure} if, in the above construction, $d_x=d(C_x)$ and $d_z=d(C_z)$. \section{Asymmetric QECC from Additive Codes}\label{sec:AsymQECC} The following result has been established recently: \begin{thm}\cite[Th. 
3.1]{WFLX09}\label{thm:3.1} \begin{enumerate} \item There exists an asymmetric quantum code with parameters $((n,K,d_z/d_x))_q$ with $K \geq 2$ if and only if there exist $K$ nonzero mappings \begin{equation}\label{eq:3.1} \varphi_i : {\mathbb F}_q^n \rightarrow \mathbb{C} \text{ for } 1\leq i \leq K \end{equation} satisfying the following conditions: for each $d$ such that $1 \leq d \leq \mathop{{\rm min}}\left\lbrace d_x,d_z\right\rbrace $ and partition of $\left\lbrace 1,2,\ldots,n\right\rbrace $, \begin{equation}\label{eq:3.2} \begin{cases} \left\lbrace 1,2,\ldots,n \right\rbrace = A \cup X \cup Z \cup B \text{,} \\ |A| = d-1,\quad |B| = n+d-d_x-d_z+1 \text{,}\\ |X|=d_x - d,\quad |Z| = d_z - d \text{,} \end{cases} \end{equation} and each ${\mathbf{c}}_A,{\mathbf{c}}_A' \in {\mathbb F}_q^{|A|}$, ${\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|}$ and ${\mathbf{a}}_X \in {\mathbb F}_q^{|X|}$, we have the equality \begin{multline}\label{eq:3.3} \sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|} \text{,}\\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_j({\mathbf{c}}_A',{\mathbf{c}}_X - {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \\ = \begin{cases} 0 &\text{for $i \neq j$,} \\ I({\mathbf{c}}_A,{\mathbf{c}}_A',{\mathbf{c}}_Z,{\mathbf{a}}_X) &\text{for $i = j$,} \end{cases} \end{multline} where $I({\mathbf{c}}_A,{\mathbf{c}}_A',{\mathbf{c}}_Z,{\mathbf{a}}_X)$ is an element of $\mathbb{C}$ which is independent of $i$. The notation $({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)$ represents the rearrangement of the entries of the vector ${\mathbf{c}} \in {\mathbb F}_q^n$ according to the partition of $\left\lbrace 1,2,\ldots,n\right\rbrace $ given in Equation (\ref{eq:3.2}). \item Let $(\varphi_i,\varphi_j)$ stand for $\sum_{{\mathbf{c}}\in {\mathbb F}_q^n} \overline{\varphi_i({\mathbf{c}})}\varphi_j({\mathbf{c}})$. There exists a pure asymmetric quantum code with parameters $((n,K\geq1,d_z/d_x))_q$ if and only if there exist $K$ nonzero mappings $\varphi_i$ as shown in Equation (\ref{eq:3.1}) such that \begin{itemize} \item $\varphi_i$ are linearly independent for $1\leq i\leq K$, i.e., the rank of the $K \times q^n$ matrix $(\varphi_i ({\mathbf{c}}))_{1\leq i\leq K, {\mathbf{c}} \in {\mathbb F}_q^n}$ is $K$; and \item for each $d$ with $1 \leq d \leq \mathop{{\rm min}}\left\lbrace d_x,d_z\right\rbrace $, a partition in Equation (\ref{eq:3.2}) and ${\mathbf{c}}_A,{\mathbf{a}}_A \in {\mathbb F}_q^{|A|}, {\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|}$ and ${\mathbf{a}}_X \in {\mathbb F}_q^{|X|}$, we have the equality \end{itemize} \end{enumerate} \begin{multline}\label{eq:3.4} \sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|}, \\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_j({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \\ = \begin{cases} 0 &\text{for $({\mathbf{a}}_A,{\mathbf{a}}_X) \neq ({\mathbf{0}},{\mathbf{0}})$,} \\ \frac{(\varphi_i,\varphi_j)}{q^{d_z-1}} &\text{for $({\mathbf{a}}_A,{\mathbf{a}}_X) = ({\mathbf{0}},{\mathbf{0}})$.} \end{cases} \end{multline} \end{thm} The following result is due to Keqin Feng and Long Wang. It has, however, never appeared formally in a published form before. Since it will be needed in the sequel, we present it here with a proof. \begin{prop}(K.~Feng and L.~Wang)\label{prop:3.2} Let $a,b$ be positive integers. 
There exists an asymmetric quantum code $Q$ with parameters $((n,K,a/b))_q$ if and only if there exists an asymmetric quantum code $Q'$ with parameters $((n,K,b/a))_q$. $Q'$ is pure if and only if $Q$ is pure. \end{prop} \begin{proof} We begin by assuming the existence of an $((n,K,a/b))_q$ asymmetric quantum code $Q$. Let $\varphi_{i}$ with $1 \leq i \leq K$ be the $K$ mappings given in Theorem~\ref{thm:3.1}. Define the following $K$ mappings \begin{equation}\label{eq:3.5} \begin{aligned} \Phi_i :\quad &{\mathbb F}_q^n \rightarrow \mathbb{C} \text{ for } 1\leq i \leq K \\ &{\mathbf{v}} \mapsto \sum_{{\mathbf{c}} \in {\mathbb F}_{q}^{n}} \varphi_{i}({\mathbf{c}}) \eta^{T({\mathbf{c}} \cdot {\mathbf{v}})}. \end{aligned} \end{equation} Let ${\mathbf{v}}_{A},{\mathbf{b}}_{A} \in {\mathbb F}_{q}^{|A|}, {\mathbf{v}}_{X} \in {\mathbb F}_{q}^{|X|}$, and ${\mathbf{b}}_{Z} \in {\mathbb F}_{q}^{|Z|}$. For each $d$ such that $1 \leq d \leq \mathop{{\rm min}}\left\lbrace d_x,d_z\right\rbrace $ and a partition of $\left\lbrace 1,2,\ldots,n \right\rbrace $ given in Equation (\ref{eq:3.2}), we show that \begin{multline}\label{eq:3.6} S = \sum_{\substack{ {\mathbf{v}}_Z \in {\mathbb F}_q^{|Z|} \text{,}\\ {\mathbf{v}}_B \in {\mathbb F}_q^{|B|}}} \overline{\Phi_i ({\mathbf{v}})} \Phi_j({\mathbf{v}}_A+{\mathbf{b}}_A,{\mathbf{v}}_X,{\mathbf{v}}_Z+{\mathbf{b}}_Z,{\mathbf{v}}_B) \\ = \begin{cases} 0 &\text{for $i \neq j$,} \\ I'({\mathbf{v}}_A,{\mathbf{b}}_A,{\mathbf{b}}_Z,{\mathbf{v}}_X) &\text{for $i = j$,} \end{cases} \end{multline} where $I'({\mathbf{v}}_A,{\mathbf{b}}_A,{\mathbf{b}}_Z,{\mathbf{v}}_X)$ is an element of $\mathbb{C}$ which is independent of $i$. Let ${\mathbf{t}}=({\mathbf{v}}_A+{\mathbf{b}}_A,{\mathbf{v}}_X,{\mathbf{v}}_Z+{\mathbf{b}}_Z,{\mathbf{v}}_B)$. 
Applying Equation (\ref{eq:3.5}) yields \begin{equation}\label{eq:3.7} S = \sum_{\substack{ {\mathbf{v}}_Z \in {\mathbb F}_q^{|Z|} \text{,}\\ {\mathbf{v}}_B \in {\mathbb F}_q^{|B|}}} \sum_{{\mathbf{c}},{\mathbf{d}} \in {\mathbb F}_{q}^n} \overline{\varphi_i ({\mathbf{c}})} \varphi_j ({\mathbf{d}}) \eta^{T((-{\mathbf{c}} \cdot {\mathbf{v}})+({\mathbf{d}} \cdot {\mathbf{t}}))} \text{.} \end{equation} By carefully rearranging the summations and grouping the terms, we get \begin{equation}\label{eq:3.8} S = \sum_{{\mathbf{c}},{\mathbf{d}} \in {\mathbb F}_{q}^n} \overline{\varphi_i ({\mathbf{c}})} \varphi_j ({\mathbf{d}}) \cdot \kappa \cdot \lambda \text{,} \end{equation} where \begin{align*} \kappa &= \eta^{T({\mathbf{v}}_A \cdot({\mathbf{d}}_A-{\mathbf{c}}_A)+{\mathbf{v}}_X \cdot({\mathbf{d}}_X-{\mathbf{c}}_X)+ {\mathbf{d}}_A \cdot {\mathbf{b}}_A+{\mathbf{d}}_Z \cdot {\mathbf{b}}_Z)} \text{,} \\ \lambda &= \sum_{\substack{ {\mathbf{v}}_Z \in {\mathbb F}_q^{|Z|} \text{,}\\ {\mathbf{v}}_B \in {\mathbb F}_q^{|B|}}} \eta^{T({\mathbf{v}}_B \cdot({\mathbf{d}}_B-{\mathbf{c}}_B)+{\mathbf{v}}_Z \cdot({\mathbf{d}}_Z-{\mathbf{c}}_Z))} \text{.} \end{align*} By orthogonality of characters, \begin{equation*} \lambda = \begin{cases} q^{|Z|+|B|} & \text{if }{\mathbf{d}}_B = {\mathbf{c}}_B \text{ and } {\mathbf{d}}_Z= {\mathbf{c}}_Z \text{,} \\ 0 & \text{otherwise.} \end{cases} \end{equation*} Therefore, \begin{equation}\label{eq:3.9} S=\sum_{\substack{{\mathbf{c}} \in {\mathbb F}_{q}^n \\ {\mathbf{d}}_A \in {\mathbb F}_q^{|A|} \text{,}{\mathbf{d}}_X \in {\mathbb F}_q^{|X|}}} q^{|Z|+|B|} \cdot \overline{\varphi_i ({\mathbf{c}})} \varphi_j ({\mathbf{d}}_A,{\mathbf{d}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \cdot \pi \text{,} \end{equation} where \begin{equation*} \pi = \eta^{T({\mathbf{v}}_A \cdot({\mathbf{d}}_A-{\mathbf{c}}_A)+{\mathbf{v}}_X \cdot({\mathbf{d}}_X-{\mathbf{c}}_X)+{\mathbf{d}}_A \cdot {\mathbf{b}}_A+{\mathbf{c}}_Z \cdot {\mathbf{b}}_Z)}. \end{equation*} Now, we let $k=n-d_x+1$, ${\mathbf{a}}_A={\mathbf{d}}_A-{\mathbf{c}}_A$, and ${\mathbf{a}}_X={\mathbf{d}}_X-{\mathbf{c}}_X$. Splitting up the summation once again yields \begin{multline}\label{eq:3.10} S=q^k \sum_{\substack{{\mathbf{c}}_A,{\mathbf{a}}_A \in {\mathbb F}_{q}^{|A|} \\ {\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|} \text{,}{\mathbf{a}}_X \in {\mathbb F}_q^{|X|}}} \eta^{T({\mathbf{v}}_A \cdot {\mathbf{a}}_A+{\mathbf{v}}_X \cdot{\mathbf{a}}_X+{\mathbf{b}}_A \cdot ({\mathbf{c}}_A + {\mathbf{a}}_A) +{\mathbf{c}}_Z \cdot {\mathbf{b}}_Z)} \\ \cdot \sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|} \text{,}\\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_j({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \text{.} \end{multline} Invoking Equation (\ref{eq:3.3}) concludes the proof for the first part with $I'$ given by \begin{equation}\label{eq:3.11} I'=q^k I \sum_{\substack{{\mathbf{c}}_A,{\mathbf{a}}_A \in {\mathbb F}_{q}^{|A|} \\ {\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|} \text{,}{\mathbf{a}}_X \in {\mathbb F}_q^{|X|}}} \eta^{T({\mathbf{v}}_A \cdot {\mathbf{a}}_A+{\mathbf{v}}_X \cdot{\mathbf{a}}_X+{\mathbf{b}}_A \cdot ({\mathbf{c}}_A + {\mathbf{a}}_A) +{\mathbf{c}}_Z \cdot {\mathbf{b}}_Z)} \text{.} \end{equation} For the second part, let us assume the existence of a pure $((n,K,a/b))_q$ asymmetric quantum code $Q$. Note that the Fourier transformations $\Phi_i$ for $1 \leq i\leq K$ are linearly independent. 
We use Equations (\ref{eq:3.10}) and (\ref{eq:3.4}) to establish the equality \begin{multline}\label{eq:3.12} S = \sum_{\substack{ {\mathbf{v}}_Z \in {\mathbb F}_q^{|Z|} \text{,}\\ {\mathbf{v}}_B \in {\mathbb F}_q^{|B|}}} \overline{\Phi_i ({\mathbf{v}})} \Phi_j({\mathbf{v}}_A+{\mathbf{b}}_A,{\mathbf{v}}_X,{\mathbf{v}}_Z+{\mathbf{b}}_Z,{\mathbf{v}}_B) \\ = \begin{cases} 0 &\text{for $({\mathbf{b}}_A,{\mathbf{b}}_Z) \neq ({\mathbf{0}},{\mathbf{0}})$,} \\ q^{n} \frac{(\varphi_i,\varphi_j)}{q^{d_x-1}} &\text{for $({\mathbf{b}}_A,{\mathbf{b}}_Z) = ({\mathbf{0}},{\mathbf{0}})$.} \end{cases} \end{multline} Consider the term \begin{equation*} M:=\sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|} \text{,}\\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}})} \varphi_j({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \end{equation*} in Equation (\ref{eq:3.10}). By the purity assumption, for $({\mathbf{a}}_A,{\mathbf{a}}_X) \neq ({\mathbf{0}},{\mathbf{0}})$, $M=0$. For $({\mathbf{a}}_A,{\mathbf{a}}_X)=({\mathbf{0}},{\mathbf{0}})$, $M=\frac{(\varphi_i,\varphi_j)}{q^{d_{z}-1}}$. Hence, \begin{equation}\label{eq:3.13} S=q^k \sum_{\substack{{\mathbf{c}}_A \in {\mathbb F}_{q}^{|A|} \text{,}\\ {\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|}}} \eta^{T({\mathbf{b}}_A \cdot {\mathbf{c}}_A + {\mathbf{b}}_Z \cdot {\mathbf{c}}_Z)} \cdot \frac{(\varphi_i,\varphi_j)}{q^{d_z-1}} \text{.} \end{equation} By orthogonality of characters, if $({\mathbf{b}}_A,{\mathbf{b}}_Z) \neq ({\mathbf{0}},{\mathbf{0}})$, then \begin{equation*} \sum_{\substack{{\mathbf{c}}_A \in {\mathbb F}_{q}^{|A|} \text{,}\\ {\mathbf{c}}_Z \in {\mathbb F}_q^{|Z|}}} \eta^{T({\mathbf{b}}_A \cdot {\mathbf{c}}_A + {\mathbf{b}}_Z \cdot {\mathbf{c}}_Z)} = 0 \text{,} \end{equation*} making $S=0$. If $({\mathbf{b}}_A,{\mathbf{b}}_Z)=({\mathbf{0}},{\mathbf{0}})$, then \begin{equation*} S = q^k \cdot q^{|A|+|Z|} \cdot \frac{(\varphi_i,\varphi_j)}{q^{d_z-1}} \text{.} \end{equation*} This completes the proof of the second part. \end{proof} With this result, without loss of generality, $d_z \geq d_x$ is henceforth assumed. \begin{rem}\label{rem3.3} If we examine closely the proof of Theorem~\ref{thm:3.1} above as presented in Theorem 3.1 of~\cite{WFLX09}, only the additive property (instead of linearity) is used. We will show that the conclusion of the theorem with an adjusted value for $K$ still follows if we use \underline{classical additive codes} instead of linear codes. \end{rem} \begin{thm}\label{thm:3.4} Let $d_x$ and $d_z$ be positive integers. Let $C$ be a classical additive code in ${\mathbb F}_q^n$. Assume that $d^{\perp_{\mathop{{\rm tr}}}}= d(C^{\perp_{\mathop{{\rm tr}}}})$ is the minimum distance of the dual code $C^{\perp_{\mathop{{\rm tr}}}}$ of $C$ under the trace Hermitian inner product. For a set $V:=\left\lbrace {\mathbf{v}}_i : 1\leq i \leq K\right\rbrace $ of $K$ distinct vectors in ${\mathbb F}_q^n$, let $d_v:=\mathop{{\rm min}}\left\lbrace \mathop{{\rm wt}}_{H}({\mathbf{v}}_i - {\mathbf{v}}_j + {\mathbf{c}}) : 1 \leq i \neq j \leq K, {\mathbf{c}} \in C\right\rbrace$. If $d^{\perp_{\mathop{{\rm tr}}}} \geq d_z$ and $d_v \geq d_x$, then there exists an asymmetric quantum code $Q$ with parameters $((n,K,d_z/d_x))_q$. 
\end{thm} \begin{proof} Define the following functions \begin{equation}\label{eq:3.14} \begin{aligned} \varphi_i :\quad &{\mathbb F}_q^n \rightarrow \mathbb{C} \text{ for } 1\leq i \leq K \\ &{\mathbf{u}} \mapsto \begin{cases} 1 &\text{if ${\mathbf{u}} \in {\mathbf{v}}_i+C$,} \\ 0 &\text{if ${\mathbf{u}} \not \in {\mathbf{v}}_i+C$.} \end{cases} \end{aligned} \end{equation} For each $d$ such that $1 \leq d \leq \mathop{{\rm min}}\left\lbrace d_x,d_z\right\rbrace $ and a partition of $\left\lbrace 1,2,\ldots,n \right\rbrace $ given in Equation (\ref{eq:3.2}), \begin{equation*} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_j({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \neq 0 \end{equation*} if and only if \begin{equation*} \begin{cases} ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) &\in {\mathbf{v}}_i+C \text{,} \\ ({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X +{\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) &\in {\mathbf{v}}_j+C \text{,} \end{cases} \end{equation*} which, in turn, is equivalent to \begin{equation}\label{eq:3.15} \begin{cases} ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) &\in {\mathbf{v}}_i+C \text{,} \\ ({\mathbf{a}}_A,{\mathbf{a}}_X,{\mathbf{0}}_Z,{\mathbf{0}}_B) &\in {\mathbf{v}}_j-{\mathbf{v}}_i+C \text{.} \end{cases} \end{equation} Note that since $\textnormal{{\rm wt}}_{H}({\mathbf{a}}_A,{\mathbf{a}}_X,{\mathbf{0}}_Z,{\mathbf{0}}_B) \leq |A|+|X| = d_x-1$, we know that $({\mathbf{a}}_A,{\mathbf{a}}_X,{\mathbf{0}}_Z,{\mathbf{0}}_B) \in {\mathbf{v}}_j-{\mathbf{v}}_i+C$ means $i=j$ by the definition of $d_v$ above. Thus, if $i \neq j$, \begin{equation}\label{eq:3.16} \sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|} \\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_j({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) = 0 \text{.} \end{equation} Now, consider the case of $i = j$. By Equation (\ref{eq:3.15}), if $({\mathbf{a}}_A,{\mathbf{a}}_X,{\mathbf{0}}_Z,{\mathbf{0}}_B) \not \in C$, then it has no contribution to the sum we are interested in. If $({\mathbf{a}}_A,{\mathbf{a}}_X,{\mathbf{0}}_Z,{\mathbf{0}}_B) \in C$, then \begin{multline}\label{eq:3.17} \sum_{\substack{ {\mathbf{c}}_X \in {\mathbb F}_q^{|X|} \\ {\mathbf{c}}_B \in {\mathbb F}_q^{|B|}}} \overline{\varphi_i ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B)} \varphi_i({\mathbf{c}}_A+{\mathbf{a}}_A,{\mathbf{c}}_X + {\mathbf{a}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \\ = \sum_{\begin{subarray}{c} {\mathbf{c}}_X \in {\mathbb F}_q^{|X|}, {\mathbf{c}}_B \in {\mathbb F}_q^{|B|} \\ ({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \in {\mathbf{v}}_i+C \end{subarray}} 1 \text{.} \end{multline} Proposition~\ref{OA} above tells us that, if $C$ is any classical $q$-ary code of length $n$ and size $M$ such that the minimum distance $d^{\perp}$ of its dual is greater than or equal to $d_z$, then any coset of $C$ is an orthogonal array of level $q$ and of strength exactly $d_z-1$. In other words, there are exactly $\frac{|C|}{q^{d_z-1}}$ vectors $({\mathbf{c}}_A,{\mathbf{c}}_X,{\mathbf{c}}_Z,{\mathbf{c}}_B) \in {\mathbf{v}}_i+C$ for any fixed $({\mathbf{c}}_A,{\mathbf{c}}_Z) \in {\mathbb F}_q^{d_z-1}$. Thus, for $i = j$, the sum on the right hand side of Equation (\ref{eq:3.17}) is $\frac{|C|}{q^{d_z-1}}$, which is independent of $i$. 
By Theorem~\ref{thm:3.1} we have an asymmetric quantum code $Q$ with parameters $((n,K,d_z/d_x))_q$. \end{proof} \begin{thm}\label{thm:3.5} Let $q=r^2$ be an even power of a prime $p$. For $i = 1,2$, let $C_i$ be a classical additive code with parameters $(n,K_i,d_i)_q$. If $C_1^{\perp_{\mathop{{\rm tr}}}} \subseteq C_2$, then there exists an asymmetric quantum code $Q$ with parameters $((n,\frac{|C_2|}{|C_1^{\perp_{\mathop{{\rm tr}}}}|},d_z/d_x))_q$ where $\left\lbrace d_z,d_x\right\rbrace = \left\lbrace d_1,d_2\right\rbrace$. \end{thm} \begin{proof} We take $C = C_1^{\perp_{\mathop{{\rm tr}}}}$ in Theorem~\ref{thm:3.4} above. Since $C_1^{\perp_{\mathop{{\rm tr}}}} \subseteq C_2$, we have $C_2 = C_1^{\perp_{\mathop{{\rm tr}}}} \oplus C'$, where $C'$ is an ${\mathbb F}_r$-submodule of $C_2$ and $\oplus$ is the direct sum so that $|C'|=\frac{|C_2|}{|C_1^{\perp_{\mathop{{\rm tr}}}}|}$. Let $C' = \left\lbrace {\mathbf{v}}_1,\ldots,{\mathbf{v}}_K\right\rbrace$, where $K = \frac{|C_2|}{|C_1^{\perp_{\mathop{{\rm tr}}}}|}$. Then \begin{align*} d^{\perp_{\mathop{{\rm tr}}}} &= d(C^{\perp_{\mathop{{\rm tr}}}}) = d(C_1) = d_1 \text{ and} \\ d_v &= \mathop{{\rm min}}\left\lbrace \textnormal{{\rm wt}}_{H}({\mathbf{v}}_i - {\mathbf{v}}_j + {\mathbf{c}}) : 1 \leq i \neq j \leq K, {\mathbf{c}} \in C \right\rbrace \\ &= \mathop{{\rm min}} \left\lbrace \textnormal{{\rm wt}}_{H}({\mathbf{v}} + {\mathbf{c}}) : {\mathbf{0}} \neq {\mathbf{v}} \in C', {\mathbf{c}} \in C_1^{\perp_{\mathop{{\rm tr}}}} \right\rbrace \geq d_2 \text{.} \end{align*} \end{proof} Theorem~\ref{thm:3.5} can now be used to construct quantum codes. In this paper, all computations are done in MAGMA~\cite{BCP97} version V2.16-5. The construction method of Theorem~\ref{thm:3.5} falls into what some have labelled the \textit{CSS-type construction}. It is noted in~\cite[Lemma 3.3]{SRK09} that any CSS-type ${\mathbb F}_{q}$-linear $[[n,k,d_{z}/d_{x}]]_{q}$-code satisfies the quantum version of the Singleton bound \begin{equation*} k \leq n-d_{x}-d_{z}+2 \text{.} \end{equation*} This bound is conjectured to hold for all asymmetric quantum codes. Some of our codes in later sections attain $k = n-d_{x}-d_{z}+2$. They are printed in boldface throughout the tables and examples. \section{Additive Codes over ${\mathbb F}_4$}\label{sec:AdditiveF4} Let ${\mathbb F}_4:=\left\lbrace 0,1,\omega,\omega^{2}=\overline{\omega}\right\rbrace $. For $x \in {\mathbb F}_{4}$, $\overline{x}=x^{2}$, the conjugate of $x$. By definition, an additive code $C$ of length $n$ over ${\mathbb F}_{4}$ is a free ${\mathbb F}_{2}$-module. It has size $2^{l}$ for some $0 \leq l \leq 2n$. As an ${\mathbb F}_{2}$-module, $C$ has a basis consisting of $l$ basis vectors. A \textit{generator matrix} of $C$ is an $l \times n$ matrix with entries elements of ${\mathbb F}_{4}$ whose rows form a basis of $C$. Additive codes over ${\mathbb F}_4$ equipped with the trace Hermitian inner product have been studied primarily in connection to designs (e.g.~\cite{KP03}) and to stabilizer quantum codes (e.g.~\cite{GHKP01} and~\cite[Sec. 9.10]{HP03}). It is well known that if $C$ is an additive $(n,2^l)_4$-code, then $C^{\perp_{\mathop{{\rm tr}}}}$ is an additive $(n,2^{2n-l})_4$-code. 
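Over ${\mathbb F}_{4}$, the trace Hermitian inner product of Equation (\ref{eq:1.2}) reads $\left\langle {\mathbf{u}},{\mathbf{v}}\right\rangle _{\mathop{{\rm tr}}} = \sum_{i=1}^{n}(u_{i}v_{i}^{2}+u_{i}^{2}v_{i})$, and its value always lies in ${\mathbb F}_{2}$. The following Python sketch is a small illustration of this inner product, independent of the MAGMA computations used in this paper; the encoding of $0,1,\omega,\omega^{2}$ as the integers $0,1,2,3$ is ours.
\begin{verbatim}
# GF(4) = {0, 1, w, w^2} encoded as the integers 0, 1, 2, 3.
# Addition is bitwise XOR; MUL is the multiplication table and
# CONJ is conjugation x -> x^2.

MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]
CONJ = [0, 1, 3, 2]

def tr_hermitian(u, v):
    """<u, v>_tr = sum_i (u_i*v_i^2 + u_i^2*v_i), a value in {0, 1}."""
    total = 0
    for a, b in zip(u, v):
        total ^= MUL[a][CONJ[b]] ^ MUL[CONJ[a]][b]
    return total

# Every vector is orthogonal to itself, since x*x^2 + x^2*x = 0 in
# characteristic 2; this is why the rows of the identity matrix span a
# self-orthogonal (in fact self-dual) additive code of any length.
u = (2, 3, 1, 0)   # (w, w^2, 1, 0)
v = (1, 1, 2, 3)
assert tr_hermitian(u, u) == 0
assert tr_hermitian(u, v) in (0, 1)
\end{verbatim}
Since the form is biadditive over ${\mathbb F}_{2}$, checking self-orthogonality of an additive code amounts to checking that this value is $0$ for all pairs of rows of a generator matrix.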
To compute the weight enumerator of $C^{\perp_{\mathop{{\rm tr}}}}$ we use Equation (\ref{eq:MacW}) with $q=4$ \begin{equation}\label{eq:4.1} W_{C^{\perp_{\mathop{{\rm tr}}}}}(X,Y) = \frac{1}{|C|} W_C(X+3Y,X-Y) \text{.} \end{equation} \begin{rem}\label{rem:4.1} If the code $C$ is ${\mathbb F}_4$-linear with parameters $[n,k,d]_4$, then $C^{\perp_{\mathop{{\rm H}}}} = C^{\perp_{\mathop{{\rm tr}}}}$. This is because $C^{\perp_{\mathop{{\rm H}}}} \subseteq C^{\perp_{\mathop{{\rm tr}}}}$ by the definitions of the two inner products, while $C^{\perp_{\mathop{{\rm H}}}}$ is of size $4^{n-k} = 2^{2n-2k}$, which is also the size of $C^{\perp_{\mathop{{\rm tr}}}}$. Alternatively, one can invoke~\cite[Th. 3]{CRSS98}. \end{rem} From here on, we assume the trace Hermitian inner product whenever additive ${\mathbb F}_{4}$ codes are discussed and the Hermitian inner product whenever ${\mathbb F}_{4}$-linear codes are used. Two additive codes $C_1$ and $C_2$ over ${\mathbb F}_{4}$ are said to be \textit{equivalent} if there is a map sending the codewords of one code onto the codewords of the other, where the map consists of a permutation of coordinates, followed by a scaling of coordinates by nonzero elements of ${\mathbb F}_{4}$, followed by a conjugation of the entries of some of the coordinates. \section{Construction from Extremal or Optimal Additive Self-Dual Codes over ${\mathbb F}_4$} \label{sec:ExtremalSD} As a direct consequence of Theorem~\ref{thm:3.5}, we have the following result. \begin{thm}\label{thm:5.1} If $C$ is an additive self-dual code of parameters $(n,2^n,d)_4$, then there exists an $[[n,0,d_z/d_x]]_4$ asymmetric quantum code $Q$ with $d_z = d_x = d(C^{\perp_{\mathop{{\rm tr}}}})$. \end{thm} Additive self-dual codes over ${\mathbb F}_{4}$ exist for any length $n$ since the identity matrix $I_{n}$ clearly generates a self-dual $(n,2^{n},1)_{4}$-code. Any linear self-dual $[n,n/2,d]_{4}$-code is also an additive self-dual $(n,2^{n},d)_{4}$-code. \begin{defn}\label{defn:5.2} A self-dual $(n,2^n,d)_4$-code $C$ is \textit{Type II} if all of its codewords have even weight. If $C$ has a codeword of odd weight, then $C$ is \textit{Type I}. \end{defn} It is known (see~\cite[Sec. 4.2]{RS98}) that Type II codes of length $n$ exist only if $n$ is even and that a Type I code is not ${\mathbb F}_4$-linear. There is a bound in~\cite[Th. 33]{RS98} on the minimum weight of an additive self-dual code. If $d_{I}$ and $d_{II}$ are the minimum weights of Type I and Type II codes of length $n$, respectively, then \begin{equation} \begin{aligned} d_{I} &\leq \begin{cases} 2 \lfloor \frac{n}{6} \rfloor + 1 \text{,} \hfil \text{ if } n \equiv 0 \pmod{6} \\ 2 \lfloor \frac{n}{6} \rfloor + 3 \text{,} \hfil \text{ if } n \equiv 5 \pmod{6} \\ 2 \lfloor \frac{n}{6} \rfloor + 2 \text{,} \hfil \text{otherwise} \end{cases} \text{,}\\ d_{II} &\leq 2 \lfloor \frac{n}{6} \rfloor + 2 \text{.} \end{aligned} \end{equation} A code that meets the appropriate bound is called \textit{extremal}. If a code is not extremal yet no code of the given type can exist with a larger minimum weight, then we call the code \textit{optimal}. The complete classification, up to equivalence, of additive self-dual codes over ${\mathbb F}_{4}$ up to $n=12$ can be found in~\cite{DP06}. The classification of extremal codes of lengths $n=13$ and $n=14$ is presented in~\cite{Var07}. Many examples of good additive codes for larger values of $n$ are presented in~\cite{GK04},~\cite{Var07}, and~\cite{Var09}. Table~\ref{table:SDCodes} summarizes the results thus far and lists the resulting asymmetric quantum codes for lengths up to $n=30$.
The subscripts $_{I}$ and $_{II}$ indicates the types of the codes. The superscripts $^{e},^{o},^{b}$ indicate the fact that the minimum distance $d$ is extremal, optimal, and best-known (not necessarily extremal or optimal), respectively. The number of codes for each set of given parameters is listed in the column under the heading \textbf{num}. \begin{table*} \caption{Best-Known Additive Self-Dual Codes over ${\mathbb F}_{4}$ for $n \leq 30$ and the Resulting Asymmetric Quantum Codes} \label{table:SDCodes} \centering \begin{tabular}{|| c | c | c | c | l | c | c | c | l || c | c | c | c | l ||} \hline \textbf{$n$} & \textbf{$d_{I}$} &\textbf{num$_{I}$} & \textbf{Ref.} & \textbf{Code $Q$} & \textbf{$d_{II}$} &\textbf{num$_{II}$} & \textbf{Ref.} & \textbf{Code $Q$} & \textbf{$n$} & \textbf{$d_{I}$} &\textbf{num$_{I}$} & \textbf{Ref.} & \textbf{Code $Q$} \\ \hline $2$ & $1^{o}$ & 1 & \cite{DP06} & $[[2,0,1/1]]_{4}$ & $2^{e}$ & 1 & \cite{DP06} & $\mathbf{[[2,0,2/2]]_{4}}$ & $3$ & $2^{e}$ & 1 & \cite{DP06} & $[[3,0,2/2]]_{4}$ \\ $4$ & $2^{e}$ & 1 & \cite{DP06} & $[[4,0,2/2]]_{4}$ & $2^{e}$ & 2 & \cite{DP06} & $[[4,0,2/2]]_{4}$ & $5$ & $3^{e}$ & 1 & \cite{DP06} & $[[5,0,3/3]]_{4}$ \\ $6$ & $3^{e}$ & 1 & \cite{DP06} & $[[6,0,3/3]]_{4}$ & $4^{e}$ & 1 & \cite{DP06} & $\mathbf{[[6,0,4/4]]_{4}}$ & $7$ & $3^{o}$ & 4 & \cite{DP06} & $[[7,0,3/3]]_{4}$ \\ $8$ & $4^{e}$ & 2 & \cite{DP06} & $[[8,0,4/4]]_{4}$ & $4^{e}$ & 3 & \cite{DP06} & $[[8,0,4/4]]_{4}$ & $9$ & $4^{e}$ & 8 & \cite{DP06} & $[[9,0,4/4]]_{4}$ \\ $10$ & $4^{e}$ & 101 & \cite{DP06} & $[[10,0,4/4]]_{4}$ & $4^{e}$ & 19 & \cite{DP06} & $[[10,0,4/4]]_{4}$ & $11$ & $5^{e}$ & 1 & \cite{DP06} & $[[11,0,5/5]]_{4}$\\ $12$ & $5^{e}$ & 63 & \cite{DP06} & $[[12,0,5/5]]_{4}$ & $6^{e}$ & 1 & \cite{DP06} & $[[12,0,6/6]]_{4}$ & $13$ & $5^{o}$ & 85845 & \cite{Var07} & $[[13,0,5/5]]_{4}$ \\ $14$ & $6^{e}$ & 2 & \cite{Var07} & $[[14,0,6/6]]_{4}$ & $6^{e}$ & 1020 & \cite{DP06} & $[[14,0,6/6]]_{4}$ & $15$ & $6^{e}$ & $\geq 2118$ & \cite{Var07} & $[[15,0,6/6]]_{4}$ \\ $16$ & $6^{e}$ & $\geq 8369$ & \cite{Var07} & $[[16,0,6/6]]_{4}$ & $6^{e}$ & $\geq 112$ & \cite{Var07} & $[[16,0,6/6]]_{4}$ & $17$ & $7^{e}$ & $\geq 2$ & \cite{Var07} & $[[17,0,7/7]]_{4}$ \\ $18$ & $7^{e}$ & $\geq 2$ & \cite{Var07} & $[[18,0,7/7]]_{4}$ & $8^{e}$ & $\geq 1$ & \cite{Var07} & $[[18,0,8/8]]_{4}$ & $19$ & $7^{b}$ & $\geq 17$ & \cite{Var07} & $[[19,0,7/7]]_{4}$ \\ $20$ & $8^{e}$ & $\geq 3$ & \cite{GK04} & $[[20,0,8/8]]_{4}$ & $8^{e}$ & $\geq 5$ & \cite{GK04} & $[[20,0,8/8]]_{4}$ & $21$ & $8^{e}$ & $\geq 2$ & \cite{Var07} & $[[21,0,8/8]]_{4}$ \\ $22$ & $8^{e}$ & $\geq 1$ & \cite{GK04} & $[[22,0,8/8]]_{4}$ & $8^{e}$ & $\geq 67$ & \cite{GK04} & $[[22,0,8/8]]_{4}$ & $23$ & $8^{b}$ & $\geq 2$ & \cite{GK04} & $[[23,0,8/8]]_{4}$ \\ $24$ & $8^{b}$ & $\geq 5$ & \cite{Var09} & $[[24,0,8/8]]_{4}$ & $8^{b}$ & $\geq 51$ & \cite{GK04} & $[[24,0,8/8]]_{4}$ & $25$ & $8^{b}$ & $\geq 30$ & \cite{GK04} & $[[25,0,8/8]]_{4}$ \\ $26$ & $8^{b}$ & $\geq 49$ & \cite{Var09} & $[[26,0,8/8]]_{4}$ & $8^{b}$ & $\geq 161$ & \cite{Var09} & $[[26,0,8/8]]_{4}$ & $27$ & $8^{b}$ & $\geq 15$ & \cite{GK04} & $[[27,0,9/9]]_{4}$ \\ $28$ & $10$ & ? & \cite{GK04} & & $10^{e}$ & $\geq 1$ & \cite{GK04} & $[[28,0,10/10]]_{4}$ & $29$ & $11^{e}$ & $\geq 1$ & \cite{GK04} & $[[29,0,11/11]]_{4}$ \\ $30$ & $11$ & ? 
& \cite{GK04} & & $12^{e}$ & $\geq 1$ & \cite{GK04} & $[[30,0,12/12]]_{4}$ & & & & & \\ \hline \end{tabular} \end{table*} \begin{rem}\label{rem:5.3} \begin{enumerate} \item The unique additive $(12,2^{12},6)_4$-code is also known as \textit{dodecacode}. It is well known that the best Hermitian self-dual linear code is of parameters $[12,6,4]_4$. \item In~\cite{Var09}, four so-called \textit{additive circulant graph codes} of parameters $(30,2^{30},12)_4$ are constructed without classification. It is yet unknown if any of these four codes is inequivalent to the one listed in~\cite{GK04}. \end{enumerate} \end{rem} \section{Construction from Self-Orthogonal Linear Codes} \label{sec:HSelfOrtho} It is well known (see~\cite[Th. 1.4.10]{HP03}) that a linear code $C$ having the parameters $[n,k,d]_4$ is Hermitian self-orthogonal if and only if the weights of its codewords are all even. \begin{thm}\label{thm:6.1} If $C$ is a Hermitian self-orthogonal code of parameters $[n,k,d]_4$, then there exists an asymmetric quantum code $Q$ with parameters $[[n,n-2k,d_z/d_x]]_4$, where \begin{equation}\label{eq:4.2} d_x = d_z = d(C^{\perp_{\mathop{{\rm H}}}})\text{.} \end{equation} \end{thm} \begin{proof} Seen as an additive code, $C$ is of parameters $(n,2^{2k},d)_4$ with $C^{\perp_{\mathop{{\rm tr}}}}$ being the code $C^{\perp_{\mathop{{\rm H}}}}$ seen as an $(n,2^{2n-2k},d^{\perp_{\mathop{{\rm tr}}}})$ additive code (see Remark~\ref{rem:4.1}). Applying Theorem~\ref{thm:3.5} by taking $C_1 = C^{\perp_{\mathop{{\rm tr}}}} = C_2$ to satisfy $C_1^{\perp_{\mathop{{\rm tr}}}} \subseteq C_2$ completes the proof. \end{proof} \begin{ex}\label{ex:4.3} Let $n$ be an even positive integer. Consider the repetition code $[n,1,n]_4$ with weight enumerator $1+3Y^n$. Since the weights are all even, this ${\mathbb F}_{4}$-linear code is Hermitian self-orthogonal. We then have a quantum code $Q$ with parameters $\mathbf{[[n,n-2,2/2]]_{4}}$. \end{ex} Table~\ref{table:ClassSO} below presents the resulting asymmetric quantum codes based on the classification of self-orthogonal ${\mathbb F}_{4}$-linear codes of length up to 29 and of dimensions 3 up to 6 as presented in~\cite{BO05}. I. Bouyukliev~\cite{Bou09} shared with us the original data used in the said classification plus some additional results for lengths 30 and 31. Given fixed length $n$ and dimension $k$, we only consider $[n,k,d]_4$-codes $C$ with maximal possible value for the minimum distances of their duals. For example, among 12 self-orthogonal $[10,4,4]_4$-codes, there are 4 distinct codes with $d^{\perp_{\mathop{{\rm H}}}}=3$ while the remaining 8 codes have $d^{\perp_{\mathop{{\rm H}}}}=2$. We take only the first 4 codes. The number of distinct codes that can be used for the construction of the asymmetric quantum codes for each set of given parameters is listed in the fourth column of the table. 
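The even-weight criterion quoted at the beginning of this section also gives a convenient computational test. The following Python sketch is a toy illustration (independent of the MAGMA computations reported in this paper, with our own integer encoding of ${\mathbb F}_{4}$); it enumerates an ${\mathbb F}_{4}$-linear code from a generator matrix and tests whether all codeword weights are even, using the repetition codes of lengths $6$ and $5$ (cf. Example~\ref{ex:4.3}).
\begin{verbatim}
# A toy test of the even-weight criterion for Hermitian self-orthogonality
# of an F4-linear code.  GF(4) = {0, 1, w, w^2} is encoded as 0, 1, 2, 3;
# addition is XOR and MUL is the multiplication table.

from itertools import product

MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

def span_f4(gens):
    """All F4-linear combinations of the generator rows."""
    n = len(gens[0])
    code = set()
    for coeffs in product(range(4), repeat=len(gens)):
        word = [0] * n
        for c, g in zip(coeffs, gens):
            word = [w ^ MUL[c][x] for w, x in zip(word, g)]
        code.add(tuple(word))
    return code

def all_weights_even(code):
    return all(sum(1 for x in w if x != 0) % 2 == 0 for w in code)

# The [6,1,6]_4 repetition code (even length) is Hermitian
# self-orthogonal: all of its codeword weights are even.
rep6 = span_f4([(1,) * 6])
assert len(rep6) == 4 and all_weights_even(rep6)

# The [5,1,5]_4 repetition code has odd-weight codewords, hence it is
# not Hermitian self-orthogonal.
rep5 = span_f4([(1,) * 5])
assert not all_weights_even(rep5)
\end{verbatim}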
\begin{table} \caption{Asymmetric QECC from Classified Hermitian Self-Orthogonal ${\mathbb F}_{4}$-Linear Codes in~\cite{BO05}} \label{table:ClassSO} \centering \begin{tabular}{| c | l | l | c | l |} \hline \textbf{No.} & \textbf{Code $C$} &\textbf{Code $Q$} & \textbf{num} \\ \hline 1 & $[6,3,4]_4$ & $[[6,0,3/3]]_4$ & $1$ \\ 2 & $[7,3,4]_4$ & $[[7,1,3/3]]_4$ & $1$ \\ 3 & $[8,3,4]_4$ & $[[8,2,3/3]]_4$ & $1$ \\ 4 & $[8,4,4]_4$ & $[[8,0,4/4]]_4$ & $1$ \\ 5 & $[9,3,6]_4$ & $[[9,3,3/3]]_4$ & $1$ \\ 6 & $[9,4,4]_4$ & $[[9,1,3/3]]_4$ & $2$ \\ 7 & $[10,3,6]_4$ & $[[10,4,3/3]]_4$ & $1$ \\ 8 & $[10,4,4]_4$ & $[[10,2,3/3]]_4$ & $3$ \\ 9 & $[10,5,4]_4$ & $[[10,0,4/4]]_4$ & $2$ \\ 10 & $[11,3,6]_4$ & $[[11,5,3/3]]_4$ & $1$ \\ 11 & $[11,4,4]_4$ & $[[11,3,3/3]]_4$ & $3$ \\ 12 & $[11,4,6]_4$ & $[[11,3,3/3]]_4$ & $1$ \\ 13 & $[11,5,4]_4$ & $[[11,1,3/3]]_4$ & $6$ \\ 14 & $[12,3,8]_4$ & $[[12,6,3/3]]_4$ & $1$ \\ 15 & $[12,4,6]_4$ & $[[12,4,4/4]]_4$ & $1$ \\ 16 & $[12,5,6]_4$ & $[[12,2,4/4]]_4$ & $1$ \\ 17 & $[12,6,4]_4$ & $[[12,0,4/4]]_4$ & $5$ \\ 18 & $[13,3,8]_4$ & $[[13,7,3/3]]_4$ & $1$ \\ 19 & $[13,4,8]_4$ & $[[13,5,3/3]]_4$ & $5$ \\ 20 & $[13,5,6]_4$ & $[[13,3,4/4]]_4$ & $1$ \\ 21 & $[13,6,6]_4$ & $[[13,1,5/5]]_4$ & $1$ \\ 22 & $[14,3,10]_4$ & $[[14,8,3/3]]_4$ & $1$ \\ 23 & $[14,4,8]_4$ & $[[14,6,4/4]]_4$ & $1$ \\ 24 & $[14,5,8]_4$ & $[[14,4,4/4]]_4$ & $4$ \\ 25 & $[14,6,6]_4$ & $[[14,2,5/5]]_4$ & $1$ \\ 26 & $[14,7,6]_4$ & $[[14,0,6/6]]_4$ & $1$ \\ 27 & $[15,3,10]_4$ & $[[15,9,3/3]]_4$ & $1$ \\ 28 & $[15,4,8]_4$ & $[[15,7,3/3]]_4$ & $189$ \\ 29 & $[15,5,8]_4$ & $[[15,5,4/4]]_4$ & $26$ \\ 30 & $[15,6,8]_4$ & $[[15,3,5/5]]_4$ & $3$ \\ 31 & $[16,3,12]_4$ & $[[16,10,3/3]]_4$ & $1$ \\ 32 & $[16,4,10]_4$ & $[[16,8,3/3]]_4$ & $38$ \\ 33 & $[16,5,8]_4$ & $[[16,6,4/4]]_4$ & $519$ \\ 34 & $[16,6,8]_4$ & $[[16,4,4/4]]_4$ & $697$ \\ 35 & $[17,3,12]_4$ & $[[17,11,2/2]]_4$ & $4$ \\ 36 & $[17,4,12]_4$ & $[[17,9,4/4]]_4$ & $1$ \\ 37 & $[17,5,10]_4$ & $[[17,7,4/4]]_4$ & $27$ \\ 38 & $[18,3,12]_4$ & $[[18,12,2/2]]_4$ & $45$ \\ 39 & $[18,4,12]_4$ & $[[18,10,3/3]]_4$ & $11$ \\ 40 & $[18,6,10]_4$ & $[[18,6,5/5]]_4$ & $2$ \\ 41 & $[19,3,12]_4$ & $[[19,13,2/2]]_4$ & $185$ \\ 42 & $[19,4,12]_4$ & $[[19,11,3/3]]_4$ & $2570$ \\ 43 & $[20,3,14]_4$ & $[[20,14,2/2]]_4$ & $10$ \\ 44 & $[20,5,12]_4$ & $[[20,10,3/3]]_4$ & $4$ \\ 45 & $[21,3,16]_4$ & $[[21,15,3/3]]_4$ & $1$ \\ 46 & $[21,4,14]_4$ & $[[21,13,3/3]]_4$ & $212$ \\ 47 & $[21,5,12]_4$ & $[[21,11,3/3]]_4$ & $3$ \\ 48 & $[22,3,16]_4$ & $[[22,16,3/3]]_4$ & $4$ \\ 49 & $[22,5,14]_4$ & $[[22,12,4/4]]_4$ & $67$ \\ 50 & $[23,3,16]_4$ & $[[23,17,2/2]]_4$ & $46$ \\ 51 & $[23,4,16]_4$ & $[[23,15,3/3]]_4$ & $1$ \\ 52 & $[24,3,16]_4$ & $[[24,18,2/2]]_4$ & $614$ \\ 53 & $[24,4,16]_4$ & $[[24,16,3/3]]_4$ & $20456$ \\ 54 & $[25,3,18]_4$ & $[[25,19,2/2]]_4$ & $6$ \\ 55 & $[25,4,16]_4$ & $[[25,17,3/3]]_4$ & $19$ \\ 56 & $[26,3,18]_4$ & $[[26,20,2/2]]_4$ & $185$ \\ 57 & $[26,4,18]_4$ & $[[26,18,3/3]]_4$ & $14$ \\ 58 & $[27,3,20]_4$ & $[[27,21,2/2]]_4$ & $2$ \\ 59 & $[28,3,20]_4$ & $[[28,22,2/2]]_4$ & $46$ \\ 60 & $[28,4,20]_4$ & $[[28,20,3/3]]_4$ & $1$ \\ 61 & $[29,3,20]_4$ & $[[29,23,2/2]]_4$ & $850$ \\ 62 & $[29,4,20]_4$ & $[[29,21,3/3]]_4$ & $11365$ \\ 63 & $[30,5,20]_4$ & $[[30,20,3/3]]_4$ & $\geq90$ \\ 64 & $[31,4,22]_4$ & $[[31,23,3/3]]_4$ & $1$ \\ \hline \end{tabular} \end{table} Comparing some entries in Table~\ref{table:ClassSO}, say, numbers 5 and 6, we notice that the $[[9,3,3/3]]_4$-code has better parameters than the $[[9,1,3/3]]_4$-code does. 
Both codes are included in the table in the interest of preserving the information on precisely how many of such codes there are from the classification result. In~\cite[Table 7]{GG09}, examples of ${\mathbb F}_{4}$-linear self-dual codes for even lengths $2 \leq n \leq 80$ are presented. Table~\ref{table:ClassSD} lists down the resulting asymmetric quantum codes for $32 \leq n \leq 80$. \begin{table} \caption{Asymmetric QECC from Hermitian Self-Dual ${\mathbb F}_{4}$-Linear Codes based on~\cite[Table 7]{GG09} for $32 \leq n \leq 80$} \label{table:ClassSD} \centering \begin{tabular}{| l | l || l | l |} \hline \textbf{Code $C$} &\textbf{Code $Q$} & \textbf{Code $C$} &\textbf{Code $Q$}\\ \hline $[32,16,10]_4$ & $[[32,0,10/10]]_4$ & $[58,29,14]_4$ & $[[58,0,14/14]]_4$\\ $[34,17,10]_4$ & $[[34,0,10/10]]_4$ & $[60,30,16]_4$ & $[[60,0,16/16]]_4$\\ $[36,18,12]_4$ & $[[36,0,12/12]]_4$ & $[62,31,18]_4$ & $[[62,0,18/18]]_4$\\ $[38,19,12]_4$ & $[[38,0,12/12]]_4$ & $[64,32,16]_4$ & $[[64,0,16/16]]_4$\\ $[40,20,12]_4$ & $[[40,0,12/12]]_4$ & $[66,33,16]_4$ & $[[66,0,16/16]]_4$\\ $[42,21,12]_4$ & $[[42,0,12/12]]_4$ & $[68,34,18]_4$ & $[[68,0,18/18]]_4$\\ $[44,22,12]_4$ & $[[44,0,12/12]]_4$ & $[70,35,18]_4$ & $[[70,0,18/18]]_4$\\ $[46,23,14]_4$ & $[[46,0,14/14]]_4$ & $[72,36,18]_4$ & $[[72,0,18/18]]_4$\\ $[48,24,14]_4$ & $[[48,0,14/14]]_4$ & $[74,37,18]_4$ & $[[74,0,18/18]]_4$\\ $[50,25,14]_4$ & $[[50,0,14/14]]_4$ & $[76,38,18]_4$ & $[[76,0,18/18]]_4$\\ $[52,26,14]_4$ & $[[52,0,14/14]]_4$ & $[78,39,18]_4$ & $[[78,0,18/18]]_4$\\ $[54,27,16]_4$ & $[[54,0,16/16]]_4$ & $[80,40,20]_4$ & $[[80,0,20/20]]_4$\\ $[56,28,14]_4$ & $[[56,0,14/14]]_4$ & & \\ \hline \end{tabular} \end{table} For parameters other than those listed in Table~\ref{table:ClassSO}, we do not have complete classification just yet. The Q-extension program described in~\cite{Bou07} can be used to extend the classification effort given sufficient resources. Some classifications based on the optimality of the minimum distances of the codes can be found in~\cite{BGV04} and in~\cite{ZLL09}, although when used in the construction of asymmetric quantum codes using our framework, they do not yield good $d_{z} = d_{x}$ relative to the length $n$. Many other ${\mathbb F}_{4}$-linear self-orthogonal codes are known. Examples can be found in~\cite[Table II]{CRSS98}, \cite{MLZF07}, as well as from the list of best known linear codes (BKLC) over ${\mathbb F}_4$ as explained in~\cite{Gr09}. Table~\ref{table:RandomSO} presents more examples of asymmetric quantum codes constructed based on known self-orthogonal linear codes up to length $n=40$. The list of codes in Table~\ref{table:RandomSO} is by no means exhaustive. It may be possible to find asymmetric codes with better parameters. 
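When screening further codes, the bookkeeping behind the tables is simple: Theorem~\ref{thm:6.1} turns a Hermitian self-orthogonal $[n,k]_4$-code whose Hermitian dual has minimum distance $d^{\perp_{\mathop{{\rm H}}}}$ into an $[[n,n-2k,d^{\perp_{\mathop{{\rm H}}}}/d^{\perp_{\mathop{{\rm H}}}}]]_4$-code, and an entry is printed in boldface exactly when $k_{Q}=n-d_{x}-d_{z}+2$. The following Python sketch records this bookkeeping; it is an illustration only and uses entries that already appear above rather than any new search.
\begin{verbatim}
# Bookkeeping for Theorem 6.1: a Hermitian self-orthogonal [n,k]_4 code
# whose Hermitian dual has minimum distance d_perp yields an
# [[n, n-2k, d_perp/d_perp]]_4 code; boldfaced entries meet
# k_Q = n - d_x - d_z + 2.  The sample entries below already appear in
# the text; no new search is performed here.

def derived_parameters(n, k, d_perp):
    """Return ((n, k_Q, d_z, d_x), meets_singleton_type_bound)."""
    k_q, d_z, d_x = n - 2 * k, d_perp, d_perp
    return (n, k_q, d_z, d_x), k_q == n - d_x - d_z + 2

# The length-6 repetition code [6,1,6]_4 has Hermitian dual distance 2
# and gives the boldfaced [[6,4,2/2]]_4 code, which meets the bound.
params, tight = derived_parameters(6, 1, 2)
assert params == (6, 4, 2, 2) and tight

# A self-orthogonal [6,3,4]_4 code with Hermitian dual distance 3 gives
# [[6,0,3/3]]_4 (the first entry of the classified table), which does
# not meet the bound.
params, tight = derived_parameters(6, 3, 3)
assert params == (6, 0, 3, 3) and not tight
\end{verbatim}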
\begin{table} \caption{Asymmetric QECC from Hermitian Self-orthogonal ${\mathbb F}_{4}$-Linear Codes for $n \leq 40$} \label{table:RandomSO} \centering \begin{tabular}{| c | l | l | l |} \hline \textbf{No.} & \textbf{Code $C$} & \textbf{Code $Q$} & \textbf{Ref.} \\ \hline 1 & $[5,2,4]_4$ & $\mathbf{[[5,1,3/3]]_4}$ & \cite[BKLC]{Gr09}\\ 2 & $[6,2,2]_4$ & $[[6,2,2/2]]_4$ & \cite{Bou09}\\ 3 & $[8,2,6]_4$ & $[[8,4,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 4 & $[10,2,8]_4$ & $[[10,6,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 5 & $[14,7,6]_4$ & $[[14,0,6/6]]_4$ & \cite[Table 7]{GG09}\\ 6 & $[15,2,12]_4$ & $[[15,11,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 7 & $[16,8,6]_4$ & $[[16,0,6/6]]_4$ & \cite[Table 7]{GG09}\\ 8 & $[20,2,16]_4$ & $[[20,16,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 9 & $[20,5,12]_4$ & $[[20,10,4/4]]_4$ & \cite[Table II]{CRSS98}\\ 10 & $[20,9,8]_4$ & $[[20,2,6/6]]_4$ & \cite[p.~788]{MLZF07}\\ 11 & $[20,10,8]_4$ & $[[20,0,8/8]]_4$ & \cite[Table 7]{GG09}\\ 12 & $[22,8,10]_4$ & $[[22,6,5/5]]_4$ & \cite{LM09}\\ 13 & $[22,10,8]_4$ & $[[22,2,6/6]]_4$ & \cite{LM09}\\ 14 & $[22,11,8]_4$ & $[[22,0,8/8]]_4$ & \cite[Table 7]{GG09}\\ 15 & $[23,8,10]_4$ & $[[23,7,5/5]]_4$ & \cite{LM09}\\ 16 & $[23,8,12]_4$ & $[[23,7,5/5]]_4$ & \cite[BKLC]{Gr09}\\ 17 & $[23,10,8]_4$ & $[[23,3,6/6]]_4$ & \cite{LM09}\\ 18 & $[24,5,16]_4$ & $[[24,14,3/3]]_4$ & \cite[BKLC]{Gr09}\\ 19 & $[24,8,10]_4$ & $[[24,8,5/5]]_4$ & \cite{LM09}\\ 20 & $[24,9,12]_4$ & $[[24,6,6/6]]_4$ & \cite[BKLC]{Gr09}\\ 21 & $[24,12,8]_4$ & $[[24,0,8/8]]_4$ & \cite[Table 7]{GG09}\\ 22 & $[25,2,20]_4$ & $[[25,21,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 23 & $[25,5,16]_4$ & $[[25,15,4/4]]_4$ & \cite[Table II]{CRSS98}\\ 24 & $[25,8,10]_4$ & $[[25,9,5/5]]_4$ & \cite{LM09}\\ 25 & $[25,10,12]_4$ & $[[25,5,7/7]]_4$ & \cite{LM09}\\ 26 & $[26,2,20]_4$ & $[[26,22,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 27 & $[26,6,16]_4$ & $[[26,14,4/4]]_4$ & \cite[BKLC]{Gr09}\\ 28 & $[26,9,10]_4$ & $[[26,8,5/5]]_4$ & \cite{LM09}\\ 29 & $[26,10,10]_4$ & $[[26,6,6/6]]_4$ & \cite{LM09}\\ 30 & $[26,11,12]_4$ & $[[26,4,8/8]]_4$ & \cite[BKLC]{Gr09}\\ 31 & $[27,6,16]_4$ & $[[27,15,3/3]]_4$ & \cite[BKLC]{Gr09}\\ 32 & $[27,9,10]_4$ & $[[27,9,5/5]]_4$ & \cite{LM09}\\ 33 & $[27,10,10]_4$ & $[[27,7,6/6]]_4$ & \cite{LM09}\\ 34 & $[27,12,12]_4$ & $[[27,3,9/9]]_4$ & \cite[BKLC]{Gr09}\\ 35 & $[28,7,16]_4$ & $[[28,14,5/5]]_4$ & \cite[Table II]{CRSS98}\\ 36 & $[28,8,12]_4$ & $[[28,12,5/5]]_4$ & \cite{LM09}\\ 37 & $[28,10,10]_4$ & $[[28,8,6/6]]_4$ & \cite{LM09}\\ 38 & $[28,13,12]_4$ & $[[28,2,10/10]]_4$ & \cite[BKLC]{Gr09}\\ 39 & $[29,8,16]_4$ & $[[29,13,5/5]]_4$ & \cite[BKLC]{Gr09}\\ 40 & $[29,11,10]_4$ & $[[29,7,6/6]]_4$ & \cite{LM09}\\ 41 & $[29,14,12]_4$ & $[[29,1,11/11]]_4$ & \cite[BKLC]{Gr09}\\ 42 & $[30,2,24]_4$ & $[[30,26,2/2]]_4$ & \cite[BKLC]{Gr09}\\ 43 & $[30,5,20]_4$ & $[[30,20,4/4]]_4$ & \cite[Table 13.2]{NRS06}\\ 44 & $[30,9,12]_4$ & $[[30,12,5/5]]_4$ & \cite{LM09}\\ 45 & $[30,11,10]_4$ & $[[30,8,6/6]]_4$ & \cite{LM09}\\ 46 & $[30,15,12]_4$ & $[[30,0,12/12]]_4$ & \cite[Table 7]{GG09}\\ 47 & $[31,9,16]_4$ & $[[31,13,6/6]]_4$ & \cite[BKLC]{Gr09}\\ 48 & $[32,6,20]_4$ & $[[32,20,4/4]]_4$ & \cite[Table 3]{GO98}\\ 49 & $[32,9,14]_4$ & $[[32,14,5/5]]_4$ & \cite{LM09}\\ 50 & $[32,11,12]_4$ & $[[32,10,6/6]]_4$ & \cite{LM09}\\ 51 & $[33,7,20]_4$ & $[[33,19,4/4]]_4$ & \cite[BKLC]{Gr09}\\ 52 & $[33,9,14]_4$ & $[[33,15,5/5]]_4$ & \cite{LM09}\\ 53 & $[33,10,16]_4$ & $[[33,13,6/6]]_4$ & \cite[BKLC]{Gr09}\\ 54 & $[33,12,14]_4$ & $[[33,9,7/7]]_4$ & \cite[BKLC]{Gr09}\\ 55 & $[33,15,12]_4$ & $[[33,3,9/9]]_4$ & \cite[BKLC]{Gr09}\\ 56 & $[34,9,18]_4$ & 
$[[34,16,6/6]]_4$ & \cite{LM09}\\ 57 & $[34,13,14]_4$ & $[[34,8,8/8]]_4$ & \cite[BKLC]{Gr09}\\ 58 & $[34,16,12]_4$ & $[[34,2,10/10]]_4$ & \cite[BKLC]{Gr09}\\ 59 & $[35,5,24]_4$ & $[[35,25,3/3]]_4$ & \cite[BKLC]{Gr09}\\ 60 & $[35,8,20]_4$ & $[[35,19,5/5]]_4$ & \cite[BKLC]{Gr09}\\ 61 & $[35,11,14]_4$ & $[[35,13,6/6]]_4$ & \cite{LM09}\\ 62 & $[35,17,12]_4$ & $[[35,1,11/11]]_4$ & \cite[BKLC]{Gr09}\\ 63 & $[36,9,16]_4$ & $[[36,18,4/4]]_4$ & \cite{LM09}\\ 64 & $[36,11,14]_4$ & $[[36,14,6/6]]_4$ & \cite{LM09}\\ 65 & $[37,9,20]_4$ & $[[37,19,5/5]]_4$ & \cite[BKLC]{Gr09}\\ 66 & $[37,18,12]_4$ & $[[37,1,11/11]]_4$ & \cite[BKLC]{Gr09}\\ 67 & $[38,6,24]_4$ & $[[38,26,3/3]]_4$ & \cite[BKLC]{Gr09}\\ 68 & $[38,11,18]_4$ & $[[38,16,6/6]]_4$ & \cite[BKLC]{Gr09}\\ 69 & $[39,12,18]_4$ & $[[39,15,7/7]]_4$ & \cite[BKLC]{Gr09}\\ 70 & $[39,4,28]_4$ & $[[39,31,3/3]]_4$ & \cite[BKLC]{Gr09}\\ 71 & $[39,7,24]_4$ & $[[39,25,4/4]]_4$ & \cite[BKLC]{Gr09}\\ 72 & $[40,5,28]_4$ & $[[40,30,4/4]]_4$ & \cite[Table II]{CRSS98}\\ 73 & $[40,15,16]_4$ & $[[40,10,7/7]]_4$ & \cite[BKLC]{Gr09}\\ \hline \end{tabular} \end{table} For lengths larger than $n=40$,~\cite{GO98} provides some known ${\mathbb F}_{4}$-linear codes of dimension 6 that belong to the class of \textit{quasi-twisted codes}. Based on the weight distribution of these codes~\cite[Table 3]{GO98}, we know which ones of them are self-orthogonal. Applying Theorem~\ref{thm:3.5} to them yields the 12 quantum codes listed in Table~\ref{table:ClassQT}. \begin{table} \caption{Asymmetric QECC from Hermitian Self-orthogonal Quasi-Twisted Codes Found in~\cite{GO98}} \label{table:ClassQT} \centering \begin{tabular}{| l | l || l | l |} \hline \textbf{Code $C$} & \textbf{Code $Q$} & \textbf{Code $C$} & \textbf{Code $Q$}\\ \hline $[48,6,32]_4$ & $[[48,36,3/3]]_4$ & $[144,6,104]_4$ & $[[144,132,3/3]]_4$\\ $[78,6,56]_4$ & $[[78,66,4/4]]_4$ & $[150,6,108]_4$ & $[[150,138,3/3]]_4$\\ $[102,6,72]_4$ & $[[102,90,3/3]]_4$ & $[160,6,116]_4$ & $[[160,148,3/3]]_4$\\ $[112,6,80]_4$ & $[[112,100,3/3]]_4$ & $[182,6,132]_4$ & $[[182,170,3/3]]_4$\\ $[120,6,86]_4$ & $[[120,108,4/4]]_4$ & $[192,6,138]_4$ & $[[192,180,3/3]]_4$\\ $[132,6,94]_4$ & $[[132,120,3/3]]_4$ & $[200,6,144]_4$ & $[[200,188,3/3]]_4$\\ \hline \end{tabular} \end{table} Another family of codes that we can use is the \textit{MacDonald codes}, commonly denoted by $C_{k,u}$ with $k>u>0$. The MacDonald codes are linear codes with parameters $[(q^{k}-q^{u})/(q-1),k,q^{k-1}-q^{u-1}]_{q}$. Some historical background and a construction of their generator matrices can be found in~\cite{BD03}. It is known that these codes are \textit{two-weight codes}. That is, they have nonzero codewords of only two possible weights. In~\cite[Figures 1a and 2a]{CaKa86}, the MacDonald codes are labeled SU1. There are $q^{k}-q^{k-u}$ codewords of weight $q^{k-1}-q^{u-1}$ and $q^{k-u}-1$ codewords of weight $q^{k-1}$. The MacDonald codes satisfy the equality of the \textit{Griesmer bound} which says that, for any $[n,k\geq1,d]_q$-code, \begin{equation}\label{eq:4.3} n \geq \sum_{i=0}^{k-1}\left \lceil \frac{d}{q^{i}} \right \rceil \text{.} \end{equation} \begin{ex}\label{ex:4.4} For $q=4,k>u>1$, the MacDonald codes are self-orthogonal since both $4^{k-1}-4^{u-1}$ and $4^{k-1}$ are even. For a $[(4^{k}-4^{u})/3,k,4^{k-1}-4^{u-1}]_{4}$-code $C_{k,u}$, we know (see~\cite[Lemma 4]{BD03}) that $d(C_{k,u}^{\perp_{\mathop{{\rm H}}}}) \geq 3$. 
Using $C_{k,u}$ and applying Theorem~\ref{thm:6.1}, we get an asymmetric quantum code $Q$ with parameters \begin{equation*} [[(4^{k}-4^{u})/3,((4^{k}-4^{u})/3)-2k,(\geq 3)/(\geq 3)]]_4 \text{.} \end{equation*} For $k\leq 5$ we have the following more explicit examples. The weight enumerator is written in an abbreviated form. For instance, $(0,1),(12,60),(16,3)$ means that the corresponding code has 1 codeword of weight 0, 60 codewords of weight 12 and 3 codewords of weight 16. \begin{enumerate} \item For $k=3,u=2$, we have the $[16,3,12]_4$-code with weight enumerator $(0,1),(12,60),(16,3)$. The resulting asymmetric QECC is a $[[16,10,3/3]]_4$-code. This code is listed as number 31 in Table~\ref{table:ClassSO}. \item For $k=4,u=3$, we have the $[64,4,48]_4$-code with weight enumerator $(0,1),(48,252),(64,3)$. The resulting asymmetric QECC is a $[[64,56,3/3]]_4$-code. \item For $k=4,u=2$, we have the $[80,4,60]_4$-code with weight enumerator $(0,1),(60,240),(64,15)$. The resulting asymmetric QECC is a $[[80,72,3/3]]_4$-code. \item For $k=5,u=4$, we have the $[256,5,192]_4$-code with weight enumerator $(0,1),(192,1020),(256,3)$. The resulting asymmetric QECC is a $[[256,246,3/3]]_4$-code. \item For $k=5,u=3$, we have the $[320,5,240]_4$-code with weight enumerator $(0,1),(240,1008),(256,15)$. The resulting asymmetric QECC is a $[[320,310,3/3]]_4$-code. \item For $k=5,u=2$, we have the $[336,5,252]_4$-code with weight enumerator $(0,1),(252,960),(256,63)$. The resulting asymmetric QECC is a $[[336,326,3/3]]_4$-code. \end{enumerate} \end{ex} \section{Construction from Nested Linear Cyclic Codes}\label{sec:Cyclic} The asymmetric quantum codes that we have constructed so far have $d_{z} = d_{x}$. From this section onward, we construct asymmetric quantum codes with $d_{z} \geq d_{x}$. In most cases, $d_{z} > d_{x}$. It is well established that, under the natural correspondence of vectors and polynomials, the study of cyclic codes in ${\mathbb F}_q^n$ is equivalent to the study of ideals in the residue class ring \begin{equation*} \mathcal{R}_n = {\mathbb F}_q[x]/(x^n-1) \text{.} \end{equation*} The study of ideals in $\mathcal{R}_n$ depends on factoring $x^n-1$. Basic results concerning and the properties of cyclic codes can be found in~\cite[Ch. 4]{HP03} or~\cite[Ch. 7]{MS77}. A cyclic code $C$ is a subset of a cyclic code $D$ of equal length over ${\mathbb F}_q$ if and only if the generator polynomial of $D$ divides the generator polynomial of $C$. Both polynomials divide $x^n-1$. Once the factorization of $x^n-1$ into irreducible polynomials is known, the nestedness property becomes apparent. We further require that $n$ be relatively prime to $4$ to exclude the so-called repeated-root cases since the resulting cyclic codes when $n$ is not relatively prime to $4$ have inferior parameters. See~\cite[p. 976]{Cha98} for comments and references regarding this matter. \begin{thm}\label{thm:7.1} Let $C$ and $D$ be cyclic codes of parameters $[n,k_1,d_1]_4$ and $[n,k_2,d_2]_4$, respectively, with $C \subseteq D$, then there exists an asymmetric quantum code $Q$ with parameters $[[n,k_2-k_1,d_z/d_x]]_4$, where \begin{equation}\label{eq:7.1} \left\lbrace d_z,d_x \right\rbrace = \left\lbrace d(C^{\perp_{\mathop{{\rm H}}}}),d_2 \right\rbrace \text{.} \end{equation} \end{thm} \begin{proof} Apply Theorem~\ref{thm:3.5} by taking $C_1 = C^{\perp_{\mathop{{\rm tr}}}}$ and $C_2 = D$. Since $C$ is an $[n,k_1,d_1]_4$ code, $C$ is an additive code of parameters $(n,2^{2k_1},d_1)_4$. 
Similarly, $D$ is an additive code of parameters $(n,2^{2k_2},d_2)_4$. The values for $d_z$ and $d_x$ can be verified by simple calculations. \end{proof} \begin{ex}\label{ex:7.2} Let $C$ be the repetition $[n,1,n]_4$-code generated by the polynomial $(x^{n}+1)/(x+1)$. If we take $C=D$ in Theorem~\ref{thm:7.1}, then we get a quantum code $Q$ with parameters $\mathbf{[[n,0,n/2]]_4}$. \end{ex} Tables~\ref{table:Cyclic} and~\ref{table:Cyclic2} list examples of asymmetric quantum codes constructed from nested cyclic codes up to $n=25$. We exclude the case $C=D$ since the parameters of the resulting quantum code $Q$ are $[[n,0,d(C)/2]]_{4}$ which are never better than those of the code $Q$ in Example~\ref{ex:7.2}. Among the resulting codes $Q$ of equal length and dimension, we choose one with the largest $d_z,d_x$ values. For codes $Q$ with equal length and distances, we choose one with the largest dimension. \begin{table*} \caption{Asymmetric QECC from Nested Cyclic Codes} \label{table:Cyclic} \centering \begin{tabular}{| c | l | l | l |} \hline \textbf{No.} & \textbf{Codes $C$ and $D$} & \textbf{Generator Polynomials of $C$ and of $D$} & \textbf{Code $Q$} \\ \hline 1 & $[3,1,3]_4$ & $(x+1)(x+\omega)$ & $\mathbf{[[3,1,2/2]]_4}$ \\ & $[3,2,2]_4$ & $(x+1)$ & \\ 2 & $[5,1,5]_4$ & $(x^2+\omega x+1)(x^2+\omega^2 x+1)$ & $\mathbf{[[5,2,3/2]]_{4}}$ \\ & $[5,3,3]_4$ & $(x^2+\omega^2 x+1)$ & \\ 3 & $[7,1,7]_4$ & $(x^3+x+1)(x^3+x^2+1)$ & $[[7,3,3/2]]_4$ \\ & $[7,4,3]_4$ & $(x^3+x+1)$ & \\ 4 & $[7,3,4]_4$ & $(x^3+x+1)(x+1)$ & $[[7,1,3/3]]_4$ \\ & $[7,4,3]_4$ & $(x^3+x+1)$ & \\ 5 & $[9,1,9]_4$ & $(x+\omega)(x+\omega^2)(x^3+\omega)(x^3+\omega^2)$ & $[[9,1,6/2]]_4$ \\ & $[9,2,6]_4$ & $(x+\omega^2)(x^3+\omega)(x^3+\omega^2)$ & \\ 6 & $[9,1,9]_4$ & $(x+\omega)(x+\omega^2)(x^3+\omega)(x^3+\omega^2)$ & $[[9,4,3/2]]_4$ \\ & $[9,5,3]_4$ & $(x+\omega)(x^3+\omega)$ & \\ 7 & $[9,1,9]_4$ & $(x+\omega)(x+\omega^2)(x^3+\omega)(x^3+\omega^2)$ & $\mathbf{[[9,7,2/2]]_4}$ \\ & $[9,8,2]_4$ & $(x+\omega)$ & \\ 8 & $[11,5,6]_4$ & $(x+1)(x^5+\omega^2 x^4+x^3+x^2+\omega x+1)$ & $[[11,1,5/5]]_4$ \\ & $[11,6,5]_4$ & $(x^5+\omega^2 x^4+x^3+x^2+\omega x+1)$ & \\ 9 & $[11,1,11]_4$ & $(x^{11}+1)/(x+1)$ & $[[11,5,5/2]]_4$ \\ & $[11,6,5]_4$ & $(x^5+\omega x^4+x^3+x^2+\omega^2 x+1)$ & \\ 10 & $[13,6,6]_4$ & $(x+1)(x^6 +\omega x^5 +\omega^2 x^3 +\omega x +1)$ & $[[13,1,5/5]]_4$ \\ & $[13,7,5]_4$ & $(x^6 +\omega x^5 +\omega^2 x^3 +\omega x +1)$ & \\ 11 & $[13,1,13]_4$ & $(x^{13}+1)/(x+1)$ & $[[13,6,5/2]]_4$ \\ & $[13,7,5]_4$ & $(x^6 +\omega x^5 +\omega^2 x^3 +\omega x +1)$ & \\ 12 & $[15,3,11]_4$ & $(x^{15}+1)/((x+1)(x^2 + \omega^2 x + \omega^2))$ & $[[15,1,9/3]]_4$ \\ & $[15,4,9]_4$ & $(x^{15}+1)/((x+1)(x+\omega)(x^2 + \omega^2 x + \omega^2))$ & \\ 13 & $[15,6,8]_4$ & $(x^9 +\omega x^8 +x^7+x^5 +\omega x^4 + \omega^2 x^2 +\omega^2x + 1)$ & $[[15,1,7/5]]_4$ \\ & $[15,7,7]_4$ & $(x^8 +\omega^2 x^7 +\omega x^6 +\omega x^5 +\omega^2 x^4+x^3+x^2+\omega x + 1)$ & \\ 14 & $[15,7,7]_4$ & $(x^8 +\omega^2 x^7 +\omega x^6 +\omega x^5 +\omega^2 x^4+x^3+x^2+\omega x + 1)$ & $[[15,1,6/6]]_4$ \\ & $[15,8,6]_4$ & $(x^7 + x^6+\omega x^4+x^2+\omega^2 x +\omega^2)$ & \\ 15 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,2,11/2]]_4$ \\ & $[15,3,11]_4$ & $(x^{15}+1)/((x+1)(x^2+\omega^2 x+\omega^2))$ & \\ 16 & $[15,3,11]_4$ & $(x^{15}+1)/((x+1)(x^2+\omega^2 x+\omega^2))$ & $[[15,2,8/3]]_4$ \\ & $[15,5,8]_4$ & $(x^{15}+1)/((x+1)(x^2+\omega^2 x+\omega^2)(x^2+\omega^2 x+1))$ & \\ 17 & $[15,6,8]_4$ & $(x^{15}+1)/((x^2+\omega x+\omega)(x^2+\omega^2 
x+\omega^2)(x^2+\omega^2 x+1))$ & $[[15,2,6/5]]_4$ \\ & $[15,8,6]_4$ & $(x^7 + x^6+\omega x^4+x^2+\omega^2 x +\omega^2)$ & \\ 18 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,3,9/2]]_4$ \\ & $[15,4,9]_4$ & $(x^{15}+1)/((x+1)(x+\omega)(x^2+\omega^2 x+\omega^2))$ & \\ 19 &$[15,8,6]_4$ & $x^{7} + \omega x^{6} + \omega^2 x^{4} + \omega^2 x^{2} + \omega x + \omega^2$ & $[[15,4,7/3]]_4$ \\ &$[15,12,3]_4$& $x^{3} + x^{2} + \omega^{2}$ & \\ 20 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,4,8/2]]_4$ \\ & $[15,5,8]_4$ & $(x^{15}+1)/((x+1)(x^2+\omega^2 x+\omega^2)(x^2+\omega^2 x+1))$ & \\ 21 &$[15,4,10]_4$& $(x^{15}+1)/(x^4 + \omega^2 x^3 + \omega x^2 + \omega x + w)$ & $[[15,5,5/4]]_4$ \\ &$[15,9,5]_4$& $x^6 + \omega^2 x^5 + \omega^2 x^4 + x^3 + x^2 + \omega x + 1$ & \\ 22 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,6,7/2]]_4$ \\ & $[15,7,7]_4$ & $(x^8 +\omega^2 x^7 +\omega x^6 +\omega x^5 +\omega^2 x^4+x^3+x^2+\omega x + 1)$ & \\ 23 & $[15,3,11]_4$ & $(x^{15}+1)/((x+1)(x^2 + \omega^2 x + \omega^2))$ & $[[15,6,5/3]]_4$ \\ & $[15,9,5]_4$ & $(x^6 + \omega x^5 + x^4 + x^3 + \omega^2 x^2 + \omega^2 x + 1)$ & \\ 24 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,7,6/2]]_4$ \\ & $[15,8,6]_4$ & $(x^7 + x^6+\omega x^4+x^2+\omega^2 x +\omega^2)$ & \\ 25 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $[[15,8,5/2]]_4$ \\ & $[15,9,5]_4$ & $(x^6 + \omega x^5 + x^4 + x^3 + \omega^2 x^2 + \omega^2 x + 1)$ & \\ 26 &$[15,3,11]_4$& $(x^{15}+1)/(x^3 + \omega^2 x^2 + \omega^2)$ & $[[15,9,3/3]]_4$ \\ &$[15,12,3]_4$& $x^3 + x^2 + \omega^2$ & \\ 27 &$[15,1,15]_4$& $(x^{15}+1)/(x+1)$ & $[[15,11,3/2]]_4$ \\ &$[15,12,3]_4$& $x^3 + x^2 + \omega^2$ & \\ 28 & $[15,1,15]_4$ & $(x^{15}+1)/(x+1)$ & $\mathbf{[[15,13,2/2]]_4}$ \\ & $[15,14,2]_4$ & $(x+ \omega)$ & \\ 29 & $[17,12,4]_4$ & $(x^5 + \omega x^3 + \omega x^2 + 1)$ & $[[17,1,9/4]]_4$ \\ & $[17,13,4]_4$ & $(x^4 + x^3 + \omega^2 x^2 + x + 1)$ & \\ 30 & $[17,8,8]_4$ & $(x^9 +\omega x^8 +\omega^2 x^7 +\omega^2 x^6 +\omega^2 x^3 +\omega^2 x^2 +\omega x + 1)$ & $[[17,1,7/7]]_4$ \\ & $[17,9,7]_4$ & $(x^8 +\omega^2 x^7 +\omega^2 x^5 +\omega^2 x^4 +\omega^2 x^3 +\omega^2 x + 1)$ & \\ 31 & $[17,1,17]_4$ & $(x^{17}+1)/(x+1)$ & $[[17,4,9/2]]_4$ \\ & $[17,5,9]_4$ & $(x^{17}+1)/(x^5 +\omega^2 x^4 +\omega^2 x^3 +\omega^2 x^2 +\omega^2 x + 1)$ & \\ 32 & $[17,4,12]_4$ & $(x^{17}+1)/(x^4+x^3+\omega x^2+x+1)$ & $[[17,4,8/4]]_4$ \\ & $[17,8,8]_4$ & $(x^9 +\omega x^8 +\omega^2 x^7 +\omega^2 x^6 +\omega^2 x^3 +\omega^2 x^2 +\omega x + 1)$ & \\ 33 & $[17,4,12]_4$ & $(x^{17}+1)/(x^4+x^3+\omega x^2+x+1)$ & $[[17,5,7/4]]_4$ \\ & $[17,9,7]_4$ & $(x^8 +\omega^2 x^7 +\omega^2 x^5 +\omega^2 x^4 +\omega^2 x^3 +\omega^2 x + 1)$ & \\ 34 & $[17,1,17]_4$ & $(x^{17}+1)/(x+1)$ & $[[17,8,7/2]]_4$ \\ & $[17,9,7]_4$ & $(x^8 +\omega x^7 +\omega x^5 +\omega x^4 +\omega x^3 +\omega x + 1)$ & \\ 35 & $[17,1,17]_4$ & $(x^{17}+1)/(x+1)$ & $[[17,12,4/2]]_4$ \\ & $[17,13,4]_4$ & $(x^4 + x^3 + \omega^2 x^2 + x + 1)$ & \\ \hline \end{tabular} \end{table*} \begin{table*} \caption{Asymmetric QECC from Nested Cyclic Codes Continued} \label{table:Cyclic2} \centering \begin{tabular}{| c | l | l | l |} \hline \textbf{No.} & \textbf{Codes $C$ and $D$} & \textbf{Generator Polynomials of $C$ and $D$} & \textbf{Code $Q$} \\ \hline 36 & $[19,9,8]_4$ & $(x+1)(x^9 +\omega^2 x^8 +\omega^2 x^6 +\omega^2 x^5 +\omega x^4 +\omega x^3 +\omega x + 1)$ & $[[19,1,7/7]]_4$\\ & $[19,10,7]_4$ & $(x^9 + \omega^2 x^8 + \omega^2 x^6 + \omega^2 x^5 + \omega x^4 + \omega x^3 + \omega x + 1)$ & \\ 37 & $[19,1,19]_4$ & $(x^{19}+1)/(x+1)$ & $[[19,9,7/2]]_4$ \\ & 
$[19,10,7]_4$ & $(x^9 + \omega x^8 + \omega x^6 + \omega x^5 + \omega^2 x^4 + \omega^2 x^3 + \omega^2 x + 1)$ & \\ 38 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,1,14/2]]_4$ \\ & $[21,2,14]_4$ & $(x^{21}+1)/(x^2 + \omega^2 x + \omega)$ & \\ 39 &$[21,4,12]_4$& $(x^{21}+1)/(x^4 + \omega x^3 + \omega^2 x^2 + x + 1)$ & $[[21,3,11/3]]_4$ \\ &$[21,7,11]_4$& $x^{14} + \omega x^{13} + \omega^2 x^{12} + \omega^2 x^{10} + x^8 +\omega^2 x^7 + x^6 + \omega^2 x^4 + \omega^2 x^2 + \omega x + 1$ & \\ 40 &$[21,4,12]_4$& $(x^{21}+1)/(x^4 + \omega^2 x^3 + \omega x^2 + x + 1)$ & $[[21,4,9/3]]_4$ \\ &$[21,8,9]_4$& $(x^{21}+1)/(x^8 + x^7 + \omega x^6 + \omega x^5 + x^4 + \omega^2 x^3 + x^2 + x + \omega)$ &\\ 41 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,3,12/2]]_4$ \\ & $[21,4,12]_4$ & $(x^{21}+1)/((x+1)(x^3+\omega^2 x^2+1))$ & \\ 42 & $[21,7,11]_4$ & $(x^{21}+1)/(x^7 +\omega^2 x^6 + x^4 + x^3 +\omega^2 x + 1)$ & $[[21,3,8/5]]_4$ \\ & $[21,10,8]_4$ & $(x^{11}+\omega x^{10}+ x^8 + x^7 +x^6 + x^5 +\omega x^4 +\omega x^3 +\omega^2 x^2 +\omega^2 x + 1)$ & \\ 43 & $[21,4,12]_4$ & $(x^{21}+1)/(x^4 + x^3 + \omega x^2 +\omega^2 x + 1)$ & $[[21,4,9/3]]_4$ \\ & $[21,8,9]_4$ & $(x^{21}+1)/(x^8 +\omega x^7 +\omega^2 x^6 + x^5 +\omega^2 x^4 +\omega x^3 +\omega x^2 +\omega)$ & \\ 44 & $[21,7,11]_4$ & $(x^{21}+1)/(x^7 +\omega^2 x^6 + x^4 + x^3 +\omega^2 x + 1)$ & $[[21,4,6/5]]_4$ \\ & $[21,11,6]_4$ & $(x^{21}+1)/(x^{11} + x^8 +\omega^2 x^7 + x^2 +\omega)$ & \\ 45 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,6,11/2]]_4$ \\ & $[21,7,11]_4$ & $(x^{21}+1)/(x^7 +\omega^2 x^6 + x^4 + x^3 +\omega^2 x + 1)$ & \\ 46 & $[21,4,12]_4$ & $(x^{21}+1)/(x^4 + x^3 +\omega x^2 +\omega^2 x + 1)$ & $[[21,6,8/3]]_4$ \\ & $[21,10,8]_4$ & $(x^{11}+\omega x^{10}+ x^8 + x^7 +x^6 + x^5 +\omega x^4 +\omega x^3 +\omega^2 x^2 +\omega^2 x + 1)$ & \\ 47 &$[21,7,11]_4$& $(x^{21}+1)/(x^7 + \omega^2 x^6 + x^4 + x^3 + \omega ^2 x + 1)$ & $[[21,7,5/5]]_4$ \\ &$[21,14,5]_4$& $x^7 + x^6 + x^4 + \omega x^3 + \omega^2x + \omega$ & \\ 48 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,7,9/2]]_4$ \\ & $[21,8,9]_4$ & $(x^{21}+1)/(x^8 +\omega x^7 +\omega^2 x^6 + x^5 +\omega^2 x^4 +\omega x^3 +\omega x^2 +\omega)$ & \\ 49 & $[21,4,12]_4$ & $(x^{21}+1)/(x^4 + x^3 +\omega x^2 +\omega^2 x + 1)$ & $[[21,7,6/3]]_4$ \\ & $[21,11,6]_4$ & $(x^{10} +\omega x^9 + x^8 +\omega x^7 + x^6 + x^5 + x^4 +\omega^2 x^2 +\omega^2)$ & \\ 50 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,9,8/2]]_4$ \\ & $[21,10,8]_4$ & $(x^{11}+\omega x^{10}+x^8+x^7+x^6+x^5+\omega x^4 +\omega x^3+\omega^2 x^2+\omega^2 x+ 1)$ & \\ 51 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,10,6/2]]_4$ \\ & $[21,11,6]_4$ & $(x^{10} + x^7 +\omega^2 x^6 + x^4 +\omega x^2 +\omega^2)$ & \\ 52 & $[21,4,12]_4$ & $(x^{21}+1)/(x^4 + x^3 +\omega x^2 + \omega^2 x + 1)$ & $[[21,10,5/3]]_4$ \\ & $[21,14,5]_4$ & $(x^7 + x^6 + x^4 +\omega x^3 +\omega^2 x +\omega)$ & \\ 53 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,13,5/2]]_4$ \\ & $[21,14,5]_4$ & $(x^7 + x^6 + x^4 +\omega x^3 +\omega^2 x +\omega)$ & \\ 54 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $[[21,16,3/2]]_4$ \\ & $[21,17,3]_4$ & $(x+\omega)(x^3+\omega^2 x^2 +1)$ & \\ 55 & $[21,1,21]_4$ & $(x^{21}+1)/(x+1)$ & $\mathbf{[[21,19,2/2]]_4}$ \\ & $[21,20,2]_4$ & $(x+\omega)$ & \\ 56 & $[23,11,8]_4$ & $(x+1)(x^{11}+x^{10}+x^6+x^5+x^4+x^2+1)$ & $[[23,1,7/7]]_4$ \\ & $[23,12,7]_4$ & $(x^{11}+x^{10}+x^6+x^5+x^4+x^2+1)$ & \\ 57 & $[23,1,23]_4$ & $(x^{23}+1)/(x+1)$ & $[[23,11,7/2]]_4$ \\ & $[23,12,7]_4$ & $(x^{11}+x^9+x^7+x^6+x^5+x+1)$ & \\ 58 & $[25,1,25]_4$ & $(x^{25}+1)/(x+1)$ & 
$[[25,2,15/2]]_4$ \\ & $[25,3,15]_4$ & $(x^{25}+1)/(x^3 + \omega x^2 + \omega x + 1)$ & \\ 59 & $[25,12,4]_4$ & $(x^{13}+\omega^2 x^{12}+\omega^2 x^{11}+x^{10}+\omega x^8+x^7+x^6+\omega x^5 + x^3+\omega^2 x^2+\omega^2 x+ 1)$ & $[[25,2,4/4]]_4$ \\ & $[25,14,4]_4$ & $(x^{11}+ x^{10}+\omega x^6 +\omega x^5 + x + 1)$ & \\ 60 & $[25,1,25]_4$ & $(x^{25}+1)/(x+1)$ & $[[25,4,5/2]]_4$ \\ & $[25,5,5]_4$ & $(x^{25}+1)/(x^5 + 1)$ & \\ 61 & $[25,10,4]_4$ & $(x^{15}+\omega^2 x^{10} +\omega^2 x^5 + 1)$ & $[[25,4,4/3]]_4$ \\ & $[25,14,4]_4$ & $(x^{11} + x^{10} +\omega x^6 +\omega x^5 + x + 1)$ & \\ 62 & $[25,10,4]_4$ & $(x^{15}+\omega^2 x^{10} +\omega^2 x^5 + 1)$ & $[[25,5,3/3]]_4$ \\ & $[25,15,3]_4$ & $(x^{10}+\omega x^5 +1)$ & \\ 63 & $[25,1,25]_4$ & $(x^{25}+1)/(x + 1)$ & $[[25,12,4/2]]_4$ \\ & $[25,13,4]_4$ & $(x^{12}+\omega x^{11}+ x^{10} +\omega^2 x^7 + x^6 +\omega^2 x^5 + x^2 +\omega x + 1)$ & \\ 64 & $[25,1,25]_4$ & $(x^{25}+1)/(x+1)$ & $[[25,14,3/2]]_4$ \\ & $[25,15,3]_4$ & $(x^{10}+\omega x^5 +1)$ & \\ 65 & $[25,1,25]_4$ & $(x^{25}+1)/(x+1)$ & $[[25,22,2/2]]_4$ \\ & $[25,23,2]_4$ & $(x^2 +\omega x +1)$ & \\ \hline \end{tabular} \end{table*} \section{Construction from Nested Linear BCH Codes}\label{sec:BCHCodes} It is well known (see~\cite[Sec. 3]{Cha98}) that finding the minimum distance or even finding a good lower bound on the minimum distance of a cyclic code is not a trivial problem. One important family of cyclic codes is the family of BCH codes. Their importance lies in the fact that their designed distance provides a reasonably good lower bound on the minimum distance. For more on BCH codes,~\cite[Ch. 5]{HP03} can be consulted. The BCH Code constructor in MAGMA can be used to find nested codes to produce more asymmetric quantum codes. Table~\ref{table:BCH} lists the BCH codes over ${\mathbb F}_{4}$ for $n=27$ to $n=51$ with $n$ coprime to $4$. For a fixed length $n$, the codes are nested, i.e., a code $C$ with dimension $k_{1}$ is a subcode of a code $D$ with dimension $k_{2} > k_{1}$. The construction process can be done for larger values of $n$ if so desired. The range of designed distances that can be supplied to MAGMA to obtain the code $C$ and the actual minimum distance of $C$ are denoted by $\delta(C)$ and $d(C)$, respectively. The minimum distance of $C^{\perp_{\mathop{{\rm tr}}}}$, which is needed in the computation of $\left\lbrace d_z,d_x \right\rbrace$, is denoted by $d(C^{\perp_{\mathop{{\rm tr}}}})$. To save space, the BCH $[n,1,n]_{4}$ repetition code generated by the all-one vector ${\mathbf{1}}$ is not listed in the table, although this code is used in the construction of many asymmetric quantum codes presented in Table~\ref{table:BCH_QECC}.
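For readers without access to MAGMA, the nesting structure underlying Table~\ref{table:BCH} can be reproduced from elementary computations with $4$-cyclotomic cosets. The short Python sketch below is our own illustration (it is not the code used to generate the tables, and it assumes the narrow-sense convention): the defining set of a BCH code can only grow with the designed distance, so the codes of a fixed odd length form a chain, and the dimension is $n$ minus the size of the defining set. For $n=31$ it recovers the dimensions $26$, $21$, $16$, $11$, and $6$ appearing in Table~\ref{table:BCH}.
\begin{verbatim}
# Sketch: dimensions of nested narrow-sense BCH codes over F_4 of odd length n.
def cyclotomic_coset(s, n, q=4):
    """q-cyclotomic coset of s modulo n."""
    coset, x = set(), s % n
    while x not in coset:
        coset.add(x)
        x = (x * q) % n
    return coset

def bch_defining_set(n, delta, q=4):
    """Union of the q-cyclotomic cosets of 1, ..., delta-1."""
    T = set()
    for s in range(1, delta):
        T |= cyclotomic_coset(s, n, q)
    return T

def bch_dimension(n, delta, q=4):
    """dim = n - |defining set|; non-increasing in delta, hence nested codes."""
    return n - len(bch_defining_set(n, delta, q))

if __name__ == "__main__":
    n = 31
    for delta in (2, 4, 6, 8, 12):
        print(n, delta, bch_dimension(n, delta))   # dimensions 26, 21, 16, 11, 6
\end{verbatim}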
\begin{table} \caption{BCH Codes over ${\mathbb F}_{4}$ with $2 \leq k < n$ for $27\leq n \leq51$} \label{table:BCH} \centering \begin{tabular}{| c | c | c | c | l | c |} \hline \textbf{No.} & \textbf{$n$} & \textbf{$\delta(C)$} & \textbf{$d(C)$} & \textbf{Code $C$} & \textbf{$d(C^{\perp_{\mathop{{\rm tr}}}})$} \\ \hline 1 & $27$ & $2$ & $2$ & $[27,18,2]_4$ & $3$\\ 2 & & $3$ & $3$ & $[27,9,3]_4$ & $2$\\ 3 & & $4-6$ & $6$ & $[27,6,6]_4$ & $2$\\ 4 & & $7-9$ & $9$ & $[27,3,9]_4$ & $2$\\ 5 & & $10-18$ & $18$ & $[27,2,18]_4$ & $2$\\ 6 & $29$ & $2$ & $11$ & $[29,15,11]_4$ & $12$\\ 7 & $31$ & $2-3$ & $3$ & $[31,26,3]_4$ & $16$\\ 8 & & $4-5$ & $5$ & $[31,21,5]_4$ & $12$\\ 9 & & $6-7$ & $7$ & $[31,16,7]_4$ & $8$\\ 10 & & $8-11$ & $11$ & $[31,11,11]_4$ & $6$\\ 11 & & $12-15$ & $15$ & $[31,6,15]_4$ & $4$\\ 12 & $33$ & $2$ & $2$ & $[33,28,2]_4$ & $18$\\ 13 & & $3$ & $3$ & $[33,23,3]_4$ & $12$\\ 14 & & $4-5$ & $8$ & $[33,18,8]_4$ & $11$\\ 15 & & $6$ & $10$ & $[33,13,10]_4$ & $6$\\ 16 & & $7$ & $11$ & $[33,8,11]_4$ & $4$\\ 17 & & $8-11$ & $11$ & $[33,3,11]_4$ & $2$\\ 18 & & $12-22$ & $22$ & $[33,2,22]_4$ & $2$\\ 19 & $35$ & $2$ & $3$ & $[35,29,3]_4$ & $16$\\ 20 & & $3$ & $3$ & $[35,23,3]_4$ & $8$\\ 21 & & $4-5$ & $5$ & $[35,17,5]_4$ & $8$\\ 22 & & $6$ & $7$ & $[35,14,7]_4$ & $8$\\ 23 & & $7$ & $7$ & $[35,8,7]_4$ & $4$\\ 24 & & $8-14$ & $15$ & $[35,6,15]_4$ & $4$\\ 25 & & $15$ & $15$ & $[35,4,15]_4$ & $2$\\ 26 & $37$ & $2$ & $11$ & $[37,19,11]_4$ & $12$\\ 27 & $39$ & $2$ & $2$ & $[39,33,2]_4$ & $18$\\ 28 & & $3$ & $3$ & $[39,27,3]_4$ & $12$\\ 29 & & $4-6$ & $9$ & $[39,21,9]_4$ & $12$\\ 30 & & $7$ & $10$ & $[39,15,10]_4$ & $6$\\ 31 & & $8-13$ & $13$ & $[39,9,13]_4$ & $4$\\ 32 & & $14$ & $15$ & $[39,8,15]_4$ & $3$\\ 33 & & $15-26$ & $26$ & $[39,2,26]_4$ & $2$\\ 34 & $41$ & $2$ & $6$ & $[41,31,6]_4$ & $20$\\ 35 & & $3$ & $9$ & $[41,21,9]_4$ & $10$\\ 36 & & $4-6$ & $20$ & $[41,11,20]_4$ & $7$\\ 37 & $43$ & $2$ & $5$ & $[43,36,5]_4$ & $27$\\ 38 & & $3$ & $6$ & $[43,29,6]_4$ & $14$\\ 39 & & $4-6$ & $11$ & $[43,22,11]_4$ & $12$\\ 40 & & $7$ & $13$ & $[43,15,13]_4$ & $6$\\ 41 & & $8-9$ & $26$ & $[43,8,26]_4$ & $5$\\ 42 & $45$ & $2$ & $2$ & $[45,39,2]_4$ & $12$\\ 43 & & $3$ & $3$ & $[45,33,3]_4$ & $8$\\ 44 & & $4-5$ & $5$ & $[45,31,5]_4$ & $8$\\ 45 & & $6$ & $6$ & $[45,28,6]_4$ & $8$\\ 46 & & $7$ & $7$ & $[45,26,7]_4$ & $8$\\ 47 & & $8-9$ & $9$ & $[45,20,9]_4$ & $6$\\ 48 & & $10$ & $10$ & $[45,18,10]_4$ & $6$\\ 49 & & $11$ & $11$ & $[45,15,11]_4$ & $3$\\ 50 & & $12-15$ & $15$ & $[45,9,15]_4$ & $2$\\ 51 & & $16-18$ & $18$ & $[45,8,18]_4$ & $2$\\ 52 & & $19-21$ & $21$ & $[45,6,21]_4$ & $2$\\ 53 & & $22-30$ & $30$ & $[45,4,30]_4$ & $2$\\ 54 & & $31-33$ & $33$ & $[45,3,33]_4$ & $2$\\ 55 & $47$ & $2-5$ & $11$ & $[47,24,11]_4$ & $12$\\ 56 & $49$ & $2-3$ & $3$ & $[49,28,3]_4$ & $4$\\ 57 & & $4-7$ & $7$ & $[49,7,7]_4$ & $2$\\ 58 & & $8-21$ & $21$ & $[49,4,21]_4$ & $2$\\ 59 & $51$ & $2$ & $2$ & $[51,47,2]_4$ & $36$\\ 60 & & $3$ & $3$ & $[51,43,3]_4$ & $24$\\ 61 & & $4-5$ & $5$ & $[51,39,5]_4$ & $24$\\ 62 & & $6$ & $9$ & $[51,35,9]_4$ & $22$\\ 63 & & $7$ & $9$ & $[51,31,9]_4$ & $14$\\ 64 & & $8-9$ & $9$ & $[51,27,9]_4$ & $10$\\ 65 & & $10-11$ & $14$ & $[51,23,14]_4$ & $10$\\ 66 & & $12-17$ & $17$ & $[51,19,17]_4$ & $8$\\ 67 & & $18$ & $18$ & $[51,18,18]_4$ & $8$\\ 68 & & $19$ & $19$ & $[51,14,19]_4$ & $6$\\ 69 & & $20-22$ & $27$ & $[51,10,27]_4$ & $6$\\ 70 & & $23-34$ & $34$ & $[51,6,34]_4$ & $4$\\ 71 & & $35$ & $35$ & $[51,5,35]_4$ & $3$\\ \hline \end{tabular} \end{table} Table~\ref{table:BCH_QECC} presents the resulting 
asymmetric quantum codes from nested BCH Codes based on Theorem~\ref{thm:3.5}. The inner codes are listed in the column denoted by Code $C_{1}^{\perp_{\mathop{{\rm tr}}}}$ while the corresponding larger codes are put in the column denoted by Code $C_{2}$. The values for $d_z,d_x$ are derived from the last column of Table~\ref{table:BCH} while keeping Proposition~\ref{prop:3.2} in mind. \begin{table*} \caption{Asymmetric QECC from BCH Codes} \label{table:BCH_QECC} \centering \begin{tabular}{| c | c | l | l | l || c | c | l | l | l |} \hline \textbf{No.} & \textbf{$n$} & \textbf{Code $C_{1}^{\perp_{\mathop{{\rm tr}}}}$} & \textbf{Code $C_{2}$} & \textbf{Code $Q$} & \textbf{No.} & \textbf{$n$} & \textbf{Code $C_{1}^{\perp_{\mathop{{\rm tr}}}}$} & \textbf{Code $C_{2}$} & \textbf{Code $Q$}\\ \hline 1 & $27$ & $[27,1,27]_4$ & $[27,2,18]_4$ & $[[27,1,18/2]]_4$ & 76 & $43$ & $[43,8,26]_4$ & $[43,29,6]_4$ & $[[43,21,6/5]]_4$\\ 2 & & $[27,1,27]_4$ & $[27,3,9]_4$ & $[[27,2,9/2]]_4$ & 77 & & $[43,1,43]_4$ & $[43,29,6]_4$ & $[[43,28,6/2]]_4$\\ 3 & & $[27,1,27]_4$ & $[27,6,6]_4$ & $[[27,5,6/2]]_4$ & 78 & & $[43,8,26]_4$ & $[43,36,5]_4$ & $[[43,28,5/5]]_4$\\ 4 & & $[27,1,27]_4$ & $[27,9,3]_4$ & $[[27,8,3/2]]_4$ & 79 & & $[43,1,43]_4$ & $[43,36,5]_4$ & $[[43,35,5/2]]_4$\\ 5 & & $[27,1,27]_4$ & $[27,18,2]_4$ & $[[27,17,2/2]]_4$ & 80 & $45$ & $[45,1,45]_4$ & $[45,3,33]_4$ & $[[45,2,33/2]]_4$\\ 6 & $29$ & $[29,1,29]_4$ & $[29,15,11]_4$ & $[[29,14,11/2]]_4$ & 81 & & $[45,18,10]_4$ & $[45,20,9]_4$ & $[[45,2,9/6]]_4$\\ 7 & $31$ & $[31,1,31]_4$ & $[31,6,15]_4$ & $[[31,5,15/2]]_4$ & 82 & & $[45,1,45]_4$ & $[45,4,30]_4$ & $[[45,3,30/2]]_4$\\ 8 & & $[31,21,5]_4$ & $[31,26,3]_4$ & $[[31,5,12/3]]_4$ & 83 & & $[45,15,11]_4$ & $[45,18,10]_4$ & $[[45,3,10/3]]_4$\\ 9 & & $[31,6,15]_4$ & $[31,11,11]_4$ & $[[31,5,11/4]]_4$ & 84 & & $[45,1,45]_4$ & $[45,6,21]_4$ & $[[45,5,21/2]]_4$\\ 10 & & $[31,16,7]_4$ & $[31,21,5]_4$ & $[[31,5,8/5]]_4$ & 85 & & $[45,15,11]_4$ & $[45,20,9]_4$ & $[[45,5,9/3]]_4$\\ 11 & & $[31,11,11]_4$ & $[31,16,7]_4$ & $[[31,5,7/6]]_4$ & 86 & & $[45,26,7]_4$ & $[45,31,5]_4$ & $[[45,5,8/5]]_4$\\ 12 & & $[31,1,31]_4$ & $[31,11,11]_4$ & $[[31,10,11/2]]_4$ & 87 & & $[45,1,45]_4$ & $[45,8,18]_4$ & $[[45,7,18/2]]_4$\\ 13 & & $[31,16,7]_4$ & $[31,26,3]_4$ & $[[31,10,8/3]]_4$ & 88 & & $[45,26,7]_4$ & $[45,33,3]_4$ & $[[45,7,8/3]]_4$\\ 14 & & $[31,6,15]_4$ & $[31,16,7]_4$ & $[[31,10,7/4]]_4$ & 89 & & $[45,1,45]_4$ & $[45,9,15]_4$ & $[[45,8,15/2]]_4$\\ 15 & & $[31,11,11]_4$ & $[31,21,5]_4$ & $[[31,10,6/5]]_4$ & 90 & & $[45,18,10]_4$ & $[45,26,7]_4$ & $[[45,8,7/6]]_4$\\ 16 & & $[31,1,31]_4$ & $[31,16,7]_4$ & $[[31,15,7/2]]_4$ & 91 & & $[45,18,10]_4$ & $[45,28,6]_4$ & $[[45,10,6/6]]_4$\\ 17 & & $[31,11,11]_4$ & $[31,26,3]_4$ & $[[31,15,6/3]]_4$ & 92 & & $[45,15,11]_4$ & $[45,26,7]_4$ & $[[45,11,7/3]]_4$\\ 18 & & $[31,6,15]_4$ & $[31,21,5]_4$ & $[[31,15,5/4]]_4$ & 93 & & $[45,18,10]_4$ & $[45,31,5]_4$ & $[[45,13,6/5]]_4$\\ 19 & & $[31,1,31]_4$ & $[31,21,5]_4$ & $[[31,20,5/2]]_4$ & 94 & & $[45,1,45]_4$ & $[45,15,11]_4$ & $[[45,14,11/2]]_4$\\ 20 & & $[31,6,15]_4$ & $[31,26,3]_4$ & $[[31,20,4/3]]_4$ & 95 & & $[45,18,10]_4$ & $[45,33,3]_4$ & $[[45,15,6/3]]_4$\\ 21 & & $[31,1,31]_4$ & $[31,26,3]_4$ & $[[31,25,3/2]]_4$ & 96 & & $[45,15,11]_4$ & $[45,31,5]_4$ & $[[45,16,5/3]]_4$\\ 22 & $33$ & $[33,1,33]_4$ & $[33,2,22]_4$ & $[[33,1,22/2]]_4$ & 97 & & $[45,1,45]_4$ & $[45,18,10]_4$ & $[[45,17,10/2]]_4$\\ 23 & & $[33,23,3]_4$ & $[33,28,2]_4$ & $[[33,5,12/2]]_4$ & 98 & & $[45,15,11]_4$ & $[45,33,3]_4$ & $[[45,18,3/3]]_4$\\ 24 & 
& $[33,18,8]_4$ & $[33,23,3]_4$ & $[[33,5,11/3]]_4$ & 99 & & $[45,1,45]_4$ & $[45,20,9]_4$ & $[[45,19,9/2]]_4$\\ 25 & & $[33,8,11]_4$ & $[33,13,10]_4$ & $[[33,5,10/4]]_4$ & 100 & & $[45,1,45]_4$ & $[45,26,7]_4$ & $[[45,25,7/2]]_4$\\ 26 & & $[33,13,10]_4$ & $[33,18,8]_4$ & $[[33,5,8/6]]_4$ & 101 & & $[45,1,45]_4$ & $[45,28,6]_4$ & $[[45,27,6/2]]_4$\\ 27 & & $[33,18,8]_4$ & $[33,28,2]_4$ & $[[33,10,11/2]]_4$ & 102 & & $[45,1,45]_4$ & $[45,31,5]_4$ & $[[45,30,5/2]]_4$\\ 28 & & $[33,8,11]_4$ & $[33,18,8]_4$ & $[[33,10,8/4]]_4$ & 103 & & $[45,1,45]_4$ & $[45,33,3]_4$ & $[[45,32,3/2]]_4$\\ 29 & & $[33,1,33]_4$ & $[33,13,10]_4$ & $[[33,12,10/2]]_4$ & 104 & & $[45,1,45]_4$ & $[45,39,2]_4$ & $[[45,38,2/2]]_4$\\ 30 & & $[33,8,11]_4$ & $[33,23,3]_4$ & $[[33,15,4/3]]_4$ & 105 & $47$ & $[47,1,47]_4$ & $[47,24,11]_4$ & $[[47,23,11/2]]_4$\\ 31 & & $[33,1,33]_4$ & $[33,18,8]_4$ & $[[33,17,8/2]]_4$ & 106 & $49$ & $[49,1,49]_4$ & $[49,4,21]_4$ & $[[49,3,21/2]]_4$\\ 32 & & $[33,8,11]_4$ & $[33,28,2]_4$ & $[[33,20,4/2]]_4$ & 107 & & $[49,1,49]_4$ & $[49,7,7]_4$ & $[[49,6,7/2]]_4$\\ 33 & & $[33,1,33]_4$ & $[33,23,3]_4$ & $[[33,22,3/2]]_4$ & 108 & & $[49,1,49]_4$ & $[49,28,3]_4$ & $[[49,27,3/2]]_4$\\ 34 & & $[33,1,33]_4$ & $[33,28,2]_4$ & $[[33,27,2/2]]_4$ & 109 & $51$ & $[51,5,35]_4$ & $[51,6,34]_4$ & $[[51,1,34/3]]_4$\\ 35 & $35$ & $[35,14,7]_4$ & $[35,17,5]_4$ & $[[35,3,8/5]]_4$ & 110 & & $[51,18,18]_4$ & $[51,19,17]_4$ & $[[51,1,17/8]]_4$\\ 36 & & $[35,1,35]_4$ & $[35,6,15]_4$ & $[[35,5,15/2]]_4$ & 111 & & $[51,1,51]_4$ & $[51,5,35]_4$ & $[[51,4,35/2]]_4$\\ 37 & & $[35,6,15]_4$ & $[35,14,7]_4$ & $[[35,8,7/4]]_4$ & 112 & & $[51,6,34]_4$ & $[51,10,27]_4$ & $[[51,4,27/4]]_4$\\ 38 & & $[35,6,15]_4$ & $[35,17,5]_4$ & $[[35,11,5/4]]_4$ & 113 & & $[51,10,27]_4$ & $[51,14,19]_4$ & $[[51,4,19/6]]_4$\\ 39 & & $[35,14,7]_4$ & $[35,29,3]_4$ & $[[35,15,8/3]]_4$ & 114 & & $[51,1,51]_4$ & $[51,6,34]_4$ & $[[51,5,34/2]]_4$\\ 40 & & $[35,1,35]_4$ & $[35,17,5]_4$ & $[[35,16,5/2]]_4$ & 115 & & $[51,5,35]_4$ & $[51,10,27]_4$ & $[[51,5,27/3]]_4$\\ 41 & & $[35,6,15]_4$ & $[35,29,3]_4$ & $[[35,23,4/3]]_4$ & 116 & & $[51,18,18]_4$ & $[51,23,14]_4$ & $[[51,5,14/8]]_4$\\ 42 & & $[35,1,35]_4$ & $[35,29,3]_4$ & $[[35,28,3/2]]_4$ & 117 & & $[51,39,5]_4$ & $[51,47,2]_4$ & $[[51,8,24/2]]_4$\\ 43 & $37$ & $[37,1,37]_4$ & $[37,19,2]_4$ & $[[37,18,11/2]]_4$ & 118 & & $[51,6,34]_4$ & $[51,14,19]_4$ & $[[51,8,19/4]]_4$\\ 44 & $39$ & $[39,1,39]_4$ & $[39,2,26]_4$ & $[[39,1,26/2]]_4$ & 119 & & $[51,10,27]_4$ & $[51,18,18]_4$ & $[[51,8,18/6]]_4$\\ 45 & & $[39,1,39]_4$ & $[39,2,26]_4$ & $[[39,1,26/2]]_4$ & 120 & & $[51,1,51]_4$ & $[51,10,27]_4$ & $[[51,9,27/2]]_4$\\ 46 & & $[39,8,15]_4$ & $[39,9,13]_4$ & $[[39,1,13/3]]_4$ & 121 & & $[51,5,35]_4$ & $[51,14,19]_4$ & $[[51,9,19/3]]_4$\\ 47 & & $[39,21,9]_4$ & $[39,27,3]_4$ & $[[39,6,12/3]]_4$ & 122 & & $[51,10,27]_4$ & $[51,19,17]_4$ & $[[51,9,17/6]]_4$\\ 48 & & $[39,9,13]_4$ & $[39,15,10]_4$ & $[[39,6,10/4]]_4$ & 123 & & $[51,6,34]_4$ & $[51,18,18]_4$ & $[[51,12,18/4]]_4$\\ 49 & & $[39,15,10]_4$ & $[39,21,9]_4$ & $[[39,6,9/6]]_4$ & 124 & & $[51,1,51]_4$ & $[51,14,19]_4$ & $[[51,13,19/2]]_4$\\ 50 & & $[39,1,39]_4$ & $[39,8,15]_4$ & $[[39,7,15/2]]_4$ & 125 & & $[51,5,35]_4$ & $[51,18,18]_4$ & $[[51,13,18/3]]_4$\\ 51 & & $[39,8,15]_4$ & $[39,15,10]_4$ & $[[39,7,10/3]]_4$ & 126 & & $[51,6,34]_4$ & $[51,19,17]_4$ & $[[51,13,17/4]]_4$\\ 52 & & $[39,1,39]_4$ & $[39,9,13]_4$ & $[[39,8,13/2]]_4$ & 127 & & $[51,10,27]_4$ & $[51,23,14]_4$ & $[[51,13,14/6]]_4$\\ 53 & & $[39,21,9]_4$ & $[39,33,2]_4$ & 
$[[39,12,12/2]]_4$ & 128 & & $[51,5,35]_4$ & $[51,19,17]_4$ & $[[51,14,17/3]]_4$\\ 54 & & $[39,9,13]_4$ & $[39,21,9]_4$ & $[[39,12,9/4]]_4$ & 129 & & $[51,1,51]_4$ & $[51,18,18]_4$ & $[[51,17,18/2]]_4$\\ 55 & & $[39,8,15]_4$ & $[39,21,9]_4$ & $[[39,13,9/3]]_4$ & 130 & & $[51,6,34]_4$ & $[51,23,14]_4$ & $[[51,17,14/4]]_4$\\ 56 & & $[39,1,39]_4$ & $[39,15,10]_4$ & $[[39,14,10/2]]_4$ & 131 & & $[51,18,18]_4$ & $[51,35,9]_4$ & $[[51,17,9/8]]_4$\\ 57 & & $[39,9,13]_4$ & $[39,27,3]_4$ & $[[39,18,4/3]]_4$ & 132 & & $[51,1,51]_4$ & $[51,19,17]_4$ & $[[51,18,17/2]]_4$\\ 58 & & $[39,8,15]_4$ & $[39,27,3]_4$ & $[[39,19,3/3]]_4$ & 133 & & $[51,5,35]_4$ & $[51,23,14]_4$ & $[[51,18,14/3]]_4$\\ 59 & & $[39,1,39]_4$ & $[39,21,9]_4$ & $[[39,20,9/2]]_4$ & 134 & & $[51,1,51]_4$ & $[51,23,14]_4$ & $[[51,22,14/2]]_4$\\ 60 & & $[39,9,13]_4$ & $[39,33,2]_4$ & $[[39,24,4/2]]_4$ & 135 & & $[51,10,27]_4$ & $[51,35,9]_4$ & $[[51,25,9/6]]_4$\\ 61 & & $[39,1,39]_4$ & $[39,27,3]_4$ & $[[39,26,3/2]]_4$ & 136 & & $[51,6,34]_4$ & $[51,35,9]_4$ & $[[51,29,9/4]]_4$\\ 62 & & $[39,1,39]_4$ & $[39,33,2]_4$ & $[[39,32,2/2]]_4$ & 137 & & $[51,10,27]_4$ & $[51,39,5]_4$ & $[[51,29,6/5]]_4$\\ 63 & $41$ & $[41,1,41]_4$ & $[41,11,20]_4$ & $[[41,10,20/2]]_4$ & 138 & & $[51,5,35]_4$ & $[51,35,9]_4$ & $[[51,30,9/3]]_4$\\ 64 & & $[41,21,9]_4$ & $[41,31,6]_4$ & $[[41,10,10/6]]_4$ & 139 & & $[51,10,27]_4$ & $[51,43,3]_4$ & $[[51,33,6/3]]_4$\\ 65 & & $[41,11,20]_4$ & $[41,21,9]_4$ & $[[41,10,9/7]]_4$ & 140 & & $[51,6,34]_4$ & $[51,39,5]_4$ & $[[51,33,5/4]]_4$\\ 66 & & $[41,1,41]_4$ & $[41,21,9]_4$ & $[[41,20,9/2]]_4$ & 141 & & $[51,1,51]_4$ & $[51,35,9]_4$ & $[[51,34,9/2]]_4$\\ 67 & & $[41,11,20]_4$ & $[41,31,6]_4$ & $[[41,20,7/6]]_4$ & 142 & & $[51,5,35]_4$ & $[51,39,5]_4$ & $[[51,34,5/3]]_4$\\ 68 & & $[41,1,41]_4$ & $[41,31,6]_4$ & $[[41,30,6/2]]_4$ & 143 & & $[51,10,27]_4$ & $[51,47,2]_4$ & $[[51,37,6/2]]_4$\\ 69 & $43$ & $[43,1,43]_4$ & $[43,8,26]_4$ & $[[43,7,26/2]]_4$ & 144 & & $[51,6,34]_4$ & $[51,43,3]_4$ & $[[51,37,4/3]]_4$\\ 70 & & $[43,29,6]_4$ & $[43,36,5]_4$ & $[[43,7,14/5]]_4$ & 145 & & $[51,1,51]_4$ & $[51,39,5]_4$ & $[[51,38,5/2]]_4$\\ 71 & & $[43,22,11]_4$ & $[43,29,6]_4$ & $[[43,7,12/6]]_4$ & 146 & & $[51,5,35]_4$ & $[51,43,3]_4$ & $[[51,38,3/3]]_4$\\ 72 & $43$ & $[43,1,43]_4$ & $[43,15,13]_4$ & $[[43,14,13/2]]_4$ & 147 & & $[51,6,34]_4$ & $[51,47,2]_4$ & $[[51,41,4/2]]_4$\\ 73 & & $[43,22,11]_4$ & $[43,36,5]_4$ & $[[43,14,12/5]]_4$ & 148 & & $[51,5,35]_4$ & $[51,47,2]_4$ & $[[51,42,3/2]]_4$\\ 74 & & $[43,15,13]_4$ & $[43,29,6]_4$ & $[[43,14,6/6]]_4$ & 149 & & $[51,1,51]_4$ & $[51,47,2]_4$ & $[[51,46,2/2]]_4$\\ 75 & & $[43,1,43]_4$ & $[43,22,11]_4$ & $[[43,21,11/2]]_4$ & & & & & \\ \hline \end{tabular} \end{table*} \section{Asymmetric Quantum Codes from Nested Additive Codes over ${\mathbb F}_4$}\label{sec:nestedadditive} To show the gain that we can get from Theorem~\ref{thm:3.5} over the construction which is based solely on ${\mathbb F}_{4}$-linear codes, we exhibit asymmetric quantum codes which are derived from nested additive codes. An example of asymmetric quantum code with $k>0$ can be derived from a self-orthogonal additive cyclic code listed as Entry 3 in~\cite[Table I]{CRSS98}. The code is of parameters $(21,2^{16},9)_4$ yielding a $[[21,5,6/6]]_4$ quantum code $Q$ by Theorem~\ref{thm:3.5}. In a similar manner, a $[[23,12,4/4]]_4$ quantum code can be derived from Entry 5 of the same table. Another very interesting example is the $(12,2^{12},6)_4$ dodecacode $C$ mentioned in Remark~\ref{rem:5.3}. 
Its generator matrix $G$ is given in Equation (\ref{eq:9.1}). Let $G_{D},G_{E}$ be matrices formed, respectively, by deleting the last 4 and 8 rows of $G$. Construct two additive codes $D,E \subset C$ with generator matrices $G_{D}$ and $G_{E}$, respectively. Applying Theorem~\ref{thm:3.5} with $C_1=D^{\perp_{\mathop{{\rm tr}}}}$ and $C_2 = C$ yields an asymmetric quantum code $Q$ with parameters $[[12,2,6/3]]_4$. Performing the same process to $E \subset C$ results in a $[[12,4,6/2]]_4$-code. \begin{equation}\label{eq:9.1} G=\left( \begin{array}{*{12}{l}} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & \omega & \omega & \omega & \omega & \omega & \omega\\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \omega & \omega & \omega & \omega & \omega & \omega & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & \omega & \overline{\omega} & 0 & 0 & 0 & 1 & \omega & \overline{\omega}\\ 0 & 0 & 0 & \omega & \overline{\omega} & 1 & 0 & 0 & 0 & \omega & \overline{\omega} & 1\\ 1 & \overline{\omega} & \omega & 0 & 0 & 0 & 1 & \overline{\omega} & \omega & 0 & 0 & 0\\ \omega & 1 & \overline{\omega} & 0 & 0 & 0 & \omega & 1 & \overline{\omega} & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & \overline{\omega} & \omega & \omega & \overline{\omega} & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & \omega & 1 & \overline{\omega} & 1 & \omega & \overline{\omega} & 0 & 0 & 0\\ 1 & \omega & \overline{\omega} & 0 & 0 & 0 & 0 & 0 & 0 & \overline{\omega} & \omega & 1\\ \overline{\omega} & 1 & \omega & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \overline{\omega} & \omega \end{array} \right)\text{.} \end{equation} The next three subsections present more systematic approaches to finding good asymmetric quantum codes based on nested additive codes. \subsection{Construction from circulant codes}\label{subsec:circulant} As is the case with linear codes, an additive code $C$ is said to be cyclic if, given a codeword ${\mathbf{v}} \in C$, the cyclic shift of ${\mathbf{v}}$ is also in $C$. It is known (see~\cite[Th. 14]{CRSS98}) that any additive cyclic $(n,2^{k})_{4}$-code $C$ has at most two generators. A more detailed study of additive cyclic codes over ${\mathbb F}_{4}$ is given in~\cite{Huff07}. Instead of using additive cyclic codes, a subfamily which is called \textit{additive circulant $(n,2^{n})_4$-code} in~\cite{GK04} is used for ease of computation. An additive circulant code $C$ has as a generator matrix $G$ the complete cyclic shifts of just one codeword ${\mathbf{v}}=(v_{1},v_{2},\ldots,v_{n})$. We call $G$ the \textit{cyclic development of ${\mathbf{v}}$}. More explicitly, $G$ is given by \begin{equation}\label{eq:circ} G=\left( \begin{array}{*{12}{l}} v_{1} & v_{2} & v_{3} & \ldots & v_{n-1} & v_{n}\\ v_{n} & v_{1} & v_{2} & \ldots & v_{n-2} & v_{n-1}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ v_{2} & v_{3} & v_{4} & \ldots & v_{n} & v_{1} \end{array} \right)\text{.} \end{equation} To generate a subcode of a circulant extremal self-dual code $C$ we delete the rows of its generator matrix $G$ starting from the last row, the first row being the generating codeword ${\mathbf{v}}$. We record the best possible combinations of the size of the resulting code $Q$ and $\left\lbrace d_z,d_x \right\rbrace$. To save space, only new codes or codes with better parameters than those previously constructed are presented. Table~\ref{table:Circu} summarizes the finding for $n \leq 30$. Zeros on the right of each generating codeword are omitted. 
The number of last rows to be deleted to obtain the desired subcode is given in the column denoted by \textbf{del}. \begin{table*} \caption{Asymmetric Quantum Codes from Additive Circulant Codes for $n \leq 30$} \label{table:Circu} \centering \begin{tabular}{| c | c | l | c | l |} \hline \textbf{No.} & \textbf{$n$} & \textbf{Generator ${\mathbf{v}}$} & \textbf{del} & \textbf{Code $Q$} \\ \hline 1 & $8$ & $(\overline{\omega},1,\omega,0,1)$ & $1$ & $[[8,0.5,4/2]]_4$ \\ 2 & $10$ & $(\overline{\omega},1,\omega,\omega,0,0,1)$ & $2$ & $[[10,1,4/3]]_4$ \\ 3 & & & $3$ & $[[10,1.5,4/2]]_4$ \\ 4 & $12$ & $(1,0,\omega,1,\omega,0,1)$ & $2$ & $[[12,1,5/3]]_4$ \\ 5 & $14$ & $(\omega,1,1,\overline{\omega},0,\omega,0,1)$ & $1$ & $[[14,0.5,6/3]]_4$ \\ 6 & & & $5$ & $[[14,2.5,6/2]]_4$ \\ 7 & $16$ & $(\omega,1,\overline{\omega},\omega,0,0,0,\omega,1)$ & $2$ & $[[16,1,6/4]]_4$ \\ 8 & $16$ & $(\omega,1,1,0,0,\overline{\omega},\omega,0,1)$ & $6$ & $[[16,3,6/3]]_4$ \\ 9 & $16$ & $(\omega,1,\overline{\omega},\omega,0,0,0,\omega,1)$ & $7$ & $[[16,3.5,6/2]]_4$ \\ 10 & $19$ & $(1,0,\omega,\overline{\omega},1,\overline{\omega},\omega,0,1)$ & $4$ & $[[19,2,7/4]]_4$ \\ 11 & $20$ & $(\overline{\omega},\overline{\omega},\omega,\overline{\omega},\omega,1,0,0,0,1,1)$ & $2$ & $[[20,1,8/5]]_4$ \\ 12 & & & $3$ & $[[20,1.5,8/4]]_4$ \\ 13 & & & $7$ & $[[20,3.5,8/3]]_4$ \\ 14 & $22$ & $(\omega,\omega,\omega,\omega,1,1,\overline{\omega},\omega,0,\omega,\omega,0,\omega,\omega,1,1)$ & $4$&$[[22,2,8/5]]_4$\\ 15 & & & $6$ & $[[22,3,8/4]]_4$ \\ 16 & & & $10$ & $[[22,5,8/3]]_4$ \\ 17 & & & $11$ & $[[22,5.5,8/2]]_4$ \\ 18 & $23$ & $(1,\omega,\omega,1,\overline{\omega},1,\omega,\omega,1)$ & $2$ & $[[23,1,8/4]]_4$\\ 19 & & & $6$ & $[[23,3,8/3]]_4$ \\ 20 & $25$ & $(1,1,\omega,0,1,\overline{\omega},1,0,\omega,1,1)$ & $3$ & $[[25,1.5,8/5]]_4$\\ 21 & & & $6$ & $[[25,3,8/4]]_4$ \\ 22 & $26$ & $(1,0,\omega,\omega,\omega,\overline{\omega},\omega,\omega,\omega,0,1)$ & $5$ & $[[26,2.5,8/4]]_4$\\ 23 & & & $6$ & $[[26,3,8/3]]_4$ \\ 24 & $27$ & $(1,0,\omega,1,\omega,\overline{\omega},\omega,1,\omega,0,1)$ & $1$ & $[[27,0.5,8/5]]_4$\\ 25 & $27$ & $(1,0,\omega,\omega,1,\overline{\omega},1,\omega,\omega,0,1)$ & $5$ & $[[27,2.5,8/4]]_4$ \\ 26 & & & $6$ & $[[27,3,8/3]]_4$\\ 27 & $28$ & $(\overline{\omega},\omega,\overline{\omega},1,\overline{\omega},1,\omega,\omega,\overline{\omega},\overline{\omega},\omega,\omega,0,1,1)$ & $1$ & $[[28,0.5,10/7]]_4$\\ 28 & & & $2$ & $[[28,1,10/5]]_4$ \\ 29 & & & $9$ & $[[28,4.5,10/4]]_4$ \\ 30 & & & $11$ & $[[28,5.5,10/3]]_4$ \\ 31 & $29$ & $(1,\omega,0,\omega,\overline{\omega},1,\overline{\omega},\omega,\overline{\omega},1,\overline{\omega},\omega,0,\omega,1)$ & $1$ & $[[29,0.5,11/7]]_4$\\ 32 & & & $3$ & $[[29,1.5,11/6]]_4$ \\ 33 & & & $8$ & $[[29,4,11/4]]_4$ \\ 34 & & & $12$ & $[[29,6,11/3]]_4$ \\ 35 & $30$ & $(\overline{\omega},0,\overline{\omega},\omega,1,\omega,0,\overline{\omega},\omega,0,1,\omega,1,1,0,1)$ & $5$ & $[[30,2.5,12/6]]_4$\\ 36 & & & $6$ & $[[30,3,12/5]]_4$ \\ 37 & & & $10$ & $[[30,5,12/3]]_4$ \\ 38 & & & $11$ & $[[30,5.5,12/2]]_4$ \\ \hline \end{tabular} \end{table*} \subsection{Construction from $4$-circulant and bordered $4$-circulant codes}\label{subsec:4circulant} Following~\cite{GK04}, a \textit{$4$-circulant additive $(n,2^{n})_4$-code of even length $n$} has the following generator matrix: \begin{equation}\label{eq:4circ} G=\left( \begin{array}{*{12}{l}} I_{\frac{n}{2}} & A_{\frac{n}{2}}\\ B_{\frac{n}{2}} & I_{\frac{n}{2}} \end{array} \right) \end{equation} where $I_{\frac{n}{2}}$ is an identity 
matrix of size $n/2$ and $A_{\frac{n}{2}},B_{\frac{n}{2}}$ are circulant matrices of the form given in Equation (\ref{eq:circ}). Starting from a generator matrix $G_{C}$ of an additive $4$-circulant code $C$, a matrix $G_{D}$ is constructed by deleting the last $r$ rows of $G_{C}$ to derive an additive subcode $D$ of $C$. For $n \leq 30$ we found three asymmetric quantum codes which are either new or better than the ones previously constructed. Table~\ref{table:4Circu} presents the findings. Under the column denoted by \textbf{$A,B$} we list the generating codewords for the matrices $A$ and $B$, in that order. \begin{table} \caption{Asymmetric Quantum Codes from Additive 4-Circulant Codes for $n \leq 30$} \label{table:4Circu} \centering \begin{tabular}{| c | c | c | l |} \hline \textbf{$n$} & \textbf{$A,B$} & \textbf{del} & \textbf{Code $Q$} \\ \hline $14$ & $(1,\omega,\omega,\omega,1,0,0)$, & $2$ & $[[14,1,6/3]]_4$ \\ & $(1,0,0,1,\omega,\omega,\omega)$ & & \\ $20$ & $(\overline{\omega},\omega,\omega,\omega,\omega,\omega,\overline{\omega},0,\omega,0)$, & $8$ & $[[20,4,8/2]]_4$ \\ & $(\omega,0,\overline{\omega},0,\omega,\omega,\overline{\omega},\omega,\overline{\omega},\omega)$ & & \\ \hline \end{tabular} \end{table} Let ${\mathbf{d}}=(\omega,\ldots,\omega)$ and ${\mathbf{c}}$ be the transpose of ${\mathbf{d}}$. A \textit{bordered $4$-circulant additive $(n,2^{n})_4$-code of odd length $n$} has the following generator matrix: \begin{equation}\label{eq:bordered4circ} G=\left( \begin{array}{*{12}{l}} e & {\mathbf{1}} & {\mathbf{d}}\\ {\mathbf{1}} & I_{\frac{n-1}{2}} & A_{\frac{n-1}{2}}\\ {\mathbf{c}} & B_{\frac{n-1}{2}} & I_{\frac{n-1}{2}} \end{array} \right)\ \end{equation} where $e$ is one of $0,1,\omega$, or $\overline{\omega}$, and $A_{\frac{n-1}{2}},B_{\frac{n-1}{2}}$ are circulant matrices. We again construct a subcode $D$ of $C$ by deleting the rows of $G_{C}$, starting from the last row. For $n \leq 30$, the five asymmetric quantum codes found, which are either new or have better parameters, are listed in Table~\ref{table:B4Circu}. As before, under the column denoted by \textbf{$A,B$} we list the generating codewords for the matrices $A$ and $B$, in that order.
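To make the assembly of these generator matrices concrete, the following Python fragment is an illustrative sketch rather than the code behind our search: the symbols '0', '1', 'w', 'W' simply stand for $0,1,\omega,\overline{\omega}$, and no field arithmetic is needed to write down the matrices themselves. It builds the cyclic development of Equation (\ref{eq:circ}), the $4$-circulant matrix of Equation (\ref{eq:4circ}), and the row deletion that yields the nested subcode; the $n=14$ generating vectors are those of Table~\ref{table:4Circu}.
\begin{verbatim}
# Sketch: (4-)circulant generator matrices and their row-deleted subcodes.
def cyclic_development(v):
    """Circulant matrix whose rows are the successive cyclic shifts of v."""
    n = len(v)
    return [[v[(j - i) % n] for j in range(n)] for i in range(n)]

def four_circulant_generator(a, b):
    """Block matrix [[I, A], [B, I]] with A, B the circulants of a and b."""
    m = len(a)
    I = [['1' if i == j else '0' for j in range(m)] for i in range(m)]
    A, B = cyclic_development(a), cyclic_development(b)
    return [I[i] + A[i] for i in range(m)] + [B[i] + I[i] for i in range(m)]

def delete_last_rows(G, r):
    """Generator matrix of the additive subcode obtained by dropping r rows."""
    return G[:len(G) - r]

if __name__ == "__main__":
    a = ['1', 'w', 'w', 'w', '1', '0', '0']   # n = 14 entry of Table 4Circu
    b = ['1', '0', '0', '1', 'w', 'w', 'w']
    G = four_circulant_generator(a, b)        # 14 rows: the (14, 2^14) code C
    G_D = delete_last_rows(G, 2)              # 12 rows: a (14, 2^12) subcode D
    print(len(G), len(G_D))                   # 14 12
\end{verbatim}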
\begin{table} \caption{Asymmetric Quantum Codes from Additive Bordered 4-Circulant Codes for $n \leq 30$} \label{table:B4Circu} \centering \begin{tabular}{| c | c | c | c | l |} \hline \textbf{$n$} & \textbf{$e$} & \textbf{$A,B$} & \textbf{del} & \textbf{Code $Q$} \\ \hline $23$ & $\omega$ & $(\overline{\omega},1,\overline{\omega},\omega,\omega,1,1,0,0,0,0)$, & $1$ & $[[23,0.5,8/5]]_4$ \\ & & $(0,\overline{\omega},\omega,\omega,\omega,\omega,\overline{\omega},1,0,1,\omega)$ & & \\ $23$ & $\omega$ & $(\overline{\omega},1,\overline{\omega},\omega,\omega,1,1,0,0,0,0)$, & $4$ & $[[23,2,8/4]]_4$ \\ & & $(0,\overline{\omega},\omega,\omega,\omega,\omega,\overline{\omega},1,0,1,\omega)$ & & \\ $23$ & $\omega$ & $(1,1,\overline{\omega},1,\overline{\omega},1,1,0,0,0,0)$, & $9$ & $[[23,4.5,8/2]]_4$ \\ & &$(\overline{\omega},\overline{\omega},\omega,\omega,\overline{\omega},\overline{\omega},\omega,0,\omega,0,\omega)$ & & \\ $25$ & $\omega$ & $(\omega,1,1,\omega,\overline{\omega},1,0,1,0,0,0,0)$, & $4$ & $[[25,2,8/5]]_4$ \\ & & $(1,\overline{\omega},\overline{\omega},\omega,\omega,\overline{\omega},\omega,\overline{\omega},1,0,\overline{\omega},\omega)$ & & \\ $25$ & $\omega$ & $(\omega,1,\overline{\omega},1,1,\omega,0,1,0,0,0,0)$, & $10$ & $[[25,5,8/2]]_4$ \\ & & $(1,\overline{\omega},\overline{\omega},\overline{\omega},\omega,\overline{\omega},\omega,1,\omega,\overline{\omega},0,\omega)$ & & \\ \hline \end{tabular} \end{table} \begin{rem}\label{rem:9.1} A similar procedure has been done to the generator matrices of \textit{s-extremal additive codes} found in~\cite{BGKW07} and~\cite{Var09} as well as to the \textit{formally self-dual additive codes} of~\cite{HK08}. So far we have found no new or better asymmetric codes from these sources. \end{rem} Deleting the rows of $G_{C}$ in a more careful way than just doing so consecutively starting from the last row may yield new or better asymmetric quantum codes. The process, however, is more time consuming. Consider the following instructive example taken from bordered 4-circulant codes. Let \begin{equation*} P:=\left\lbrace 1,2,4,5,8,10,12,13,14,15,16\right\rbrace \text{.} \end{equation*} Let $C$ be a bordered 4-circulant code of length $n=23$ with generator matrix $G_{C}$ in the form given in Equation (\ref{eq:bordered4circ}) with $e=\omega$ and with the circulant matrices $A,B$ generated by, respectively, \begin{equation*} \begin{aligned} &(\overline{\omega},1,\overline{\omega},\omega,\omega,1,1,0,0,0,0)\text{, and} \\ &(0,\overline{\omega},\omega,\omega,\omega,\omega,\overline{\omega},1,0,1,\omega) \text{.} \end{aligned} \end{equation*} Use the rows of $G_{C}$ indexed by the set $P$ as the rows of $G_{D}$, the generator matrix of a subcode $D$ of $C$. Using $D \subset C$, a $[[23,6,8/2]]_4$ asymmetric quantum code $Q$ can be constructed. If we use the same code $C$ but $G_{D}$ is now $G_{C}$ with rows 3,6,7,9, and 11 deleted, then, in a similar manner, we get a $[[23,2.5,8/4]]_4$ code $Q$. \subsection{Construction from two proper subcodes}\label{subsec:proper} In the previous two subsections, the larger code $C$ is an additive self-dual code while the subcode $D$ of $C$ is constructed by deleting rows of $G_{C}$. New or better asymmetric quantum codes can be constructed from two nested proper subcodes of an additive self-dual code. The following two examples illustrate this fact. 
\begin{ex}\label{ex:9.1} Let $C$ be a self-dual Type II additive code of length 22 with generating vector \begin{equation*} {\mathbf{v}}=(\omega,\omega,\omega,\omega,1,1,\overline{\omega}, \omega,0,\omega,\omega,0,\omega,\omega,1,1,0,\ldots,0)\text{.} \end{equation*} Let $G_{C}$ be the generator matrix of $C$ from the cyclic development of ${\mathbf{v}}$. Derive the generator matrices $G_{D}$ of $D$ and $G_{E}$ of $E$ by deleting, respectively, the last $10$ and $11$ rows of $G_{C}$. Applying Theorem~\ref{thm:3.5} on $E \subset D$ yields an asymmetric $[[22,0.5,10/2]]_4$-code $Q$. \end{ex} \begin{ex}\label{ex:9.2} Let $C$ be a self-dual Type I additive code of length 25 labeled $C_{25,4}$ in~\cite{GK04} with generating vector \begin{equation*} {\mathbf{v}}=(1,1,\omega,0,1,\overline{\omega},1,0,\omega,1,1,0,0,\ldots,0)\text{.} \end{equation*} Let $G_{C}$ be the generator matrix of $C$ from the cyclic development of ${\mathbf{v}}$. Derive the generator matrices $G_{D}$ of $D$ and $G_{E}$ of $E$ by deleting, respectively, the last $5$ and $6$ rows of $G_{C}$. An asymmetric $[[25,0.5,9/4]]_4$-code $Q$ is hence constructed. \end{ex} \section{Conclusions and Open Problems}\label{sec:Conclusion} In this paper, we establish a new method of deriving asymmetric quantum codes from additive, not necessarily linear, codes over the field ${\mathbb F}_{q}$ with $q$ an even power of a prime $p$. Many asymmetric quantum codes over ${\mathbb F}_{4}$ are constructed. These codes are different from those listed in prior works (see~\cite[Ch. 17]{Aly08} and~\cite{SRK09}) on asymmetric quantum codes. There are several open directions to pursue. On ${\mathbb F}_{4}$-additive codes, exploring the notion of nestedness in tandem with the dual distance of the inner code is a natural continuation if we are to construct better asymmetric quantum codes. An immediate project is to understand such relation in the class of cyclic (not merely circulant) codes studied in~\cite{Huff07}. Extension to codes over ${\mathbb F}_9$ or ${\mathbb F}_{16}$ is another option worth considering. More generally, establishing propagation rules may help us find better bounds on the parameters of asymmetric quantum codes. \section*{Acknowledgment}\label{sec:Acknowledge} The first author thanks Keqin Feng for fruitful discussions. The authors thank Iliya Bouyukliev, Yuena Ma, and Ruihu Li for sharing their data on known Hermitian self-orthogonal ${\mathbb F}_{4}$-codes, and Somphong Jitman for helpful discussions.
\section{INTRODUCTION} The properties of the earliest galaxies, such as their star formation histories, masses, production of ionizing photons and their escape fraction, are crucial in understanding the reionization process, during which the previously neutral intergalactic medium (IGM) becomes totally ionized. Thanks to the availability of large ground-based and space telescopes, and improvements in the search techniques for Lyman Break Galaxies and Lyman-$\alpha$ Emitters, we are now tracing galaxy formation at progressively higher redshifts beyond 6 (see \citealt{Bunker09} for a review). Candidates at redshifts as high as $z \sim 10$ are newly reported from the analysis of the Hubble Ultra Deep Field (HUDF) \citep{Bouwens09}. However, it is now believed that the galaxies that produced most of the necessary (re)ionizing photons were dwarfs \citep{CF07}, which are currently beyond our capability of direct detection. The forthcoming James Webb Space Telescope (JWST) will have the capability to directly detect the reionization sources at the faint end of the luminosity function. Still, given their faintness, long integration times will be required; hence, defining target candidate reionizing sources will be of primary importance to study them in spectroscopic detail. Instead of looking at a specific galaxy directly, one can use the redshifted 21 cm transition of HI, which traces the neutral gas either in the diffuse IGM or in non-linear structures and constitutes one of the most promising probes of the reionization process (see e.g. \citealt{FOB06} for a review). While the 21 cm tomography maps out the three dimensional structure of the 21 cm emission or absorption by the IGM against the cosmic microwave background (CMB) (e.g. \citealt{Madau97,Tozzi00}), the 21 cm forest observation detects absorption lines of intervening structures towards high redshift radio sources, with high sensitivity to the gas temperature \citep{XuC09}. The problem of the 21 cm forest signatures produced by different kinds of structures has been addressed by several authors. \citet{Carilli02} presented a detailed study of 21 cm absorption by the mean neutral IGM as well as filamentary structures based on the simulations of \citet{Gnedin00}, but their box was too small to account for large scale structures and was not able to resolve collapsed objects. Instead, \citet{Furlanetto02} used a simple analytic model to compute the absorption profiles and abundances of minihalos and galactic disks. Later on, \citet{Furlanetto06} re-examined four kinds of 21 cm forest absorption signatures in a broader context, especially the transmission gaps produced by ionized bubbles. Recently, \citet{XuF10} developed a more detailed model of the 21 cm absorption lines of minihalos (i.e. starless galaxies, MHs) and dwarf galaxies (star-forming galaxies, DGs) during the epoch of reionization, explored the physical origins of the line profiles, and generated synthetic spectra of the 21 cm forest on top of both high-$z$ quasars and gamma ray burst (GRB) afterglows. Interestingly, they find that (i) MHs and DGs show very distinct 21 cm line absorption profiles; (ii) they contribute differently to the spectrum due to the mass segregation between the two populations. It follows that it is in principle possible to build a criterion based on the 21 cm forest spectrum to efficiently select DGs. The goal of this work is to derive the different signatures of DGs and MHs using the 21 cm spectra of high-$z$ radio sources, and to provide a criterion to pick out DG lines in the spectrum.
For these candidates, precise redshift information will be available; moreover, given the angular position of the background source, the 21 cm forest observation provides an excellent tool for locating the high-$z$ DGs. The great advantage of using high-$z$ GRBs as background radio sources is that the follow-up IR JWST observations after the afterglow has faded away will not be hampered by the presence of a very luminous source (as in the case of a background quasar) in the field\footnote{Throughout this paper, we adopt the cosmological parameters from WMAP5 measurements combined with SN and BAO data: $\Omega_b = 0.0462$, $\Omega_c = 0.233$, $\Omega_\Lambda = 0.721$, $H_{\rm 0} = 70.1 \ensuremath{\,{\rm km}} \ensuremath{\, {\rm s}^{-1}} \ensuremath{{\rm Mpc}}^{-1}$, $\sigma_{\rm 8} = 0.817$, and $n_{\rm s} = 0.96$ \citep{WMAP5}}. \section{METHOD}\label{method} Here we briefly summarize the main features of the model used in this work, but refer the interested reader to \citet{XuF10} for a more comprehensive description. We use the Sheth-Tormen halo mass function \citep{ST99} to model the halo number density at high redshift in the mass range $10^{5-10}M_\odot$, which covers the minimum mass allowed to collapse \citep{Abel00,ON07} and most of the galaxies that are responsible for reionization \citep{CF07}. The dark matter halos have an NFW density profile within the virial radii $r_{\rm vir}$ \citep{NFW97}, with a concentration parameter fitted to high-redshift simulation results by \cite{Gao05}; the dark matter density and velocity structure outside $r_{\rm vir}$ are described by an ``Infall model''\footnote{Public code available at http://wise-obs.tau.ac.il/$\tilde{\;\:}$barkana/codes.html} \citep{Barkana04}. The gas inside the $r_{\rm vir}$ is assumed to be in hydrostatic equilibrium at temperature $T_{\rm vir}$ in the dark matter potential, while the gas outside follows the dark matter overdensity and velocity field. Once the halo population is fixed, a timescale criterion for star formation is introduced to determine whether a halo is capable of hosting star formation. The timescale required for turning on star formation is modeled as the maximum between the free-fall time $t_{\rm ff}$ and the H$_2$ cooling time $t_{\rm cool}$ \citep{Tegmark97}, i.e. $t_{\rm SB} = \max \{\, t_{\rm ff},\, t_{\rm cool}\,\}$. Then star formation activity begins at $t_{\rm s} = t_{\rm F} + t_{\rm SB}$, where $t_{\rm F}$ is the halo formation time predicted by the standard EPS model \citep{LC93}. If $t_{\rm s}$ is larger than the Hubble time at the halo redshift, we define the system as a {\it minihalo}, i.e. a starless galaxy. The ionized fraction in a MH is computed with collisional ionization equilibrium, which depends on its temperature. The gas within $r_{\rm vir}$ is at the virial temperature, and in the absence of an X-ray background the gas outside is adiabatically compressed, so that the temperature is simply $T_{\rm K} = T_{\rm IGM} (1+\delta)^{\gamma-1}$, where $\gamma = 5/3$ is the adiabatic index for atomic hydrogen, and $T_{\rm IGM}$ is the temperature of the mean-density IGM. The Ly$\alpha$ photons inside a MH are produced by recombinations, which are negligible for most MHs that are almost neutral, but serve as a dominant coupling source for the most massive MHs which are partially ionized due to their higher $T_{vir}$. When the criterion $t_{\rm s} < t_{\rm H}$ is satisfied, star formation occurs within a Hubble time turning the halo into a {\it dwarf galaxy}. 
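To make the criterion explicit, the decision rule can be written in a few lines of code. The Python fragment below is only a schematic sketch: the timescales are placeholders supplied by hand, whereas in our model they follow from the halo mass, redshift, and formation history.
\begin{verbatim}
# Schematic sketch of the minihalo / dwarf-galaxy criterion described above.
def classify_halo(t_ff, t_cool, t_F, t_H):
    """Return 'DG' if star formation can start within a Hubble time, else 'MH'."""
    t_SB = max(t_ff, t_cool)   # time needed to switch on star formation
    t_s = t_F + t_SB           # epoch at which star formation would begin
    return 'DG' if t_s < t_H else 'MH'

if __name__ == "__main__":
    t_H = 0.47                 # Hubble time at the halo redshift in Gyr (placeholder)
    print(classify_halo(t_ff=0.05, t_cool=0.10, t_F=0.20, t_H=t_H))  # -> DG
    print(classify_halo(t_ff=0.05, t_cool=0.60, t_F=0.20, t_H=t_H))  # -> MH
\end{verbatim}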
We use a mass-dependent handy fit of star formation efficiency provided by \citet{SF09}. Adopting the spectra of high redshift starburst galaxies provided by \citet{Schaerer02,Schaerer03}\footnote{http://cdsarc.u-strasbg.fr/cgi-bin/Cat?VI/109} and an escape fraction of $f_{\rm esc}=0.07$ as favored by the early reionization model (ERM, \citealt{Gallerani08}), we numerically follow the expansion of the HII region. The gas temperature inside the HII region is fixed at $2\times10^4\ensuremath{{\rm K}}$, while the temperature of gas around the HII region is calculated including the Hubble expansion, soft X-ray heating and the Compton heating. Although the soft X-ray heating dominates over the Compton heating, its effect is weak unless the DG has a higher stellar mass and/or a top-heavy initial mass function. Besides ionization and heating effects, the DG metal-free stellar population produces Ly$\alpha$ photons from soft X-ray cascading, which could penetrate into the nearby IGM and help to couple the spin temperature to the kinetic temperature of the gas. Finally, we account for the Ly$\alpha$ background produced by the collective contribution of DGs. With the detailed modeling of properties of both MHs and dwarf galaxies, and an associated Ly$\alpha$ background, we compute the 21 cm absorption lines of these non-linear structures. The diffuse IGM creates a global decrement in the spectra of high-$z$ radio sources, on top of which MHs and DGs produce deep and narrow absorption lines. The 21 cm optical depth of an isolated object is \citep{Field59,Madau97,Furlanetto02}: \begin{eqnarray}\label{Eq.tau} \tau (\nu) &=& \frac{3\,h_{\rm P}\,c^3 A_{10}}{32\, \pi^{3/2} k_{\rm B}}\, \frac{1}{\nu^2} \nonumber \\ &\times& \int_{-\infty}^{+\infty} \frac{n_{\rm HI}(r)} {b(r)T_{\rm S}(r)}\, \exp\left[\,-\, \frac{(u(\nu)-\bar v(r))^2}{b^2(r)}\,\right] dx, \end{eqnarray} where $A_{10} = 2.85 \times 10^{-15} \ensuremath{\, {\rm s}^{-1}}$ is the Einstein coefficient for the spontaneous decay of the 21 cm transition, $n_{\rm HI}$ is the neutral hydrogen number density, $T_{\rm S}$ is the spin temperature, and $b(r)$ is the Doppler parameter of the gas, $b(r) = \sqrt{\,2\,k_{\rm B}T_K(r)/m_{\rm H}}$. Here $u(\nu)$ is the frequency difference from the line center in terms of velocity, $u(\nu) \equiv c\, (\nu-\nu_{10})/\nu_{10}$, where $\nu_{10} =1420.4 \ensuremath{\, {\rm MHz}}$ is the rest frame frequency of the 21 cm line, and $\bar v(r)$ is bulk velocity of gas projected onto the line of sight at radius $r$. Inside of the virial radius, the gas is thermalized, and $\bar v(r) = 0$, while the gas outside the virial radius has a bulk velocity contributed from both infall and Hubble flow, which is predicted by the ``Infall Model.'' The coordinate $x$ is related to the radius $r$ by $r^2 = (\alpha\, r_{\rm vir})^2 + x^2$, where $\alpha$ is the impact parameter of the penetrating line of sight in units of $r_{\rm vir}$. \begin{figure*} \begin{tabular}{cc} \includegraphics[totalheight=6cm,width=8cm]{TransDG.eps} &\includegraphics[totalheight=6cm,width=8cm]{TransMH.eps} \end{tabular} \caption{Relative transmission along a line of sight. {\it Left:} the spectrum with absorptions caused by DGs alone from 129$\ensuremath{\, {\rm MHz}}$ to 158$\ensuremath{\, {\rm MHz}}$ corresponding to $z = 10.01$ -- $7.99$. 
{\it Right:} the spectrum with absorptions caused by MHs alone from 129$\ensuremath{\, {\rm MHz}}$ to 130$\ensuremath{\, {\rm MHz}}$ corresponding to $z = 10.01$ -- $9.93$.} \label{Fig.spectrum} \end{figure*} The spin temperature of neutral hydrogen is defined by the relative occupation numbers of the two hyperfine structure levels, and it is determined by \citep{Field58,FOB06}: \begin{equation} T_{\rm S}^{-1} \,=\, \frac{T_\gamma^{-1} + x_c\,T_{\rm K}^{-1} + x_\alpha\, T_{\rm C}^{-1}} {1+x_c+x_\alpha}, \end{equation} where $T_\gamma = 2.726(1+z)\,\ensuremath{{\rm K}}$ is the CMB temperature at redshift $z$, $T_{\rm K}$ is the gas kinetic temperature, and $T_{\rm C}$ is the effective color temperature of the UV radiation. In most cases, $T_{\rm C}=T_{\rm K}$ due to the frequent Ly$\alpha$ scattering \citep{FOB06}. The collisional coupling is described by the coefficient $x_c$, and $x_\alpha$ is the coupling coefficient of the Ly$\alpha$ pumping effect known as the Wouthuysen-Field coupling \citep{Wouthuysen52,Field58}. The main contributions to $x_c$ are H-H collisions and H-$e^-$ collisions, which can be written as $x_c \,=\, x_c^{\rm eH} + x_c^{\rm H\!H} \,=\, (n_e\, \kappa_{10}^{\rm eH}/A_{10})\, (T_\star/T_\gamma) + (n_{\rm HI}\, \kappa_{10}^{\rm H\!H}/A_{10})\, (T_\star/T_\gamma)$, where $T_\star = 0.0682\, \ensuremath{{\rm K}}$ is the equivalent temperature of the energy splitting of the 21 cm transition, and $\kappa_{10}^{\rm eH}$ and $\kappa_{10}^{\rm H\!H}$ are the de-excitation rate coefficients in collisions with free electrons and hydrogen atoms, respectively. The coupling coefficient $x_\alpha$ is proportional to the total scattering rate between Ly$\alpha$ photons and hydrogen atoms, $x_\alpha \,=\, (4\,P_\alpha/27\,A_{10})\, (T_\star/T_\gamma)$, where the scattering rate $P_\alpha$ is given by $P_\alpha \,=\, c\,\sigma_\alpha\, n_\alpha^{\rm tot}/\Delta\nu_D \,=\, 4\pi\, \sigma_\alpha J_\alpha$. Here $\sigma_\alpha \equiv {\displaystyle \frac{\pi e^2}{m_e c} f_\alpha}$ where $f_\alpha=0.4162$ is the oscillator strength of the Ly$\alpha$ transition, $n_\alpha^{\rm tot}$ is the total number density of Ly$\alpha$ photons, $J_\alpha$ is the number intensity of the Ly$\alpha$ photons, and $\Delta\nu_D = (b/c)\,\nu_\alpha$ is the Doppler width with $b$ being the Doppler parameter and $\nu_\alpha$ being the Ly$\alpha$ frequency. \section{RESULTS}\label{results} \subsection{The spectra of dwarfs and minihalos} With the halo number density predicted by the Sheth-Tormen mass function and the cross-sectional area of halos determined by the mean halo separation, we derive the number density of the absorption lines. Applying our star formation criterion to each intersected halo with Monte-Carlo sampled mass and formation-redshift, we compute the absorption lines of MHs in collisional ionization equilibrium or DGs photoionized by central stars, and generate a synthetic spectrum along a line of sight. The entire spectrum with absorptions of both MHs and DGs is shown in \citet{XuF10} against a quasar or a GRB afterglow. In order to illustrate the differences between the spectrum caused by MHs and that by DGs, and disregarding the background source properties, we plot the relative transmission $T=\exp(-\,\tau)$ of a spectrum with absorptions caused by dwarf galaxies alone (DG-spectrum) in the left panel of Fig.\ref{Fig.spectrum} and that with absorptions caused by MHs alone (MH-spectrum) in the right panel, respectively. 
Note that the ranges of observed frequency are different in the two panels. The absorption lines are very narrow and closely spaced in the MH-spectrum which resembles a 21 cm forest, while the 21 cm lines on the DG-spectrum are much rarer. A clear signature unique to the DG-spectrum is that there are some dwarf galaxies with sufficiently large HII regions that give rise to leaks (i.e. negative absorption lines with equivalent width $W_\nu < 0$) on the spectrum rather than absorption lines. Also, we see that absorption lines of MHs are generally deeper than DGs. We define $\delta(\nu)$ to be the relative difference between the flux $f$ at observed frequency $\nu$ and the global flux transmitted through the homogeneous IGM at the corresponding redshift, \begin{equation} \delta(\nu) \,\equiv\, \frac{f(\nu)}{f_{\rm IGM}} - 1. \end{equation} Then we compute the {\it flux} correlation functions with the formula \begin{equation} \xi_{ab}(\Delta\nu) \,=\, \frac{1}{N} \sum_{i=1}^{N}\, \delta_a(\nu_i)\, \delta_b(\nu_i+\Delta\nu), \end{equation} where $N$ is the total number of point pairs on the spectrum with a frequency distance of $\Delta\nu$. The subscript ``ab'' takes the values ``gg'' (``hh'') for the auto-correlation function of the DG (MH) spectrum, or ``hg'' for the cross-correlation between the MH and DG spectra. We plot these correlation functions with solid curves in Fig.\ref{Fig.correlation}. From the auto-correlation function of the DG-spectrum, we see little correlation on frequency separations larger than $10\ensuremath{\, {\rm kHz}}$, while little correlation on frequency separations larger than $1\ensuremath{\, {\rm kHz}}$ is seen in the MH-spectrum. This is what we could expect for a randomly generated spectrum with only the halo mass function. The correlation seen on smaller frequency separations just comes from the point pairs located within the same lines. The DGs have a longer correlation length because of their broader absorption lines or leaks, but the amplitude of the correlation is very low because they are rare. In order to illustrate the contribution of those dwarfs with negative absorption, we calculated the auto-correlation function of the DG-spectrum with leaks excluded; the result is shown as the dashed curve in the upper panel of Fig.\ref{Fig.correlation}. The correlation amplitude is smaller due to the reduced number of signals, and the correlation length is reduced by more than one order of magnitude. This shows that the broad signals are caused primarily by those leaks. In addition, comparing this dashed curve with the solid curve in the bottom panel, we find that on average, an absorption line of a DG is even narrower than that of a MH. This is because some of these DGs produce HII regions larger than $r_{\rm vir}$, and the absorption lines from the gas outside the virial radii but inside the HII region, which has the largest infalling velocities, are erased. The cross-correlation function between the MHs and DGs can also be obtained in the same way. In this work we have not considered the clustering property of the MHs and DGs arising from large scale structure, so the flux cross-correlation should be zero, except for the Poisson fluctuations. Our computation confirms this expectation. \begin{figure} \centering{\resizebox{9cm}{8cm}{\includegraphics{autocorrelation_DG_MH.eps}}} \caption{The flux correlation functions of the 21 cm forest spectrum. {\it Upper panel:} the auto-correlation functions of the DG-spectrum. 
The solid curve includes all the lines while the dashed curve excludes the leaks with $W_\nu < 0$. {\it Bottom panel:} the auto-correlation function of the MH-spectrum. The spectra used for the computation here are all from 129$\ensuremath{\, {\rm MHz}}$ to 158$\ensuremath{\, {\rm MHz}}$ corresponding to $z = 10.01$ -- $7.99$.} \label{Fig.correlation} \end{figure} \subsection{Equivalent width distributions} Directly from the spectrum, we could compute the distribution of equivalent width (EW) of the absorption lines for a specific range of observed frequency corresponding to a specific redshift. As the continuum of a background source has a global decrement due to the absorption of the diffuse IGM, the real signal of non-linear structures is the extra absorption with respect to the flux transmitted through the IGM. Therefore, the EW of an absorption line should be defined as \begin{eqnarray} W_\nu &=& \int\, \frac{\displaystyle f_c\, e^{-\,\tau_{\rm IGM}(z)} \,-\, f_c\, e^{-\,\tau(\nu)}}{\displaystyle f_c\, e^{-\,\tau_{\rm IGM}(z)}}\; d\nu \nonumber \\ &=& \int\, {\displaystyle (1 \,-\, e^{\tau_{\rm IGM}(z) \,-\, \tau(\nu)})}\; d\nu, \end{eqnarray} where $f_c$ is the continuum flux of the background radio source, and $\tau_{\rm IGM}(z)$ is the optical depth of the diffuse IGM at redshift $z$. We compute the differential and cumulative distributions of EW for the DG-spectrum in the left panel and for the MH-spectrum in the right panel respectively in Fig.\ref{Fig.EWdistr}. The histograms represent the number distributions and the solid curves are the cumulative distributions per redshift interval. \begin{figure*} \begin{tabular}{cc} \includegraphics[totalheight=6cm,width=8cm]{EWdistrDG.eps} &\includegraphics[totalheight=6cm,width=8cm]{EWdistrMH.eps} \end{tabular} \caption{The differential and cumulative distributions of equivalent width of the 21 cm absorption lines. {\it Left:} the distribution for the DG-spectrum. {\it Right:} the distribution for the MH-spectrum. The spectra used for the computation here are both from 129$\ensuremath{\, {\rm MHz}}$ to 158$\ensuremath{\, {\rm MHz}}$ corresponding to $z = 10.01$ -- $7.99$.} \label{Fig.EWdistr} \end{figure*} We see that both EW distributions for DGs and MHs peak in the same region at $W_\nu \sim 0.1$ - $0.3 \ensuremath{\, {\rm kHz}}$. This means that most of the dwarfs and MHs have comparable EWs, and we cannot distinguish them only from their EWs. However, the distribution curves of their EWs show different shapes. The EW distribution of DGs has a long tail at the small EW end, while MHs have a large-EW tail in the distribution curve. \subsection{Selection criteria} In this subsection, we aim at deriving a criterion to distinguish between the absorption lines caused by DGs and those caused by MHs. From the computation of EW distributions, we find that only dwarf galaxies cause negative absorptions and thus have $W_\nu < 0$, while only MHs are found to have EWs above $0.37\ensuremath{\, {\rm kHz}}$. Therefore, the first criterion for candidate DGs could be $W_\nu < 0$, with which we select 10 dwarfs out of 54 in our synthetic spectrum. They have a 100 percent probability of being caused by DGs. That means, we can find $18.5\%$ of the total dwarfs along the line of sight, and they are relatively large dwarfs with large HII regions. In addition, with the predicted EW distribution, we can estimate the total number of DGs in the spectrum with the number of selected dwarf galaxies that have negative absorptions. 
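For concreteness, the bookkeeping behind this first criterion follows directly from the definition of $W_\nu$ above. The following is a minimal NumPy sketch, not the code used to produce our figures: it assumes the synthetic spectrum is available as arrays of observed frequency and total optical depth, takes $\tau_{\rm IGM}$ as constant over the narrow frequency window, and uses a simple run-length line finder whose tolerance is an illustrative parameter.
\begin{verbatim}
import numpy as np

def equivalent_widths(nu, tau, tau_igm, tol=1e-3):
    """Equivalent widths W_nu of individual 21 cm lines, measured
    relative to the mean absorption of the diffuse IGM.
    nu      : observed frequencies [Hz], uniformly sampled
    tau     : total optical depth (IGM + halos) at each frequency
    tau_igm : optical depth of the diffuse IGM alone (constant here)
    """
    dnu = nu[1] - nu[0]
    # integrand of W_nu: 1 - exp(tau_IGM - tau); negative inside HII 'leaks'
    deficit = 1.0 - np.exp(tau_igm - tau)
    in_line = np.abs(deficit) > tol      # pixels deviating from the IGM level
    widths, i, n = [], 0, len(nu)
    while i < n:                         # group contiguous line pixels
        if in_line[i]:
            j = i
            while j < n and in_line[j]:
                j += 1
            widths.append(deficit[i:j].sum() * dnu)   # W_nu of one line
            i = j
        else:
            i += 1
    return np.array(widths)

# leaks (W_nu < 0) are the dwarf-galaxy candidates of the first criterion:
# W = equivalent_widths(nu, tau, tau_igm); dg_candidates = W[W < 0.0]
\end{verbatim}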
Similarly, using the second criterion $W_\nu > 0.37\ensuremath{\, {\rm kHz}}$, we can select 812 MHs out of a total of 7108. This is $11.4\%$ of all the MHs, which cannot be misidentified as DGs. From the correlation functions shown above, on the other hand, we see that DGs have a longer correlation length than MHs. In the absence of halo clustering information, the correlation length reflects the mean width of the absorption lines, so this is also a signature that distinguishes dwarfs from MHs. However, we have demonstrated that the correlation of dwarfs at relatively large frequency separations is caused precisely by those with negative absorptions. Therefore, the criterion of broad absorption is degenerate with the negative tail of the EW distribution of the DGs. Excluding those dwarfs with negative absorptions, the mean width of an absorption line of a DG (about $0.3\ensuremath{\, {\rm kHz}}$) is even smaller than that of an MH (about $0.5\ensuremath{\, {\rm kHz}}$). Hence, for the lines with $0 < W_\nu < 0.37\ensuremath{\, {\rm kHz}}$, given infinite spectral resolution, a narrower absorption line has a higher probability of being caused by a dwarf galaxy. This is probably beyond the resolution capabilities of currently planned instruments. The probability of these absorption lines being a DG would be $\sim 44/6296 \sim 0.7\%$, with the complementary probability attributed to MHs. \section{DISCUSSION}\label{discuss} Using the model developed by \citet{XuF10}, we have computed 21 cm absorption line spectra (``forest'') caused by DGs and MHs separately, their flux correlation functions and EW distributions, with the aim of distinguishing DGs from MHs in a statistical way. With the selection criterion of $W_\nu < 0$, we are able to identify $\sim 18.5\%$ of DGs, and the criterion of $W_\nu > 0.37\ensuremath{\, {\rm kHz}}$ selects $\sim 11.4\%$ of MHs. As a whole, for $\sim 11.5\%$ of all the non-linear objects along a line of sight we can tell whether they are DGs or MHs. In this way, we find a strong but simple criterion to select candidate DGs to be later re-observed in the optical or infrared. Using the radio afterglow of a high redshift GRB as the background, this selection strategy could be accomplished by LOFAR or SKA. Then, after the GRB fades away, the follow-up observations can be carried out by JWST, which will be capable of directly detecting the DGs that are responsible for reionization. Cosmic voids can also produce negative absorptions with respect to the mean absorption by the IGM. Accounting for the density voids requires the clustering information of large scale structure which is not included in our computation. However, according to the void size distribution based on the excursion set model developed by \citet{SW04}, the characteristic scale of a density void is much larger than that of a DG HII region. As a result, density voids will produce ``transmissivity windows'' which are about one order of magnitude broader than the ``leaks'' produced by HII regions. As the widths of both ``transmissivity windows'' and ``leaks'' exceed the current spectral resolution, a second criterion of signal width could be applied to eliminate those voids. Further, it is not necessary to consider the so-called ``mixing problem'' between the density voids and the HII regions as \citet{Shang07} did for the Ly$\alpha$ forest, because the dwarfs are more likely to reside in filaments outside the voids, and they are not likely to mix with the density voids. 
While the selection criterion for candidate DGs is reliable, the total number of predicted DGs and the percentages of identifiable objects are model-dependent. Specifically, they depend on the star formation law and efficiency, stellar initial mass function, and $f_{\rm esc}$. However, the fraction of dwarfs having negative absorption depends only on the {\it shape} of the EW distribution. If different star formation models predict similar shapes of the EW distribution, then our prediction of the fraction of dwarfs producing leaks is quite reliable and model-independent, and the total number of DGs along a given line of sight can be safely inferred from the number of selected leaks. Otherwise, we could compare the total number of dwarfs inferred from the percentage argument with the one originally predicted from our star formation model, and use this result to constrain the model. To improve on the current selection criteria, the next step is to include the clustering properties of dark matter halos. With this ingredient included, the correlation functions will retain additional information on the distances between the lines. In principle, knowing the shape of the correlation function, one could associate to any given line in the spectrum (e.g. by using Bayesian methods) the statistical probability that it arises from a DG. We reserve these and other aspects to future work. \section{ACKNOWLEDGMENTS} We thank R. Barkana who provided his infall code. This work was supported in part by a scholarship from the China Scholarship council, by a research training fellowship from SISSA astrophysics sector, by the NSFC grants 10373001, 10533010, 10525314, and 10773001, by the CAS grant KJCX3-SYW-N2, and by the 973 program No. 2007CB8125401.
\section*{(Intro)} Cells are able to sense concentration gradients with high accuracy. Large eukaryotic cells such as the amoeba {\em Dictyostelium discoideum} and the budding yeast {\em Saccharomyces cerevisiae} can sense very shallow spatial gradients by comparing concentrations across their lengths \cite{Arkowitz:1999p6171}. By contrast, small motile bacteria such as {\em Escherichia coli} detect spatial gradients indirectly by measuring concentration ramps (temporal concentration changes) as they swim \cite{Macnab:1972p5864}, and can respond to concentrations as low as 3.2 nM---about three molecules per cell volume \cite{Mao:2003p5862}. The noise arising from the small number of detected molecules sets a fundamental physical limit on the accuracy of concentration sensing, as originally shown in the seminal work of Berg and Purcell \cite{Berg:1977p3458,Bialek:2005p5858}. This approach was recently extended to derive a fundamental bound on the accuracy of direct spatial gradient sensing \cite{Endres:2008p925}. However, no theory exists for the physical limit of ramp sensing, which is what bacteria actually do when they chemotact. In this Letter, we present such a theory for different measurement devices, from a single receptor to an entire cell. We contrast two strategies: linear regression (LR) of the input signal (in line with Berg and Purcell) and maximum likelihood estimation (MLE) \cite{likelihood,Endres:2009p3438}, a method from statistics to optimally fit a model to data, revealing an up to twofold advantage for the latter. Finally, we introduce a biochemical signaling network, similar to the {\em E. coli} chemotaxis system, that outputs an estimate of the ramp rate. Consistent with the derived theoretical bounds, we find that a mechanism emulating MLE yields twofold higher accuracy than one emulating LR. However, this improved performance has a cost: either storage of signaling proteins near the receptors, or irreversibility of the receptor cycle with concomitant energy consumption. \begin{figure} \includegraphics[width=\linewidth]{measurers.pdf} \caption{Schematic of measurement devices and corresponding time traces for linearly increasing concentration $c(t)=c_0+c_1t$. (a) Left: a single receptor binds a particle at rate $k_+c(t)$, and releases it at rate $k_-$. Right: binary time series of receptor occupancy. (b) Left: particles are incident on an absorbing sphere with average flux $4\pi Dac(t)$. Right: sequence of times when a particle hits the sphere. (c) Left: a monitoring sphere counts the number of particles inside its volume without hindering their diffusion. Right: number $N(t)$ of particles inside the sphere as a function of time. \label{fig:measurers} } \end{figure} Sensing small numbers of molecules implies relative noise ${\sim}n^{-1/2}$, where $n$ is the number of detected molecules. Berg and Purcell (BP) calculated how this noise affects the accuracy of concentration sensing \cite{Berg:1977p3458}. They considered three types of measurement devices: a single receptor, a perfectly absorbing sphere, and a perfectly monitoring sphere. Following their approach, we investigate ramp sensing by these three devices when presented with a concentration $c(t)=c_0+c_1t$, as schematized in Fig.~\ref{fig:measurers}. A single receptor [Fig.~\ref{fig:measurers}(a)] binds particles at rate $k_+c(t)$ and unbinds them at rate $k_-$. Following BP, we assume that diffusion is fast enough that the receptor never rebinds the same particle. 
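To make the setup concrete, such an occupancy trace can be generated by a simple Monte Carlo procedure; the sketch below is our own illustration (function and variable names are not from the original analysis) and freezes the binding rate $k_+c(t)$ at the start of each unbound interval, which is adequate in the shallow-ramp regime considered here. The LR and MLE estimators analyzed next are different functions of this trace.
\begin{verbatim}
import numpy as np

def simulate_receptor(c0, c1, k_on, k_off, T, seed=0):
    """Binding times (t_plus) and unbinding times (t_minus) of a single
    receptor exposed to c(t) = c0 + c1*t for t in [-T/2, T/2]."""
    rng = np.random.default_rng(seed)
    t, bound = -T / 2.0, False
    t_plus, t_minus = [], []
    while t < T / 2.0:
        if not bound:
            # quasi-static: rate k_on*c(t) frozen over this unbound interval
            t += rng.exponential(1.0 / (k_on * (c0 + c1 * t)))
            if t < T / 2.0:
                t_plus.append(t)
            bound = True
        else:
            t += rng.exponential(1.0 / k_off)  # bound interval ~ Exp(k_off)
            if t < T / 2.0:
                t_minus.append(t)
            bound = False
    return np.array(t_plus), np.array(t_minus)
\end{verbatim}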
An ideal observer has access to the binary time series $s(t)$ of receptor occupancy between $-T/2$ and $T/2$. The lengths of bound and unbound intervals have exponential distributions with means $1/k_-$ and $1/k_+c$, respectively. Throughout, we assume that the ramp is shallow, $c_1T\ll c_0$, and that the observation time is long compared to receptor kinetics, $T\gg 1/k_-, 1/k_+c$. In BP, the true concentration $c$ is estimated from the fraction of time the receptor is bound, $\bar{s}=\frac{1}{T}\int_{-T/2}^{T/2} dt\, s(t)$, which is equal to the equilibrium occupancy in the limit of large times: \begin{equation}\label{occupancy} \bar{s}\approx \<s\>=k_+c/(k_-+k_+c), \end{equation} where $\<\cdot\>$ represents an ensemble average. Following a similar strategy, we can estimate the ramp rate by performing the linear regression of $s(t)$ to $s_0+s_1t$: \begin{equation} s_0= \frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}} dt\,s(t),\quad s_1= \frac{12}{T^3}\int_{-\frac{T}{2}}^{\frac{T}{2}} dt\,t\, s(t), \end{equation} from which the concentration and the ramp rate are estimated using \eqref{occupancy} as: \begin{equation} c_0^{\rm LR}:=\frac{k_-}{k_+}\frac{s_0}{1-s_0},\quad c_1^{\rm LR}:=c_0^{\rm LR}\frac{s_1}{s_0(1-s_0)}. \end{equation} The uncertainties of these estimates can be calculated from the time correlations of receptor occupancy (see Appendix \ref{appA1a}), yielding: \begin{eqnarray} \frac{\<(\delta c_0^{\rm LR})^2\>}{c_0^2}=\frac{2}{n},\quad \frac{\<(\delta c_1^{\rm LR})^2\>}{(c_0/T)^2}=\frac{24}{n},\label{unlr} \end{eqnarray} where $n$ is the total number of binding events in the time $T$. Note that the result for $c_0$ is precisely that of BP \cite{Berg:1977p3458,Endres:2009p3438}. In \cite{Endres:2009p3438}, it was shown that the accuracy of concentration sensing could be improved using maximum likelihood estimation. In this scheme, the parameters of the model are chosen to maximize the probability (``likelihood'') that the observed data was generated by the model. Can we also improve the accuracy of ramp sensing over LR by using this method? The time trace $s(t)$ can be characterized by the series of binding $(t_i^+)$ and subsequent unbinding $(t_i^-)$ times, $i=1,\ldots,n$. The probability of the data within our model is \cite{Endres:2009p3438}: \begin{equation}\label{likelihood} P=e^{-k_-T_b}e^{-k_+\sum_{i} \int_{t_i^-}^{t_{i+1}^+} dt c(t) } k_-^{n}\prod_{i=1}^n k_+c(t_i^+), \end{equation} where $T_b$ is the total bound time. The concentration and the ramp rate, $c_0$ and $c_1$, are the model parameters. Given the times of the events, the likelihood is maximized with respect to $c_0$ and $c_1$ by solving ${\partial P}/{\partial c_0}=0$ and ${\partial P}/{\partial c_1}=0$, from which the maximum likelihood estimate $(c_0^{\rm MLE},c_1^{\rm MLE})$ is obtained. In general these equations have no simple solution, but we can obtain the average behavior by exploiting the fact that binding and unbinding are fast with respect to concentration changes, {\em i.e.} that the receptor remains adiabatically in equilibrium with the concentration $c(t)$. 
We can thus simplify the sum and product in \eqref{likelihood}: \begin{eqnarray} \sum_{i=1}^n \int_{t_i^-}^{t_{i+1}^+} dt c(t)&\approx& \int_{-\frac{T}{2}}^{\frac{T}{2}} dt\, [1-\<s(t)\>] c(t), \\ \sum_{i=1}^n \log c(t_i^+)&\approx& \int_{-\frac{T}{2}}^{\frac{T}{2}} dt\,k_- \<s(t)\>\log c(t), \end{eqnarray} where $\<s(t)\>$ is the equilibrium occupancy at time $t$, given by \eqref{occupancy} with $c=\tilde c_0+\tilde c_1 t$, where $\tilde c_0$ and $\tilde c_1$ are the {\em true} parameters that generated the data. Applying this approximation to ${\partial P}/{\partial c_0}$, ${\partial P}/{\partial c_1}$, we confirm that $c_0^{\rm MLE}=\tilde c_0$ and $c_1^{\rm MLE}=\tilde c_1$ for $T\to\infty$ (see Appendix \ref{appA1b}). For finite times, the errors in $c_0^{\rm MLE},c_1^{\rm MLE}$ can be estimated by the Cram\'er-Rao bound \cite{Cramer}, which states that the variance of parameter estimates exceeds the inverse of the Fisher information, and approaches equality in the limit of long time series: \begin{equation}\label{rao} \<\delta \mathbf{c}^{\rm T}\delta \mathbf{c}\> \gtrsim -\left[\partial_{\mathbf{c}}^{\rm T}\partial_{\mathbf{c}} \log P\right]^{-1}, \end{equation} where $\delta \mathbf{c} = (c_0^{\rm MLE}-\tilde c_0, c_1^{\rm MLE}-\tilde c_1)$ and $\partial_\mathbf{c} = (\partial/\partial c_0, \partial/\partial c_1)$. Again we can use the adiabatic approximation to compute the Hessian of the log-likelihood on the right-hand side of \eqref{rao}, to obtain: \begin{eqnarray} \frac{\<(\delta c_0^{\rm MLE})^2\>}{c_0^2} = \frac{1}{n},\quad \frac{\<(\delta c_1^{\rm MLE})^2\>}{(c_0/T)^2} = \frac{12}{n}.\label{unml} \end{eqnarray} These variances are half the ones obtained from LR \eqref{unlr}. The first result for constant concentrations is that of \cite{Endres:2009p3438}. As observed there, the LR estimate adds the uncertainties from both bound and unbound interval durations. In contrast, the maximum likelihood estimate relies only on unbound interval durations, since these carry all the information about the concentration. We now turn to ramp sensing by an entire cell, starting with the case of an idealized absorbing sphere [Fig.~\ref{fig:measurers}(b)]. An ideal observer witnesses a time series of absorption events, described by the instantaneous current $I(t)=\sum_{i=1}^n \delta(t-t_i)$, where $\delta(t)$ is the Dirac delta function and $\{t_i\}$ are the absorption times. The average current of molecules impinging on the sphere is given by $\<I(t)\>=4\pi Dac(t)$, where $D$ is the diffusivity, $a$ the sphere radius and $c(t)$ the concentration far from the sphere \cite{Berg:1977p3458}. Applying the same methods used for the single receptor, we calculated the uncertainty of ramp sensing for linear regression of $I(t)$ as well as for MLE (see Appendix \ref{appA2}). We found no difference between the two strategies, which both yield the same uncertainties as in \eqref{unml}, with $n$ now the total number of molecules absorbed during time $T$: $n\approx 4\pi Dac_0T$. For a monitoring sphere [Fig.~\ref{fig:measurers}(c)], molecules are free to diffuse into and out of the sphere, and the observer records the number $N(t)$ of particles inside the sphere as a function of time. On average this number is $\<N(t)\>=(4/3)\pi a^3 c(t)$. Performing a linear regression of $N(t)$ to $N_0+N_1t$, one can estimate the concentration and the ramp rate through $c_0^{\rm LR}:=3 N_0/4\pi a^3$ and $c_1^{\rm LR}:=3 N_1 /4\pi a^3$. 
Following \cite{Berg:1977p3458}, the uncertainty of these estimates can be calculated from the time autocorrelation of $N(t)$ (see Appendix \ref{appA3}), yielding: \begin{equation}\label{conmon} \frac{\<(\delta c_0)^2\>}{c_0^2}=\frac{3}{5\pi Da c_0 T},\quad \frac{\<(\delta c_1)^2\>}{(c_0/T)^2}=\frac{36}{5\pi Da c_0 T}. \end{equation} The first result was obtained in \cite{Berg:1977p3458}. Maximum likelihood is difficult to implement in the context of the monitoring sphere because it requires a sum over all possible histories of particles exiting and returning to the sphere. Thus, whether the LR result can be improved upon remains an open question. \begin{figure} \includegraphics[width=\linewidth]{networks.pdf} \caption{Biochemical network for measuring concentration ramps. Binding of ligand to the receptor increases its activity $u$ and causes species $x$ to be produced. This production is downregulated by a feedback factor $y$ which is itself catalyzed by $x$. Right: average network response to a step function in the concentration, $c(t)=c_0+\Delta c\,\theta(t-t_0)$ (solid curves) and to a ramp, $c(t)=c_0+c_1(t-t_0)\theta(t-t_0)$ (dotted curves). In response to the step function, the network adapts precisely and $x$ decays back to its original value after an initial increase. In response to a ramp, $x$ shifts by an amount proportional to the ramp rate. The quantitative ability of the network to sense such ramps depends on whether receptors signal continuously or in a discrete burst upon ligand binding. \label{fig:networks} } \end{figure} Maximum likelihood estimation is in general the optimal way to sense ramps, and provides a twofold improvement over simple linear regression in the case of the single receptor. Could MLE be implemented in biological systems? To address this question, we now introduce a simple, deterministic biochemical network (Fig.~\ref{fig:networks}) that can approach the optimal performance limit set by MLE. The same network implements either LR or MLE depending on the receptor signaling mechanism: LR is implemented if each receptor signals continuously while a particle is bound; MLE is implemented if each receptor signals with a fixed-size burst upon binding a particle, and then releases the particle rapidly. The first case corresponds to integrating the fraction of time the receptor is bound, while the second corresponds to counting binding events. Accordingly, we will show that the shot noise (Poisson noise) due to the stochastic nature of binding and unbinding is twice as large in the first case as in the second. Let $u(t)$ be the receptor activity, proportional to the instantaneous production rate of signaling molecules. For continuous signaling, this activity is simply proportional to receptor occupancy: $u(t)=\alpha s(t)$, whereas for burst signaling, $u(t)$ is a series of fixed-size bursts at the times of binding: $u(t)=\beta \sum_{i}^{n}\delta(t-t_i^+)$. Without loss of generality, we set $\alpha=k_-$ and $\beta=1$ so that $\<u(t)\>$ is equal to the mean rate of binding events in both cases, $\<u(t)\>=k_-k_+c(t)/(k_-+k_+c(t))$. For averaging times much longer than $1/k_-$ and $1/k_+c$, we can approximate the fluctuations of $u(t)$ by Gaussian white noise, $u(t)=\<u(t)\>+\delta u(t)$, where $\<\delta u(t)\delta u(t')\>=g\<u(t)\>\delta(t-t')/[1+k_+c(t)/k_-]^2$, with $g=2$ for continuous signaling, and $g=1+(k_+c/k_-)^2$ for fixed-size burst signaling (see Appendix \ref{appB1}). 
For rapid unbinding, $k_-\to+\infty$, we recover the same twofold difference as between \eqref{unlr} and \eqref{unml}, and for the same reason: in the case of continuous signaling, noise from the stochasticity of bound intervals adds to the noise from random arrivals. To extract the ramp rate from receptor activity requires a network that ``takes the derivative'' of its input signal. An example is the {\em E. coli} chemotaxis system, which relies on precise adaptation via integral feedback \cite{Barkai:1997p906,Yi:2000p2861}. A minimal deterministic version of such a network is schematized in Fig.~\ref{fig:networks} and described by the following differential equations: \begin{equation} \frac{dx}{dt}=k_x\left[uf(y)-x\right],\quad \frac{dy}{dt}=k_y(x-1)\label{net}, \end{equation} where for simplicity $u(t)$ is the activity of a single receptor, and $x$ is the concentration of signaling molecules it produces. $f(y)$ is a monotonically decreasing function regulating the production of $x$. The role of $y$ is similar to that of the receptor methylation level in {\em E. coli}: $y$ precisely adapts the production rate of signaling molecules so that the steady-state value of $x$ does not depend on the external ligand concentration. This property is illustrated by the graphs on the right side of Fig.~\ref{fig:networks}, which show how the network responds to a sudden change in ligand concentration (solid curves). While the network output $x$ is insensitive to the absolute concentration, it responds to steady ramps (dotted curves). When the input varies slowly in time, $\<u(t)\>=u_0+u_1t$ (with $u_1\ll u_0k_x,u_0k_y$), the system responds by shifting $x$ away from $1$ so that the change in $y(t)$ tracks the change in $u(t)$: \begin{equation}\label{readout} \<x(t)\>=1+\gamma\frac{u_1}{k_yu_0},\ \<y(t)\>=y_0-\gamma^2\frac{u_1}{k_yu_0}-\gamma\frac{u_1}{u_0}t, \end{equation} with $u_0f(y_0)=1$ and $\gamma=-{f(y_0)}/{f'(y_0)}$. Thus $y$ provides a readout of the absolute concentration, and $x$ provides a readout of the ramp rate. The accuracy of these representations is limited by the ligand binding shot noise $\delta u(t)$. The effect of noise can be calculated by expanding the solution of \eqref{net} linearly around its average (see Appendix \ref{appB2}): \begin{equation} \left[\begin{array}{c} \delta x(t)\\\delta y(t)\end{array}\right] :=\left[\begin{array}{c}x(t)-\<x(t)\>\\y(t)-\<y(t)\>\end{array}\right]=\int_{-\infty}^tdt'\, \mathbf{K}(t-t')\delta u(t') \nonumber \end{equation} \begin{equation} \textrm{with }\mathbf{K}(t)=\frac{k_x}{u_0}e^{-k_xt/2} \left[\begin{array}{c} \cosh(\omega t)-\frac{k_x}{2\omega}\sinh (\omega t)\\ \frac{k_y}{\omega}\sinh (\omega t) \end{array}\right],\nonumber \end{equation} where $\omega^2=k_x^2/4-k_xk_y/\gamma$ ($\omega$ can be imaginary). From \eqref{readout} we deduce the uncertainties of $c_0$ and $c_1$: \begin{equation} \frac{\<(\delta c_0)^2\>}{c_0^2}=\frac{gk_y/\gamma}{2u_0},\qquad \frac{\<(\delta c_1)^2\>}{(c_0k_y/\gamma)^2}=\frac{gk_x}{2u_0}. \end{equation} For a fixed $k_y$, the optimal value of $k_x$ is the smallest one with a non-oscillating response kernel $\mathbf{K}(t)$: $k_x=4k_y/\gamma$. Systems with oscillating kernels are undesirable because they detect oscillations rather than ramps. 
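The deterministic response underlying these expressions is easy to check numerically. The following Euler-integration sketch is our own illustration of Eq.~\eqref{net}, with the particular (assumed) choice $f(y)=e^{-y}$, for which $\gamma=1$; it reproduces the behavior of Fig.~\ref{fig:networks}, namely perfect adaptation of $x$ to a concentration step and a constant shift of $x$ proportional to the ramp rate, in agreement with Eq.~\eqref{readout}.
\begin{verbatim}
import numpy as np

def integrate_network(u_of_t, kx, ky, T, dt=1e-3, u0=1.0):
    """Euler integration of dx/dt = kx*(u*f(y) - x), dy/dt = ky*(x - 1)
    with the illustrative choice f(y) = exp(-y), i.e. gamma = 1."""
    y = np.log(u0)          # steady state requires u0 * f(y0) = 1
    x = 1.0
    ts = np.arange(0.0, T, dt)
    xs = np.empty_like(ts)
    for i, t in enumerate(ts):
        u = u_of_t(t)
        x += dt * kx * (u * np.exp(-y) - x)
        y += dt * ky * (x - 1.0)
        xs[i] = x
    return ts, xs

# step input (u jumps from 1 to 1.2): x spikes, then adapts back to 1
# ramp input u(t) = 1 + 0.05*t: x settles near 1 + u1/(ky*u0) = 1.05 for ky = 1
# ts, xs = integrate_network(lambda t: 1.0 + 0.05 * t, kx=4.0, ky=1.0, T=50.0)
\end{verbatim}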
For $k_x=4k_y/\gamma$, our results are consistent with those of Eqs.~\eqref{unlr} and \eqref{unml}, namely uncertainties inversely proportional to the number of binding events, if we interpret $\gamma/k_y \to T$ as the effective time of measurement, and $u_0$ as the rate of binding events. The factor $g$ reflects the difference between the two mechanisms of receptor signaling. Despite its simplicity, our biochemical model may help analyze features of real biological systems. There are two separate aspects to the model: on the input side, different mechanisms of receptor signaling---continuous signaling (LR) versus burst signaling (MLE)---affect readout accuracy; on the output side, integral feedback provides a natural readout for sensing ramps. Many receptors, including the well-studied chemotaxis receptors of {\em E. coli}, signal continuously rather than in bursts, and therefore do not employ MLE. In practice, how could cells implement MLE? A receptor could simply ``store'' a fixed amount of signaling molecules and release all of them upon ligand binding. Alternatively, receptors could signal continuously following a binding event but with a narrowly peaked distribution of durations. Our results can easily be extended to an arbitrary distribution of durations $\tau_b$, yielding $g=1+\<(\delta\tau_b)^2\>/\<\tau_b\>^2$: the more peaked the distribution of $\tau_b$, the less noisy the readout. For equilibrium binding/unbinding, we find $g\geq 2$ (see Appendix \ref{appC}), with an irreversible binding cycle driven by energy dissipation required to achieve $g<2$. Interestingly, there are examples of such irreversible cycles in ligand-gated ion channels \cite{Schneggenburger:1997p6174,Wang:2002p6173,Csanady:2009p6215}, where ions play the role of our output signal $x$. In these ion channels, peaked open-time distributions are interpreted as evidence that time reversibility is broken and energy is being consumed \cite{Colquhoun}. We speculate that the role of this irreversibility may be to reduce the variance of bursts, thereby increasing the accuracy of concentration or ramp sensing. Relatedly, a multiplicity of irreversible steps in rhodopsin signaling has been shown to explain the reproducibility of single-photon responses in rod cells \cite{Doan:2006p6626}. As for the mechanism of ramp sensing, the integral feedback system underlying {\em E. coli} chemotaxis is similar to our simple model. However, the receptor methylation level, which plays the same role as $y$ in our model, adjusts the binding/unbinding rates $k_+/k_-$ so that $k_-\approx k_+c$, rather than adjusting the production rate $k_xf(y)u$ as in \eqref{net}. In {\em E. coli}, receptors increase their gain by responding cooperatively \cite{Endres:2006p945}, and $ k_-\approx k_+c$ is required to maximize this gain, which precludes the limit $k_-\ll k_+c$ required for MLE. Moreover, $k_+$ is physically limited by diffusion and receptor size, and should optimally be kept near the diffusion limit to maximize the number of binding events. It is worth noting that in {\em E. coli} the methylation and demethylation processes responsible for integral feedback are themselves subject to noise, giving rise to additional fluctuations \cite{Emonet:2008p2793}. For a receptor signaling in bursts, integral feedback could act by adjusting the number of released molecules upon binding if the receptor stores molecules, or the mean bound duration $\<\tau_b\>$ if signaling is continuous, or the channel conductivity in ligand-gated ion channels. 
We hope that our analysis will suggest experiments for testing these scenarios. We thank Pankaj Mehta and Aleksandra Walczak for helpful suggestions. T. M. was supported by the Human Frontier Science Program and N.S.W. by National Institutes of Health Grant No. R01 GM082938.
\section{Introduction} A complete minimal hypersurface $M \subset \overline{M}^{n+1}$ is said to be $stable$ if the second variation of its volume is always nonnegative for any normal variation with compact support. In \cite{CSZ}, Cao, Shen, and Zhu showed that a complete connected stable minimal hypersurface in Euclidean space must have exactly one end. Later Ni\cite{Ni} proved that if an $n$-dimensional complete minimal submanifold $M$ in Euclidean space has sufficiently small total scalar curvature (i.e., $\int_M |A|^n dv < \infty$) then $M$ has only one end. More precisely, he proved \begin{theorem} {\rm(\cite{Ni})} Let $M$ be an $n$-dimensional complete immersed minimal submanifold in $\mathbb{R}^{n+p}$, $n \geq 3$. If \begin{eqnarray*} \Big(\int_M |A|^n dv \Big)^{1 \over n} < C_1 = \sqrt{{n \over {n-1}}C_s^{-1}}, \end{eqnarray*} then $M$ has only one end. (Here $C_s$ is a Sobolev constant in \cite{HS}.) \end{theorem} Recently the author\cite{{Seo}} improved the upper bound $C_1$ of total scalar curvature in the above theorem. In this paper, we shall prove that the analogue of the above theorem still holds in hyperbolic space. Throughout this paper, we shall denote by $\mathbb{H}^n$ the $n$-dimensional hyperbolic space of constant sectional curvature $-1$. Our main result is the following. \begin{thm} \label{main1} Let $M$ be an $n$-dimensional complete immersed minimal submanifold in $\mathbb{H}^{n+p}$, $n \geq 5$. If the total scalar curvature satisfies \begin{eqnarray*} \Big(\int_M |A|^n dv \Big)^{1 \over n} < {1 \over {n-1}}\sqrt{n(n-4)C_s^{-1}}, \end{eqnarray*} then $M$ has only one end. \end{thm} Miyaoka \cite{Miyaoka} showed that if $M$ is a complete stable minimal hypersurface in $\mathbb{R}^{n+1}$, then there are no nontrivial $L^2$ harmonic $1$-forms on $M$. In \cite{Yun}, Yun proved that if $M \subset \mathbb{R}^{n+1}$ is a complete minimal hypersurface with $\displaystyle{\Big(\int_M |A|^n dv \Big)^{1 \over n} < C_2 = \sqrt{C_s^{-1}}}$, then there are no nontrivial $L^2$ harmonic $1$-forms on $M$. Recently the author\cite{Seo} showed that this result is still true for any complete minimal submanifold $M^n$ in $\mathbb{R}^{n+p}$, $p \geq 1$. We shall prove that if $M$ is an $n$-dimensional complete minimal submanifold with sufficiently small total scalar curvature in hyperbolic space, then there exist no nontrivial $L^2$ harmonic $1$-forms on $M$. More precisely, we prove \begin{thm} \label{main2} Let $M$ be an $n$-dimensional complete immersed minimal submanifold in $\mathbb{H}^{n+p}$, $n \geq 5$. If \begin{eqnarray*} \Big(\int_M |A|^n dv \Big)^{1 \over n} < \sqrt{{n(n-4) \over (n-1)^2} \frac{1}{C_s}} , \end{eqnarray*} then there are no nontrivial $L^2$ harmonic 1-forms on $M$. \end{thm} \section{Proof of the theorems} To prove Theorem \ref{main1}, we begin with the following useful facts. \begin{lem}[Sobolev inequality {\rm \cite{HS}}] \label{MS} Let $M$ be an $n$-dimensional complete immersed minimal submanifold in $\mathbb{H}^{n+p}$, $n \geq 3$. Then for any $\phi \in W_0 ^{1,2}(M)$ we have \begin{eqnarray*} \Big(\int_M |\phi|^{\frac{2n}{n-2}} dv\Big)^{\frac{n-2}{n}} \leq C_s \int_M |\nabla \phi|^2 dv, \end{eqnarray*} where $C_s$ depends only on $n$. \end{lem} \begin{lem} {\rm (\cite{Leung})} \label{Leung} Let $M$ be an $n$-dimensional complete immersed minimal submanifold in $\mathbb{H}^{n+p}$. Then the Ricci curvature of $M$ satisfies \begin{eqnarray*} {\rm Ric}(M) \geq -(n-1) -{n-1 \over n}|A|^2. 
\end{eqnarray*} \end{lem} \noindent Recall that the first eigenvalue of a Riemannian manifold $M$ is defined as \begin{align*} \lambda_1 (M) = \inf_{f} \frac{\int_M |\nabla f|^2}{\int_M f^2} , \end{align*} where the infimum is taken over all compactly supported smooth functions on $M$. For a complete stable minimal hypersurface $M$ in $\mathbb{H}^{n+1}$, Cheung and Leung \cite{Cheung and Leung} proved that \begin{eqnarray} \label{thm : CL} \frac{1}{4}(n-1)^2 \leq \lambda_1 (M) . \end{eqnarray} Here this inequality is sharp because equality holds when $M$ is totally geodesic (\cite{McKean}). Let $u$ be a harmonic function on $M$. From the Bochner formula, we have \begin{eqnarray*} {1 \over 2}\Delta(|\nabla u|^2) = \sum {u_{ij}}^2 + {\rm Ric}(\nabla u, \nabla u) . \end{eqnarray*} Then Lemma \ref{Leung} gives \begin{eqnarray*} {1 \over 2}\Delta(|\nabla u|^2) \geq \sum {u_{ij}}^2 -(n-1)|\nabla u|^2 -{n-1 \over n}|A|^2 |\nabla u|^2. \end{eqnarray*} Choose the normal coordinates at $p$ such that $u_1(p) = |\nabla u|(p)$, $u_i(p) = 0$ for $i \geq 2$. Then we have \begin{eqnarray*} \nabla_j |\nabla u| = \nabla_j ( \sqrt{\sum u_i^2}) = \frac{\sum u_i u_{ij} }{|\nabla u|} = u_{1j} \ . \end{eqnarray*} Therefore we obtain $|\nabla | \nabla u||^2 = \sum {u_{1j}}^2$.\\ On the other hand, we have \begin{eqnarray*} {1 \over 2}\Delta(|\nabla u|^2) = |\nabla u|\Delta |\nabla u| + |\nabla|\nabla u||^2 . \end{eqnarray*} Therefore it follows \begin{eqnarray*} \sum {u_{ij}}^2 -(n-1)|\nabla u|^2 -{n-1 \over n}|A|^2 |\nabla u|^2 \leq |\nabla u|\Delta |\nabla u| + \sum {u_{1j}}^2. \end{eqnarray*} Hence we get \begin{eqnarray*} |\nabla u|\Delta |\nabla u| + (n-1)|\nabla u|^2+ {n-1 \over n}|A|^2 |\nabla u|^2 &\geq& \sum {u_{ij}}^2 - \sum {u_{1j}}^2 \\ &\geq& \sum_{i \neq 1}{u_{i1}}^2 + \sum_{i \neq 1} {u_{ii}}^2 \\ &\geq& \sum_{i \neq 1}{u_{i1}}^2 + {1 \over {n-1}}(\sum_{i \neq 1}u_{ii})^2 \\ &=& \sum_{i \neq 1}{u_{i1}}^2 + {1 \over {n-1}}{u_{11}}^2 \\ &\geq& {1 \over {n-1}}\Big(\sum_{i \neq 1}{u_{i1}}^2 + {u_{11}}^2\Big) = {1 \over {n-1}}|\nabla|\nabla u||^2 , \end{eqnarray*} where we used $\Delta u = \sum u_{ii} = 0$, i.e. $\sum_{i \neq 1}u_{ii} = -u_{11}$, in the fourth line. Therefore we get \begin{eqnarray} \label{harmonic} |\nabla u|\Delta |\nabla u| + {n-1 \over n}|A|^2 |\nabla u|^2 + (n-1)|\nabla u|^2\geq {1 \over {n-1}}|\nabla|\nabla u||^2 . \end{eqnarray} Now we are ready to prove Theorem \ref{main1}.\\ {\bf Proof of Theorem \ref{main1}.} Suppose that $M$ has at least two ends. First we note that if $M$ has more than one end then there exists a nontrivial bounded harmonic function $u(x)$ on $M$ which has finite total energy (\cite{Wei}). Let $f=|\nabla u|$. From (\ref{harmonic}) we have \begin{eqnarray*} f\Delta f + {n-1 \over n } |A|^2 f^2 + (n-1)f^2 \geq {1 \over n-1}|\nabla f|^2. \end{eqnarray*} Fix a point $p \in M$ and for $R>0$ choose a cut-off function satisfying $0 \leq \varphi \leq 1$, $\varphi \equiv 1$ on $B_p(R)$, $\varphi = 0 $ on $M \setminus B_p(2R)$, and $\displaystyle{|\nabla \varphi| \leq {1 \over R}}$, where $B_p(R)$ denotes the ball of radius $R$ centered at $p\in M$. Multiplying both sides by $\varphi^2$ and integrating over $M$, we have \begin{eqnarray*} \int_M \varphi^2 f\Delta f dv + {n-1 \over n } \int_M \varphi^2 |A|^2 f^2 dv + (n-1) \int_M \varphi^2 f^2 dv \geq {1 \over n-1}\int_M \varphi^2|\nabla f|^2 dv. \end{eqnarray*} From the inequality (\ref{thm : CL}), we see \begin{eqnarray*} \frac{(n-1)^2}{4} \leq \lambda_1 (M) \leq \frac{\int_M |\nabla (\varphi f)|^2 dv}{\int_M \varphi^2 f^2 dv} . 
\end{eqnarray*} Therefore we get \begin{eqnarray*} \int_M \varphi^2 f\Delta f dv + {n-1 \over n } \int_M \varphi^2 |A|^2 f^2 dv + \frac{4}{n-1} \int_M |\nabla (\varphi f)|^2 dv \geq {1 \over n-1}\int_M \varphi^2|\nabla f|^2 dv. \end{eqnarray*} Using integration by parts, we get \begin{align*} -\int_M |\nabla f|^2 \varphi^2 dv - 2 \int_M f \varphi \langle \nabla f,\nabla \varphi \rangle dv + {n-1 \over n }\int_M \varphi^2 |A|^2 f^2 dv + \frac{4}{n-1} \int_M |\nabla (\varphi f)|^2 dv \\ \geq {1 \over n-1}\int_M \varphi^2|\nabla f|^2 dv. \end{align*} Applying Schwarz inequality, for any positive number $a>0$, we obtain \begin{align} \label{Sch1} {n-1 \over n }\int_M \varphi^2 |A|^2 f^2 dv &+ \Big(\frac{4}{n-1}+{1 \over a} + \frac{4}{a(n-1)}\Big) \int_M f^2 |\nabla \varphi|^2 dv \nonumber \\ &\geq \Big({n \over {n-1}} - a -\frac{4}{n-1} - \frac{4a}{n-1} \Big)\int_M \varphi^2|\nabla f|^2 dv. \end{align} On the other hand, applying Sobolev inequality(Lemma \ref{MS}), we have \begin{eqnarray*} \int_M |\nabla (f\varphi)|^2 dv \geq {C_s}^{-1} \Big( \int_M (f\varphi)^{\frac{2n}{n-2}} dv \Big)^{n-2 \over n}. \end{eqnarray*} Thus applying Schwarz inequality again, we have for any positive number $b>0,$ \begin{eqnarray} \label{Sch2} (1+b)\int_M\varphi^2|\nabla f|^2 dv \geq {C_s}^{-1} \Big( \int_M (f\varphi)^{\frac{2n}{n-2}} dv \Big)^{n-2 \over n} \\ - (1 + {1 \over b})\int_M f^2 |\nabla \varphi|^2 dv. \nonumber \end{eqnarray} Combining (\ref{Sch1}) and (\ref{Sch2}), we have \begin{align*} {n-1 \over n }\int_M \varphi^2 |A|^2 f^2 dv &\geq \Big(\frac{n-4-4a}{n-1} -a \Big)\int_M \varphi^2|\nabla f|^2 dv \\ &\geq \frac{{C_s}^{-1}}{b+1}\Big(\frac{n-4-4a}{n-1} -a \Big)\Big( \int_M (f\varphi)^{\frac{2n}{n-2}} dv \Big)^{n-2 \over n} \\ & \quad - \Big\{\frac{1}{b}\Big(\frac{n-4-4a}{n-1} -a \Big) + \frac{n+3+4a}{a(n-1)}\Big\} \int_M f^2 |\nabla \varphi|^2 dv . \end{align*} Applying H\"{o}lder inequality, we get \begin{eqnarray*} \int_M \varphi^2 |A|^2 f^2 dv \leq \Big(\int_M |A|^n \Big)^{2 \over n} \Big(\int_M (f\varphi)^{\frac{2n}{n-2}} dv \Big)^{n-2 \over n}. \end{eqnarray*} Finally we obtain \begin{align*} &\Big\{\frac{1}{b}\Big(\frac{n-4-4a}{n-1} -a \Big) + \frac{n+3+4a}{a(n-1)}\Big\} \int_M f^2 |\nabla \varphi|^2 dv \\ &\geq \Big\{\frac{{C_s}^{-1}}{b+1}\Big(\frac{n-4-4a}{n-1} -a \Big) - {n-1 \over n }\Big(\int_M |A|^n dv \Big)^{2 \over n}\Big\} \Big(\int_M (f\varphi)^{\frac{2n}{n-2}} dv \Big)^{n-2 \over n} . \end{align*} By the assumption on the total scalar curvature, we choose $a$ and $b$ small enough such that \begin{eqnarray*} \Big\{\frac{{C_s}^{-1}}{b+1}\Big(\frac{n-4-4a}{n-1} -a \Big) - {n-1 \over n }\Big(\int_M |A|^n dv \Big)^{2 \over n}\Big\} \geq \varepsilon > 0 \end{eqnarray*} for sufficiently small $\varepsilon>0$. Then letting $R \rightarrow \infty $, we have $f \equiv 0$, i.e., $|\nabla u| \equiv 0$. Therefore $u$ is constant. This contradicts the assumption that $u$ is a nontrivial harmonic function. \hfill \ensuremath{\Box} \medskip \medskip In the proof of Theorem \ref{main1}, if we do not use the fact that $\lambda_1 (M) \geq \frac{(n-1)^2}{4}$ and assume that \begin{eqnarray*} \Big(\int_M |A|^n dv \Big)^{2 \over n} < \frac{n}{n-1}{C_s}^{-1}\Big({n \over {n-1}}- \frac{n-1}{\lambda_1 (M)} \Big) \end{eqnarray*} for $\lambda_1 (M) > \frac{(n-1)^2}{n}$, one can see that $M^n$ must have exactly one end by using the same argument as in the above proof, when $n \geq 3$. 
In other words, it follows \begin{thm} Let $M$ be an $n$-dimensional complete immersed minimal submanifold in $\mathbb{H}^{n+p}$, $n \geq 3$. Assume that $\lambda_1 (M) > \frac{(n-1)^2}{n}$ and the total scalar curvature satisfies \begin{eqnarray*} \Big(\int_M |A|^n dv \Big)^{2 \over n} < \frac{n}{n-1}{C_s}^{-1}\Big({n \over {n-1}}- \frac{n-1}{\lambda_1 (M)} \Big) . \end{eqnarray*} Then $M$ must have only one end. \end{thm} \medskip {\bf Proof of Theorem \ref{main2}.} Let $\omega$ be an $L^2$ harmonic 1-form on minimal submanifold $M$ in $\mathbb{H}^{n+p}$. Then $\omega$ satisfies \begin{eqnarray*} \Delta \omega = 0 \ \ {\rm and}\ \ \int_M |\omega|^2 dv < \infty. \end{eqnarray*} It follows from Bochner formula \begin{eqnarray*} \Delta|\omega|^2 = 2(|\nabla \omega|^2 + {\rm Ric}(\omega,\omega)) . \end{eqnarray*} We also have \begin{eqnarray*} \Delta|\omega|^2 = 2(|\omega|\Delta |\omega| + |\nabla|\omega||^2) . \end{eqnarray*} Since $\displaystyle{|\nabla \omega|^2 \geq {n \over {n-1}} |\nabla|\omega||^2}$ by \cite{Wang}, it follows that \begin{eqnarray*} |\omega| \Delta |\omega| - {\rm Ric}(\omega,\omega) = |\nabla \omega|^2 -|\nabla|\omega||^2 \geq {1 \over {n-1}} |\nabla|\omega||^2 . \end{eqnarray*} By Lemma \ref{Leung}, we have \begin{eqnarray*} |\omega| \Delta |\omega| - {1 \over {n-1}} |\nabla|\omega||^2 \geq {\rm Ric}(\omega,\omega) \geq -(n-1)|\omega|^2 -{{n-1} \over n}|A|^2 |\omega|^2 . \end{eqnarray*} Therefore we get \begin{eqnarray*} |\omega| \Delta |\omega| + {{n-1} \over n}|A|^2 |\omega|^2 + (n-1)|\omega|^2 - {1 \over {n-1}} |\nabla|\omega||^2 \geq 0. \end{eqnarray*} Multiplying both sides by $\varphi^2$ as in the proof of Theorem \ref{main1} and integrating over $M$, we have from integration by parts that \begin{align} 0 &\leq \int_M \varphi^2 |\omega| \Delta |\omega| + {{n-1} \over n}\varphi^2|A|^2 |\omega|^2 + (n-1)|\omega|^2 \varphi^2 - {1 \over {n-1}} \varphi^2 |\nabla|\omega||^2 dv \label{form} \\ &= -2 \int_M \varphi |\omega| \langle \nabla \varphi, \nabla |\omega| \rangle dv - {n \over {n-1}}\int_M \varphi^2 |\nabla|\omega||^2 dv \nonumber \\ & \quad + (n-1)\int_M |\omega|^2 \varphi^2 dv + {{n-1} \over n}\int_M |A|^2 |\omega|^2\varphi^2 dv. \nonumber \end{align} On the other hand, we get the following from H$\rm \ddot{o}$lder inequality and Sobolev inequality(Lemma \ref{MS}) \begin{align*} \int_M |A|^2 |\omega|^2\varphi^2 dv &\leq \Big(\int_M |A|^n dv \Big)^{2 \over n} \Big(\int_M (\varphi|\omega|)^{2n \over {n-2}}dv \Big)^{{n-2} \over n} \\ &\leq C_s \Big(\int_M |A|^n dv \Big)^{2 \over n} \int_M |\nabla (\varphi |\omega|)|^2 dv \\ &= C_s \Big(\int_M |A|^n dv \Big)^{2 \over n} \Big(\int_M |\omega|^2|\nabla \varphi|^2 + |\varphi|^2 |\nabla|\omega||^2 + 2 \varphi |\omega| \langle \nabla \varphi, \nabla |\omega| \rangle dv \Big). \end{align*} Then (\ref{form}) becomes \begin{align} 0 &\leq -2 \int_M \varphi |\omega| \langle \nabla \varphi, \nabla |\omega| \rangle dv - {n \over {n-1}} \int_M \varphi^2 |\nabla |\omega||^2 dv + (n-1)\int_M |\omega|^2 \varphi^2 dv \label{form1} \\ & \quad + {{n-1} \over n}C_s \Big(\int_M |A|^n dv \Big)^{2 \over n} \Big(\int_M |\omega|^2|\nabla \varphi|^2 + \varphi^2 |\nabla|\omega||^2 + 2 \varphi |\omega| \langle \nabla \varphi, \nabla |\omega| \rangle dv \Big)\nonumber . \end{align} Applying the inequality (\ref{thm : CL}), we have \begin{eqnarray*} \frac{(n-1)^2}{4} \leq \lambda_1 (M) \leq \frac{\int_M |\nabla (\varphi |\omega|)|^2 dv}{\int_M \varphi^2 |\omega|^2 dv} . 
\end{eqnarray*} Thus \begin{eqnarray*} \int_M |\omega|^2 \varphi^2 dv \leq \frac{4}{(n-1)^2} \Big(\int_M |\omega|^2|\nabla \varphi|^2 + \varphi^2 |\nabla|\omega||^2 + 2 \varphi |\omega| \langle \nabla \varphi, \nabla |\omega| \rangle dv \Big) . \end{eqnarray*} Using Schwarz inequality for $\varepsilon > 0$, we have \begin{eqnarray*} 2\Big|\int_M \varphi |\omega| \langle \nabla \varphi, \nabla |\omega| \rangle dv\Big| \leq {\varepsilon \over 2} \int_M \varphi^2 |\nabla|\omega||^2 dv +{2 \over \varepsilon} \int_M |\omega|^2|\nabla \varphi|^2 dv . \end{eqnarray*} Therefore it follows from the inequality (\ref{form1}) \begin{align*} &\Bigg[{n-4 \over {n-1}} - {n-1 \over n}C_s \Big(\int_M |A|^n dv \Big)^{2 \over n} \\ &- {\varepsilon \over 2}\Big\{1 + \frac{4}{(n-1)^2} + {n-1 \over n}C_s \Big(\int_M |A|^n dv \Big)^{2 \over n}\Big\}\Bigg] \int_M \varphi^2 |\nabla|\omega||^2 dv \\ &\leq \Bigg[ \frac{4}{n-1} + {n-1 \over n}C_s \Big(\int_M |A|^n dv \Big)^{2 \over n} \\ &\quad +{2 \over \varepsilon}\Big\{1 + \frac{4}{(n-1)^2} + {n-1 \over n}\Big(\int_M |A|^n dv \Big)^{2 \over n} \Big\} \Bigg] \int_M |\omega|^2|\nabla \varphi|^2 dv. \end{align*} Since $\displaystyle{\Big(\int_M |A|^n dv \Big)^{2 \over n} < {n(n-4) \over (n-1)^2} \frac{1}{C_s}}$ by assumption, choosing $\varepsilon > 0$ sufficiently small and letting $R \rightarrow \infty$, we obtain $\nabla |\omega| \equiv 0$, i.e., $|\omega|$ is constant. However, since $\displaystyle{\int_M |\omega|^2 dv < \infty}$ and the volume of $M$ is infinite (\cite{Anderson} and \cite{Wei}.), we get $\omega \equiv 0$. \hfill \ensuremath{\Box} \medskip \bigskip In the proof of Theorem \ref{main1}, if we do not use the fact that $\lambda_1 (M) \geq \frac{(n-1)^2}{4}$ and assume that \begin{eqnarray*} \Big(\int_M |A|^n dv \Big)^{2 \over n} < \frac{(n\lambda_1(M) - (n-1)^2)n}{(n-1)^2C_s \lambda_1 (M)} \end{eqnarray*} for $\lambda_1 (M) > \frac{(n-1)^2}{n}$, one can see that there exist no nontrivial $L^2$ harmonic 1-forms on $M^n$ for $n \geq 3$ by using the same argument as in the above proof. More precisely, we have \begin{thm} Let $M$ be an $n$-dimensional complete immersed minimal submanifold in $\mathbb{H}^{n+p}$, $n \geq 3$. Assume that $\lambda_1 (M) > \frac{(n-1)^2}{n}$ and the total scalar curvature satisfies \begin{eqnarray*} \Big(\int_M |A|^n dv \Big)^{2 \over n} < \frac{(n\lambda_1(M) - (n-1)^2)n}{(n-1)^2C_s \lambda_1 (M)} . \end{eqnarray*} Then there are no nontrivial $L^2$ harmonic 1-forms on $M$. \end{thm}
\section{Introduction\label{sec:Introduction}} Modern graphics processors (\emph{GPU}s) have evolved into highly parallel and fully programmable architectures. Current many-core GPUs such as nVIDIA's GTX and Tesla GPUs can contain up to 240 processor cores on one chip and can have an astounding peak performance of up to 1 TFLOP. The upcoming Fermi GPU recently announced by nVIDIA is expected to have more than 500 processor cores. However, GPUs are known to be hard to program and current general purpose (i.e. non-graphics) GPU applications concentrate typically on problems that can be solved using fixed and/or regular data access patterns such as image processing, linear algebra, physics simulation, signal processing and scientific computing (see e.g. \cite{gpgpu}). The design of efficient GPU methods for discrete and combinatorial problems with data dependent memory access patterns is still in its infancy. In fact, there is currently still a lively debate even on the best \emph{sorting} method for GPUs (e.g. \cite{s6-Garland,s7-Sanders}). Until very recently, the comparison-based Thrust Merge method \cite{s6-Garland} by Nadathur Satish, Mark Harris and Michael Garland of nVIDIA Corporation was considered the best sorting method for GPUs. However, an upcoming paper by Nikolaj Leischner, Vitaly Osipov and Peter Sanders \cite{s7-Sanders} (to appear in Proc. IPDPS 2010) presents a randomized sample sort method for GPUs that significantly outperforms Thrust Merge. A disadvantage of the randomized sample sort method is that its performance can vary with the input data distribution because the data is partitioned into buckets that are created via \emph{randomly} selected data items. In this paper, we present and evaluate a \emph{deterministic} sample sort algorithm for GPUs, called \textsc{GPU Bucket Sort}, which has the same performance as the randomized sample sort method in \cite{s7-Sanders}. An experimental performance comparison on nVIDIA's GTX 285 and Tesla architectures shows that for uniform data distribution, the\emph{ }best case\emph{ }for randomized sample sort, our deterministic sample sort method is in fact \emph{exactly} as fast as the randomized sample sort method of \cite{s7-Sanders}. However, in contrast to \cite{s7-Sanders}, the performance of \textsc{GPU Bucket Sort} remains the same for any input data distribution because buckets are created deterministically and bucket sizes are guaranteed. The remainder of this paper is organized as follows. Section \ref{sec:Review:-GPU-Architectures} reviews the \emph{Tesla} architecture framework for GPUs and the CUDA programming environment, and Section \ref{sec:Previous-Work} reviews previous work on GPU based sorting. Section \ref{sec:Deterministic-Sample-Sort} presents \textsc{GPU Bucket Sort} and discusses some details of our CUDA \cite{cuda-prog-guide} implementation. In Section \ref{sec:Experimental-Results-And}, we present an experimental performance comparison between our deterministic sample sort implementation, the randomized sample sort implementation in \cite{s7-Sanders}, and the Thrust Merge implementation in \cite{s6-Garland}. In addition to the performance improvement discussed above, our deterministic sample sort implementation appears to be more memory efficient as well because \textsc{GPU Bucket Sort} is able to sort considerably larger data sets within the same memory limits of the GPUs. 
\section{Review: GPU Architectures\label{sec:Review:-GPU-Architectures}} As in \cite{s6-Garland} and \cite{s7-Sanders}, we will focus on nVIDIA's unified graphics and computing platform for GPUs known as the \emph{Tesla} architecture framework \cite{Lindholm2008} and associated \emph{CUDA} programming model \cite{cuda-prog-guide}. However, the discussion and methods presented in this paper apply in general to GPUs that support the OpenCL standard \cite{opencl-spec} which is very similar to CUDA. A schematic diagram of the Tesla unified GPU architecture is shown in Figure \ref{fig:nVIDIA-Tesla-Architecture}. A Tesla GPU consists of an array of streaming processors called \emph{Streaming Multiprocessors} \emph{(SM}s). Each SM contains eight processor cores and a small size (16 KB) low latency local \emph{shared memory} that is shared by its eight processor cores. All SMs are connected to a \emph{global }DRAM\emph{ memory} through an interconnection network. For example, an nVIDIA GeForce GTX 260 has 27 SMs with a total of 216 processor cores while GTX 285 and Tesla GPUs have 30 SMs with a total of 240 processor cores. A GTX 260 has approximately 900 MB global DRAM memory while GTX 285 and Tesla GPUs have up to 2 GB and 4 GB global DRAM memory, respectively (see Table \ref{tab:GPU-Performance-Characteristics}). A GPU's global DRAM memory is arranged in independent memory partitions. The interconnection network routes the read/write memory requests from the processor cores to the respective global memory partitions, and the results back to the cores. Each global memory partition has its own queue for memory requests and arbitrates among the incoming read/write requests, seeking to maximize DRAM transfer efficiency by grouping read/write accesses to neighboring memory locations. Memory latency to global DRAM memory is optimized when parallel read/write operations can be grouped into a minimum number of arrays of contiguous memory locations. It is important to note that data accesses from processor cores to their SM's local shared memory are at least an order of magnitude faster than accesses to global memory. This is an important consideration for any efficient sorting method. Another critical issue for the performance of CUDA implementations is conditional branching. CUDA programs typically execute very large numbers of threads. In fact, a large number of threads is critical for hiding latencies for global memory accesses. The GPU has a hardware thread scheduler that is built to manage tens of thousands and even millions of concurrent threads. All threads are divided into blocks of up to 512 threads, and each block is executed by an SM. An SM executes a thread block by breaking it into groups of 32 threads called \emph{warps} and executing them in parallel using its eight cores. These eight cores share various hardware components, including the instruction decoder. Therefore, the threads of a warp are executed in SIMT (single instruction, multiple threads) mode, which is a slightly more flexible version of the standard SIMD (single instruction, multiple data) mode. The main problem arises when the threads encounter a conditional branch such as an IF-THEN-ELSE statement. Depending on their data, some threads may want to execute the code associated with the "true" condition and some threads may want to execute the code associated with the "false" condition. 
Since the shared instruction decoder can only handle one branch at a time, different threads can not execute different branches concurrently. They have to be executed in sequence, leading to performance degradation. GPUs provide a small improvement through an instruction cache at each SM that is shared by its eight cores. This allows for a "small" deviation between the instructions carried out by the different cores. For example, if an IF-THEN-ELSE statement is short enough so that both conditional branches fit into the instruction cache then both branches can be executed fully in parallel. However, a poorly designed algorithm with too many and/or large conditional branches can result in serial execution and very low performance. \section{Previous Work On GPU Sorting\label{sec:Previous-Work}} Sorting algorithms for GPUs started to appear a few years ago and have been highly competitive. Early results include GPUTeraSort \cite{s1-GPUTeraSort} based on bitonic merge, and adaptive bitonic sort \cite{s2-GPU-ABiSort} based on a method by Bilardi et.al. \cite{s3-Bilardi}. Hybrid sort \cite{s4-hybrid} used a combination of bucket sort and merge sort, and D. Cederman et.al. \cite{s5-practical-quicksort} proposed a quick sort based method for GPUs. Both methods (\cite{s4-hybrid,s5-practical-quicksort}) suffer from load balancing problems. Until very recently, the comparison-based Thrust Merge method \cite{s6-Garland} by Nadathur Satish, Mark Harris and Michael Garland of nVIDIA Corporation was considered the best sorting method for GPUs. Thrust Merge uses a combination of odd-even merge and two-way merge, and overcomes the load balancing problems mentioned above. Satish et.al. \cite{s6-Garland} also presented an even faster GPU radix sort method for the special case of integer sorting. Yet, an upcoming paper by Nikolaj Leischner, Vitaly Osipov and Peter Sanders \cite{s7-Sanders} (to appear in Proc. IPDPS 2010) presents a randomized sample sort method for GPUs that significantly outperforms Thrust Merge \cite{s6-Garland}. However, as discussed in Section \ref{sec:Introduction}, the performance of randomized sample sort can vary with the distribution of the input data because buckets are created through randomly selected data items. Indeed, the performance analysis presented in \cite{s7-Sanders} measures the runtime of their randomized sample sort method for six different data distributions to document the performance variations observed for different input distributions. \section{\textsc{GPU Bucket Sort}\emph{:\\Deterministic} Sample Sort For GPUs\label{sec:Deterministic-Sample-Sort}} In this section we present \textsc{GPU Bucket Sort}, a \emph{deterministic} sample sort algorithm for GPUs, and discuss its CUDA implementation. An outline of \textsc{GPU Bucket Sort} is shown in Algorithm \ref{alg:Deterministic-Sample-Sort} below. It consists of a local sort (Step 1), a selection of samples that define balanced buckets (Steps 3-5), moving all data into those buckets (Steps 6-8), and a final sort of each bucket. \begin{algorithm}\label{alg:Deterministic-Sample-Sort} \textsc{GPU Bucket Sort} (Deterministic Sample Sort For GPUs) \emph{Input}: An array $A$ with $n$ data items stored in global memory. \emph{Output}: Array $A$ sorted. \begin{enumerate} \item Split the array $A$ into $m$ sublists $A_{1},...,A_{m}$ containing $\frac{n}{m}$ items each where $\frac{n}{m}$ is the shared memory size at each SM. 
\item \emph{Local Sort}: Sort each sublist $A_{i}$ ($i$=1,..., $m$) locally on one SM, using the SM's shared memory as a cache. \item \emph{Local Sampling}: Select $s$ equidistant samples from each sorted sublist $A_{i}$ ($i$=1,..., $m$) for a total of $sm$ samples. \item \emph{Sorting All Samples}: Sort all $sm$ samples in global memory, using all available SMs in parallel. \item \emph{Global Sampling}: Select $s$ equidistant samples from the sorted list of $sm$ samples. We will refer to these $s$ samples as \emph{global samples}. \item \emph{Sample Indexing}: For each sorted sublist $A_{i}$ ($i$=1,..., $m$) determine the location of each of the $s$ global samples in $A_{i}$. This operation is done for each $A_{i}$ locally on one SM, using the SM's shared memory, and will create for each $A_{i}$ a partitioning into $s$ buckets $A_{i1}$,..., $A_{is}$ of size $a_{i1}$,..., $a_{is}$. \item \emph{Prefix Sum}: Through a parallel prefix sum operation on $a_{11}$,..., $a_{m1}$, $a_{12}$,..., $a_{m2}$, ..., $a_{1s}$,..., $a_{ms}$ calculate for each bucket $A_{ij}$ ($1 \leq i \leq m$, $1 \leq j \leq s$) its starting location $l_{ij}$ in the final sorted sequence. \item \emph{Data Relocation}: Move all $sm$ buckets $A_{ij}$ ($1\leq i \leq m$, $1\leq j \leq s$) to location $l_{ij}$. The newly created array consists of $s$ sublists $B_{1}$, ..., $B_{s}$ where $B_{j}=A_{1j}\cup A_{2j}\cup...\cup A_{mj}$ for $1\leq j \leq s$. \item \emph{Sublist Sort}: Sort all sublists $B_{j}$, $1\leq j \leq s$, using all SMs. \end{enumerate} \end{algorithm} Our discussion of Algorithm \ref{alg:Deterministic-Sample-Sort} and its implementation will focus on GPU performance issues related to shared memory usage, coalesced global memory accesses, and avoidance of conditional branching. Consider an input array $A$ with $n=32M$ data items and a local shared memory size of $\frac{n}{m}=2K$ data items. In Steps 1 and 2 of Algorithm \ref{alg:Deterministic-Sample-Sort}, we split the array $A$ into $m=16K$ sublists of $2K$ data items each and then locally sort each of those $m=16K$ sublists. More precisely, we create 16K thread blocks of 512 threads each, where each thread block sorts one sublist using one SM. Each thread block first loads a sublist into the SM's local shared memory using a coalesced parallel read from global memory. Note that each of the 512 threads is responsible for $\frac{n}{m}/512=4$ data items. The thread block then sorts a sublist of $\frac{n}{m}=2K$ data items in the SM's local shared memory. We tested different implementations for the local shared memory sort within an SM, including quicksort, bitonic sort, and adaptive bitonic sort \cite{s3-Bilardi}. In our experiments, bitonic sort was consistently the fastest method, despite the fact that it requires $O(n\log^{2}n)$ work. The reason is that, for Step 2 of Algorithm \ref{alg:Deterministic-Sample-Sort}, we always sort $2K$ data items only, irrespective of $n$. For such a small number of items the simplicity of bitonic sort, its small constants in the running time, and its perfect match for SIMD style parallelism outweigh the disadvantage of additional work. In Step 3 of Algorithm \ref{alg:Deterministic-Sample-Sort}, we select $s=64$ equidistant samples from each sorted sublist. The choice of value for $s$ is discussed in Section \ref{sec:Experimental-Results-And}. The implementation of Step 3 is built directly into the final phase of Step 2 when the sorted sublists are written back into global memory. 
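As a language-independent reference for Steps 1--3, the local sort and equidistant sampling amount to the following serial NumPy sketch; it is only meant to fix the data layout, with \texttt{np.sort} standing in for the shared-memory bitonic sort kernel, and all names are ours.
\begin{verbatim}
import numpy as np

def local_sort_and_sample(A, sublist_size=2048, s=64):
    """Serial reference for Steps 1-3 of GPU Bucket Sort: split A into
    sublists of shared-memory size, sort each, take s equidistant samples."""
    m = len(A) // sublist_size                 # number of sublists
    sublists = A.reshape(m, sublist_size).copy()
    sublists.sort(axis=1)                      # Step 2 (bitonic sort on the GPU)
    step = sublist_size // s
    samples = sublists[:, step - 1::step]      # Step 3: s samples per sublist
    return sublists, samples.ravel()           # s*m samples in total
\end{verbatim}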
In Step 4, we sort all $sm=1M$ selected samples in global memory, using all available SMs in parallel. Here, we compared GPU bitonic sort \cite{s1-GPUTeraSort}, adaptive bitonic sort \cite{s2-GPU-ABiSort} based on \cite{s3-Bilardi}, and randomized sample sort \cite{s7-Sanders}. Our experiments indicate that for up to $16M$ data items, simple bitonic sort is still faster than even randomized sample sort \cite{s7-Sanders} due to its simplicity, small constants, and complete avoidance of conditional branching. Hence, Step 4 was implemented via bitonic sort. In Step 5, we select $s=64$ equidistant \emph{global samples} from the sorted list of $sm=1M$ samples. Here, each thread block/SM loads the $s=64$ global samples into its local shared memory, where they will remain for the next step. In Step 6, we determine for each sorted sublist $A_{i}$ ($i$=1, ..., $m$) of $\frac{n}{m}=2K$ data items the location of each of the $s=64$ global samples in $A_{i}$. For each $A_{i}$, this operation is done locally by one thread block on one SM, using the SM's shared memory, and will create for each $A_{i}$ a partitioning into $s=64$ buckets $A_{i1}$,..., $A_{is}$ of size $a_{i1}$,..., $a_{is}$. Here, we apply parallel binary search in $A_{i}$ for each of the global samples. More precisely, we first take the $\frac{s}{2}$-th global sample element and use one thread to perform a binary search in $A_{i}$, resulting in a location $l_{s/2}$ in $A_{i}$. Then we use two threads to perform two binary searches in parallel, one for the $\frac{s}{4}$-th global sample element in the part of $A_{i}$ to the left of location $l_{s/2}$, and one for the $\frac{3s}{4}$-th global sample element in the part of $A_{i}$ to the right of location $l_{s/2}$. This process is iterated $\log s=6$ times until all $s=64$ global samples are located in $A_{i}$. With this, each $A_{i}$ is split into $s=64$ buckets $A_{i1}$,..., $A_{is}$ of size $a_{i1}$,..., $a_{is}$. Note that, we do not simply perform all $s$ binary searches fully in parallel in order to avoid memory contention within the local shared memory \cite{cuda-prog-guide}. Step 7 uses a prefix sum calculation to obtain for all buckets their starting location in the final sorted sequence. The operation is illustrated in Figure \ref{fig:Illustration-Of-Step-7} and can be implemented with coalesced memory accesses in global memory. Each row in Figure \ref{fig:Illustration-Of-Step-7} shows the $a_{i1}$,..., $a_{is}$ calculated for each sublist. The prefix sum is implemented via a parallel column sum (using all SMs), followed by a prefix sum on the column sums (on one SM in local shared memory), and a final update of the partial sums in each column (using all SMs). In Step 8, the $sm=1M$ buckets are moved to their correct location in the final sorted sequence. This operation is perfectly suited for a GPU and requires one parallel coalesced data read followed by one parallel coalesced data write operation. The newly created array consists of $s=64$ sublists $B_{1}$, ..., $B_{s}$ where each $B_{j}=A_{1j}\cup A_{2j}\cup...\cup A_{mj}$ has at most $\frac{2n}{s}=1M$ data items \cite{Schaeffer}. In Step 9, we sort each $B_{j}$ using the same bitonic sort implementation as in Step 4. Note that, since each $B_{j}$ is smaller than $16M$ data items, simple bitonic sort is faster for each $B_{j}$ than even randomized sample sort \cite{s7-Sanders} due to bitonic sort's simplicity, small constants, and complete avoidance of conditional branching. 
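The following sequential Python sketch illustrates only the bookkeeping behind Steps 6-8: per-sublist binary searches against the global samples, the column-ordered prefix sum over the bucket sizes $a_{ij}$, and the relocation that makes each $B_{j}$ contiguous. It deliberately ignores all GPU-specific aspects (thread blocks, shared memory, coalescing), and the helper names are ours, not those of the CUDA implementation.
\begin{verbatim}
# Sequential sketch of Steps 6-8; the GPU code does the same bookkeeping in parallel.
from bisect import bisect_right
from itertools import accumulate

def bucket_and_relocate(sorted_tiles, global_samples):
    m, s = len(sorted_tiles), len(global_samples)
    # Step 6: cut positions of the s global samples in every sorted sublist A_i.
    cuts = [[bisect_right(tile, g) for g in global_samples] for tile in sorted_tiles]
    for cut, tile in zip(cuts, sorted_tiles):
        cut[-1] = len(tile)   # last bucket absorbs items above the largest sample
    sizes = [[c - p for p, c in zip([0] + cut[:-1], cut)] for cut in cuts]  # a_{ij}
    # Step 7: exclusive prefix sum over a_{11..m1}, a_{12..m2}, ..., a_{1s..ms}.
    column_order = [sizes[i][j] for j in range(s) for i in range(m)]
    starts = [0] + list(accumulate(column_order))[:-1]
    loc = {(i, j): starts[j * m + i] for j in range(s) for i in range(m)}   # l_{ij}
    # Step 8: move each bucket A_{ij} to l_{ij}; bucket j of all sublists forms B_j.
    out = [None] * sum(len(t) for t in sorted_tiles)
    for i, tile in enumerate(sorted_tiles):
        lo = 0
        for j in range(s):
            hi = lo + sizes[i][j]
            out[loc[(i, j)]:loc[(i, j)] + (hi - lo)] = tile[lo:hi]
            lo = hi
    return out   # B_1,...,B_s back to back, each still unsorted (Step 9)
\end{verbatim}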
\begin{figure}[tbh] \begin{centering} \includegraphics[width=7cm]{pictures/prefixsums/step-7} \par\end{centering} \caption{Illustration Of Step 7 In Algorithm \ref{alg:Deterministic-Sample-Sort}\label{fig:Illustration-Of-Step-7}} \end{figure} \section{Experimental Results And Discussion\label{sec:Experimental-Results-And}} For our experimental evaluation, we executed Algorithm \ref{alg:Deterministic-Sample-Sort} on three different GPUs (nVIDIA Tesla, GTX 285, and GTX 260) for various data sets of different sizes, and compared our results with those reported in \cite{s6-Garland} and \cite{s7-Sanders} which are the current best GPU sorting methods. Table \ref{tab:GPU-Performance-Characteristics} shows some important performance characteristics of the three different GPUs. The Tesla and GTX 285 have more cores than the GTX 260. The GTX 285 has the highest core clock rate and in summary the highest level of core computational power. The Tesla has the largest memory but the GTX 285 has the best memory clock rate and memory bandwidth. In fact, even the GTX 260 has a higher clock rate and memory bandwidth than the Tesla C1060. Figure \ref{fig:det-260-285-tesla-comp} shows a comparison of the runtime of our \textsc{GPU Bucket Sort} method on the Tesla C1060, GTX 260 and GTX 285 (with 2 GB memory) for varying number of data items. Each data point shows the average of 100 experiments. The observed variance was less than 1 ms for all data points since \textsc{GPU Bucket Sort} is deterministic and any fluctuation observed was due to noise on the GPU (e.g. operating system related traffic). All three curves show a growth rate very close to linear which is encouraging for a problem that requires $O(n\log n)$ work. \textsc{GPU Bucket Sort} performs better on the GTX 285 than both Tesla and GTX 260. Furthermore, it performs better on the GTX 260 than on the Tesla C1060. This indicates that \textsc{GPU Bucket Sort} is memory bandwidth bound which is expected for sorting methods since the sorting problem requires only very little computation but a large amount of data movement. For individual steps of \textsc{GPU Bucket Sort}, the order can sometimes be reversed. For example, we observed that Step 2 of Algorithm \ref{alg:Deterministic-Sample-Sort} (local sort) runs faster on the Tesla C1060 than on the GTX 260 since this step is executed locally on each SM and its performance is largely determined by the number of SMs and the performance of the SM's cores. However, the GTX 285 remained the fastest machine, even for all individual steps. We note that \textsc{GPU Bucket Sort} can sort up to $n=64M$ data items within the 896 MB memory available on the GTX 260 (see Figure \ref{fig:det-260-285-tesla-comp}). On the GTX 285 with 2 GB memory and Tesla C1060 our \textsc{GPU Bucket Sort} method can sort up to $n=256M$ and $n=512M$ data items, respectively (see Figures \ref{fig:det-rand-merge-285}\&\ref{fig:det-rand-merge-tesla}). Figure \ref{fig:det-steps} shows in detail the time required for the individual steps of Algorithm \ref{alg:Deterministic-Sample-Sort} when executed on a GTX 285. We observe that \emph{sublist sort} (Step 9) and \emph{local sort} (Step 2) represent the largest portion of the total runtime of \textsc{GPU Bucket Sort}. This is very encouraging in that the {}``overhead'' involved to manage the deterministic sampling and generate buckets of guaranteed size (Steps 3-7) is small. 
We also observe that the \emph{data relocation} operation (Step 8) is very efficient and a good example of the GPU's great performance for data parallel access when memory accesses can be coalesced (see Section \ref{sec:Review:-GPU-Architectures}). Note that, for Algorithm \ref{alg:Deterministic-Sample-Sort} the sample size $s$ is a free parameter. With increasing $s$, the sizes of sublists $B_{j}$ created in Step 8 of Algorithm \ref{alg:Deterministic-Sample-Sort} decrease and the time for Step 9 decreases as well. However, the time for Steps 3-7 grows with increasing $s$. This trade-off is illustrated in Figure \ref{fig:Runtime-Sample-Size} which shows the total runtime for Algorithm \ref{alg:Deterministic-Sample-Sort} as a function of $s$ for fixed $n=32M,64M,128M$. As shown in Figure \ref{fig:Runtime-Sample-Size}, the total runtime is smallest for $s=64$, which is the parameter chosen for our \textsc{GPU Bucket Sort} code. Figures \ref{fig:det-rand-merge-285} and \ref{fig:det-rand-merge-tesla} show a comparison between \textsc{GPU Bucket Sort} and the current best GPU sorting methods, Randomized Sample Sort \cite{s7-Sanders} and Thrust Merge Sort \cite{s6-Garland}. Figure \ref{fig:det-rand-merge-285} shows the runtimes for all three methods on a GTX 285 and Figure \ref{fig:det-rand-merge-tesla} shows the runtimes of all three methods on a Tesla C1060. Note that, \cite{s6-Garland} and \cite{s7-Sanders} did not report runtimes for the GTX 260. For \textsc{GPU Bucket Sort}, all runtimes are the averages of 100 experiments, with less than 1 ms observed variance. For Randomized Sample Sort and Thrust Merge Sort, the runtimes shown are the ones reported in \cite{s7-Sanders} and \cite{s6-Garland}. For Thrust Merge Sort, performance data is only available for up to $n=16M$ data items. For larger values of $n$, the current Thrust Merge Sort code shows memory errors \cite{Garland-Priv-Comm}. As reported in \cite{s7-Sanders}, the current Randomized Sample Sort code can sort up to $32M$ data items on a GTX 285 with 1 GB memory and up to $128M$ data items on a Tesla C1060. Our \textsc{GPU Bucket Sort} code appears to be more memory efficient. \textsc{GPU Bucket Sort }can sort up to $n=256M$ data items on a GTX 285 with 2GB memory and up to $n=512M$ data items on a Tesla C1060. Therefore, Figures \ref{fig:det-rand-merge-285}a and \ref{fig:det-rand-merge-tesla}a show the performance comparison with higher resolution for up to $n=64M$ and $n=128M$, respectively, while Figures \ref{fig:det-rand-merge-285}b and \ref{fig:det-rand-merge-tesla}b show the performance comparison for the entire range up to $n=256M$ and $n=512M$, respectively. We observe in Figures \ref{fig:det-rand-merge-285}a and \ref{fig:det-rand-merge-tesla}a that, as reported in \cite{s7-Sanders}, Randomized Sample Sort \cite{s7-Sanders} significantly outperforms Thrust Merge Sort \cite{s6-Garland}. Most importantly, we observe that Randomized Sample Sort \cite{s7-Sanders} and our Deterministic Sample Sort (\textsc{GPU Bucket Sort}) show nearly identical performance on both, the GTX 285 and Tesla C1060. Note that, the experiments in \cite{s7-Sanders} used a GTX 285 with 1 GB memory whereas we used a GTX 285 with 2 GB memory. 
As shown in Table \ref{tab:GPU-Performance-Characteristics}, the GTX 285 with 1 GB has a slightly better memory clock rate and memory bandwidth than the GTX 285 with 2 GB which implies that the performance of Deterministic Sample Sort (\textsc{GPU Bucket Sort}) on a GTX 285 is actually a few percent better than the performance of Randomized Sample Sort. The data sets used for the performance comparison in Figures \ref{fig:det-rand-merge-285} and \ref{fig:det-rand-merge-tesla} were uniformly distributed, random data items. The data distribution does not impact the performance of Deterministic Sample Sort (\textsc{GPU Bucket Sort}) but has an impact on the performance of Randomized Sample Sort. In fact, the uniform data distribution used for Figures \ref{fig:det-rand-merge-285} and \ref{fig:det-rand-merge-tesla} is a \emph{best case} scenario for Randomized Sample Sort where all bucket sizes are nearly identical. Figures \ref{fig:det-rand-merge-285}b and \ref{fig:det-rand-merge-tesla}b show the performance of \textsc{GPU Bucket Sort} for up to $n=256M$ and $n=512M$, respectively. For both architectures, GTX 285 and Tesla C1060, we observe a very close to linear growth rate in the runtime of \textsc{GPU Bucket Sort} for the entire range of data sizes. This is very encouraging for a problem that requires $O(n\log n)$ work. In comparison with Randomized Sample Sort, the linear curves in Figures \ref{fig:det-rand-merge-285}b and \ref{fig:det-rand-merge-tesla}b show that our \textsc{GPU Bucket Sort} method maintains a fixed \emph{sorting rate} (number of sorted data items per time unit) for the entire range of data sizes, whereas it is shown in \cite{s7-Sanders} that the sorting rate for Randomized Sample Sort fluctuates and often starts to decrease for larger values of $n$. \section{Conclusions\label{sec:Conclusions}} In this paper, we presented a \emph{deterministic} sample sort algorithm for GPUs, called \textsc{GPU Bucket Sort}. Our experimental evaluation indicates that \textsc{GPU Bucket Sort} is considerably faster than Thrust Merge \cite{s6-Garland}, the best comparison-based sorting algorithm for GPUs, and it is exactly as fast as the new Randomized Sample Sort for GPUs \cite{s7-Sanders} when the input data sets used are uniformly distributed, which is a \emph{best case} scenario for Randomized Sample Sort. However, as observed in \cite{s7-Sanders}, the performance of Randomized Sample Sort fluctuates with the input data distribution whereas \textsc{GPU Bucket Sort} does not show such fluctuations. In fact, \textsc{GPU Bucket Sort} showed a fixed \emph{sorting rate} (number of sorted data items per time unit) for the entire range of data sizes tested (up to $n=512M$ data items). In addition, our \textsc{GPU Bucket Sort} implementation appears to be more memory efficient because \textsc{GPU Bucket Sort} is able to sort considerably larger data sets within the same memory limits of the GPUs.
\section{Introduction} In standard cosmology, we assume that our universe is isotropic and homogeneous, and accordingly is described by the Friedmann-Lema$\hat{\mbox{\i}}$tre-Robertson-Walker (FLRW) metric. Recent observations of the Cosmic Microwave Background (CMB) temperature distribution on the celestial sphere show that the spatial curvature is flat. Furthermore, the distance-redshift relation of type Ia supernovae indicates that the expansion of the present universe is accelerated. Then, we are led to introduce, within the flat FLRW model, ``dark energy,'' which has negative pressure and behaves just like a positive cosmological constant. However, no satisfactory model that explains the origin of dark energy has so far been proposed. As an attempt to explain the SNIa distance-redshift relation without invoking dark energy, Tomita proposed a ``local void model'' \ccite{Tomita1st}. In this model, our universe is no longer assumed to be homogeneous, having instead an underdense local void in the surrounding overdense universe. The isotropic nature of cosmological observations is realized by assuming spherical symmetry and demanding that we live near the center of the void. Furthermore, the model is supposed to contain only ordinary dust-like cosmic matter. Since such a spacetime can be described by Lema$\hat{\mbox{\i}}$tre-Tolman-Bondi (LTB) spacetime \ccite{L}-\ccite{B}, we also call this model the ``LTB cosmological model.'' Since the rate of expansion in the void region is larger than that in the outer overdense region, it can explain the observed dimming of SNIa luminosity. In fact, many numerical analyses \ccite{Tomita1st2}-\ccite{GBHkSZ} have recently shown that this LTB model can accurately reproduce the SNIa distance-redshift relation. However, in order to verify the LTB model as a viable cosmological model, one has to test the LTB model by various observations---such as CMB temperature anisotropy---other than the distance-redshift relation\footnote{ Recently, some constraints on the LTB model from BAO and kSZ effects have also been discussed, see e.g. \ccite{GBHkSZ}. Still, the possibility of the LTB model is not completely excluded. }. For this purpose, in this paper, we derive some analytic formulae that can be used to rigorously compare consequences of the LTB model with observations of CMB anisotropy. More precisely, we derive analytic formulae for the CMB temperature anisotropy dipole and quadrupole moments, and then use the dipole formula to place a constraint on the distance between an observer and the symmetry center of the LTB model. We also check the consistency of our formulae with the numerical analysis of the CMB anisotropy in the LTB model previously performed by Alnes and Amarzguioui \ccite{AACMB}. In \refsec{LTB}, we briefly summarize the LTB metric. In \refsec{multipole}, we derive analytic formulae for CMB anisotropy in the LTB model. In \refsec{constraint}, we obtain some constraints concerning the position of the observer. \refsec{summary} is devoted to a summary. \section{LTB spacetime} \label{O57_sec: LTB} A spherically symmetric spacetime with only non-relativistic matter is described by the Lema$\hat{\mbox{\i}}$tre-Tolman-Bondi (LTB) metric \ccite{L}-\ccite{B} \Eq{ ds^2 = -dt^2 + \frac{\{R' (t, r)\}^2}{1-k(r)r^2}dr^2 + R^2 (t, r) d\Omega_2^2, } where $' \equiv \p r$ and $k(r)$ is an arbitrary function of $r$ only. 
The Einstein equations reduce to \Eqr{ \left(\frac{\dot R}{R}\right)^2 &=& \frac{2GM(r)}{R^3} - \frac{k(r)r^2}{R^2}, \\ 4\pi\rho (t,r) &=& \frac{M' (r)}{R^2 R'}, } where $\dot{} \equiv \p t$, $M(r)$ is an arbitrary function of only $r$, and $\rho(t, r)$ is the energy density of the non-relativistic matter. The general solution for the Einstein equations in this model admits two arbitrary functions $k(r)$ and $M(r)$. By appropriately choosing the profile of these functions, one can construct some models which can reproduce the distance-redshift relation of SNIa in this model. \section{Analytic formulae for CMB anisotropy in LTB model} \label{O57_sec: multipole} In this section, we derive analytic formulae for the CMB anisotropy in the LTB model. First, we assumed that the universe was locally in thermal equilibrium (that is, the distribution function $F$ was Planck distribution $\Phi$) at the last scattering surface, and the direction of the CMB photon traveling is fixed. In this case, $F$ can be written as $F = \Phi (\omega/T)$, where $\omega \equiv p^t$, and $T$ is the temperature. Then, the CMB temperature anisotropy $\delta T/T$ is defined by \Eq{ \delta F = -\frac{\delta T}{T}\omega\p\omega F. } Second, supposing that an observer lives at a distance of $\delta x^i$ from the center of the void, it follows that \Eqr{ (\delta F)^{(1)} &=& \delta x^i (\p i F)_0, \\ (\delta F)^{(2)} &=& \frac{1}{2}\delta x^i \delta x^j (\p i \p j F)_0, } where the subscript 0 means the value at the center ($r = 0$) at the present time ($t = t_0$). From these, the CMB temperature anisotropy dipole $(\delta T/T)^{(1)}$ and quadrupole $(\delta T/T)^{(2)}$ are written as \Eqr{ \left(\frac{\delta T}{T}\right)^{(1)} &=& -\frac{\delta x^i (\p i F)_0}{\Fy}, \\ \left(\frac{\delta T}{T}\right)^{(2)} &=& -\frac{1}{2}\frac{\delta x^i \delta x^j (\p i\p j F)_0}{\Fy} +\frac{1}{2}\left\{\left(\frac{\delta T}{T}\right)^{(1)}\right\}^2 \frac{\Fyy}{\Fy}. } We assume that the distribution function $F(x, p)$ itself is spherically symmetric. Then, $F$ can be written as $F(x, p) = F_0 (t, r, \omega, \mu)$, where $\mu \equiv R'p^r /(\sqrt{1 - kr^2}\omega)$. This implies that $\p i F = (\p i r)\Fr + (\p i \omega)\Fo + (\p i \mu)\Fm$. Then, we can derive analytic formulae for the CMB anisotropy dipole by solving the Boltzmann equation $\mathscr{L}[F_0] = \Ft + \dot r \Fr + \dot{\omega}\Fo + \dot{\mu}\Fm = 0$. The result is \Eq{ \left(\frac{\delta T}{T}\right)^{(1)} = \delta L n^j \Omega_j \left\{\frac{\sqrt{1 - k(r_i)r_i^2}}{R'_0}\eP{t_0}{t_i}\left(\frac{\Fr}{\Fy}\right)_i +\int^{r_i}_0 dr \Hpa' \exp\left[\int^t_{t_0}dt_1 \Hpa(t_1)\right]\right\}, \label{O57_eq: dipole} } where $\delta L n^j$ is the position vector of the observer, $\Omega^j \equiv x^j /r$, $\tilde P(t_0, t_i) \equiv \int^{r_i}_0 dr R''/R'$, $\Hpa \equiv \dot R'/R'$, and the subscript $i$ denotes the value at the last scattering surface. By a similar method, we also derive the CMB anisotropy quadrupole formula \Eqr{ \left(\frac{\delta T}{T}\right)^{(2)} &=& -\frac{\delta x^i \delta x^j}{2(\Fy)_i} \Biggl[(\delta_{ij} - \Omega_i \Omega_j )\left(\frac{\Fr}{r} - \mu\frac{\Fm}{r^2}\right)_0 + \Omega_i \Omega_j (\Frr)_0 \nonumber\\ && \hspace{62pt} + \left\{\frac{\ape''}{\ape}\delta_{ij} +\ape\left(\frac{R'}{\sqrt{1 - kr^2}} - \ape\right)''\frac{\Omega_i \Omega_j}{(R')^2}\right\}_0 (\Fy)_i \Biggr] \nonumber\\ && +\frac{1}{2}\left\{\left(\frac{\delta T}{T}\right)^{(1)}\right\}^2 \frac{\Fyy}{\Fy}, \label{O57_eq: quadrupole} } where $\ape \equiv R/r$. 
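Before turning to the observational constraint, the short numerical check below (a Python sketch, not part of the derivation) may help connect the dipole formula \refeq{dipole} with the multipole coefficient $a_{10}$ used in the next section: a pure dipole pattern $\delta T/T = A\cos\theta$ projects onto $a_{10}=A\sqrt{4\pi/3}$ under the standard spherical harmonic decomposition. The amplitude $A$ and the grid resolution are illustrative choices only.
\begin{verbatim}
# Numerical check: dT/T = A*cos(theta) gives a_10 = A*sqrt(4*pi/3).
import numpy as np

A = 1.0e-3                                        # illustrative dipole amplitude
theta = np.linspace(0.0, np.pi, 2001)
phi = np.linspace(0.0, 2.0 * np.pi, 2001)
TH = np.meshgrid(theta, phi, indexing="ij")[0]    # theta on axis 0, phi on axis 1

dT_over_T = A * np.cos(TH)                        # axially symmetric dipole pattern
Y10 = np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(TH)   # real spherical harmonic Y_10

integrand = dT_over_T * Y10 * np.sin(TH)          # dOmega = sin(theta) dtheta dphi
a10 = np.trapz(np.trapz(integrand, phi, axis=1), theta)

print(a10, A * np.sqrt(4.0 * np.pi / 3.0))        # the two numbers agree
\end{verbatim}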
\section{Constraint on LTB model} \label{O57_sec: constraint} In this section, we derive some constraints concerning the position of off-center observers in the LTB model from the CMB dipole formula \refeq{dipole}. In general, the CMB temperature anisotropy is decomposed in terms of the spherical harmonics $Y_{lm}$ by \Eq{ \frac{\delta T}{T} = \sum_{l, m} a_{lm}Y_{lm}, } where the amplitudes in the expression are recovered as \Eq{ a_{lm} = \int^{2\pi}_0 d\phi \int^{\pi}_0d\theta \sin\theta\frac{\delta T}{T}Y_{lm}. } We are interested in $a_{10}$ as the dipole moment. We estimate the CMB dipole formula \refeq{dipole} numerically by using the profile considered in \ccite{AACMB} (\reffig{AA}), \Eqr{ M(r) &=& \frac{1}{2}H_{\perp}^{2}(t_{0}, r_{\rm out})r^{3}\left[\alpha_{0} - \Delta\alpha\left(\frac{1}{2} - \frac{1}{2}\tanh{\frac{r - r_0}{2\Delta r}}\right)\right], \\ k(r) &=& -H_{\perp}^{2}(t_{0}, r_{\rm out})\left[\beta_{0} - \Delta\beta\left(\frac{1}{2} - \frac{1}{2}\tanh{\frac{r - r_0}{2\Delta r}}\right)\right], \label{O57_eq: profile1} } where \Eqr{ t_{s}(r) = 0, \ \ \Hpe(t_{0}, r_{\rm out}) = 51 \ {\rm km/s/Mpc}, \ \ \alpha_{0} = 1, \ \ \Delta\alpha = 0.90, \nonumber\\ r_{0} = 1.34 \ {\rm Gpc}, \ \ \Delta r = 0.536 \ {\rm Gpc}, \ \ \beta_{0} = 1 - \alpha_{0} = 0, \ \ \Delta\beta = -\Delta\alpha = -0.90, \label{O57_eq: profile2} } and $\Hpe \equiv \dot{\ape}/\ape$. \begin{figure} \begin{tabular}{cc} \begin{minipage}{0.46\hsize} \begin{center} \includegraphics[width=0.77\hsize]{O57_AA-I-rho.eps} \end{center} \end{minipage} \begin{minipage}{0.48\hsize} \begin{center} \includegraphics[width=0.77\hsize]{O57_AA-I-H.eps} \end{center} \end{minipage} \end{tabular} \caption{The profile considered in \ccite{AACMB}. The subscript $rec$ denotes the value at recombination, and $\Hpa ({\rm or}\ \Hpe) = 100h \ {\rm km/s/Mpc}$. } \label{O57_fig: AA} \end{figure} The dipole moment $a_{10}$ observed by the Cosmic Background Explorer (COBE) \ccite{COBE} is of order $10^{-3}$ or less, so we find that \Eq{ \delta L \lineq 15 \ {\rm Mpc}, } where $\delta L$ is the distance from the observer to the center of the void. This is consistent with the result of \ccite{AACMB}. \section{Summary} \label{O57_sec: summary} In the LTB model, we have derived the analytic formulae for the CMB anisotropy dipole \refeq{dipole} and quadrupole \refeq{quadrupole}, which can be used to rigorously compare consequences of this model with observations of the CMB anisotropy. Moreover, we checked the consistency of our formulae with the results of the numerical analysis in \ccite{AACMB}, and constrained the distance from an observer to the center of the void. One of the advantages of obtaining analytic formulae is that we can identify the physical origins of the CMB anisotropy in the LTB model. For example, in the CMB dipole formula \refeq{dipole}, we can regard the first term as the initial condition at the last scattering surface, and the second term as the Integrated Sachs-Wolfe effect. \section*{Acknowledgments} The authors would like to thank all participants of the workshop $\Lambda$-LTB Cosmology (LLTB2009) held at KEK from 20 to 23 October 2009 for useful discussions. We would also like to thank Hajime Goto for useful discussions. This work is supported by the project Shinryoiki of the SOKENDAI Hayama Center for Advanced Studies and the MEXT Grant-in-Aid for Scientific Research on Innovative Areas (No. 21111006).
\section{Introduction}\label{sec: intro} Individuals tend to interact and form relationships with others who are more similar to themselves than randomly chosen members of the population. This phenomenon, known as homophily or ``love of the same'' \citep{lazarsfeld1954friendship}, is one of the most prevalent and robust features of human social networks and has been widely studied across fields \citep{lazarsfeld1954friendship, mcpherson2001birds, moody2001race, kossinets2009origins}. The ubiquitous presence of homophily directly translates into human interactions segregated across a variety of dimensions. Since the resulting segregation patterns have important consequences for behavior, diffusion, inequality, and social mobility \citep{glaeser2000social, ruef2003structure, centola2011experimental, bramoulle2012homophily, jackson2017economic, zeltzer2019gender}, it is of utmost importance to understand the origins, nature, and consequences of homophily. Rather than analyzing the origins or consequences, the objective of this article is to test the relative importance of the large set of differing dimensions of homophily in friendship networks analyzed in previous studies (see \cite{mcpherson2001birds} for an influential review of the literature). Abstracting from whether the studies document induced or choice homophily or rather social influence \cite{kossinets2009origins, currarini2010identifying}, there is ample and robust evidence that friendships sort on gender, age, race, ethnicity, religion, and socio-economic status \citep{wittek2020fighting, kruse2016neighbors, ibarra1992homophily, moody2001race, marmaros2006friendships, kossinets2009origins, currarini2010identifying, zeltzer2019gender}. In the same vein, meeting constraints--proxied by living in the same neighbourhoods, belonging to the same club, working in the same firm, studying the same major, living in the same dorm, and so on--are important predictors of who interacts with whom \citep{glaeser2000social, marmaros2006friendships, kossinets2009origins, mayer2008old}. Some studies have focused on more subtle but still detectable dimensions, such as smoking \cite{christakis2008collective} and obesity \citep{christakis2007spread, valente2009adolescent, de2010obesity}; others have analyzed factors imperceptible to the naked eye, such as political orientation, academic performance, cognitive abilities, and behavioral or personality traits, among others \citep{almack1922influence, marmaros2006friendships, conti2013popularity, mayer2008old}.\footnote{The number of studies proposing one or multiple dimensions of homophily in social networks is large and reviewing all of them is out of the scope of this article.} Although the existing work has established the baseline for homophily theories, the literature has proposed an overwhelming number of dimensions along which human relationships might sort, without providing a unified framework according to which we can categorize the different dimensions to determine their relative importance.\footnote{\cite{lazarsfeld1954friendship} distinguish between status homophily and value homophily, but this categorization is based on the nature of the factor rather than on empirical grounds.} That is, there is no systematic analysis assessing how important and robust the proposed predictors of friendship are from a statistical perspective. The objective of this paper is to provide a step toward characterizing the potential dimensions of homophily based on their empirical relevance. 
To that aim, we point out one problem of the existing literature: the typical approach is to analyze a \textit{small} and/or \textit{exogenously given} set of potential dimensions of homophily, depending on the data and variables at researchers' disposal. Nevertheless, these approaches can lead to pre-testing bias and increase the ``researcher's degrees of freedom'' and ``p-hacking'' if the estimation is conducted using multiple combinations of regressors with the aim of obtaining statistically significant estimates \citep{simmons2011}. In addition, many human characteristics are highly correlated in real life: gender is frequently correlated with self-confidence, prosocial, and risk attitudes \citep[see e.g.][]{barber2001boys, croson2009gender}, while race predicts a large array of socio-economic outcomes \citep{guterbock1983race, altonji1999race}. These correlations in turn generate two classic statistical issues when estimating homophily using standard approaches. Focusing on a small number of characteristics increases the probability of incorrectly accepting a particular determinant of friendship formation (see below for concrete examples) due to the omitted-variable problem, which leads to biased--and even inconsistent--estimates. A converse issue arises if several correlated factors are included in one model. Such a multicollinearity issue leads to a well-known problem regarding the interpretation of the individual estimates and casts doubt on the (non-)redundancy of the individual factors. This multicollinearity problem may result in a loss of efficiency, leading to an incorrect rejection of a determinant. Thus, both statistical issues can lead to a serious misunderstanding of the origins and consequences of homophily. In statistical jargon, the overwhelming variety of potential factors that might influence network formation generates a problem known as \textit{model uncertainty}, arising from the lack of a unified framework for examining the drivers of friendship. One natural way to think about model uncertainty is to admit that we do not know which is the ``true'' model and attach probabilities to different models \citep{sala2004}. This is the main idea behind Bayesian Model Averaging (BMA), which consists in estimating all possible combinations of the regressors, comparing the performance of all the models resulting from the different combinations, and taking a weighted average over all the candidate models, where the weights are the probabilities that the candidate model is the true model \citep{hoeting1999bayesian}. In our analysis, we account for uncertainty in the friendship formation model and identify robust determinants of link formation in friendship networks by using a BMA panel-data approach.\footnote{For the sake of robustness, we also employ the weighted-average least squares (WALS) estimation method to identify the determinants. Our conclusions are robust to the modeling approach; see Section \ref{sec: rob}.} To the best of our knowledge, this is the first study that addresses model uncertainty while identifying the main drivers of friendship. In this paper, we study potential determinants of friendship formation among the freshmen of a faculty at a major Spanish University ($n=273$). The data at our disposal have several particular features. First, we mapped the friendship networks twice, once in the first week of the students' freshman year and once at the very end of the same academic year. 
Such timing provides a unique opportunity to examine the determinants of the emergence of friendships in the group. Although some links already exist at the beginning, the first-week network resembles a sparsely connected, randomly generated network. Most of the elicited friendships were established between the two periods and we can control for preexisting relationships using the first network elicitation. Second, we collected a rich set of individual characteristics for each student, enabling us to examine the role of homophily on a large array of dimensions. We broadly classify the dimensions under study as observable and unobservable (or imperceptible) and contrast their importance in friendship formation. We define an attribute as observable if it can be detected by mere observation. Examples of such characteristics are race, gender, geographical/physical proximity, obesity, or whether one smokes. We classify as unobservable or imperceptible characteristics those attributes that require more intimate disclosure or a deeper conversation to be detected; these include family background, cognitive and personality traits, or economic preferences. As for our starting hypothesis, on the one hand, there is no reason to believe \textit{a priori} that the observable characteristics should play a more important role than the unobservable ones, or vice versa. On the other hand, one may argue that observable factors are more salient and, hence, the primary drivers of friendship formation, while homophily on unobservable attributes may only arise after a certain amount of personal interaction. In line with the latter, \cite{mcpherson2001birds} note that--in contrast to, say, gender or race--socio-economic indicators are more sensitive to the strength of the relationships. \cite{mayer2008old} estimate the determinants of friendships using an array of race and socio-economic indicators both separately and jointly. If separately considered, most indicators predict friendships. However, once all indicators are included in one regression, the influence of the socio-economic attributes weakens and some become insignificant. Similarly, \cite{girard2015} report that, although their behavioral indicators all correlate significantly with friendships, the magnitude, significance, and even the sign of their estimated impact on friendships in multivariate regressions are sensitive to the set of covariates included. These issues are akin to the omitted-variable and collinearity issues discussed above. Therefore, we prefer not to be inclined toward either of the two--or any other--alternative hypotheses and let the data speak. The methodology that we employ is explicitly designed to target such \textit{ex-ante} ambiguity in the underlying hypothesis. We note that, in contrast to other studies, our sample is largely homogeneous in terms of race, ethnicity, age, ``profession'', cohort, major, and religion, all of which belong among the classic determinants of homophily \cite{mcpherson2001birds}. We believe that this is an advantage of our study: such homogeneity in terms of observable features maximizes the chances that our subjects organize along the unobservable, imperceptible traits under study. We briefly summarize our findings in what follows. 
Applying the BMA uncovers that the most important determinants of friendship formation are common gender, common study section (reflecting the importance of geographical location), and smoking habits. In stark contrast, none of the unobservable factors capturing individual preferences, socio-economic indicators, and individual personality and behavioral traits plays any robust role in friendship formation in our data. Since women and men form networks differently \cite{ductor2018gender}, we also analyze each gender separately and show that friendship formation differs between women and men. We find that common gender, smoking habits, and common section are especially important in explaining the friendships of women. However, the only robust determinant for men among our candidate factors is common class section. The rest of the paper proceeds as follows. Section \ref{sec:methodology} describes the data and lays out the empirical strategy. Section \ref{sec: findings} presents our findings. Section \ref{sec: rob} assesses the robustness of the results. We discuss the implications of our findings in Section \ref{sec: discussion}. \section{Materials and Methods} \label{sec:methodology} We exploit a data set collected from all freshmen ($n=273$) in the Faculty of Economic and Business Sciences at the Universidad de Granada, Spain, over the whole academic year 2010-2011. These data contain rich information on students' individual characteristics and their social networks.\footnote{The data were previously employed in analyses of the role of prenatal exposure to sex hormones in social integration \citep{kovavrik2017} and altruism \citep{branas2013second}, respectively.} The data elicitation was approved by the Ethical Committee of the Universidad de Granada and all subjects provided informed written consent. Anonymity was also assured according to the Spanish Law 15/1999 for Personal Data Protection. We use this database because it exhibits three particular features providing a unique opportunity to examine the role of both perceptible and imperceptible determinants of friendship formation, the main question of this study. \subsection{Sample and section distribution} The first interesting feature of the data is the homogeneity of the sample in terms of race, ethnicity, age, religion, major, cohort, and ``profession'' (Table \ref{Table indvchar}). By definition, all participants are first-year students (without any profession), studying Economics. In addition, our subjects are mostly Caucasian students (99\% of them), aged 17-18 (80\% of them), and Catholic (61\% of them). \begin{table}[h!] 
\centering \caption{Participant's characteristics} \begin{tabular}{lcc} \hline & (1) & (2) \\ Variable & Observations & Frequency\\\hline \textbf{Year of birth} & & \\\hline $<$1990 & 27 & 14\% \\ 1990 & 11 & 6\% \\ 1991 & 43 & 22\% \\ 1992 & 113 & 58\% \\\hline \textbf{Religion} & & \\\hline Catholic & 120 & 62\% \\ Non-Believer/Agnostic & 62 & 32\%\\ Other & 9 & 6\% \\\hline \textbf{Gender} & & \\\hline Male & 136 & 56\% \\ Female & 108 & 44\% \\\hline \textbf{Ethnicity} & & \\\hline Caucasian & 202 & 99\%\\ Other & 3 & 1\%\\\hline \textbf{Study/class sections} & & \\\hline A & 61 & 27\% \\ B & 64 & 29\%\\ C & 48 & 21\%\\ D & 51 & 23\%\\\hline \textbf{Household Monthly Income} & & \\\hline Income $\leq$ 1500 & 69 & 36\% \\ 1500 $<$ Income $\leq$ 2500 & 63 & 32\% \\ 2500 $<$ Income $\leq$ 3500 & 35 & 18\% \\ Income $>$ 3500 & 27 & 14\% \\\hline \end{tabular} \begin{minipage}{16cm} \footnotesize \end{minipage} \label{Table indvchar} \end{table} Therefore, out of the classic determinants of friendships and homophily \citep{mcpherson2001birds}, we only expect gender, proximity (see below), and similar socio-economic background to affect who befriends whom. We believe that, due to such homogeneity, there is a large chance that friendships sort out by the more imperceptible, unobservable characteristics under scrutiny (see Section \ref{sec: var} and Table \ref{Table varlist}). At the beginning of their first year, the School distributes the incoming students randomly into study/class sections\footnote{This is the standard procedure and unrelated to our study.}. In the case of the 2010-2011 cohort, the students were randomly assigned into four sections (labelled as A, B, C, and D). No student has any ability to self-select into a particular section and they are explicitly advised to attend the classes of their corresponding section. Following the previous literature \citep[e.g.][]{marmaros2006friendships, carrell2013natural}, we use the section distribution as a measure of location or geographical distance, determining the meeting opportunities of students. The random assignment provides a causal interpretation regarding the role of meeting opportunity in link formation \citep{marmaros2002peer}. \subsection{Networks} The second advantage of the dataset stems from the fact that networks were elicited twice, during the first week of the academic year in October 2010 (Period 1) and at the end of the academic year in May 2011 (Period 2). All students were initially invited to disclose the names of their friends at the \textit{beginning} of the \textit{first} year. This is an important aspect of the data because it enables us to control for any pre-existing relationships and focus on the social dynamics taking place within the class during the first academic year. The students were again surveyed at the end of the school year when one may expect to observe a standard social network.\footnote{See \cite{kovavrik2017} for more information regarding the network elicitation.} For the analysis, we mapped the elicited friendships in two undirected networks using subjects as nodes and friendship nominations as links, under the condition that a link between two individuals exists if both named each other as a friend. That is, we focus on the network of reciprocal relations. Since link reciprocity is one of Granovetter's \citep{granovetter1973strength} conditions for a relationship to be considered strong, we focus on reciprocal ties to make sure that we analyze friendships rather than acquaintances. 
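As an illustration of this construction, the short Python sketch below builds the set of reciprocal links from raw nominations. It is not the code used for the actual data; the dictionary of nominations and the student ids are purely hypothetical.
\begin{verbatim}
# Illustrative sketch: undirected network of reciprocal friendships from nominations.
def reciprocal_edges(nominations):
    """Return the set of unordered pairs {i, j} in which both named each other."""
    edges = set()
    for i, named in nominations.items():
        for j in named:
            if i != j and i in nominations.get(j, set()):
                edges.add(frozenset((i, j)))   # unordered pair, stored only once
    return edges

# toy example: 1 and 2 name each other (a link); 1 names 3 but 3 does not reciprocate
nominations_p2 = {1: {2, 3}, 2: {1}, 3: set()}
print(reciprocal_edges(nominations_p2))        # {frozenset({1, 2})}
\end{verbatim}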
As expected, the network in period 1 was sparsely connected, suggesting that students had very few ties with their peers at the beginning of the school year. In stark contrast, the end-of-the-year network resembles classic socially generated networks in most respects (e.g., skewed degree distribution, high reciprocity of connections, network transitivity, positive assortativity, etc.; see Table 1 and Figure 3 in \cite{kovavrik2017}). This time setting and the fact that the friendship network was almost empty in period 1 allow us to evaluate the causal role of observable characteristics (such as gender or smoking behaviour) and imperceptible features (e.g., personality or economic preferences) in the link formation process of friendship networks.\footnote{The fact that the initial network is almost empty allows us to consider simple probability models in the estimation of the formation of ties.} Since participation was voluntary, some subjects were not active in both periods (a similar consideration applies to the data introduced in Section \ref{sec: var}). The number of students in periods 1 and 2 was 204 and 246, respectively; 168 students participated in both periods. In our empirical analysis, we focus on the 168 subjects who participated in both periods, since we can estimate their link formation in Period 2 conditional on the prior network in Period 1 or focus on people who had no links in Period 1 (our robustness check in Section \ref{sec: rob}).\footnote{We are not able to identify friendships made between periods 1 and 2 that were broken before period 2.} We use these 168 subjects to create all possible pairs of potential friendships, removing duplicated pairs. Thus, we have a pairwise panel data set with 8515 observations. The dependent variable of interest is friendship, indicating whether two individuals $i$ and $j$ nominated each other in Period 2 as friends. This variable $g_{ij,2}=1$ if both $i$ and $j$ did; $g_{ij,2}=0$ otherwise.\footnote{To illustrate the definition of the pairwise panel and our dependent variable, consider the following example. We have a two-column matrix, where column 1 contains the id of individual $i$ and column 2 the id of individual $j$. For individual $i=1$, we create $n-1=167$ observations to indicate whether person 1 is a friend of each $j=2,3,..,168$. The dependent variable $g_{1,3}=1$ if individuals 1 and 3 mutually nominate each other as friends. In addition, the observation $g_{3,1}$ is removed from the sample, since the friendship relationship between 1 and 3 is already accounted for in the observation for $i=1$ and $j=3$.} We then consider the (dis)similarity in observable and imperceptible characteristics between $i$ and $j$ as the determinants of their friendship formation. \subsection{Non-network variables} \label{sec: var} The third feature of the data worth mentioning is the richness of the available information on the subjects. Apart from the network elicitation, subjects participated in a series of questionnaires, surveys, and incentivized economic experiments designed to collect data on their heterogeneity over their whole freshman year. In this study, we consider 20 characteristics that might affect the formation of students' friendships. We broadly classify these characteristics as observable (O) and unobservable or imperceptible (U) and contrast their importance in explaining friendship formation. 
Observable characteristics are attributes that can be easily observed without any deep interaction, whereas we classify as unobservable or imperceptible those attributes that would require more intimate disclosure or a deeper conversation to be detected. To assess the (dis)similarity of two individuals, we either employ the absolute difference between their values for the characteristic in question or an indicator dummy if they share a characteristic. Table \ref{Table varlist} lists all the characteristics and indicates whether homophily is assessed in our regressions using an absolute value (Transformation = A) or an indicator dummy (Transformation = D). In what follows, we present all the variables in Table \ref{Table varlist} one by one: \begin{itemize} \item \textbf{Observable characteristics:} \begin{enumerate} \setlength{\itemsep}{0.5em} \item \textit{Common gender.} A large literature has documented that age, gender, religion, race, and ethnicity are key determinants of friendship \citep{patacchini2016racial, mcpherson2001birds, currarini2009economic, kossinets2009origins, currarini2010identifying}. Since our sample is homogeneous in terms of age, religion, race, and ethnicity, we only study common gender among these classic observable factors. \item \textit{Common section.} Geographical location and spatial proximity are crucial for interaction \citep{preciado2012does}. For instance, \cite{glaeser2000social} find that students in university dorms are more likely to interact with their roommates and dormmates even if they are assigned to dorms and rooms randomly, \cite{currarini2010identifying} document that friendship formation is largely influenced by meeting chances, and \cite{preciado2012does} show that the log-odds of friendship existence decrease with the geographical distance. In our setting, all the first-year students were randomly distributed into four sections at the beginning of the freshman year. We thus expect homophily in section assignments. We capture such ``location'' using a dummy variable \textit{common section}, which is equal to one if both $i$ and $j$ belong to the same section. \item \textit{Body mass index (BMI).} \cite{christakis2007spread} document homophily in BMI as a driver of the obesity epidemic in the U.S. \cite{de2010obesity} show that male friends tended to be similar in their consumption of high-calorie foods. We thus include the absolute difference in BMI of $i$ and $j$ as a potential determinant of them becoming friends. \item \textit{Smoking.} Individuals who smoke have been documented to cluster together \citep{christakis2008collective, hsieh2018}. We therefore consider a \textit{smoking} dummy equal to one if both $i$ and $j$ self-report being smokers. \item \textit{Pre-existing friendship.} Finally, our regressions control for whether $i$ and $j$ had reported each other as friends in the first week of their freshman year, using a variable \textit{Friends Period 1} that indicates whether $i$ and $j$ named each other in Period 1. \end{enumerate} \vspace{0.3em} \item \textbf{Unobservable characteristics:} \begin{enumerate} \setlength{\itemsep}{1em} \item \textit{Socio-economic status.} Education, occupation, and social class are classic dimensions on which people sort in social networks \citep{mcpherson2001birds}. Since our subjects are all students, we include two measures of family background. 
First, as in \cite{conti2013popularity}, we consider \textit{parental education}, measured for individual $i$ as the average education between the mother and the father and included for a pair $i$ and $j$ in the analysis as the absolute difference in the average parental education between $i$ and $j$. Second, we consider the absolute difference between the self-reported \textit{household income} of $i$ and $j$. \item \textit{Common interests.} Students are more likely to become friends if they share interests, as documented in \cite{marmaros2002peer}. Therefore, we include two variables related to preferences regarding the field of study. First, \textit{STEM best} is a dummy equal to 1 if both $i$ and $j$ received their highest grade in their final high-school exam in the scientific track (which includes topics from science, technology, engineering, or mathematics (STEM) subjects), rather than in humanities or social sciences. Second, \textit{STEM preference} is a dummy variable equal to 1 if both individuals report a STEM subject as the most preferred, irrespective of their grades. \item \textit{Prosocial attitudes.} There exists ample evidence that prosocial attitudes correlate with network positioning, including with who befriends whom \citep{girard2015, branas2010altruism, apicella2012social, kovavrik2012}. We first use the behavior in the Dictator game (DG) elicited together with the networks in Periods 1 and 2 to measure subjects' \textit{altruism}. The DG is a classic experimental protocol, in which a ``Dictator'' splits a fixed amount of money between herself and another anonymous individual, the Recipient. Dictators rarely keep all the money for themselves; on average, 20\% of the money is donated to the recipient, despite the anonymity of the transfers. The donated quantity is usually interpreted as an instance of altruistic behavior. As a measure of altruism, we employ the amount donated in the DG in Period 2, which--we believe--better reflects the social dynamics in the groups under study. Once again, the variable included in the regressions is the absolute difference between the altruism exhibited by $i$ and $j$. In addition, since the amount donated in the DG by an individual generally differs across the two periods and, in general, decreased between the two time periods, we define \textit{altruism learning} as equal to one if the amount given in Period 2 is lower than in Period 1 for both $i$ and $j$ (and zero otherwise). This variable enables us to test whether there can be homophily in learning. We additionally measure \textit{reciprocity} from the answer to the following survey question: ``I am willing to do a boring job to return previous help''. The answers vary from 1 (minimum willingness to help) to 7 (maximum reciprocity) and we construct the variable \textit{reciprocity} between $i$ and $j$ by taking the absolute difference between their answers to the survey question. Last, \textit{volunteering} was elicited from the answer to the question ``Are you a member of a voluntary organization (e.g., Red Cross, NGO, Political Party, Sports Club, Church Choir, economic association,...)?''. We assess homophily in volunteering using a dummy equal to 1 if both $i$ and $j$ answered yes to this question. \item \textit{Self-confidence.} One's self-confidence is an important determinant of her socio-economic outcomes in a range of domains \citep{sharot2011optimism}. 
We measure \textit{self-confidence} from the answer to the question ``When I face a problem I usually trust that I will solve it''; the answers take values from 1 (minimum self-confidence) to 7 (maximum self-confidence). Homophily in self-confidence is captured by the absolute difference between both individuals' self-confidence scores. \item \textit{Risk and time preferences.} Common attitudes to risk and delays may affect who befriends whom. Indeed, \cite{kovarik2014risk} document a correlation between the clustering coefficient and risk aversion in college friendship networks and \cite{girard2015} detects homophily on time but not risk preferences. In our data, both \textit{risk preferences} and \textit{time preferences} were elicited using multiple price lists (see \cite{andersen2006elicitation} for a review). As for risk, participants made 10 binary decisions between two lotteries with different amounts and probabilities. This elicitation was incentivized. In the case of time preferences, they faced 11 binary decisions between a lower immediate or earlier amount of money and a later but higher quantity.\footnote{The elicitation of time preferences was hypothetical. However, a recent paper shows that hypothetical and incentivized time preferences are the same (see \cite{BranasTIME}).} Following the literature, we use the number of times an individual selects the less risky option and the number of decisions in which she chooses the immediate or earlier amount as the measures of risk aversion and impatience, respectively. In both cases, homophily in preferences is reflected in the absolute difference between both individuals' risk and time scores. \item \textit{Cognitive characteristics.} Students in the same class frequently engage in group activities, and sorting on intelligence is probably the earliest evidence of homophily in the literature \cite{almack1922influence}. More recently, \cite{conti2013popularity} suggest that IQ and other cognitive variables might condition the formation of social networks or groups in certain contexts, and \cite{kretschmer2018selection} show that students tend to befriend peers with similar academic achievement. We employ two measures of cognitive abilities in our analysis. First, we generate a variable \textit{reflective} taking a value of 1 if both individuals scored a positive amount in the Cognitive Reflection Test by \cite{frederick2005cognitive} -- see \cite{BranasCRT} for a meta-analysis. Second, we include the absolute difference in CRT between $i$ and $j$, labeled as \textit{CRT}. Both variables reflect whether two individuals have a similar cognitive style, but \textit{CRT} captures it more precisely. Finally, \cite{chapman2018loss} estimate that choice consistency in tasks similar to our risk and time preference elicitation procedure is positively correlated with cognitive abilities, education, and income. Therefore, we generate a variable \textit{inconsistency} and set it equal to 2 if an individual exhibits inconsistent preferences in both the risk and time elicitations (i.e., switches between the risk-averse/patient and risk-loving/impatient options more than once; see \cite{andersen2006elicitation} or \cite{BranasEER} for a recent review), equal to 1 if she exhibits inconsistent preferences in either the risk or the time elicitation, and 0 if she answers the task consistently. Again, we compute homophily in inconsistency as the absolute difference between both individuals' inconsistency scores. 
\item \textit{Political attitudes.} Common political orientation has been shown to be correlated with friendship formation \cite{mcpherson2001birds, marmaros2006friendships, mayer2008old}. We elicit the political orientation of each individual from the answer to the question ``Do you think the size of the public sector in Spain is too large?'' and define a dummy variable \textit{political orientation} equal to one if both individuals state a positive answer. It is an indirect measure of political orientation. \end{enumerate} \end{itemize} \begin{table}[h!] \footnotesize \centering \caption{Potential determinants of Friendship} \begin{tabular}{lll} \hline Variable & Transformation & Description\\\hline Observable & & \\\hline Gender & D & 1 = Both individuals are of the same gender.\\ Smoking & D & 1 = Both individuals are smokers. \\ Common section & D & 1 = Both individuals are in the same section.\\ BMI & A & Absolute difference between the BMI of $i$ and $j$.\\ Friends Period 1 & D & 1 = $i$ and $j$ were friends in Period 1.\\ \hline Unobservable & & \\\hline Parental education & A & Absolute difference between the average parental education of $i$ and $j$.\\ STEM best & D & 1 = Both individuals obtained their highest mark in a STEM subject.\\ STEM preference & D & 1 = Both individuals selected a STEM subject as their most preferred \\ & & subject.\\ Reflective & D & 1 = Both individuals scored more than 0 in the CRT. \\ CRT & A & Absolute difference in cognitive reflection test scores between $i$ and $j$. \\ Household Income & A & Absolute difference between the household income of $i$ and $j$.\\ Altruism learning & D & 1 = Both individuals give less in the dictator game of period 2 than\\ & & in period 1.\\ Altruism & A & Absolute difference between individuals' amount given in the dictator \\ & & game of period 2.\\ Reciprocity & A & Absolute difference between individuals' willingness to return help, \\ & & previously given.\\ Kindness & D & 1 = Both individuals were members of a voluntary organization.\\ Self-confidence & A & Absolute difference between both individuals' self-confidence score.\\ Risk preferences & A & Absolute difference between both individuals' risk score.\\ Time preferences & A & Absolute difference between both individuals' patience score.\\ Inconsistency & A & Absolute difference between both individuals' inconsistency score.\\ Political orientation & D & 1 = Both individuals have a `right-wing' political orientation.\\ \hline \end{tabular} \begin{minipage}{16cm} \footnotesize Transformation=D if a variable is a dummy equal to 1 if $i$ and $j$ share the same characteristic; Transformation=A if the variable is ordinal and the absolute difference between the characteristics of $i$ and $j$ is employed. \end{minipage} \label{Table varlist} \end{table} Table \ref{Table ss} summarizes all the individual characteristics in our sample. We observe that 44\% of the participants are female, 22\% are smokers, 44\% believe that the public sector is too large (suggesting a relatively more right-wing political orientation), and 30\% state a science-oriented (STEM) field as the most preferred. We also note that the average CRT and altruism are relatively low, 0.55 and 1.07, respectively. On the other hand, the average self-confidence index is high, 5.7. \begin{table}[h!] \centering \caption{Summary statistics} \begin{tabular}{lccccc} \hline Variable & N & Mean & Std. Dev. & Min. 
& Max.\\\hline Observable & & & \\\hline Female & 244 & 0.44 & 0.50 & 0 & 1\\ Smoking & 186 & 0.22 & 0.42 & 0 & 1\\ BMI & 186 & 22.53 & 3.23 & 15.60 & 41.87\\ Friends Period 1 & 204 & 2.7 & 2.1 & 0 & 11\\ Friends Period 2 & 246 & 7.3 & 4.5 & 0 & 18\\ \hline Unobservable & & & \\\hline Parental education & 186 & 2.73 & 1.53 & 0 & 5\\ STEM best & 217 & 0.28 & 0.45 & 0 & 1\\ STEM preference & 217 & 0.30 & 0.46 & 0 & 1\\ Reflective & 274 & 0.54 & 0.50 & 0 & 1\\ CRT & 217 & 0.55 & 0.73 & 0 & 3 \\ Household Income & 185 & 2.39 & 1.21 & 0 & 5\\ Altruism learning & 274 & 0.38 & 0.49 & 0 & 1\\ Altruism & 168 & 1.07 & 1.06 & 0 & 5\\ Reciprocity & 186 & 3.57 & 1.81 & 1 & 7\\ Volunteering & 186 & 0.17 & 0.38 & 0 & 1\\ Self-confidence & 186 & 5.74 & 1.23 & 1 & 7\\ Risk preferences & 195 & 4.94 & 1.43 & 0 & 10\\ Time preferences & 168 & 5.60 & 2.54 & 0 & 11\\ Inconsistency & 202 & 0.15 & 0.36 & 0 & 1\\ Political orientation (right) & 186 & 0.44 & 0.50 & 0 & 1\\ \hline \end{tabular} \label{Table ss} \end{table} \subsection{Friendship model} \label{sec: model} Our aim is to detect the main determinants of friendship formation and estimate their relative importance in the friendship formation process. The specification of our friendship model relies on homophily theories, positing that individuals prefer to befriend individuals who are similar to them. As a result, we expect that two individuals are more likely to be friends if they share personal characteristics. However, there is no agreement in the literature about the importance of the different characteristics in explaining link formation, nor about whether observable characteristics, such as gender or race, should matter more than features invisible to the naked eye, such as cognitive or personality traits. As a result, \textit{a priori}, we have no specific empirical model and little guidance as to which factors are likely to be influential in the friendship formation process. We deal with this problem, known as \textit{model uncertainty} in the literature, using a Bayesian Model Averaging (BMA) approach \cite{hoeting1999bayesian}. In this section, we present the friendship formation model and the BMA methodology. Our friendship model starts with a basic pairwise probability model that relates the probability that individuals $i$ and $j$ are friends in Period 2 (May 2011) to their similarity in the characteristics presented and explained in Section \ref{sec: var}, $X_{ij}$, conditioning on whether they were friends in Period 1. Such a specification follows standard approaches to modelling network formation in the literature (see, e.g., the friendship formation models in \cite{marmaros2006friendships, mayer2008old} or the co-authorship model of \cite{fafchamps2010matching}). Formally, \begin{equation} P(g_{ij,2}=1 \mid X_{ij})=\beta_{0}+\beta_{1}|X_{i}-X_{j}|+\beta_{2} D_{ij}+\beta_{3} g_{ij,1}+u_{ij}, \label{eq: friend} \end{equation} where $|X_{i}-X_{j}|$ is the vector of absolute differences between the non-binary measures of individuals $i$ and $j$, while $D_{ij}$ is a vector containing all dummy variables that capture common dichotomous characteristics as specified in Table \ref{Table varlist}. The variable $g_{ij,t}=1(0)$ if $i$ and $j$ were (not) friends in Period $t \in \{1,2\}$, and $u_{ij}$ is the disturbance term. Below, we also estimate this model for females and males separately to analyse whether the friendship determinants differ across genders.
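To make the mapping from individual characteristics to the dyadic regressors in Eq. (\ref{eq: friend}) concrete, the following minimal Python sketch builds one observation per unordered pair of students, with absolute differences for the ordinal traits and shared-characteristic dummies, and fits the resulting linear probability model by OLS. All data-frame and column names are illustrative assumptions; this is a schematic illustration, not the code used for our estimates.

\begin{verbatim}
# Sketch: build the dyadic data set behind Eq. (1) and fit it by OLS.
from itertools import combinations
import pandas as pd
import statsmodels.api as sm

def build_dyads(students, friends_t1, friends_t2):
    # students: DataFrame indexed by student id, with (assumed) columns
    # gender, smoker, section, risk_score, time_score, inconsistency.
    # friends_t1 / friends_t2: sets of frozenset({i, j}) friendship pairs.
    rows = []
    for i, j in combinations(students.index, 2):
        a, b = students.loc[i], students.loc[j]
        rows.append({
            "friends_t2": int(frozenset((i, j)) in friends_t2),
            "friends_t1": int(frozenset((i, j)) in friends_t1),
            "same_gender": int(a["gender"] == b["gender"]),
            "both_smokers": int(bool(a["smoker"]) and bool(b["smoker"])),
            "same_section": int(a["section"] == b["section"]),
            "risk_diff": abs(a["risk_score"] - b["risk_score"]),
            "time_diff": abs(a["time_score"] - b["time_score"]),
            "inconsistency_diff": abs(a["inconsistency"] - b["inconsistency"]),
        })
    return pd.DataFrame(rows)

# dyads = build_dyads(students, friends_t1, friends_t2)
# X = sm.add_constant(dyads.drop(columns="friends_t2"))
# fit = sm.OLS(dyads["friends_t2"], X).fit(cov_type="HC1")
\end{verbatim}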
\subsection{Methodology} \label{sec: bma} We consider two different approaches to identify the underlying factors explaining link formation in the network: BMA and, mostly for robustness, weighted-average least squares (WALS; see Section \ref{sec: rob}). Our benchmark estimation method is the BMA reported in Section \ref{sec: findings}. BMA is specifically designed to address model uncertainty by estimating model (\ref{eq: friend}) for all possible combinations of the regressors and taking a weighted average over all the candidate models, where the weights are determined by applying Bayes' rule. In simple terms, BMA only selects the determinants of link formation that are not too sensitive to the inclusion or removal of other factors in the model specification. Therefore, it explicitly accounts for the omitted-variable and multicollinearity issues discussed in Section \ref{sec: intro}. The probability that model $j$, $M_j$, is the ``true'' model given the data $y$ (i.e., the posterior model probability given a prior model probability) is defined as \begin{equation} P(M_j|y)=\frac{P(y|M_j)P(M_j)}{\sum_{i=1}^{2^k}P(y|M_i)P(M_i)}, \end{equation} where $P(y|M_j)$ is the marginal likelihood of model $j$, $P(M_j)$ is the prior probability of model $j$, and the denominator $\sum_{i=1}^{2^k}P(y|M_i)P(M_i)$ is the integrated likelihood of the data, obtained by summing over all $2^k$ candidate models. We employ the priors specified in \cite{magnus2010comparison}. In particular, they consider uniform priors on the model space, so each model has the same prior probability of being the true one. Moreover, they use a Zellner $g$-prior structure for the regression coefficients and set the hyperparameter $g=\frac{1}{\max(N,K^{2})}$ as in Fernandez et al. (2001), where $K$ is the number of regressors and $N$ the number of observations. This hyperparameter measures the degree of prior uncertainty over the coefficients. For robustness, we also consider ``random theta'', ``fixed'' and ``uniform'' priors for the model space. In the next section, we present the estimates of the \textit{posterior inclusion probability (PIP)} of an explanatory factor, which can be interpreted as the probability that a particular regressor belongs to the true model of friendship. We also present results for the posterior mean, i.e., the coefficients averaged over all models, and the posterior standard deviation, which describes the uncertainty in the parameters and the model. In the robustness section (Section \ref{sec: rob}), we present results of WALS, a recent model-averaging approach which takes an intermediate position between frequentist and Bayesian methods. \section{Results} \label{sec: findings} Table \ref{Table BMAall} reports our main estimation results, obtained using the BMA approach described in Section \ref{sec: bma} over the 168 students who participated in all the surveys and economic games throughout the whole academic year. The table lists all the determinants of friendship formation under scrutiny, ranked by their estimated posterior inclusion probability ($PIP$). The first column lists the different dimensions of homophily and the second column presents the $PIP$ of each potential determinant of friendship formation in May 2011.
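For intuition about how these $PIP$s are obtained, the sketch below enumerates all $2^K$ candidate models, scores each one with the closed-form marginal likelihood implied by a Zellner $g$-prior (the null-based Bayes factor of Liang et al., 2008), and accumulates posterior inclusion probabilities under a uniform model prior. It is a toy illustration of the mechanics only: the results in Table \ref{Table BMAall} come from the priors and estimation machinery described above, and for large $K$ one would sample the model space rather than enumerate it.

\begin{verbatim}
# Sketch: exhaustive BMA with a Zellner g-prior and a uniform model prior.
from itertools import combinations
import numpy as np

def log_marginal(y, X, g):
    # log Bayes factor of the model spanned by the columns of X
    # against the intercept-only model (g-prior, Liang et al. 2008).
    n, k = len(y), X.shape[1]
    if k == 0:
        return 0.0
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    r2 = 1.0 - np.sum((yc - Xc @ beta) ** 2) / np.sum(yc ** 2)
    return 0.5 * (n - 1 - k) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))

def posterior_inclusion_probs(y, X, g):
    K = X.shape[1]
    models = [S for size in range(K + 1) for S in combinations(range(K), size)]
    logml = np.array([log_marginal(y, X[:, list(S)], g) for S in models])
    post = np.exp(logml - logml.max())
    post /= post.sum()                 # uniform model prior cancels out
    pip = np.zeros(K)
    for S, p in zip(models, post):
        pip[list(S)] += p              # sum probs of models containing each regressor
    return pip

# pip = posterior_inclusion_probs(y, X, g=1.0 / max(len(y), X.shape[1] ** 2))
\end{verbatim}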
As a rule of thumb, a factor is considered a very robust predictor of friendship if $PIP \geq 0.80$.\footnote{This is a relatively conservative threshold compared to the literature, but note from Tables \ref{Table BMAall}--\ref{Table BMAmale} that all our conclusions hold for any threshold $PIP \geq 0.3$.} We find that the most robust determinants for the full sample are the variables capturing common gender, common smoking habits, belonging to the same section, and being friends already before entering college. The only observable dimension under study that does not predict friendship robustly is BMI. Moreover, all the robust determinants affect friendship with the expected sign (see the posterior mean in the third column of Table \ref{Table BMAall}). More precisely, people are 1.7 percentage points more likely to befriend each other if they have the same gender. Smoking is an even stronger driver of friendship than gender: if two individuals smoke, they have a 3.4 percentage point higher probability of forming a tie in our network. This is consistent with \cite{christakis2008collective}, \cite{hsieh2018} and \cite{lorant2019peer}, who report evidence of strong peer effects in the uptake of smoking. Last, although people were assigned to the study sections randomly, being assigned to the same section plays an important role in the link formation of students' networks, increasing the probability of being friends by 6.4 percentage points. This corroborates the existing evidence regarding the key role of meeting opportunities in network formation. Unsurprisingly, being friends in October 2010 is the strongest predictor of friendship in May 2011. \begin{table}[htp!] \caption{Determinants of link formation in students' network} \centering \vspace{0.5cm} \begin{tabular}[c]{lccc}\hline \\ & PIP & Post. mean & Post. std. dev. \\ \hline \\ \textbf{Common Gender}&1& 0.017 & 0.003\\[4pt] \textbf{Common Section}&1& 0.064 &0.003\\[4pt] \textbf{Friends$_{t-1}$} &1&0.435&0.023\\[4pt] \textbf{Both Smokers}&0.97&0.034&0.010\\[4pt] Inconsistent diff. & 0.22 & -0.022& 0.005\\[4pt] Altruism diff.&0.08&-0.003&0.001\\[4pt] CRT diff. & 0.04 & -0.0001 & 0.0008\\[4pt] Both Reflective &0.03&0.0002&0.011\\[4pt] Time pref. diff. &0.02&-0.0000&0.0001\\[4pt] Income diff. &0.02&-0.0000&0.0003\\[4pt] Risk diff.&0.01& 0.0000 &0.0001\\[4pt] Reciprocity diff.&0.01& 0.0000 &0.0002\\[4pt] Self-confidence diff.&0.01& -0.0000 &0.0002\\[4pt] BMI diff.&0.01& -0.0000 &0.0001\\[4pt] Parent educ. diff.&0.01& -0.0000 &0.0001\\[4pt] Both volunteers&0.01& -0.0000 &0.0012\\[4pt] Both STEM best grade&0.01& 0.0000 &0.0006\\[4pt] Both STEM pref. &0.01& 0.0000 &0.0006\\[4pt] Both altruism learner&0.01& -0.0000 &0.0003\\[4pt] Both right&0.01& 0.0000 &0.0004\\[4pt] Observations & 8515 & 8515& 8515\\ \hline \\ \end{tabular} \begin{minipage}{14cm} \footnotesize \textit{Notes}: Column 1 presents the posterior inclusion probability ($PIP$). Robust determinants, those with $PIP \geq 0.8$, are shown in bold. Column 2 shows the posterior mean. Column 3 reports the posterior standard deviation. The dependent variable is equal to 1 if $i$ and $j$ are friends in Period 2, 0 otherwise. The results are obtained by using a uniform prior for the prior model probability and a BRIC prior for the hyperparameter that measures the degree of prior uncertainty on coefficients, $g=1/\max(N,K^{2})$. \end{minipage} \label{Table BMAall} \end{table} In contrast to the observable dimensions, none of the imperceptible characteristics matter for friendship formation in our data.
Inconsistent risk and time preferences are the best predictor among them, with $PIP=0.22$, a relatively low inclusion probability that cannot rival the $PIP$s estimated for the observable features. The remaining unobservable characteristics exhibit an estimated $PIP < 0.1$. Hence, the first conclusion of our analysis is that, if we focus on the whole sample under study, only observable dimensions matter for friendships, while none of the unobservable factors plays a robust role in the friendship formation process in our data. To understand whether the friendship formation processes differ across genders, we reestimate the friendship model separately for the female and male participants. The results are presented in Tables \ref{Table BMAfemale} and \ref{Table BMAmale} in the appendix, respectively. In line with the pooled estimates, common gender, smoking habits, and belonging to a common section are important drivers of friendship for women; all these factors have $PIP=1$. In stark contrast, the only robust predictor of friendship formation among men is belonging to the same study section, which increases the probability of forming a link by 6.8 percentage points ($PIP=1$). The gender differences notwithstanding, the gender-specific estimates corroborate that observable characteristics are important drivers of friendship formation while imperceptible features play a minor role. \section{Robustness} \label{sec: rob} In this section, we check the robustness of the results to (i) employing different priors in the BMA model, (ii) applying a different estimation method designed to identify the most robust drivers of friendship formation, and (iii) removing the students who already had friends in Period 1 (October 2010). The first two tests check whether our results are an artifact of the estimation procedure, while the third allows us to see whether they are driven by preexisting social ties in the group. As for (i), we present results for an analysis using two alternative priors for the model probability: the ``random theta'' prior proposed by \cite{ley2009effect}, who advocate a \textit{binomial-beta hyperprior} on the \textit{a priori inclusion probability}; and \textit{fixed common prior inclusion probabilities} for each regressor as in \cite{sala2004determinants}. Figure \ref{fig: BMA} shows that our main findings are largely robust to the specification of the model priors: the most important drivers of link formation in our data are the same regardless of the prior specification. \begin{figure}[htp!] \caption{Determinants of link formation in students' network: PIP using different model priors} \label{fig: BMA} \begin{centering} \begin{tabular}{cc} \includegraphics[scale=0.30, angle=0]{bmaplot.eps} \\ \end{tabular} \par\end{centering} \noindent {\footnotesize{Note: The random prior corresponds to the ``random theta'' prior by \cite{ley2009effect}, who suggest a binomial-beta hyperprior on the a priori inclusion probability. The fixed prior corresponds to fixed common prior inclusion probabilities for each regressor as in \cite{sala2004determinants}. Unif corresponds to the uniform model prior. The sample includes 8,515 observations.}} \end{figure} Second, we test whether the findings hold when using alternative methods for dealing with model uncertainty. To that end, we apply the WALS method introduced by \cite{magnus2010comparison}.
The rule of thumb with this method is that an explanatory factor is considered robust if the absolute value of the $t$-statistic is larger than 2. The results for the full sample, presented in Table \ref{Table WALS}, show that the most robust drivers are the same as in Table \ref{Table BMAall}. Nonetheless, this alternative method suggests an additional determinant of friendship formation: \textit{altruism}, with a $t$-statistic of $-2.79$. The weighted-average coefficient on the absolute difference between the amounts donated by the two individuals in the dictator game is $-0.004$, implying that greater similarity in altruism is associated with a higher probability of friendship: a one-unit increase in the altruism distance reduces the probability that two individuals befriend each other by 0.4 percentage points. This does not affect our conclusion that observable features are considerably more relevant than imperceptible characteristics, but it suggests that some of the latter should still be considered, and future work should determine which features robustly predict tie formation across contexts and cultures (see the next section for a discussion of this and other issues). \begin{table}[h!] \centering \caption{Determinants of link formation in students' network: A Weighted Average Least Squares approach (WALS). Full sample.} \vspace{0.5cm} \begin{tabular}[c]{lccc}\hline \\ & Coef. & Std. err. & $t$-stat. \\ \hline \\ \textbf{Common Section}&0.059& 0.003 & 20.20\\[4pt] \textbf{Friends$_{t-1}$} & 0.420&0.021& 19.81\\[4pt] \textbf{Common Gender}& 0.016& 0.003 & 6.04\\[4pt] \textbf{Both Smokers}&0.031&0.008& 3.89\\[4pt] \textbf{Altruism diff.} &-0.004& 0.001& -2.79\\[4pt] CRT diff. & -0.003 & 0.002 & -1.81\\[4pt] Inconsistent diff. & -0.007 & 0.004 & -1.75\\[4pt] Both Reflective & 0.005&0.003 & 1.47\\[4pt] Parent educ. diff.&-0.001& 0.0010 & -1.18\\[4pt] Reciprocity diff.&0.0008& 0.0008 & 1.01\\[4pt] Time pref. diff. &-0.0005& 0.0006 & -0.81\\[4pt] Both altruism learner& -0.0018& 0.0026 & -0.71\\[4pt] Both right&0.002& 0.0033 &0.59\\[4pt] Risk diff.&0.0005& 0.0009 &0.57\\[4pt] Income diff. & -0.0006& 0.0013&-0.43\\[4pt] Both volunteers&-0.0034& 0.0087&0.39\\[4pt] BMI diff.&0.0002& 0.0005 &0.32\\[4pt] Both STEM pref. &0.0007& 0.0044 &0.16\\[4pt] Self-confidence diff.&-0.0001& 0.0012 & -0.05\\[4pt] Both STEM best grade&-0.0001& 0.0051 &-0.03\\[4pt] Observations & 8515 & 8515& 8515\\ \hline \\ \end{tabular} \begin{minipage}{12.1cm} \footnotesize \textit{Notes}: The results are obtained by using the Weighted Average Least Squares approach introduced by \cite{magnus2010comparison}. Determinants with $|t| > 2$ are considered robust and are shown in bold. \end{minipage} \label{Table WALS} \end{table} Finally, to focus on the pure formation process (free of any pre-college influences), we reestimate our benchmark model excluding all the students who already had at least one friend in the first week of their first academic year (Period 1). This decreases the number of potential pairs to 8483 observations, but the results presented in Table \ref{Table BMAall_es} are both quantitatively and qualitatively similar to those obtained using the full sample. \begin{table}[h!] \caption{Determinants of link formation in students' network: Excluding students with friends in period 1.} \centering \vspace{0.5cm} \begin{tabular}[c]{lccc}\hline \\ & PIP & Post. mean & Post. std. dev. \\ \hline \\ \textbf{Common Gender}&1& 0.015 & 0.003\\[4pt] \textbf{Common Section}&1& 0.064 &0.003\\[4pt] \textbf{Both Smokers}&0.87&0.026&0.013\\[4pt] Inconsistent diff.
& 0.07 & -0.0006& 0.0023\\[4pt] Altruism diff.&0.06&-0.0002&0.0007\\[4pt] CRT diff. & 0.05 & -0.0002 & 0.0008\\[4pt] Time pref. diff. &0.03&-0.0000&0.0002\\[4pt] Both Reflective &0.03&0.0001&0.0010\\[4pt] Both volunteers&0.02& -0.0001 &0.0016\\[4pt] Income diff. &0.02&-0.0000&0.0002\\[4pt] Risk diff.&0.01& 0.0000 &0.0001\\[4pt] Reciprocity diff.&0.01& 0.0000 &0.0001\\[4pt] Self-confidence diff.&0.01& -0.0000 &0.0002\\[4pt] BMI diff.&0.01& 0.0000 &0.0001\\[4pt] Parent educ. diff.&0.01& -0.0000 &0.0001\\[4pt] Both STEM best grade&0.01& 0.0000 &0.0006\\[4pt] Both STEM pref. &0.01& 0.0000 &0.0006\\[4pt] Both altruism learner&0.01& -0.0000 &0.0003\\[4pt] Both right&0.01& 0.0000 &0.0005\\[4pt] Observations & 8483 & 8483& 8483\\ \hline \\ \end{tabular} \begin{minipage}{14cm} \footnotesize \textit{Notes}: Column 1 presents the posterior inclusion probability ($PIP$). Robust determinants, those with $PIP \geq 0.8$, are shown in bold. Column 2 shows the posterior mean. Column 3 reports the posterior standard deviation. The dependent variable is equal to 1 if $i$ and $j$ are friends in Period 2, 0 otherwise. The results are obtained by using a uniform prior for the prior model probability and a BRIC prior for the hyperparameter that measures the degree of prior uncertainty on coefficients, $g=1/\max(N,K^{2})$. \end{minipage} \label{Table BMAall_es} \end{table} \section{Discussion} \label{sec: discussion} In this study, we argue that the literature has proposed a large array of dimensions of homophily without accounting for the model uncertainty problem inherent in the identification of the determinants of friendship formation. This research aims at initiating such a research line. Focusing on 20 particular individual characteristics that cover a large share of the determinants of segmentation of heterogeneous populations analyzed so far, we find that the robust determinants of friendship formation are gender, smoking habits, and geographical/spatial proximity. Hence, our data suggest that, in our context of an emerging social structure, all robust predictors of friendships are directly visible to the naked eye. Due to the homogeneity of our sample, we do not consider other observable attributes such as race, ethnicity, and religion, but the literature shows that these characteristics are key predictors of links in many contexts \citep{lazarsfeld1954friendship, ibarra1995race, mcpherson2001birds, moody2001race, marmaros2006friendships, currarini2010identifying}. We thus position them among the robust predictors of friendships generally. In contrast, we find little evidence that unobservable features predict who befriends whom in an emerging social network among university freshmen. Neither political orientation, family background, nor cognitive and behavioral traits survive our robustness tests. Our findings corroborate some existing results and contradict others. First, gender is among the most robust determinants of friendship in the literature, and our study is no exception. Since gender segregation might have contributed to, e.g., the gender wage and output gaps \citep{ductor2018gender, lindenlaub2019}, we corroborate that policies aiming at larger gender integration in social and professional networks could reduce gender inequalities. As for smoking, we corroborate the findings of \cite{hsieh2018} and \cite{christakis2008collective} that smoking influences friendship formation. However, our results suggest that this factor is more important for women. If corroborated in other studies, this finding calls for gender-specific policies combating smoking.
Last, we confirm that friendship formation can be to a large extent malleable and shaped by policy interventions, as the random assignment of people into groups has large consequences for who befriends whom (see also \cite{marmaros2006friendships, girard2015}). In contrast, the fact that the unobservable factors under scrutiny, such as personality, economic preferences, family background, and political orientation, are not robust predictors of link formation in the students' networks contradicts other studies (e.g. \cite{almack1922influence, marmaros2006friendships, christakis2007spread, mayer2008old, girard2015}). We do not argue that none of them is correlated with friendship. Rather, we argue that whether they can predict friendship depends on other covariates in the link formation model, suggesting that the unobservable, imperceptible attributes are not the primary reason for how people become friends. Our results have implications for several strands of the network literature. First, it is well documented that friendship networks have far-reaching and long-lasting socio-economic consequences in human life (see \cite{burt1997contingent, christakis2007spread, christakis2008collective, cohen2008small, conti2013popularity, jackson2017economic} for a few examples of the large literature on the topic). We contribute to this literature by emphasizing which individual attributes robustly predict who befriends whom. Understanding the determinants of friendship and segregation patterns is crucial for evaluating the diffusion of information, behaviors, and diseases in general populations and for designing optimal policies aimed at maximizing or minimizing that diffusion, targeting the integration of minorities, and combating gender and race inequalities. Second, our results deliver an important message for the estimation of network effects. One of the main empirical challenges for the causal estimation of network effects is to account for all factors that influence friendship and network formation \citep{manski1993identification, boucher2016some, jackson2017economic}. Nevertheless, many individual characteristics are typically not observed by researchers or policy-makers, and they are believed to influence both the outcomes under study and the link formation. These concerns have motivated a series of methodological contributions enabling the causal estimation of network effects, each under different assumptions (examples include \cite{bramoulle2009identification, de2010identification, hsieh2016social}). It is commonly assumed that people exhibit homophily in the unobserved attributes. Our results suggest that such concerns might be less serious than expected and that one can account for the endogeneity of the network structure by controlling for observable personal attributes, at least in the context of friendship networks. Future research should determine to what extent our results replicate in other data and contexts. Third, we may also indirectly inform the literature on the origins of homophily by testing the relevance of different classes of attributes in friendship formation. In particular, our results suggest that observable characteristics might be the primary driving force behind homophilous interactions, while unobservable, imperceptible features might be solely a by-product of the many correlations between the observable and imperceptible attributes. Naturally, this claim requires further analysis using different data, contexts, and definitions of links before one can widely accept its validity.
\clearpage \phantomsection\addcontentsline{toc}{section}{Bibliography} \bibliographystyle{chicago}
\section{Introduction} In this work we present preliminary results of RBC/UKQCD's $D$ and $D_s$ meson decay constant studies using $N_f=2+1$ flavour gauge field ensembles with domain wall fermions, updating refs~\cite{Boyle:2015rka,Boyle:2015kyy}. We aim to predict the $D$ and $D_s$ decay constants with a fully controlled systematic error budget. The decay constant $f_{D_q}$ of the $D_{(s)}$ meson is defined as \eq{ \matrixel{0}{A_{cq}^\mu}{D_q(p)} = f_{D_q} p^\mu_{D_q}, \label{eq:decayconst} } where $q=d,s$ and the axial current is defined as $A_{cq}^\mu = \overline{c}\gamma_\mu \gamma_5 q$. Combined with experimental measurements of the decay widths $\Gamma \left(D_{(s)} \to l\nu_l\right)$, knowledge of the decay constants allows the extraction of the CKM matrix elements $\abs{V_{cd}}$ and $\abs{V_{cs}}$ and hence a test of the unitarity of the CKM matrix. For this we use seven ensembles specified in Tables \ref{tab:ensembles} and \ref{tab:light_strange} with the Iwasaki gauge action~\cite{Iwasakiaction2} and the domain wall fermion action~\cite{Kaplan:1992bt,Shamir:1993} using the Moebius kernel~\cite{Brower:Mobius}. We use ensembles with three different lattice spacings in the range $a^{-1}=1.73-2.77\,\mathrm{GeV}$, including two ensembles with near-physical pion masses~\cite{PhysicalPoint}. After a brief summary of our ensembles and set-up in Section \ref{sec:ensembles}, we outline our analysis strategy to obtain predictions for physical masses in the continuum limit and give preliminary results in Section \ref{sec:analysis}. Finally, we outline the status of current projects and conclude in Section \ref{sec:conclusions}. \section{Ensembles and Run Set-Up} \label{sec:ensembles} \begin{table} \begin{center} \begin{tabular}{|cc|ccccc|cc|} \hline $\beta$ & Name & $L^3 \times T /a^4$ & $a^{-1}[\mathrm{GeV}]$ & $L\,[\mathrm{fm}]$ & $m_\pi\,[\mathrm{MeV}]$ & $m_\pi L $ & $N_{\mathrm{conf}}$ & $N_\mathrm{meas}$\\\hline 2.13 & C0 & $48^3 \times 96$ & 1.7295(38) & 5.4760(24) & {\bf 139} & 3.863(05) & 88& 4224\\ 2.13 & C1 & $24^3 \times 64$ & 1.7848(50) & 2.6532(15) & 340 & 4.570(10) &100& 3200\\ 2.13 & C2 & $24^3 \times 64$ & 1.7848(50) & 2.6532(15) & 430 & 5.790(10) &101& 3232\\\hline 2.25 & M0 & $64^3 \times 128$ & 2.3586(70) & 5.3539(31) & {\bf 139} & 3.781(05) &80& 2560\\ 2.25 & M1 & $32^3 \times 64$ & 2.3833(86) & 2.6492(19) & 300 & 4.072(11) &83& 2656 \\ 2.25 & M2 & $32^3 \times 64$ & 2.3833(86) & 2.6492(19) & 360 & 4.854(12) &76& 1216\\\hline 2.31 & F1 & $48^3 \times 96$ & 2.774(10)$\hphantom{0}$ & 3.4141(24) & 235 & 4.053(06) &82& 3936\\ \hline \end{tabular} \end{center} \caption{This table summarises the main parameters of the ensembles used. All of these use the Iwasaki gauge action with $2+1$ flavours of domain wall fermions in the sea. $N_\mathrm{conf}$ and $N_\mathrm{meas}$ refer to the number of (decorrelated) configurations used and the total number of measurements, respectively.} \label{tab:ensembles} \end{table} Table \ref{tab:ensembles} provides a brief summary of the ensembles used for this study. The sea quarks of all ensembles are Moebius domain wall fermions~\cite{Brower:Mobius}. The ensembles C1, C2, M1 and M2 use the Moebius scale factor $\alpha=1$, which reproduces Shamir domain wall fermions~\cite{Shamir:1993}, whilst C0, M0 and F1 have been generated with $\alpha=2$, ensuring the same approach to the continuum limit, as outlined in ref \cite{PhysicalPoint}. The domain wall parameters of the sea quarks (i.e.
the domain wall height $M_5$ and extent of the fifth dimension $L_s$) are listed in Table \ref{tab:light_strange}. We keep these same values for the valence light and strange quarks. The physical value of the strange quark mass in bare lattice units is determined by reproducing the procedure of ref \cite{PhysicalPoint} including the new ensemble F1 and the results are listed in Table \ref{tab:light_strange}. Whilst the light quark mass is always kept unitary, we adjust the valence strange quark mass to its physical value (compare Table \ref{tab:light_strange}) on measurements on the ensembles C1, C2, M1 and M2. On the remaining ensembles the valence strange quark mass is kept to be the unitary one, forcing a small correction to its physical value, which will be discussed below. \begin{table} \begin{center} \begin{tabular}{|c||l|ll|l||r|l|} \hline & \multicolumn{4}{c||}{Moebius/Shamir with $M_5=1.8$} & \multicolumn{2}{c|}{Moebius with $M_5=1.6$} \\\hline Name & $L_s$ & $am_l^\mathrm{sea}$ & $am_s^\mathrm{sea}$ & $am_s^\mathrm{phys}$ & $L_s$ & $am_h$\\\hline C0 & 24 & 0.00078 & 0.0362 & 0.03580(16) & 12 & 0.30, 0.35, 0.40 \\ C1 & 16 & 0.005 & 0.04 & 0.03224(18) & 12 & 0.30, 0.35, 0.40 \\ C2 & 16 & 0.01 & 0.04 & 0.03224(18) & 12 & 0.30, 0.35, 0.40 \\\hline M0 & 12 & 0.000678 & 0.02661 & 0.02540(17) & 8 & 0.22, 0.28, 0.34, 0.40 \\ M1 & 16 & 0.004 & 0.03 & 0.02477(18) & 12 & 0.22, 0.28, 0.34, 0.40\\ M2 & 16 & 0.006 & 0.03 & 0.02477(18) & 12 & 0.22, 0.28, 0.34, 0.40\\\hline F1 & 12 & 0.002144 & 0.02144 & 0.02132(17) & 12 & 0.18, 0.23, 0.28, 0.33, 0.40\\ \hline \end{tabular} \end{center} \caption{Ensemble and measurement parameters for the propagators used on RBC/UKQCD's $N_f=2+1$ flavour Iwasaki gauge action ensembles. The choice of $(M_5,L_s)$ differs between the light and strange (left) and the heavy (right) propagators, leading to a mixed action.} \label{tab:light_strange} \end{table} In ref \cite{Boyle:2016imm} we studied the Moebius domain wall parameter space with the intention of finding a region in which discretisation effects for charmed meson observables are small. We found that the choice of the domain wall height $M_5=1.6$ yields optimal results, provided the bound $am^\mathrm{bare}_h \lesssim 0.4$ is observed. In this work we adopt these choices for the charm quark. We note that this leads to a mixed action since the two quarks entering the current have a different discretisation due to the different choice of $M_5$. However, we show effects from this are small~\cite{dynamical2016}. We use stochastic $\mathbb{Z}_2\times\mathbb{Z}_2$-sources (Z2-Wall)~\cite{Foster:1998vw} on every fourth ($\Delta T=4$) time plane for M0 and M2 and every second ($\Delta T=2$) time plane for all other ensembles, yielding a stochastic estimate ($L^3 \times T/\Delta T$) for the full volume average of the correlation functions. The detailed statistics are listed in Table \ref{tab:ensembles}. Light and strange quark propagators have been computed with the HDCG~\cite{HDCG} algorithm, whilst for the heavy quark propagators a CG inverter was used. \section{Decay Constant Analysis} \label{sec:analysis} To determine the decay constant we are interested in the matrix element $\matrixel{0}{A_{cq}^4}{D_q({\bf p}={\bf 0})}$ (compare eq \eqref{eq:decayconst}). Masses and matrix elements can be extracted from fits to correlation functions. 
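As a rough illustration of such an extraction (and not the simultaneous $\avg{AP}$/$\avg{PP}$ fit used in our analysis, which is described next), the following sketch fits a single pseudoscalar correlator with a two-state ansatz that respects the periodic temporal boundary; the temporal extent, starting values and array names are assumptions chosen for illustration only.

\begin{verbatim}
# Sketch: two-state fit of a pseudoscalar two-point function C(t) on a
# lattice of temporal extent T; ground- and excited-state amplitudes and
# masses (in lattice units) are the fit parameters.
import numpy as np
from scipy.optimize import curve_fit

def two_state(t, A0, m0, A1, m1, T=96):
    c0 = A0 * (np.exp(-m0 * t) + np.exp(-m0 * (T - t)))
    c1 = A1 * (np.exp(-m1 * t) + np.exp(-m1 * (T - t)))
    return c0 + c1

# t, C, dC = ...   # time slices, correlator averages, statistical errors
# popt, pcov = curve_fit(two_state, t, C, sigma=dC, absolute_sigma=True,
#                        p0=[1e-3, 0.7, 1e-3, 1.2])
\end{verbatim}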
We simultaneously fitted the ground state and the first excited state of the meson two-point functions $\avg{AP}$ and $\avg{PP}$ where $A$ is the operator for the temporal component of the axial current and $P$ is the pseudoscalar operator. We studied the impact of the choice of fit ranges by systematically varying the initial ($t_\mathrm{min}$) and final ($t_\mathrm{max}$) time slices entering the fit and conservatively chose these values as shown in Figure \ref{fig:fitranges}. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{{C0_lh_0.4_un_ex_fit_variation}.pdf} \includegraphics[width=0.45\textwidth]{{C0_sh_0.4_un_ex_fit_variation}.pdf} \end{center} \caption{Variation of fit ranges illustrated for the heaviest mass point of the ensemble C0 for a light-heavy (left) and a strange-heavy (right) meson. The three panels correspond to fit results of the ground state. The $x$-axis shows the initial time slice $t_\mathrm{min}$ entering the fit, whilst the choice of symbol and colour indicates the final time slice $t_\mathrm{max}$ included in the fit. The grey band shows the (conservatively) chosen fit for the subsequent analysis with fit ranges $[t_\mathrm{min},t_\mathrm{max})=[8,30)$ (light-heavy) and $[12,37)$ (strange-heavy), respectively.} \label{fig:fitranges} \end{figure} From the matrix elements we can extract the decay constants $f_P$ with $P=D,D_{s}$ and their ratio. The decay amplitude $\Phi_P = f_P\sqrt{m_P}$ displays a nearly linear behaviour with the inverse heavy quark mass, so for convenience we carry out the remaining analysis in terms of this quantity. The following procedure is two-fold. We first renormalise the decay constants and correct for the small mistuning in the strange quark mass mentioned above. We then carry out a combined fit to obtain physical heavy and light quark masses (by using appropriate meson masses) and remove lattice artifacts. \subsection{Renormalisation and strange quark mistuning} Due to the mixed action approach, we do not have a conserved heavy-light current. Instead we determine the renormalisation constant $\mc{Z}_A$ from the ratio of local and conserved light-light currents and renormalise the data using this. From an NPR study we find that the correction due to this is below the \%-level~\cite{dynamical2016}. We already corrected for the valence strange quark mistuning on the ensembles C1, C2, M1 and M2 by simulating directly at the physical strange quark mass. On the other ensembles (C0, M0, F1) we need to correct the strange quark mass by -1.1(5)\%, -4.8(7)\% and -0.6(8)\%, respectively. We define dimensionless coefficients $\alpha_\mc{O}$ which we assume to be independent of the light quark mass, but take into account their lattice spacing and heavy quark mass dependence. They are defined as the slope of an observable with the strange quark mass, \eq{ \mc{O}^\mathrm{phys}(a,am_h) = \mc{O}^\mathrm{uni}(a,am_h) \left(1 +\alpha_\mc{O}(a,am_h) \frac{m_s^\mathrm{phys}-m_s^\mathrm{sea}}{m_s^\mathrm{phys}} \right). } Having found the coefficients $\alpha_\mc{O}$ for the relevant observables $\mc{O}$ we can correct C0 and M0 directly. For F1 we first interpolate to the corresponding values of the lattice spacing and the heavy quark mass, set by $m_{\eta_c}$. A first impression of the renormalised data at the physical strange quark mass is shown in Figure \ref{fig:data}. 
We notice the linear behaviour in the inverse heavy quark mass and the fact that the bound $am_h\lesssim 0.4$ prevents us from reaching the physical charm quark mass on the coarse ensembles. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{{fmD_charmfit_un_ex_mh_inv_etac}.pdf} \includegraphics[width=0.49\textwidth]{{fmDs_charmfit_un_ex_mh_inv_etac}.pdf} \end{center} \caption{$\Phi_D$ (left) and $\Phi_{D_s}$ (right) as a function of $1/m_{\eta_c}$. The red (blue, green) data points correspond to the coarse (medium, fine) ensembles. The vertical black line indicates the experimental value of $m_{\eta_c}$.} \label{fig:data} \end{figure} In the final step we extrapolate to $a=0$ and to meson masses corresponding to physical light and heavy quark masses. We use an expansion around the physical values given by \eq{ \mc{O}(a,m_\pi,m_H) = \mc{O}(0,m_\pi^\mathrm{phys},m_H^\mathrm{phys}) + \left[C_{CL}^0 + C_{CL}^1 \, \Delta m_H^{-1} \right] a^2 + \left[C_{\chi}^0 + C_{\chi}^1 \, \Delta m_H^{-1} \right] \Delta m_\pi^2 + C_h^0 \, \Delta m_H^{-1}, \label{eq:globalchiCL} } where $\Delta m_H^{-1} = 1/m_H - 1/m_H^\mathrm{phys}$, $\Delta m_\pi^2=m_\pi^2 - {m_\pi^{2}}^{\mathrm{phys}}$ and $H = D, D_s$ or $\eta_c$. We carried out this fit with different pion mass cuts ($m_\pi^\mathrm{max}=350, 400, 450\, \mathrm{MeV}$), different ways of setting the heavy quark mass ($H = D, D_s, \eta_c$) and variations in whether or not the mass-dependent discretisation and chiral terms ($C^1_{CL}$ and $C^1_\chi$) are included. Figure \ref{fig:PhiDsys} shows these variations for $\Phi_D$ (left) and $\Phi_{D_s}$ (right). The spread of the different results will be used to estimate the systematic error attached to the choice of fit ansatz. We postpone a detailed discussion of all systematic errors (which are at most as large as the statistical error) to a forthcoming publication~\cite{dynamical2016}. For now we simply state the statistical error for the preferred fit ansatz in both cases: \al{ \Phi_D = 0.2853(38)_\mathrm{stat}\,\mathrm{GeV}^{3/2} \qquad &\Rightarrow \qquad f_{D} = 208.7(2.8)_\mathrm{stat}\,\mathrm{MeV}\\ \Phi_{D_s} = 0.3457(26)_\mathrm{stat}\,\mathrm{GeV}^{3/2} \qquad &\Rightarrow \qquad f_{D_s} = 246.4(1.9)_\mathrm{stat}\,\mathrm{MeV}.\\ } \begin{figure} \begin{center} \includegraphics[width=0.495\textwidth]{{fmD_systematics}.pdf} \includegraphics[width=0.495\textwidth]{{fmDs_systematics}.pdf} \end{center} \caption{Comparison of the different choices in the global fit. The grey and magenta bands highlight the preferred fit. The different symbols indicate different ways of fixing the heavy quark mass, i.e. $H=\,$ $D(\Diamond),\, D_s (\ocircle),$ and $\eta_c^\mathrm{connected}(\square)$. Fainter data points indicate that at least one of the heavy mass dependent coefficients is compatible with zero at the one sigma level. The label on the $x$-axis describes the number of coefficients for the continuum limit (CL) and pion mass limit ($\chi$), respectively. E.g. a fit labelled (2, 1) corresponds to keeping two coefficients for the continuum limit extrapolation, but setting $C^1_\chi$ to zero.
The magenta data point at the very left is the result of the analysis as obtained by the method presented in ref \cite{Boyle:2015kyy}.} \label{fig:PhiDsys} \end{figure} \section{Outlook and Conclusions} \label{sec:conclusions} Following the strategy outlined at the end of ref \cite{Boyle:2015kyy}, we are currently exploring how to extend the reach in the heavy quark mass by employing gauge link smearing. We find that with three hits of stout smearing with the standard parameter $\rho=0.1$~\cite{Morningstar:2003gk} and the choice $M_5=1.0$ we are able to extend the reach in the bare input quark mass up to $am_h \sim 0.69$. This is in agreement with findings of the JLQCD collaboration~\cite{Kaneko:2013jla,Hashimoto:2014gta}. With this we are able to simulate the physical charm quark mass even on the coarsest ensemble, as can be seen from our first results presented in Figure \ref{fig:smeared}. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{{smeared_fDsfD}.pdf} \end{center} \caption{First results for the ratio of decay constants $f_{D_s}/f_D$. Here the light and strange quarks use the unitary action, whilst the heavy quark is produced from a stout link smeared gauge field with a domain wall height of $M_5=1.0$. Similarly to the above, the solid black line corresponds to the physical value of $m_{\eta_c}$ and only serves to guide the eye.} \label{fig:smeared} \end{figure} We presented our strategy to obtain decay constants $f_D$ and $f_{D_s}$ from $N_f=2+1$ flavour domain wall fermion ensembles with three lattice spacings in the range $a^{-1}=1.73-2.77\,\mathrm{GeV}$ and physical pion masses. We perform several different variations of the fit to obtain results at physical quark masses and in the continuum. Furthermore, we outline our current strategy to reach further in the heavy quark mass, allowing us to simulate directly at the physical charm quark mass even on the coarse ensembles. First results look promising for the calculation of $D$ and $D_s$ phenomenology, such as semi-leptonic decays, and for extrapolations of decay constants and other observables from the charm mass region to the bottom mass. \clearpage \acknowledgments{The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC Grant agreement 279757, the Marie Sk{\l}odowska-Curie grant agreement No 659322, the SUPA student prize scheme, Edinburgh Global Research Scholarship and STFC, grants ST/M006530/1, ST/L000458/1, ST/K005790/1, and ST/K005804/1, and the Royal Society, Wolfson Research Merit Award, grants WM140078 and WM160035 and the Alan Turing Institute. The authors gratefully acknowledge computing time granted through the STFC funded DiRAC facility (grants ST/K005790/1, ST/K005804/1, ST/K000411/1, ST/H008845/1).} \bibliographystyle{JHEP}
\subsection{Contributions to control samples} As described in \ifthenelse{\boolean{cms@external}}{Section~4 of the main document}{Section~\ref{sec:EstBkg}}, the small contribution to the $\Pe\Pe$ control sample from \cPqt\cPaqt~events and the contribution to the ff sample from $\cPZ \to \Pgn\Pagn$ + jets events are estimated with simulation. The sizes of the contributions are listed in Table~\ref{tab:BkgContributions}. \begin{table}[ht] \topcaption{Percent contributions from background samples to the $\Pe\Pe$ and ff control samples.} \centering \begin{tabular}{ lcc } $\ETmiss$ & $<$100\GeV & $\geq$100\GeV \\ \cline{2-3} $\cPqt\cPaqt$ events (in $\Pe\Pe$) & 0.17\% & 24.3\% \\ $\cPZ \to\Pgn\Pagn$ + jets events (in ff) & 0.03\% & 5.0\% \\ \hline \end{tabular} \label{tab:BkgContributions} \end{table} \subsection{Reweighting distributions} Figure~\ref{fig:diEmPtRatio} shows the di-EM \pt distributions of the $\gamma\gamma$ candidate sample and $\Pe\Pe$ and ff control samples, as well as the ratios used for reweighting. \begin{figure}[tbh] \centering \includegraphics[width=\cmsFigWidth]{Figure_005.pdf} \caption{Di-EM \pt distribution of the $\gamma\gamma$ candidate sample and $\Pe\Pe$ and ff control samples. The ratios of the candidate sample to each of the control samples are shown in the bottom pane. These ratios serve as the reweighting factors for the events.} \label{fig:diEmPtRatio} \end{figure} \section{Introduction} \label{sec:Introduction} Final states in proton-proton collisions containing photons with high transverse momentum \pt and significant missing transverse energy \ETmiss emerge naturally from a variety of new-physics scenarios, particularly in models of supersymmetry (SUSY) broken via gauge mediation that require a stable, weakly interacting lightest supersymmetric particle (LSP)~\cite{Nappi:1982bc, Dine:1982cd, Alvarez:1982de, Dine:1993yw, Dine:1994vc, Dine:1995ag}. The \ETmiss in an event, defined as the magnitude of the vector sum of the transverse momenta of all visible particles, is a consequence of undetected particles such as neutrinos or LSPs. Models with general gauge mediation (GGM)~\cite{Dimopoulos:1996fg, Martin:1997hi, Poppitz:1997gh, Meade:2008wd, Buican:2008ws, Abel:2009ve, Carpenter:2009jk, Dumitrescu:2010ef} can have a wide range of features, but typically entail a nearly massless gravitino LSP, \PXXSG, and a next-to-lightest supersymmetric particle (NLSP) often taken to be a neutralino \PSGczDo. Photons in the final state arise when the neutralino decays to a gravitino and a photon, $\PSGczDo \to \PXXSG\PGg$. In this Letter we present a search for GGM SUSY in final states involving two photons and significant \ETmiss. The data sample, corresponding to an integrated luminosity of 2.3\fbinv of proton-proton collisions at $\sqrt{s} = 13\TeV$, was collected with the CMS detector at the CERN LHC in 2015. The increased center-of-mass energy substantially improves the sensitivity of the analysis compared to searches performed at the LHC in Run 1 at $\sqrt{s} = 8\TeV$~\cite{Aad:2015hea, Khachatryan:2015exa}. A similar analysis was performed by the ATLAS Collaboration at $\sqrt{s} = 13\TeV$~\cite{ATLAS:2016aa}. For the interpretation of the results we use the T5gg and T6gg simplified models~\cite{expPaper}. The T5gg (T6gg) simplified model assumes gluino \PSg~(squark \PSq) pair production, with subsequent decays as shown in Fig.~\ref{fig:gluinoSquarkDecay}.
The branching fraction of the NLSP neutralino to decay to a gravitino and a photon, $\PSGczDo \to \PXXSG\PGg$, resulting in characteristic events with two photons and large \ETmiss, is assumed to be unity. In more general GGM SUSY models, a bino-like neutralino could also decay to a gravitino and a Z boson, $\PSGczDo \to \PXXSG\cPZ$. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{Figure_001-a.pdf} \includegraphics[width=0.45\textwidth]{Figure_001-b.pdf} \caption{Diagrams showing the production of signal events in the collision of two protons with four momenta ${P}_{1}$ and ${P}_{2}$. In gluino \PSg~pair production in the T5gg simplified model (\cmsLeft), the gluino decays to an antiquark \cPaq, quark \cPq, and neutralino \PSGczDo. In squark \PSq~pair production in the T6gg simplified model (\cmsRight), the squark decays to a quark and a neutralino. In both cases, the neutralino subsequently decays to a photon \PGg and a gravitino \PXXSG. In the second diagram, we do not distinguish between squarks and antisquarks.} \label{fig:gluinoSquarkDecay} \end{figure} Events with two photons and \ETmiss can also arise from several standard model (SM) processes, including direct diphoton production with initial-state radiation and multijet events (possibly with associated photon production). These processes lack intrinsic \ETmiss but can emulate the signal if the hadronic activity in the event is mismeasured. In the latter case, photons may be reconstructed in the event as a result of the misidentification of electromagnetically rich jets. A smaller background comes from events with intrinsic \ETmiss, principally $\PW\PGg$ and W+jet production, where an electron is misidentified as a photon in $\PW\to\Pe\PGn$ decays. \section{Detector, data, and simulated samples} \label{sec:DataSimSamples} The data were collected with the CMS detector in 2015. The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}. Within the solenoid volume are a silicon pixel and strip tracker covering the pseudorapidity region $\abs{\eta}< 2.5$, as well as a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections and covering the range $\abs{\eta}< 3.0$. Forward calorimeters extend the coverage up to $\abs{\eta}< 5.0$. Muons are measured in gas-ionization detectors embedded in the iron flux-return yoke outside the solenoid and cover the range $\abs{\eta}< 2.4$. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}. The data used in this analysis were selected with a diphoton trigger requiring a leading photon with $\pt > 30\GeV$ and a subleading photon with $\pt > 18\GeV$. In order to keep the trigger rate low and to exclude $\cPZ\to\Pe\Pe$ events, a combined invariant mass $M_{\gamma\gamma} > 95\GeV$ was also required. In addition, the photons were required to pass isolation and cluster shape requirements. 
A sample of $\cPZ\to\Pe\Pe$ events for additional studies was collected with a trigger nearly identical to the diphoton trigger, but with an invariant mass requirement $M_{\Pe\Pe} > 70\GeV$ and with the additional requirement that both electromagnetic (EM) objects be matched to a pixel detector seed (at least two measurements in the pixel detector consistent with a track from a charged particle). Monte Carlo simulations of the signal and background processes are used to validate the performance of the analysis and determine signal efficiencies, as well as to determine the contributions of some of the smaller backgrounds, as described in Section 4. The leading-order event generator \MADGRAPH 5.1.3.30~\cite{Alwall:2014hca} is used to simulate the signal samples, which were generated with either two gluinos or two squarks and up to two additional partons in the matrix element calculation. The parton showering, hadronization, multiple-parton interactions, and the underlying event were described by the \PYTHIA 8~\cite{Sjostrand:2007gs} event generator. The parton distribution functions are obtained from NNPDF3.0~\cite{Ball:2014uwa}. For the background processes, the detector response is simulated using \GEANTfour~\cite{Agostinelli:2002hh}, while the CMS fast simulation~\cite{Abdullin:2011zz} is used for the signal events. The signal events were generated using the T5gg and T6gg simplified models and are characterized by the masses of the particles in the decay chain. For the gluino (squark) mass we simulate a range of values from 1.0 to 1.8\TeV (1.2 to 2.0\TeV) in steps of 50\GeV. These mass ranges were selected to overlap and expand upon the mass ranges excluded by previous searches~\cite{Aad:2015hea,Khachatryan:2015exa,ATLAS:2016aa}. For each gluino (squark) mass, the \PSGczDo~mass ranges from 100\GeV to 1.9\TeV in 100\GeV increments, with the requirement that $M_{\PSGczDo} < M_{\PSg}$ ($M_{\PSGczDo} < M_{\PSQ}$). We assume branching fractions of unity for the decays $\PSg \to \cPq\cPq\PSGczDo$, $\PSQ \to \cPq\PSGczDo$ and $\PSGczDo \to \PXXSG\PGg$. For the T6gg model, the gluino mass is set to 10\TeV, and t-channel production is not considered. The production cross sections for these processes are calculated as functions of $M_{\PSg}$ and $M_{\PSQ}$ at next-to-leading-order (NLO) accuracy including the resummation of soft gluon emission at next-to-leading-logarithmic (NLL) accuracy~\cite{Kulesza:2009kq, Beenakker:2009ha}, and the uncertainties are calculated as described in Ref.~\cite{Borschensky:2014cia}. \section{Event selection} \label{sec:EventSelect} Photon, electron, muon, charged hadron, and neutral hadron candidates are reconstructed with the particle-flow (PF) algorithm~\cite{CMS:2009nxa, CMS-PAS-PFT-10-001}, which reconstructs particles produced in a collision based on information from all detector subsystems. Photons are reconstructed from energy deposits in the ECAL. We require the shape of ECAL clusters to be consistent with that of an electromagnetic object, and we require that the energy present in the corresponding region of the HCAL not exceed 5\% of the ECAL energy, as electromagnetic showers are expected to be contained almost entirely within the ECAL. In order to ensure that the photons pass the trigger with high efficiency, all photons are required to satisfy $\et > 40\GeV$. 
Because the SUSY signal models used in this analysis produce photons primarily in the central region of the detector and because the magnitude of the background increases considerably at high $\abs{\eta}$, we consider only photons within the barrel fiducial region of the detector ($\abs{\eta} < 1.44$). To suppress photon candidates originating from quark and gluon jets, photons are required to be isolated from other reconstructed particles. Separate requirements are made on the scalar \pt sums of charged hadrons, neutral hadrons, and electromagnetic objects in a cone of radius $\Delta R \equiv \sqrt{\smash[b]{(\Delta\eta)^{2} + (\Delta\phi)^{2}}}$ = 0.3 (where $\phi$ is the azimuthal angle measured in radians) around the photon candidate. Each momentum sum is corrected for the effect of additional proton-proton collisions in the event (pileup), and in each case the momentum of the photon candidate itself is excluded. We further require that the photon candidate have no pixel track seed, to distinguish the candidate from an electron. Due to the similarity of the ECAL response to electrons and photons, $\cPZ\to\Pe\Pe$ events are used to measure the photon identification efficiency. The selection of electron candidates is identical to that of photons, with the exception that the candidate is required to be matched to a pixel seed consistent with a track, to ensure that the selection is orthogonal to that of photons. The photon efficiency is measured via the tag-and-probe method~\cite{Khachatryan:2015iwa} in both data and simulation. The ratio of the efficiency in data and simulation was measured as a function of the \pt and $\eta$ of the electron and the $\Delta R$ separation between the electron and the nearest jet. It is determined that this ratio does not depend significantly on any measured kinematic variables, and the overall ratio is computed to be $\epsilon_{\Pe}^{\text{data}} / \epsilon_{\Pe}^{\text{sim}} = 0.983 \pm 0.012$. Muon candidates, which are included among the objects counted in the photon isolation requirement, are reconstructed by performing a global fit that requires consistent hit patterns in the tracker and the muon system~\cite{Chatrchyan:2012xi}. We require muons to have $\pt > 30\GeV$ and to satisfy track quality and isolation requirements. Photons and electrons that overlap within $\Delta R < 0.3$ of any muons are rejected, but otherwise no requirement is made on the number of muons in the event. In addition, photons must be separated by $\Delta R > 0.3$ from electrons. Jets are reconstructed from PF candidates using the anti-\kt clustering algorithm~\cite{Cacciari:2008gp} with a distance parameter of 0.4. The jet energy and momentum are corrected both for the nonlinear response of the detector and for the effect of pileup via the procedure described in Ref.~\cite{1748-0221-6-11-P11002}. Jets are required to have corrected $\pt > 30\GeV$ and to be reconstructed within $\abs{\eta} < 2.4$. In addition, jets are required to be separated from other objects in the event by $\Delta R > 0.4$. For the purpose of defining the various control regions used in the analysis, we apply an additional set of selection criteria. Misidentified photons are defined as those photon candidates passing the photon selection but failing either the shape requirement for the ECAL clusters or the charged-hadron isolation requirement, but not both. 
In order to ensure that misidentified photons do not differ too much from our photon selection, upper limits are applied to both the charged-hadron isolation and cluster shape requirements. Events are then sorted into one of four mutually exclusive categories depending on the selection of their highest-\pt electromagnetic objects: $\gamma\gamma$, ee, two misidentified (``fake'') photons (ff), and $\Pe\gamma$. Due to the trigger requirements described in Section 2, the invariant mass of the two electromagnetic objects is required to be greater than 105\GeV. The size of the data sample limits any improvements in the sensitivity of the analysis from categorizing events by jet multiplicity. Therefore, no requirements are made on the number of jets in the event. The signal region is defined by the events in the $\gamma\gamma$ category with $\ETmiss \geq 100\GeV$ and is split into four bins: $100 \leq \ETmiss < 110\GeV$, $110 \leq \ETmiss < 120\GeV$, $120 \leq \ETmiss < 140\GeV$, and $\ETmiss \geq 140\GeV$. The bins are chosen in such a way that there is a sufficient amount of data in each bin in the ee and ff control samples used for background estimation. The bin with $\ETmiss < 100\GeV$ is used as a control region. \section{Estimation of backgrounds} \label{sec:EstBkg} The dominant background for this analysis comes from multijet production from quantum chromodynamics (QCD) processes without intrinsic \ETmiss, where the high-\ETmiss signature is mimicked by the mismeasurement of the hadronic activity in the event. A subdominant contribution comes from electroweak (EWK) processes that include intrinsic \ETmiss from neutrino production. The contribution from the QCD background is modeled in a fully data-driven way from the ee and ff control samples. Both of these control samples are dominated by processes without intrinsic \ETmiss and can therefore be used to model the \ETmiss in the QCD background. These control samples differ in hadronic activity from the candidate $\gamma\gamma$ sample due to different event topologies. In particular, the ee control sample has a large contribution from $\cPZ \to \Pe\Pe$ events, where the electromagnetic objects come from one parent particle. In contrast, the ff control samples are primarily multijet events where the two electromagnetic objects are produced independently. To account for this difference, the di-EM \pt variable, defined as the magnitude of the vector sum of the transverse momenta of the two electromagnetic objects, is used to model the hadronic recoil in the event. Events in the ee and ff control samples are reweighted by the di-EM \pt distribution of the $\gamma\gamma$ events to correct for any differences in hadronic recoil. The \ETmiss distributions of these di-EM \pt reweighted control samples are then normalized to that of the $\gamma\gamma$ sample in the region $\ETmiss < 50\GeV$ and used to predict the contribution of QCD processes to the high-\ETmiss signal region. A comparison of the reweighted \ETmiss distributions to the distribution of $\gamma\gamma$ events is shown in Fig.~\ref{fig:reweightEffect} in the sideband of the search region ($\ETmiss < 100\GeV$). There is an agreement within statistical uncertainties between the $\gamma\gamma$ and each of the reweighted distributions. 
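A minimal sketch of this reweighting and low-\ETmiss normalization is given below, with per-event arrays for the candidate sample and one control sample; the binning, array names, and the omission of the simulated-background subtraction are simplifications for illustration and do not reproduce the analysis code.

\begin{verbatim}
# Sketch: di-EM pT reweighting of a control sample and normalization of
# its MET distribution to the candidate sample in the MET < 50 GeV region.
import numpy as np

def qcd_prediction(diem_gg, met_gg, diem_ctrl, met_ctrl,
                   pt_bins, met_bins, norm_cut=50.0):
    # Per-bin weights = N_gg(di-EM pT) / N_ctrl(di-EM pT)
    n_gg, _   = np.histogram(diem_gg,   bins=pt_bins)
    n_ctrl, _ = np.histogram(diem_ctrl, bins=pt_bins)
    ratio = np.divide(n_gg, n_ctrl, out=np.zeros(len(n_gg)), where=n_ctrl > 0)

    # Assign each control-sample event the weight of its di-EM pT bin.
    idx = np.clip(np.digitize(diem_ctrl, pt_bins) - 1, 0, len(ratio) - 1)
    w = ratio[idx]

    # Reweighted MET shape, normalized to the candidate sample below norm_cut.
    pred, _ = np.histogram(met_ctrl, bins=met_bins, weights=w)
    scale = np.sum(met_gg < norm_cut) / np.sum(w[met_ctrl < norm_cut])
    return scale * pred   # predicted QCD background yield per MET bin
\end{verbatim}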
\begin{figure}[htbp] \centering \includegraphics[width=\cmsFigWidth]{Figure_002.pdf} \caption{The \ETmiss distributions of the candidate $\gamma\gamma$, reweighted ee, and reweighted ff samples in the $\ETmiss < 100\GeV$ sideband.} \label{fig:reweightEffect} \end{figure} Similarly, we consider differences in the \ETmiss distribution due to the number of jets in the event. A direct comparison of the candidate sample and the two control samples shows little dependence on the jet multiplicity $N_{\text{jets}}$ at low \ETmiss, so we take the difference as a systematic uncertainty in the prediction, as described in Section 5. In addition, there is a small contribution in the QCD control samples from comparatively rare processes with intrinsic \ETmiss, including \cPqt\cPaqt~events and $\cPZ \to \Pgn\Pagn$ + jets events. Due to their small cross sections, these processes are estimated with simulation, and their contributions are subtracted from the ee and ff control samples for the final prediction. The primary estimate of the QCD contribution comes from the reweighted ee distribution. The reweighted ff control sample serves as a cross-check, and the difference between them is taken as a symmetric systematic uncertainty on the prediction. Due to the limited number of ff events with $\ETmiss > 100\GeV$, a looser misidentification definition is used. In the looser definition, misidentified photons are not required to pass any photon isolation or neutral-hadron isolation cuts, and the upper limits on charged-hadron isolation and the shape requirement for the ECAL clusters are loosened further. The looser ff sample is used to obtain the shape of the ff distribution in the $\ETmiss > 100\GeV$ signal region, while the normalization comes from the tighter, more photon-like misidentification definition. As an additional cross-check on this background estimation method, the ratio of the candidate $\gamma\gamma$ distribution to the unweighted ff distribution as a function of \ETmiss is fit with different functional forms. The predicted number of QCD background events in each \ETmiss bin is then given by the function multiplied by the number of ff events seen in that bin. The primary prediction from the ee sample is consistent with the prediction from this cross-check within the fit uncertainties, and we conclude that the predictions from these two methods are compatible. The electroweak background comes from $\PW\gamma$ events where the W decays to an electron and a neutrino, and the electron is misidentified as a photon. We estimate this misidentification rate by comparing the Z-boson mass peak in the ee invariant mass spectrum to the peak in the $\Pe\gamma$ spectrum. The data are modeled using an extended likelihood fit to the mass spectrum for the signal plus background hypothesis. The misidentification rate $f_{\Pe \to \gamma}$ is then computed from the signal events as $f_{\Pe \to \gamma} = N_{\Pe\gamma} / (2N_{\Pe\Pe} + N_{\Pe\gamma}) = (2.13 \pm 0.21)\%$. This rate is used to compute a scaling factor $f_{\Pe \to \gamma} / (1 - f_{\Pe \to \gamma})$, which is then applied to the sample of $\Pe\gamma$ events with $\ETmiss > 100\GeV$ to obtain an estimate of the electroweak background in the signal region. \section{Sources of systematic uncertainty} \label{sec:SystUncert} We evaluate systematic uncertainties from each of the background predictions, the signal efficiency, and the integrated luminosity. 
For each source of uncertainty, we give the uncertainty value and describe the method used for its estimation. The largest systematic uncertainties come from the QCD background estimation method. We consider three sources of systematic uncertainty from the QCD background estimate: the di-EM \pt reweighting, the jet multiplicity dependence, and the \ETmiss shape difference between the $\Pe\Pe$ and ff control samples. The magnitudes of these uncertainties for each of the \ETmiss bins in the signal region are shown in Table~\ref{tab:SystUncerts}. \begin{table*}[ht] \topcaption{Systematic uncertainties from the QCD background estimation.} \centering \begin{tabular}{ rcccc } \multicolumn{1}{c}{$\ETmiss$ bin ($\GeVns{}$)} & Di-EM $\pt$ & Jet multiplicity & Shape difference & Statistical uncertainty \\ [0.5ex] & reweighting & reweighting & between ee and ff & of ee sample \\ [0.5ex] \hline $100 \leq \ETmiss < 110$ & 15\% & 34\% & 18\% & 31\% \\ $110 \leq \ETmiss < 120$ & 17\%& 15\% & 12\% & 33\% \\ $120 \leq \ETmiss < 140$ & 33\%& 29\% & 14\% & 42\% \\ $\ETmiss \geq 140$ & 39\%& 20\% & 150\% & 71\% \\ \end{tabular} \label{tab:SystUncerts} \end{table*} The uncertainty from di-EM \pt reweighting is estimated from the distributions of the di-EM \pt ratio in simulated pseudo-experiments, allowing the ratio to vary bin by bin according to a Gaussian distribution with a standard deviation computed from the statistical uncertainty of unweighted events in the bin. The \ETmiss distribution of the ee control sample is then reweighted by each of these distributions, and the standard deviation is determined for the prediction. The magnitude of this uncertainty ranges from 15\% to 39\%. The effect of the difference in the \ETmiss distribution as a function of the jet multiplicity is determined directly by taking the difference between the ee estimate with di-EM \pt and $N_{\text{jets}}$ reweighting and with di-EM \pt reweighting alone. The resulting systematic uncertainty ranges from 15\% to 34\% in the four signal \ETmiss bins. The shape uncertainty of the ee control sample is determined by fitting the high-\ETmiss tails of the ee and ff samples to the empirical three-parameter function $\ddinline{N}{\ETmiss} = (\ETmiss)^{p_{0}} \re^{p_{1}(\ETmiss)^{p_{2}}}$. The systematic uncertainty in the shape is symmetric and taken to be the fractional difference in each \ETmiss bin between these fitted functions. This yields a systematic effect between 12\% and 18\% in the lower three \ETmiss signal bins, and a systematic effect of 150\% in the final bin that covers \ETmiss above 140\GeV. The main source of uncertainty in the electroweak background estimate comes from the uncertainty in the extended likelihood fit used to calculate the misidentification rate. This is computed by shifting the rate up and down by its uncertainty and scaling the \ETmiss distribution of the $\Pe\gamma$ control sample by the altered rates. The difference between the estimates from the two shifted misidentification rates gives the systematic uncertainty in the rate of electroweak events. Because this represents an uncertainty in the overall normalization, it is constant across \ETmiss bins. The uncertainty is a constant 19\% across the \ETmiss bins. 
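To make the tail parametrization used above for the ee and ff shape comparison concrete, the following sketch fits the empirical three-parameter tail function to a binned \ETmiss spectrum. The bin centres and contents are invented placeholders (generated here from the function itself), and the starting values and bounds are illustrative assumptions rather than analysis choices.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fit of the empirical tail shape dN/dMET = MET**p0 * exp(p1 * MET**p2).
# The "data" below are generated from the function itself (placeholder numbers only).
def tail_shape(met, p0, p1, p2):
    return met**p0 * np.exp(p1 * met**p2)

met_centres = np.array([55., 65., 75., 85., 95., 105., 115., 130., 170.])
counts = tail_shape(met_centres, 2.0, -0.08, 1.0)      # pseudo-data for illustration

popt, pcov = curve_fit(tail_shape, met_centres, counts, p0=(1.5, -0.05, 1.0),
                       bounds=([-10.0, -5.0, 0.1], [10.0, 0.0, 3.0]))
# Fitting the ee and ff samples separately and taking the fractional difference of
# the two fitted curves in each signal bin would give a shape uncertainty of this kind.
print(popt)
\end{verbatim}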
The signal efficiency uncertainties are related to the statistical uncertainty from the finite size of the T5gg and T6gg signal samples (0--16\%), knowledge of the jet energy scale (0--23\% depending on the \PSg--\PSGczDo mass difference), parton distribution function uncertainties (13--22\% depending on the signal point), and photon identification and reconstruction efficiencies (2\%). The uncertainty related to the integrated luminosity of the data sample is 2.7\%~\cite{CMS:2016eto}. \section{Results} \label{sec:Results} The measured \ETmiss distribution and corresponding background predictions are shown in Fig.~\ref{fig:metDistribution}. The expected number of events from the QCD and EWK backgrounds, as well as the total number of expected and observed events, are shown in Table~\ref{tab:ExpObs} for each bin in the signal region. We observe 9 events total in the signal region, compared to an expected background of $7.2\pm2.5$ events. The number of events in the signal region agrees with the background estimate within the uncertainties. \begin{figure}[!htbp] \centering \includegraphics[width=\cmsFigWidth]{Figure_003.pdf} \caption{Measured \ETmiss distribution in comparison with the background prediction. The four bins with $\ETmiss \geq 100\GeV$ constitute the signal region, and the $\ETmiss < 100 \GeV$ bins serve mainly to normalize the background. The systematic uncertainty on the background prediction and the ratio of the data to the prediction are also shown. The last bin includes all events with $\ETmiss \geq 140\GeV$, but for normalization by bin width, the bin is taken to be from $140 \leq \ETmiss < 300\GeV$. The distributions for two signal model points are overlaid for comparison.} \label{fig:metDistribution} \end{figure} \begin{table*}[htb] \topcaption{Numbers of expected and observed events in the signal region. The last row shows the total number of expected and observed events in the inclusive bin $\ETmiss \geq 100\GeV$. The expected numbers of events for two T5gg mass points are also shown. For Signal A, $M_{\PSg} = 1400\GeV$ and $M_{\PSGczDo} = 600\GeV$. For Signal B, $M_{\PSg} = 1600\GeV$ and $M_{\PSGczDo} = 600\GeV$. The uncertainties include all of the systematic uncertainties described in Section~\ref{sec:SystUncert}. } \centering \begin{tabular}{ rcccccc } \multicolumn{1}{c}{$\ETmiss$ bin ($\GeVns{}$) }& QCD & EWK & Total background & Signal A & Signal B & Observed\\ [0.5ex] \hline $100 \leq \ETmiss < 110$ & 1.9$\pm$1.0 & 0.4$\pm$0.1 & 2.3$\pm$1.0 & 0.12$\pm$0.01 & 0.04$\pm$0.01 & 4 \\ $110 \leq \ETmiss < 120$ & 1.5$\pm$0.6 & 0.3$\pm$0.1 & 1.8$\pm$0.6 & 0.13$\pm$0.02 & 0.04$\pm$0.01 & 2 \\ $120 \leq \ETmiss < 140$ & 1.0$\pm$0.6 & 0.5$\pm$0.2 & 1.5$\pm$0.6 & 0.31$\pm$0.04 & 0.08$\pm$0.01 & 2 \\ $\ETmiss \geq 140$ & 0.6$\pm$2.2 & 1.0$\pm$0.3 & 1.6$\pm$2.2 & 13.0$\pm$0.7 & 4.4$\pm$0.2 & 1 \\ $\ETmiss \geq 100$ & 5.0$\pm$2.5 & 2.2$\pm$0.3 & 7.2$\pm$2.5 & 13.6$\pm$0.7 & 4.6$\pm$0.2 & 9 \\ \end{tabular} \label{tab:ExpObs} \end{table*} We determine 95\% confidence level (CL) upper limits on gluino pair and squark pair production cross sections using the modified frequentist CL$_{\mathrm{s}}$ method~\cite{Junk:1999kv, Read:2002hq} based on a log-likelihood test statistic that compares the likelihood of the SM-only hypothesis to the likelihood of the presence of a signal in addition to the SM contributions. The likelihood function is constructed from the background and signal \ETmiss distributions across the four bins described in Section~3. 
The systematic uncertainties described in Section~\ref{sec:SystUncert} are included in the test statistic as nuisance parameters, with log-normal probability distributions. In Fig.~\ref{fig:limit} we present 95\% CL upper limits on the cross section as a function of the mass pair values for the two models considered in this analysis, $M_{\PSGczDo}$ versus $M_{\PSg}$ and $M_{\PSGczDo}$ versus $M_{\PSQ}$ for gluino pair and squark pair production, respectively. From the NLO+NLL predicted cross sections and their uncertainties we derive contours representing lower limits in the SUSY mass plane. We also show expected limit contours based on the expected experimental cross section limits and their uncertainties. For typical values of the neutralino mass, we expect to exclude gluino masses up to 1.60\TeV and squark masses up to 1.35\TeV, and we observe exclusions of 1.65 and 1.37\TeV respectively. The excluded mass ranges for gluino pair production have been improved by approximately 300\GeV with respect to previous searches performed at $\sqrt{s} = 8\TeV$~ \cite{Aad:2015hea,Khachatryan:2015exa}. The observed exclusions are consistent with the results of the ATLAS analysis performed at $\sqrt{s} = 13\TeV$~\cite{ATLAS:2016aa}. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{Figure_004-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_004-b.pdf} \caption{The 95\% CL upper limits on the gluino (\cmsLeft) and squark (\cmsRight) pair production cross sections as a function of neutralino versus gluino (squark) mass. The contours show the observed and median expected exclusions assuming the NLO+NLL cross sections, with their one standard deviation uncertainties. The limit curves terminate at the centers of the bins used to sample the cross section.} \label{fig:limit} \end{figure} \section{Summary} \label{sec:Summary} A search is performed for supersymmetry with general gauge mediation in proton-proton collisions yielding events with two photons and large missing transverse energy. The data were collected at a center-of-mass energy of 13\TeV with the CMS detector in 2015, and correspond to an integrated luminosity of 2.3\fbinv. The data are interpreted in the context of two simplified SUSY models with gauge-mediated supersymmetry breaking, one assuming gluino pair production and the second assuming squark pair production. In both models, the branching fraction of the NLSP neutralino to decay to a gravitino and a photon is assumed to be unity. Using background estimation methods based on control samples in data, limits are determined on the gluino and squark pair production cross sections, and those limits are used together with NLO+NLL cross section calculations to constrain the masses of gluinos, squarks, and neutralinos. Gluino masses below 1.65\TeV and squark masses below 1.37\TeV are excluded at a 95\% confidence level. This represents an improvement of approximately 300\GeV with respect to previous analyses performed at a center-of-mass energy of 8\TeV~\cite{Aad:2015hea,Khachatryan:2015exa} and is consistent with the results of the ATLAS analysis performed at a center-of-mass energy of 13\TeV ~\cite{ATLAS:2016aa}. \begin{acknowledgments} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. 
In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NIH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, and RFBR (Russia); MESTD (Serbia); SEIDI and CPAN (Spain); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA). \hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie programme and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS programme of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus programme of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2013/11/B/ST2/04202, 2014/13/B/ST2/02543 and 2014/15/B/ST2/03998, Sonata-bis 2012/07/E/ST2/01406; the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; the National Priorities Research Program by Qatar National Research Fund; the Programa Clar\'in-COFUND del Principado de Asturias; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); and the Welch Foundation, contract C-1845. \end{acknowledgments}
\section{Introduction} One major problem in fitting multivariate nonparametric regression models is the ``curse of dimensionality". To overcome the problem, the single-index model has played an important role in the literature. In this paper, we consider the single-index model of the form \begin{eqnarray} Y=g(\beta^TX)+\varepsilon,\label{SIM} \end{eqnarray} where $Y$ is the response variable, $X$ is a $p\times 1$ covariate vector, $g(\cdot)$ is the unknown link function, $\beta=(\beta_1,\ldots,\beta_p)^T$ is the unknown index parameter, and $\varepsilon$ is a random error with $E(\varepsilon|X)=0$ almost surely. We further assume the Euclidean norm $\|\beta\|=1$ for the identifiability purpose. Model (\ref{SIM}) reduces the covariate vector into an index which is a linear combination of covariates, and hence avoids the ``curse of dimensionality". Single-index models have been extensively studied in the literature. See, for example, \citeasnoun{HT1993}, \citeasnoun{HHI1993}, \citeasnoun{CFG1997}, \citeasnoun{XUEZ2006}, \citeasnoun{LZXF2010}, \citeasnoun{LLL2013}, \citeasnoun{LLL2015}, among others. For estimating the index parameter and the unknown link function, \citeasnoun{DL1991} developed the sliced inverse regression method. \citeasnoun{HT1993} proposed the average derivative method to obtain a root-$n$ consistent estimator of the index vector $\beta$. \citeasnoun{CFG1997} used the local linear method to estimate the unknown parameters and the unknown link function for generalized partially linear single-index models. \citeasnoun{NT2000} proposed the partial least squares estimator for single-index models. \citeasnoun{XUEZ2006} and \citeasnoun{ZX2006} proposed the bias-corrected empirical likelihood method to construct the confidence intervals or regions of the parameters of interest. \citeasnoun{LLL2010} proposed the semiparametrically efficient profile least-squares estimators of regression coefficients for partially linear single-index models. \citeasnoun{ZH2010} extended the generalized likelihood ratio test to the single-index model. \citeasnoun{CWZ2011} introduced the estimating function method to study the single-index models. \citeasnoun{PX2012} and \citeasnoun{YXL2014} investigated the single-index random effects models with longitudinal data. \citeasnoun{LPDT2014} constructed the simultaneous confidence bands for the nonparametric link function in single-index models. In this paper, we are interested in estimating the index parameter $\beta$ and the unknown link function $g(\cdot)$ in model (\ref{SIM}) when the covariate vector $X$ is measured with error. We assume an additive measurement error model as \begin{eqnarray} W=X+U,\label{EV} \end{eqnarray} where $W$ is the observed surrogate, $U$ follows $N(0,\Sigma_u)$ and is independent of $(X,Y)$. When $U$ is zero, there is no measurement error. For simplicity, we consider only the case where the measurement error covariance matrix $\Sigma_{u}$ is known. Otherwise, $\Sigma_{u}$ need to be first estimated, e.g., by the replication experiments method in \citeasnoun{CRSC2006}. We refer to the models characterized by (\ref{SIM}) and (\ref{EV}) as the single-index measurement error model. The measurement error models arise frequently in practice and are attracting attention in medical and statistical research. For example, covariates such as the blood pressure \cite{CRSC2006} and the CD4 count \cite{LC2000,LH2009} are often subject to measurement error. 
For a class of generalized linear measurement error models, \citeasnoun{Stefanski1989} and \citeasnoun{Nakamura1990} used a method of moment identities to construct the corrected score functions, \citeasnoun{YLT2015} further developed the corrected empirical likelihood method. \citeasnoun{CS1994} developed the SIMEX method to correct the effect estimates in the presence of additive measurement error. \citeasnoun{CLKS1996} further investigated the asymptotic distribution of the SIMEX estimator. Since then, the SIMEX method has become a standard tool for correcting the biases induced by measurement error in covariates for many complex models. \citeasnoun{CMR1999} and \citeasnoun{DH2008} applied the SIMEX technique to local polynomial nonparametric regression and spline-based regression. \citeasnoun{LR2005} applied the SIMEX technique to the generalized partially linear models with the linear covariate being measured with additive error. Other interesting works in SIMEX include, for example, \citeasnoun{CZ2003}, \citeasnoun{MC2006}, \citeasnoun{AC2009}, \citeasnoun{ML2010}, \citeasnoun{MY2011}, \citeasnoun{SM2014}, \citeasnoun{ZZZ2014}, \citeasnoun{CL2015}, and \citeasnoun{WW2015}. Note that the aforementioned SIMEX methods may not be able to handle the multivariate nonparametric measurement error regression models owing to the ``curse of dimensionality". In view of this, \citeasnoun{LW2005} considered the partially linear single-index measurement error models with the linear part containing the measurement error, where they applied the correction for attenuation approach to obtain the efficient estimators of the parameters of interest. Their method, however, is not applicable for the occurrence with measurement errors in the nonparametric part. This motivates us to develop a new SIMEX method to solve this problem. Specifically, we combine the SIMEX method, the local linear approximation method, and the estimating equation to handle the single-index measurement error model. Our method has several desirable features. First, our proposed method can deal with multivariate nonparametric measurement error regression and avoids ``curse of dimensionality" by introducing the index parameter. Second, we use the SIMEX technique to construct the efficient estimation and reduce the bias of the estimator, and do not assume the distribution of the unobservable $X$. Third, to obtain the efficient estimator of $\beta$, we regard the constraint $\|\beta\|=1$ as a piece of prior information and adopt the ``delete-one-component" method. The remainder of the paper is organized as follows. In Section 2, we develop the SIMEX algorithm to obtain the estimators of the index parameter and the unknown link function, and investigate their asymptotic properties. In Section 3, we present and compare the results from simulation studies and also apply the proposed method to a real data example for illustration. Some concluding remarks are given in Section 4, and the proofs of the main results are given in the Appendix. \section{Main Results} \subsection{Methodology} To conduct efficient estimation for $\beta$ in the presence of covariate measurement error, \citeasnoun{CS1994} introduced the SIMEX algorithm. The SIMEX algorithm consists of the simulation step, the estimation step, and extrapolation steps. 
It aims to add additional variability to the observed $W$ in order to establish the trend of the measurement-error-induced bias as a function of the variance of the added measurement error, and then extrapolate this trend back to the case without measurement error \cite{CRSC2006}. In this section, we use the SIMEX algorithm, the local linear smoother and the estimating equation to estimate $\beta$ and $g(\cdot)$. First, we estimate $g(\cdot)$ as a function of $\beta$ by using the local linear smoother. We then estimate the parametric part based on the estimating equation. The proposed algorithm is described as follows.
\vskip 12pt \noindent \textbf{(I) Simulation step} For each $i=1,\ldots,n$, we generate a sequence of variables \begin{eqnarray*} W_{ib}(\lambda)=W_i+(\lambda\Sigma_u)^{1/2}U_{ib}, ~~~b=1,\ldots, B, \end{eqnarray*} where $U_{ib}\sim N(0,I_p)$, $I_p$ is a $p\times p$ identity matrix, $B$ is a given integer, and $\lambda\in\Lambda=\{\lambda_1,\lambda_2,\ldots,\lambda_M\}$ is the grid of $\lambda$ used in the extrapolation step. We set the range of $\lambda$ from 0 to 2.
\vskip 12pt \noindent \textbf{(II) Estimation step} Suppose that $g(\cdot)$ has a continuous second derivative. For $t$ in a small neighborhood of $t_0$, $g(t)$ can be approximated as $g(t)\approx g(t_0)+g'(t_0)(t-t_0)\equiv a+b(t-t_0)$. With the simulated $W_{ib}(\lambda)$, we first estimate $g(t_0)$ as a function of $\beta$ by a local linear smoother, denoted by $\hat{g}(\beta, \lambda; t_0)$, in Step 1. We then propose a new estimator of $\beta(\lambda)$ in Steps 2 and 3, denoted by $\hat{\beta}(\lambda)$. The specific procedure is as follows.
\vskip 6pt \textbf{Step 1.} For each fixed $t_0$ and $\beta$, $\hat{g}(\beta, \lambda; t_0)$ and $\hat{g}'(\beta,\lambda; t_0)$ are estimated by minimizing \begin{equation}\label{LS} \sum\limits_{i=1}^n\left\{Y_{i}-a-b[\beta^TW_{ib}(\lambda)-t_0]\right\}^2K_h(\beta^TW_{ib}(\lambda)-t_0), \end{equation} with respect to $a$ and $b$, where $K_h(\cdot)=h^{-1}K(\cdot/h)$, $K(\cdot)$ is a kernel function and $h$ is the bandwidth. Let $\hat{a}$ and $\hat{b}$ be the solutions to problem (\ref{LS}). Then, $\hat{g}(\beta, \lambda; t_0)=\hat{a}$ and $\hat{g}'(\beta, \lambda; t_0)=\hat{b}$. Let \begin{align*} M_{ni}(\beta,\lambda;t_0) &= U_{ni}(\beta,\lambda;t_0)\Big/\sum\limits_{j=1}^nU_{nj}(\beta,\lambda;t_0), \\ \widetilde{M}_{ni}(\beta,\lambda;t_0) &= \widetilde{U}_{ni}(\beta,\lambda;t_0)\Big/\sum\limits_{j=1}^nU_{nj}(\beta,\lambda;t_0), \end{align*} where $U_{ni}(\beta,\lambda;t_0)=K_h(\beta^TW_{ib}(\lambda)-t_0)\{S_{n,2}(\beta,\lambda;t_0)-[\beta^TW_{ib}(\lambda)-t_0]S_{n,1}(\beta,\lambda;t_0)\}$, $\widetilde{U}_{ni}(\beta,\lambda;t_0)=K_h(\beta^TW_{ib}(\lambda)-t_0)\{[\beta^TW_{ib}(\lambda)-t_0]S_{n,0}(\beta,\lambda;t_0)-S_{n,1}(\beta,\lambda;t_0)\}$, and $S_{n,l}(\beta,\lambda;t_0)=\dfrac{1}{n}\sum\limits_{i=1}^n(\beta^TW_{ib}(\lambda)-t_0)^lK_h(\beta^TW_{ib}(\lambda)-t_0)$ for $l=0,1,2$. A simple calculation yields \begin{align} \hat{g}(\beta, \lambda; t_0) &= \sum\limits_{i=1}^nM_{ni}(\beta, \lambda; t_0)Y_{i}, \label{hatg} \\ \hat{g}'(\beta, \lambda; t_0) &= \sum\limits_{i=1}^n\widetilde{M}_{ni}(\beta, \lambda; t_0)Y_{i}. \label{hatg1} \end{align} \citeasnoun{CXZ2010} showed that the convergence rate of the estimator of $g'(t)$ is slower than that of $g(t)$ if the same bandwidth is used. Because of this, we introduce another bandwidth $h_1$ to control the variability of the estimator of $g'(t)$.
We use $h_1$ to replace $h$ in $\hat{g}'(\beta, \lambda; t_0)$ and write it as $\hat{g}'_{h_1}(\beta, \lambda; t_0)$.
\vskip 6pt \textbf{Step 2.} To estimate $\beta$, we use the ``delete-one-component" method in \citeasnoun{ZX2006} to transform the boundary of a unit ball in $\mathbb{R}^p$ to the interior of a unit ball in $\mathbb{R}^{p-1}$. Let $\beta^{(r)}=(\beta_1,\ldots,\beta_{r-1},\beta_{r+1},\ldots,\beta_p)$ be the $(p-1)$-dimensional vector obtained by deleting the $r$th component $\beta_r$. Without loss of generality, we assume that the $r$th component $\beta_r$ is positive; otherwise, we may consider $\beta_r=-(1-\|\beta^{(r)}\|^2)^{1/2}$. Let $$\beta=(\beta_1,\ldots, \beta_{r-1}, (1-\|\beta^{(r)}\|^2)^{1/2},\beta_{r+1},\ldots,\beta_p)^T.$$ Note that $\beta^{(r)}$ satisfies the constraint $\|\beta^{(r)}\|<1$. We conclude that $\beta$ is infinitely differentiable in a neighborhood of $\beta^{(r)}$ and the Jacobian matrix is $J_{\beta^{(r)}}=(\gamma_1,\ldots,\gamma_p)^T$, where $\gamma_s$ $(1\leq s \leq p, s\neq r)$ is a $(p-1)$-dimensional vector with the $s$th component being 1, and $\gamma_r=-(1-\|\beta^{(r)}\|^2)^{-\frac{1}{2}}\beta^{(r)}$. Given the estimators $\hat{g}(\beta, \lambda; t_0)$ and $\hat{g}'_{h_1}(\beta, \lambda; t_0)$ in (\ref{hatg}) and (\ref{hatg1}), respectively, an estimator of $\beta^{(r)}$, $\hat{\beta}_b^{(r)}(\lambda)$, is obtained by solving the following equation: \begin{eqnarray}\label{EF}Q_{nb}(\beta^{(r)},\lambda)= {1\over n}\sum\limits_{i=1}^n\hat{\eta}_{ib}(\beta^{(r)},\lambda)=0, \end{eqnarray} where \begin{align*} \hat{\eta}_{ib}(\beta^{(r)},\lambda) &= [Y_i-\hat{g}(\beta,\lambda; \beta^TW_{ib}(\lambda))]\hat{g}'_{h_1}(\beta,\lambda; \beta^TW_{ib}(\lambda))J^T_{\beta^{(r)}}W_{ib}(\lambda), \\ \beta^TW_{ib}(\lambda) &= {\beta^{(r)T}}W_{ib}^{(r)}(\lambda)+(1-\|\beta^{(r)}\|^2)^{1/2}W_{ib,r}(\lambda), \\ W_{ib}^{(r)}(\lambda) &= (W_{ib,1}(\lambda),\ldots,W_{ib,(r-1)}(\lambda),W_{ib,(r+1)}(\lambda),\ldots,W_{ib,p}(\lambda))^T. \end{align*} Next, we obtain an estimator of $\beta$, say $\hat{\beta}_b(\lambda)$, by implementing the Fisher-scoring version of the Newton--Raphson algorithm to solve the estimating equation (\ref{EF}). We summarize the iterative algorithm in what follows.
(1) Choose the initial values for $\beta$, denoted by $\widetilde{\beta}_b(\lambda)$, where $b=1,\ldots,B$.
(2) Update $\widetilde{\beta}_b(\lambda)$ with $\widetilde{\beta}_b(\lambda)=\hat{\beta}_b^*(\lambda)/\|\hat{\beta}_b^*(\lambda)\|$ by $$\hat{\beta}_b^*(\lambda)=\widetilde{\beta}_b(\lambda)+J_{\widetilde{\beta}^{(r)}}B_{nb}^{-1}(\widetilde{\beta}^{(r)},\lambda)Q_{nb}(\widetilde{\beta}^{(r)},\lambda),$$ where $B_{nb}({\beta}^{(r)},\lambda)=\dfrac{1}{n}\sum\limits_{i=1}^nJ^T_{\beta^{(r)}}W_{ib}(\lambda)\hat{g}'^2_{h_1}(\beta,\lambda; \beta^TW_{ib}(\lambda))W_{ib}^T(\lambda)J_{\beta^{(r)}}$.
(3) Repeat Step (2) until convergence.
In the iterative algorithm, the initial value of $\beta$, $\beta_{{\rm {int}}}$, with norm 1 is obtained by fitting a linear model.
\vskip 6pt \emph{Remark 1.} Similar to \citeasnoun{CWZ2011}, we discuss the solution of the estimating equation. In fact, the solution of the estimating equation $Q_{nb}(\beta^{(r)},\lambda)=0$ is just the least-squares estimator of $\beta^{(r)}$.
The least-squares objective function is defined by $${G}(\beta^{(r)},\lambda)=\sum\limits_{i=1}^n\{Y_i-\hat{g}(\beta,\lambda; \beta^TW_{ib}(\lambda))\}^2.$$ The minimizer of ${G}(\beta^{(r)},\lambda)$ with respect to $\beta^{(r)}$ solves the estimating equation $Q_{nb}(\beta^{(r)},\lambda)=0$ because, up to a constant factor, $Q_{nb}(\beta^{(r)},\lambda)$ is the gradient vector of ${G}(\beta^{(r)},\lambda)$. Note that $\{\|\beta^{(r)}\|<1\}$ is an open, connected subset of $\mathbb{R}^{p-1}$. By the regularity condition (C2), we know that the least-squares objective function ${G}(\beta^{(r)},\lambda)$ is twice continuously differentiable on $\{\|\beta^{(r)}\|<1\}$, so that the global minimum of ${G}(\beta^{(r)},\lambda)$ can be achieved at some point. By some simple calculations, we have $$\frac{1}{n}\frac{\partial^2G(\beta^{(r)},\lambda)}{\partial\beta^{(r)}\beta^{(r)T} }=-\frac{\partial Q_{nb}(\beta^{(r)},\lambda)}{\partial\beta^{(r)}}=\mathcal{A}(\beta(\lambda),\lambda)+o_p(1),$$ where $\mathcal{A}(\beta(\lambda),\lambda)$ is a positive definite matrix for $\lambda\in\Lambda$ defined in Condition (C6). Then, the Hessian matrix $\dfrac{1}{n}\dfrac{\partial^2G(\beta^{(r)},\lambda)}{\partial\beta^{(r)}\beta^{(r)T} }$ is positive definite for all values of $\beta^{(r)}$ and $\lambda\in\Lambda$. Hence, the estimating equation (\ref{EF}) has a unique solution.
\vskip 12pt \textbf{Step 3.} With the estimated values $\hat{\beta}_b(\lambda)$ over $b=1,\ldots, B$, we average them and obtain the final estimate of $\beta(\lambda)$ as $$\hat{\beta}(\lambda)={1\over B}\sum\limits_{b=1}^B\hat{\beta}_b(\lambda).$$
\vskip 6pt \noindent \textbf{(III) Extrapolation step} For the extrapolant function, we consider the widely used quadratic function $\mathcal{G}(\lambda,\Psi)=\psi_1+\psi_2\lambda+\psi_3\lambda^2$ with $\Psi=(\psi_1,\psi_2,\psi_3)^T$ \cite{LC2000,LR2005}. We fit a regression of $\{\hat{\beta}(\lambda),\lambda\in\Lambda\}$ on $\{\lambda\in\Lambda\}$ based on $\mathcal{G}(\lambda,\Gamma)$, where $\Gamma$ denotes the extrapolant parameters associated with $\beta$, and denote by $\hat{\Gamma}$ the estimated value of $\Gamma$. The SIMEX estimator of $\beta$ is then defined as $\hat{\beta}_{{\rm SIMEX}}=\mathcal{G}(-1,\hat{\Gamma})$. When $\lambda$ shrinks to 0, the SIMEX estimator reduces to the naive estimator, $\hat{\beta}_{{\rm Naive}}=\mathcal{G}(0,\hat{\Gamma})$, which neglects the measurement error by directly replacing $X$ with $W$. The SIMEX estimator $\hat{g}_{\rm SIMEX}(t_0)$ is obtained in the same way: $\beta$ in Step 1 of the estimation step is replaced by $\hat{\beta}_{\rm SIMEX}$, and the estimator $\hat{g}_b(\lambda; t_0)$ is obtained with the bandwidth $h_2$. Averaging $\hat{g}_b(\lambda; t_0)$ over $b=1,\ldots, B$ gives $$\hat{g}(\lambda; t_0)={1\over B}\sum\limits_{b=1}^B\hat{g}_b(\lambda; t_0).$$ The extrapolation step then yields $\hat{\mathbb{A}}$, which minimizes $\sum_{\lambda\in\Lambda}\{\hat{g}(\lambda;t_0)-\mathcal{G}(\lambda,\mathbb{A})\}^2$ with respect to $\mathbb{A}$. The SIMEX estimator $\hat{g}_{\rm SIMEX}(t_0)$ is given by $$\hat{g}_{\rm SIMEX}(t_0)=\mathcal{G}(-1,\hat{\mathbb{A}}).$$
\subsection{Asymptotic properties} To investigate the asymptotic properties of the estimators of the index parameter and the link function, we first present some regularity conditions.
\begin{enumerate}
\item[(C1)] The density function, $f(t)$, of $\beta^T X$ is bounded away from zero.
It also satisfies the Lipschitz condition of order 1 on $\mathcal{T}= \{t = \beta^T x : x\in A\}$, where $A$ is the bounded support set of $X$.
\item[(C2)] $g(\cdot)$ has a continuous second derivative on $\mathcal{T}$.
\item[(C3)] The kernel $K(\cdot)$ is a bounded and symmetric density function with a bounded support satisfying the Lipschitz condition of order 1 and $\int_{-\infty}^{\infty} u^2K(u)du\neq 0$.
\item[(C4)] $\sup\limits_x E(\varepsilon^2|X = x) < \infty$ and $\sup\limits_x E(\varepsilon^4|X = x) < \infty$.
\item[(C5)] $nh^2/(\log n)^2\rightarrow \infty$, $nh^4 \log n\rightarrow 0$, $nhh_1^3/(\log n)^2\rightarrow \infty$, and $\lim \sup\limits_{n\rightarrow\infty}nh_1^5<\infty$.
\item[(C6)] $\mathcal{A}(\beta(\lambda),\lambda)$ is a positive definite matrix for $\lambda\in\Lambda$, where $$\mathcal{A}(\beta(\lambda),\lambda)=E\Big\{\Big[g'\Big(\lambda; \beta^T(\lambda) W_{ib}(\lambda)\Big)\Big]^2J_{\beta^{(r)}(\lambda)}^T{\widetilde{W}}_{ib}(\lambda){\widetilde{W}}^T_{ib}(\lambda)J_{\beta^{(r)}(\lambda)}\Big\}$$ with ${\widetilde{W}}_{ib}(\lambda)= {{W}}_{ib}(\lambda)-E[{{W}}_{ib}(\lambda)| \beta^T(\lambda) W_{ib}(\lambda)]$.
\item[(C7)] The extrapolant function is theoretically exact.
\end{enumerate}
\emph{Remark 2.} Condition (C1) ensures that the density function of $\beta^TX$ is positive. Condition (C2) is a standard smoothness condition. Condition (C3) is a common assumption for second-order kernels. Condition (C4) is a necessary condition for deriving the asymptotic normality of the proposed estimator. Condition (C5) imposes mild restrictions on the choice of bandwidths. Finally, Condition (C6) ensures that the asymptotic variance of the estimator $\hat{\beta}_{\rm SIMEX}$ is well defined, and Condition (C7) is a common assumption for the SIMEX method.
To derive the theoretical results, we introduce some new definitions and notations. For the given $\Lambda=\{\lambda_1,\ldots,\lambda_M\}$, let $\hat{\beta}(\Lambda)$ be the vector of estimators $(\hat{\beta}(\lambda_1),\ldots,\hat{\beta}(\lambda_M))$, denoted by ${\rm vec}\{\hat{\beta}(\lambda),\lambda\in\Lambda\}$. Let also $\mathbf{\Gamma}=(\Gamma_1^T,\ldots,\Gamma_p^T)^T$, where $\Gamma_j$ is the parameter vector estimated in the extrapolation step for the $j$th component of $\hat{\beta}(\lambda)$ for $j=1,\ldots,p$.
We define $\mathcal{G}(\Lambda,\mathbf{\Gamma})={\rm vec}\{\mathcal{G}(\lambda_m, \Gamma_j), j=1,\ldots,p, m=1,\ldots,M\}$, ${\rm Res}(\mathbf{\Gamma})=\hat{\beta}(\Lambda)-\mathcal{G}(\Lambda,\mathbf{\Gamma})$, $s^T(\mathbf{\Gamma})=\{\partial/\partial (\mathbf{\Gamma})^T\}{\rm Res}(\mathbf{\Gamma})$, $D(\mathbf{\Gamma})=s(\mathbf{\Gamma})s^T(\mathbf{\Gamma})$, $$\eta_{iB}(\beta(\lambda),\lambda)=\displaystyle {1\over B}\sum\limits_{b=1}^B\Big[Y_i-g\Big(\lambda; \beta^T(\lambda) W_{ib}(\lambda)\Big)\Big]g'\Big(\lambda; \beta^T(\lambda) W_{ib}(\lambda)\Big)J_{\beta^{(r)}(\lambda)}^T{\widetilde{W}}_{ib}(\lambda),$$ $$\Psi_{iB}\Big\{\beta(\Lambda),\Lambda\Big\}={\rm vec}\{\eta_{iB}(\beta(\lambda),\lambda),\lambda\in\Lambda\},$$ $$\mathcal{J}\Big\{\beta(\Lambda),\Lambda\Big\}={\rm diag}\{J_{\beta^{(r)}(\lambda)},\lambda\in\Lambda\},$$ $$\mathcal{A}_{11}\Big\{\beta(\Lambda),\Lambda\Big\}={\rm diag}\{\mathcal{A}(\beta(\lambda),\lambda),\lambda\in\Lambda\}$$ and $$ \Sigma=\mathcal{J}\Big\{\beta(\Lambda),\Lambda\Big\}\mathcal{A}^{-1}_{11}\Big\{\beta(\Lambda),\Lambda\Big\}C_{11}\Big\{\beta(\Lambda),\Lambda\Big\} \Big\{\mathcal{A}^{-1}_{11}\Big\{\beta(\Lambda),\Lambda\Big\}\Big\}^T\mathcal{J}^T\Big\{\beta(\Lambda),\Lambda\Big\} $$ with $$C_{11}\Big\{\beta(\Lambda),\Lambda\Big\}={\rm cov}\Big[\Psi_{iB}\Big\{\beta(\Lambda),\Lambda\Big\}\Big].$$ \begin{theo}\label{AN} Suppose that the regularity conditions $(C1)$--$(C7)$ hold. Then, as $n\rightarrow \infty$, we have $$\sqrt{n}(\hat{\beta}_{\rm SIMEX}-\beta)\stackrel{\mathcal {L}}{\longrightarrow}N\{0,\mathcal{G}_{\mathbf{\Gamma}}(-1,\mathbf{\Gamma})\Sigma(\mathbf{\Gamma}) \{\mathcal{G}_{\mathbf{\Gamma}}(-1,\mathbf{\Gamma})\}^T\},$$ where $\stackrel{\mathcal {L}}{\longrightarrow}$ denotes the convergence in distribution, $\mathcal{G}_{\mathbf{\Gamma}}(\lambda,\mathbf{\Gamma})=\{\partial/\partial(\mathbf{\Gamma})^T\}\mathcal{G}(\lambda,\mathbf{\Gamma})$, $\Sigma(\mathbf{\Gamma})=D^{-1}(\mathbf{\Gamma})s(\mathbf{\Gamma})\Sigma s^T(\mathbf{\Gamma})D^{-1}(\mathbf{\Gamma})$. \end{theo} Theorem \ref{AN} indicates that $\hat{\beta}_{\rm SIMEX}$ is a root-$n$ consistent estimator. Its asymptotic distribution is similar to that of the parametric estimator of $\beta$ without measurement error, whereas the asymptotic covariance matrix of the resulting estimator is more complicated. Let $f_0(\cdot)$ be the density function of $\beta^TW$, $\mu_l=\int t^lK(t)dt$ and $\nu_l=\int K^l(t)dt$ for $l=1,2$. Define $$\gamma(\lambda,\mathbb{A})=\{\partial/\partial(\mathbb{A})\}\mathcal{G}(\lambda,\mathbb{A}),$$ $$C(\Lambda,\mathbb{A})=\gamma^T(-1,\mathbb{A})\Big\{\sum\limits_{\lambda\in\Lambda}\gamma(\lambda,\mathbb{A})\gamma^T(\lambda,\mathbb{A})\Big\}^{-1},$$ and $D=E_q\gamma(\lambda,\mathbb{A})\gamma^T(\lambda,\mathbb{A})E_q$, where $E_q$ is the $q\times q$ matrix of all elements being zero except for the first element being one and $q$ is the dimension of $\mathbb{A}$. \begin{theo}\label{ANG} Suppose that the regularity conditions $(C1)$--$(C7)$ hold, and assume that $nh_2^5=O(1)$. 
Then, as $n\rightarrow\infty$ and $B\rightarrow\infty$, the SIMEX estimator $\hat{g}_{\rm SIMEX}(t_0)$ is asymptotically equivalent to an estimator whose bias and variance are given respectively by $$C(\Lambda,\mathbb{A})\sum\limits_{\lambda\in\Lambda}{1\over 2}h_2^2 \mu_2g''(\lambda;t_0)\gamma(\lambda,\mathbb{A})$$ and $$[nh_2f_0(t_0)]^{-1}\nu_2 {\rm var}(Y|\beta^TW=t_0)C(\Lambda,\mathbb{A})DC^T(\Lambda,\mathbb{A}),$$ where $g(\lambda; t)=E(Y|\beta^TW_b(\lambda)=t).$ \end{theo} Theorem \ref{ANG} implies that estimating $\beta$ by $\hat{\beta}_{\rm SIMEX}$ does not affect the asymptotic behavior of $\hat{g}_{\rm SIMEX}(t_0)$, because $\hat{\beta}_{\rm SIMEX}$ is root-$n$ consistent. As pointed out in \citeasnoun{CMR1999}, the variance of $\hat{g}_{\rm SIMEX}(t_0)$ is asymptotically the same as if the measurement error were ignored, but multiplied by a factor, $C(\Lambda,\mathbb{A})DC^T(\Lambda,\mathbb{A})$, which is independent of the regression function.
\section{Numerical studies} \subsection{Simulation study} In this section, we evaluate the finite sample performance of the proposed method via simulation studies. Consider the following model \begin{eqnarray*} \left\{ \begin{array}{ll} Y_i=-2(\beta^TX_i-1)^2+1+\varepsilon_i, \\ W_i=X_i+U_i,\qquad i=1,\ldots,n, \end{array} \right. \end{eqnarray*} where $\beta=(\beta_1,\beta_2)^T=(\sqrt{3}/3,\sqrt{6}/3)^T$, $X_i$ is a two-dimensional vector with independent $N(0,1)$ components, the error $\varepsilon_i$ is generated from $N(0,0.2^2)$, $Y_i$ is generated according to the model, and $U_i$ is generated from $N(0, \hbox{diag}(\sigma_u^2,0))$. We take $\sigma_u = 0.2, 0.4$ and 0.6 to represent different levels of measurement error. In the simulation study, we compare the naive estimates (Naive), which ignore the measurement error, with the SIMEX estimates based on the quadratic extrapolation function. The sample sizes are $n = 50, 100$ and 150. For each setting, we run 500 simulation replications to assess the performance. In the SIMEX algorithm, we take $\lambda=0,0.2,\ldots,2$ and $B=50$. We use the Epanechnikov kernel $K(u)=0.75(1-u^2)_{+}$. As pointed out in \citeasnoun{LW2005}, the computation is quite expensive for the SIMEX method. In view of this, we apply a ``rule of thumb" to select the bandwidths, which is the same in spirit as the selection method in \citeasnoun{AC2009}. Specifically, the bandwidths $h$, $h_1$ and $h_2$ are taken to be $cn^{-1/4}(\log n)^{-1/2}$, $cn^{-1/5}$ and $cn^{-1/5}$, where $c$ is the standard deviation of $\beta_{\rm {int}}^TW$. To justify this ``rule of thumb" (RT), we also compare with simulation results obtained when the cross-validation (CV) method is used to select the bandwidths. We apply the same bandwidths for each $\lambda$ and $b$, since the CV method is time consuming. The CV statistic is given by $${\rm{CV}}(h)=\frac{1}{n}\sum\limits_{i=1}^n\{Y_i-\hat{g}_{[i]}(\hat{\beta}^T_{[i]}X_i)\}^2,$$ where $\hat{g}_{[i]}(\cdot)$ and $\hat{\beta}_{[i]}$ are the SIMEX estimators of ${g}(\cdot)$ and $\beta$ computed with the $i$th subject deleted. The optimal bandwidth $h_{\rm opt}$ is obtained by minimizing ${\rm{CV}}(h)$. It can be shown that $h_{\rm opt} = Cn^{-1/5}$ for a constant $C>0$. Therefore, we use the bandwidths $$h=h_{\rm opt} n^{-1/20}(\log n)^{-1/2}, ~ h_1=h_{\rm opt},~ h_2=h_{\rm opt}.$$ To evaluate the performance of the bandwidth selection for the CV method, we first plot ${\rm{CV}}(h)$ versus the bandwidth $h$; a simplified leave-one-out sketch of this type of computation is given below.
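The following sketch illustrates leave-one-out bandwidth selection for a one-dimensional local linear smoother on synthetic data mimicking the simulation design. It is a simplification of the CV criterion above: it does not re-estimate $\beta$ or rerun the SIMEX procedure for each deleted subject, and the grid of bandwidths is an illustrative choice.
\begin{verbatim}
import numpy as np

# Simplified leave-one-out CV for the bandwidth of a local linear smoother of Y on
# the index T; this drops the re-estimation of beta and the SIMEX layer that the CV
# criterion in the text formally requires.
def ll_fit_at(t0, t, y, h):
    u = (t - t0) / h
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0) / h   # Epanechnikov kernel
    s1 = np.sum(k * (t - t0)); s2 = np.sum(k * (t - t0)**2)
    w = k * (s2 - (t - t0) * s1)
    den = np.sum(w)
    return np.sum(w * y) / den if den > 0 else np.nan

def loo_cv(t, y, h):
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        pred = ll_fit_at(t[i], t[mask], y[mask], h)
        if np.isfinite(pred):
            errs.append((y[i] - pred)**2)
    return np.mean(errs)

rng = np.random.default_rng(2)
t = rng.standard_normal(100)                                     # synthetic index values
y = -2.0 * (t - 1.0)**2 + 1.0 + 0.2 * rng.standard_normal(100)   # synthetic responses
grid = np.linspace(0.15, 1.0, 18)
cv = [loo_cv(t, y, h) for h in grid]
print("h minimizing CV(h):", grid[int(np.argmin(cv))])
\end{verbatim}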
The simulation result is shown in Figure \ref{CVh} with $n=100$ and $\sigma_\mu=0.4$ for one run, and other cases are similar. Figure \ref{CVh} shows the relationship of ${\rm{CV} }(h)$ versus $h$ with $h$ ranging from [0.1, 1]. From Figure \ref{CVh}, we can see that the ${\rm{CV}}(h)$ function is convex, and reaches the minimum value when $h$ is around 0.35. Table \ref{htable} summarizes the biases and standard deviations (SD) of the parameter $\beta$ obtained by the SIMEX and naive estimators with the two different bandwidth selections. From Table \ref{htable}, the results of the SIMEX and naive estimators made by different bandwidths have little difference. Hence, to reduce the calculation time, we use the ``rule of thumb" to select the bandwidths in the real data analysis. \begin{figure}[htbp!] \centering \includegraphics[width=8cm,height=8cm]{cvh.eps} \caption{{ \it Plot of the ${\rm{CV}}(h)$ versus the bandwidth $h$ with $n=100$ and $\sigma_\mu=0.4$. }} \label{CVh} \end{figure} \begin{table}[h] \caption{\label{htable} {\rm The biases and standard deviations (SD) of the parameters $\beta_1$ and $\beta_2$ obtained by the SIMEX and naive estimators. }} \centering \tabcolsep 0.05cm \begin{tabular}{ cccccccc } \hline &&&\multicolumn{2}{c}{SIMEX}&& \multicolumn{2}{c}{Naive} \\ \cline{4-5}\cline{7-8} ~~~~ &~~~~&~~~~&$\beta_1$&$\beta_2$&&$\beta_1$&$\beta_2$~~ \\\cline{4-5}\cline{7-8} ~~~~$n$~~~~&~$h$~ &~$\sigma_u$~&Bias(SD)&Bias(SD)&&Bias(SD)&Bias(SD)~\\ \hline ~~~50~~~&&0.2&$-0.0084(0.0520)$&$0.0078(0.0377)$&&$-0.0177(0.0291)$&$0.0146(0.0203)$\\ &$h_{\rm RT}$&0.4&$-0.0405(0.0875)$&$0.0171(0.0638)$&&$-0.0764(0.0537)$&$0.0546(0.0388)$\\ &&0.6&$-0.0508(0.1253)$&$0.0342(0.0821)$&&$-0.1207(0.0680)$&$0.0700(0.0330)$\\ \cline{2-8} & &0.2&$-0.0094(0.0426)$&$0.0031(0.0389)$&&$-0.0182(0.0296)$&$0.0101(0.0206)$\\ &$h_{\rm CV}$&0.4&$-0.0398(0.0867)$&$0.0205(0.0702)$&&$-0.0795(0.0365)$&$0.0508(0.0262)$\\ & &0.6& $-0.0548(0.1254)$&$0.0300(0.0845)$&&$-0.1157(0.0710)$&$0.0707(0.0311)$\\ \hline 100&&0.2&$-0.0083(0.0384)$&$0.0074(0.0321)$&&$-0.0126(0.0203)$&$0.0084(0.0142)$\\ &$h_{\rm RT}$&0.4&$-0.0381(0.0581)$&$0.0158(0.0334)$&&$-0.0761(0.0397)$&$0.0434(0.0224)$\\ &&0.6&$-0.0394(0.0719)$&$0.0206(0.0567)$&&$-0.1154(0.0383)$&$0.0632(0.0210)$\\ \cline{2-8} & &0.2&$-0.0078(0.0456)$&$0.0076(0.0332)$&&$-0.0119(0.0244)$&$0.0115(0.0185)$\\ &$h_{\rm CV}$&0.4&$-0.0375(0.0521)$&$0.0165(0.0349)$&&$-0.0705(0.0365)$&$0.0454(0.0228)$\\ & &0.6&$-0.0391(0.0679)$&$0.0236(0.0449)$&&$-0.1048(0.0379)$&$0.0638(0.0209)$ \\ \hline 150&&0.2&$-0.0077(0.0203)$&$0.0050(0.0141)$&&$-0.0187(0.0136)$&$0.0127(0.0093)$\\ &$h_{\rm RT}$&0.4&$-0.0178(0.0283)$&$0.0117(0.0193)$&&$-0.0497(0.0279)$&$0.0324(0.0177)$\\ &&0.6&$-0.0279(0.0599)$&$0.0163(0.0394)$&&$-0.1088(0.0315)$&$0.0563(0.0171)$ \\ \cline{2-8} & &0.2&$-0.0079(0.0279)$&$0.0044(0.0115)$&&$-0.0181(0.0179)$&$0.0122(0.0119)$\\ &$h_{\rm CV}$&0.4&$-0.0201(0.0294)$&$0.0089(0.0206)$&&$-0.0411(0.0267)$&$0.0383(0.0196)$\\ & &0.6& $-0.0252(0.0583)$&$0.0205(0.0371)$&&$-0.0973(0.0357)$&$0.0599(0.0196)$\\ \hline \end{tabular}% \end{table} Next, we compare the naive estimators and the SIMEX estimators. From Table \ref{htable}, we can see that the SIMEX estimates of $\beta_1$ and $\beta_2$ have smaller biases than the naive estimates. However, the standard deviations based on the SIMEX estimates are larger than those based on the naive estimates. We can also see that the bias and SD decrease as $n$ increases and the estimators depend on the measurement error. 
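To make the comparison between the naive and SIMEX estimators concrete, the following schematic re-implements the simulation design above in a deliberately simplified form: with $p=2$ the unit-norm index is written as $\beta=(\cos\theta,\sin\theta)$ instead of using the delete-one-component reparameterization, a crude rule-of-thumb bandwidth is used, and $B$ and the $\lambda$ grid are reduced to keep the run short. All such choices are illustrative assumptions, not the settings or code used for the results reported here.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def local_linear(t_eval, t_obs, y, h):
    """Local linear estimate of E[Y|T=t] at each point of t_eval (Epanechnikov kernel)."""
    out = np.empty(len(t_eval))
    for j, t0 in enumerate(t_eval):
        u = (t_obs - t0) / h
        k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0) / h
        s1 = np.sum(k * (t_obs - t0)); s2 = np.sum(k * (t_obs - t0)**2)
        w = k * (s2 - (t_obs - t0) * s1)
        den = np.sum(w)
        out[j] = np.sum(w * y) / den if den > 0 else np.sum(k * y) / np.sum(k)
    return out

def index_fit(w, y, h, theta0):
    """Least-squares single-index fit over unit-norm beta (angle parameterization, p = 2)."""
    def obj(theta):
        beta = np.array([np.cos(theta[0]), np.sin(theta[0])])
        t = w @ beta
        return np.sum((y - local_linear(t, t, y, h))**2)
    theta_hat = minimize(obj, x0=[theta0], method="Nelder-Mead").x[0]
    beta_hat = np.array([np.cos(theta_hat), np.sin(theta_hat)])
    return beta_hat if beta_hat[0] >= 0 else -beta_hat   # sign convention on one component

# data generated from the simulation model of this subsection
n, sigma_u, B = 100, 0.4, 20
beta_true = np.array([np.sqrt(3) / 3, np.sqrt(6) / 3])
x = rng.standard_normal((n, 2))
y = -2.0 * (x @ beta_true - 1.0)**2 + 1.0 + 0.2 * rng.standard_normal(n)
w = x.copy(); w[:, 0] += sigma_u * rng.standard_normal(n)   # W = X + U, Sigma_u = diag(sigma_u^2, 0)

ols = np.linalg.lstsq(np.column_stack([np.ones(n), w]), y, rcond=None)[0][1:]
beta_init = ols / np.linalg.norm(ols)                       # initial value from a linear fit
theta0 = np.arctan2(beta_init[1], beta_init[0])
h = np.std(w @ beta_init) * n**(-0.2)                       # crude rule-of-thumb bandwidth

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
beta_lam = np.empty((len(lambdas), 2))
for m, lam in enumerate(lambdas):                           # simulation and estimation steps
    ests = []
    for _ in range(B):
        w_b = w.copy()
        w_b[:, 0] += np.sqrt(lam) * sigma_u * rng.standard_normal(n)  # (lambda Sigma_u)^{1/2} U_b
        ests.append(index_fit(w_b, y, h, theta0))
    beta_lam[m] = np.mean(ests, axis=0)

# extrapolation step: quadratic in lambda, evaluated at lambda = -1, then renormalized
beta_simex = np.array([np.polyval(np.polyfit(lambdas, beta_lam[:, j], 2), -1.0) for j in range(2)])
beta_simex /= np.linalg.norm(beta_simex)
print("naive:", beta_lam[0], " SIMEX:", beta_simex, " true:", beta_true)
\end{verbatim}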
The performance of the estimator of the link function $g(t)$ is assessed over the 500 replications. The averaged estimator is $\hat{g}(t)=\dfrac{1}{500}\sum\limits_{m=1}^{500}\hat{g}_m(t)$. To assess the estimator $\hat{g}(t)$, we use the root mean squared error (RMSE), given by $$\hbox{RMSE}=\left[n_{\rm grid}^{-1}\sum\limits_{k=1}^{n_{\rm grid}}\{\hat{g}(t_k)-g(t_k)\}^2\right]^{1/2},$$ where $n_{\rm grid}$ is the number of grid points, and $\{t_k, k=1,2,\ldots, n_{\rm grid}\}$ are equidistant grid points. In the simulation study, we take $n_{\rm grid}=15$. The estimated link function and the boxplot of the 500 RMSEs are given in Figure \ref{fighatg}. From Figure \ref{fighatg} (a), we see that the SIMEX estimated curve is closer to the true link function than the naive estimated curve. Figure \ref{fighatg} (b) shows that the RMSEs of the SIMEX and naive estimators of the link function are not large, but the RMSEs of the SIMEX estimator are slightly larger than those of the naive estimator. Note that the SD and RMSE based on the SIMEX estimators are larger than those based on the naive estimators for the parameter $\beta$ and the link function $g(\cdot)$, respectively. This can be intuitively illustrated with the linear model. Consider the linear model $Y=\beta_0+\beta_x x+\epsilon$, where $E(\epsilon)=0$ and ${\rm Var}(\epsilon)=\sigma_\epsilon^2$. If $x$ is replaced by $W+\sqrt{\lambda}\sigma_e e_b$, where $e_b\sim N(0,1)$ and $W=x+e$ with $e$ having mean 0 and variance $\sigma_e^2$, then $\hat{\beta}_{x}(b,\lambda)$ has the asymptotic variance $\{\sigma_\epsilon^2/[\sigma_x^2+(1+\lambda)\sigma_e^2]\}$. If $\lambda=-1$, then ${\beta}_{x}(b,-1)$ is identical to the true parameter, with the asymptotic variance $\sigma_\epsilon^2/\sigma_x^2$. If $\lambda=0$, ${\beta}_{x}(b,0)$ is just the naive estimator, with the asymptotic variance $\sigma_\epsilon^2/(\sigma_x^2+\sigma_e^2)$. Hence, it can be seen easily that the SD or RMSE of the naive estimators is smaller than that of the SIMEX estimators.
\begin{figure}[htbp!] \centering \includegraphics[width=6cm,height=6cm]{hatg.eps} \includegraphics[width=6cm,height=6cm]{boxplot.eps} \caption{{ \it (a) The true curve (solid curve), the naive estimated curve (dashed curve) and the SIMEX estimated curve (dotted-dashed curve) for the link function $g(t)$ when $n=100$ and $\sigma_u=0.4$. (b) The boxplots of the 500 RMSE values for the estimate of $g(t)$. }} \label{fighatg} \end{figure}
\subsection{Real data analysis} We now analyze a data set from the Framingham Heart Study to illustrate the proposed method. The data set contains 5 variables measured on 1615 males and has been used by many authors to illustrate semiparametric partially linear models (see \citeasnoun{LH1999}, \citeasnoun{WBC2011}). We are interested in whether age and serum cholesterol have an effect on blood pressure. We use the proposed model to analyze the Framingham data and to compare the SIMEX and naive estimators. We use the Epanechnikov kernel and the bandwidths $h=0.0589$ and $h_1=h_2=0.2309$. Let $Y$ be the average blood pressure in a fixed two-year period, and let $W_1$ and $W_2$ be the standardized variables for the logarithm of the serum cholesterol level ($\log(SC)$) and age, respectively. Similar to \citeasnoun{LH1999}, $W_1$ is subject to the measurement error $U$, and $\sigma_u^2$ is estimated to be 0.2632 from the two replicate measurements; a moment-based sketch of this type of estimate is given below. Figure \ref{figSC} shows the duplicate serum cholesterol level measurements from the 1615 males.
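One common way to use such replication data is the moment estimator sketched below: if $W^{(1)}=X+U_1$ and $W^{(2)}=X+U_2$ with independent errors of variance $\sigma_u^2$, then ${\rm Var}(W^{(1)}-W^{(2)})=2\sigma_u^2$. The paper cites replication experiments in the sense of \citeasnoun{CRSC2006} without spelling out the formula, so this particular construction, and the synthetic numbers in it, are assumptions for illustration only.
\begin{verbatim}
import numpy as np

# Moment-based estimate of sigma_u^2 from duplicate measurements (illustration only;
# the value 0.2632 in the text comes from the actual Framingham replicates).
rng = np.random.default_rng(1)
x  = rng.normal(5.35, 0.20, size=1615)        # stand-in for log serum cholesterol
w1 = x + rng.normal(0.0, 0.5, size=x.size)    # first replicate,  W1 = X + U1
w2 = x + rng.normal(0.0, 0.5, size=x.size)    # second replicate, W2 = X + U2
sigma_u2_hat = np.mean((w1 - w2)**2) / 2.0    # E[(W1 - W2)^2] = 2 * sigma_u^2
print(sigma_u2_hat)                           # close to the true 0.25 of this toy example
\end{verbatim}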
The estimators of $\beta$ and $g(\cdot)$ based on the SIMEX and naive methods are reported in Table \ref{hp}, Figure \ref{figpr} and Figure \ref{fignpr}.
\begin{figure}[htbp] \centering \includegraphics[width=6cm,height=6cm]{SCL.eps} \caption{{ \it Duplicate serum cholesterol level measurements from 1615 males in the Framingham Heart Study. }} \label{figSC} \end{figure}
\begin{figure}[htbp] \centering \includegraphics[width=6cm,height=6cm]{SClamda.eps} \includegraphics[width=6cm,height=6cm]{agelamda.eps} \caption{{ \it The extrapolated point estimators for the Framingham data. The simulated estimates $\{\hat{\beta}(\lambda),\lambda\}$ are plotted (dots), and the fitted quadratic function (solid lines) is extrapolated to $\lambda=-1$. The extrapolation results are the SIMEX estimates (squares). }} \label{figpr} \end{figure}
\begin{table}[h] \caption{\label{hp} {\rm The estimates of the parameters obtained by the SIMEX and naive methods for the Framingham data. }} \vskip 8pt \centering \tabcolsep 1.40cm \begin{tabular}{ ccc } \hline Method &$\log(SC)$& Age\\ \hline ~~SIMEX~ &$0.5237$&$0.8502$\\ ~~Naive~ &$0.4194$&$0.9099$ \\\hline \end{tabular}% \end{table}
\begin{figure}[htbp] \centering \includegraphics[width=9cm,height=6cm]{hatgfd.eps} \caption{{ \it The link function estimators for the Framingham data: the naive estimated curve (solid curve) and the SIMEX estimated curve (dashed curve). }} \label{fignpr} \end{figure}
From Table \ref{hp}, we can see that the SIMEX estimate of the index coefficient of $\log(SC)$ is larger than the naive estimate, while the SIMEX estimate of the coefficient of Age is smaller. The results also show that serum cholesterol and age are statistically significant. Figure \ref{figpr} shows the trace of the extrapolation step of the SIMEX algorithm. The estimates of the two index coefficients for the different $\lambda$ values are plotted. The SIMEX estimates of the index coefficients correspond to $-1$ on the horizontal axis, while the naive estimates correspond to 0. Figure \ref{fignpr} shows the estimates of $g(\cdot)$ obtained by the SIMEX and naive methods. The patterns of the two curves are similar. Table \ref{hp} and Figure \ref{fignpr} show that age and serum cholesterol have a positive association with blood pressure. As expected, when the measurement error is taken into account, we find a somewhat stronger positive association between serum cholesterol and blood pressure. \citeasnoun{LH1999} also analyzed the relationship among blood pressure, age, and the logarithm of the serum cholesterol level with a partially linear errors-in-variables model, in which the logarithm of the serum cholesterol level entered the parametric part and age was the scalar covariate of the unknown function. When they accounted for the measurement error, the estimate of the parameter was larger than the one obtained by ignoring the measurement error, implying a stronger positive correlation between blood pressure and serum cholesterol once the measurement error is considered. The estimator of the unknown function showed that age was positively associated with blood pressure. Our findings basically agree with those discovered in \citeasnoun{LH1999}.
\section{Conclusion} We propose the SIMEX estimation of the index parameter and the unknown link function for single-index models with covariate measurement error.
The asymptotic normality of the estimator of the index parameter and the asymptotic bias and variance of the estimator of the unknown link function are derived under some regularity conditions. The proposed index parameter estimator is root-$n$ consistent, the same rate as is attained in the absence of measurement error, although the asymptotic covariance matrix has a more complicated form. The asymptotic variance of the estimator of the unknown link function is of order $(nh_2)^{-1}$. Our simulation studies indicate that the proposed method works well in practice. The proposed method can be extended to some other models, including partially linear single-index models with measurement error in the nonparametric component and generalized single-index models with covariate measurement error. The method can also be extended to single-index measurement error models with clustered data by assuming working independence in the estimating equations. Future work is needed to investigate how to take the within-cluster correlation into account so as to improve the efficiency of the index parameter estimator in that setting.
\vskip 24pt \section*{Appendix} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} The following notation will be used in the proofs of the lemmas and theorems. Let $\beta_0$ be the true value of $\beta$ and $\mathcal{B}_n=\{\beta: \|\beta\|=1, \|\beta-\beta_0\|\leq c_1 n^{-1/2}\}$ for some positive constant $c_1$. Let $f_\lambda(\cdot)$ be the density function of $\beta^TW_b(\lambda)$. Note that if $\lambda=0$, $f_0(\cdot)$ is the density function of $\beta^TW$.
\begin{lemm}\label{CR} Let $(\zeta_1, \eta_1),\ldots, (\zeta_n, \eta_n)$ be i.i.d. random vectors, where the $\eta_i$'s are scalar random variables. Assume further that $E|\eta_1|^s<\infty$ and $\sup_x\int|y|^sf(x,y)dy<\infty$, where $f(\cdot,\cdot)$ denotes the joint density of $(\zeta_1,\eta_1)$. Let $K(\cdot)$ be a bounded positive function with a bounded support, satisfying a Lipschitz condition. Then $$\sup\limits_x\Big|{1\over n}\sum\limits_{i=1}^n\left\{K_h(\zeta_i-x)\eta_i-E[K_h(\zeta_i-x)\eta_i]\right\}\Big|=O_p\left(\left\{{\log(1/h)\over nh}\right\}^{1/2}\right),$$ provided that $n^{2\epsilon-1}h\rightarrow\infty$ for some $\epsilon<1-s^{-1}$. \end{lemm} \textbf{Proof:} This follows immediately from the result of \citeasnoun{MS1982}.
\begin{lemm}\label{CGR} Suppose that conditions (C1)--(C4) hold.
Then $$\sup\limits_{t\in\mathcal{T},\beta\in \mathcal{B}_n}\big|\hat{g}(\beta,\lambda;t)-g(\lambda;t)\big|=O_p\big((nh/\log n)^{-1/2}+h^2\big)$$ and $$\sup\limits_{t\in\mathcal{T},\beta\in \mathcal{B}_n}\big|\hat{g}'(\beta,\lambda;t)-g'(\lambda;t)\big|=O_p\big((nh^3/\log n)^{-1/2}+h\big).$$ \end{lemm}
\textbf{Proof:} By the theory of least squares, we have \begin{eqnarray}\label{HAG} (\hat{g}(\beta,\lambda;t),h\hat{g}'(\beta,\lambda;t))^T=S_n^{-1}(\beta,\lambda;t)\xi_n(\beta,\lambda;t), \end{eqnarray} where $$S_n(\beta,\lambda;t)=\left(\begin{array}{cc}S_{n,0}(\beta,\lambda;t)&h^{-1}S_{n,1}(\beta,\lambda;t)\\ h^{-1}S_{n,1}(\beta,\lambda;t)&h^{-2}S_{n,2}(\beta,\lambda;t) \end{array}\right)$$ and $$\xi_n(\beta,\lambda;t)=(\xi_{n,0}(\beta,\lambda;t),\xi_{n,1}(\beta,\lambda;t))^T$$ with $$\xi_{n,l}(\beta,\lambda;t)={1\over n}\sum\limits_{i=1}^nY_i\left({\beta^TW_{ib}(\lambda)-t}\over h\right)^lK_h(\beta^TW_{ib}(\lambda)-t)$$ for $l=0,1,2.$ A simple calculation yields, for $l=0,1,2,3,$ \begin{eqnarray}\label{ESN}E[h^{-1}S_{n,l}(\beta,\lambda;t)]=f_\lambda(t)\mu_l+O(h).\end{eqnarray} By Lemma \ref{CR}, we have $$h^{-1}S_{n,l}(\beta,\lambda;t)-E[h^{-1}S_{n,l}(\beta,\lambda;t)]=O_p\left(\left\{{\log(1/h)\over nh}\right\}^{1/2}\right),$$ which, combined with (\ref{ESN}), proves that, for $t\in\mathcal{T}$ and $\beta\in \mathcal{B}_n$, \begin{eqnarray}\label{SN} h^{-1}S_{n,l}(\beta,\lambda;t)=f_\lambda(t)\mu_l+O_p\left(\left\{{\log(1/h)\over nh}\right\}^{1/2}+h\right), ~~~l=0,1,2,3. \end{eqnarray} It follows immediately that $$S_n(\beta,\lambda;t)=S(\lambda;t)+O_p\left(\left\{{\log(1/h)\over nh}\right\}^{1/2}+h\right),$$ where $S(\lambda;t)=f_\lambda(t)\otimes {\rm diag}(1,\mu_2)$, and $\otimes$ is the Kronecker product. Denote $$\xi_{n,l}^*(\beta,\lambda;t)={1\over n}\sum\limits_{i=1}^n[Y_i-g(\lambda;\beta^TW_{ib}(\lambda))]\left({\beta^TW_{ib}(\lambda)-t}\over h\right)^lK_h(\beta^TW_{ib}(\lambda)-t)$$ and $$\xi_n^*(\beta,\lambda;t)=\Big(\xi_{n,0}^*(\beta,\lambda;t), \xi_{n,1}^*(\beta,\lambda;t)\Big)^T.$$ Note that \begin{eqnarray}\label{EXi}E(\xi_n^*(\beta,\lambda;t))=O(n^{-1/2}).\end{eqnarray} By Lemma \ref{CR} and (\ref{EXi}), it can be shown that \begin{eqnarray}\label{XIN}\xi_n^*(\beta,\lambda;t)=O_p\left(\left\{{\log(1/h)\over nh}\right\}^{1/2}+n^{-1/2}\right). \end{eqnarray} By applying a Taylor expansion of $g(\lambda;\beta^TW_{ib}(\lambda))$ around $t$, we can prove that \begin{eqnarray*}\xi_{n,0}(\beta,\lambda;t)-\xi_{n,0}^*(\beta,\lambda;t)&=&S_{n,0}(\beta,\lambda;t)g(\lambda;t)+S_{n,1}(\beta,\lambda;t)hg'(\lambda;t)\\ &&~+{1\over 2}h^2S_{n,2}(\beta,\lambda;t)g''(\lambda;t)+o_p\{h^2+(nh)^{-1/2}\} \end{eqnarray*} and \begin{eqnarray*}\xi_{n,1}(\beta,\lambda;t)-\xi_{n,1}^*(\beta,\lambda;t)&=&S_{n,1}(\beta,\lambda;t)g(\lambda;t)+S_{n,2}(\beta,\lambda;t)hg'(\lambda;t)\\ &&~+{1\over 2}h^2S_{n,3}(\beta,\lambda;t)g''(\lambda;t)+o_p\{h^2+(nh)^{-1/2}\} \end{eqnarray*} hold uniformly in $t\in\mathcal{T}$ and $\beta\in \mathcal{B}_n$. Hence \begin{eqnarray*}\xi_{n}(\beta,\lambda;t)-\xi_{n}^*(\beta,\lambda;t)&=&S_{n}(\beta,\lambda;t) \left(\begin{array}{c}g(\lambda;t)\\ h g'(\lambda;t) \end{array}\right)+{1\over2}h^2\left(\begin{array}{c}S_{n,2}(\beta,\lambda;t)g''(\lambda;t)\\ S_{n,3}(\beta,\lambda;t)g''(\lambda;t) \end{array}\right)\\ &&~+o_p\{h^2+(nh)^{-1/2}\}.
\end{eqnarray*} Combining this with (\ref{HAG})--(\ref{SN}) yields \begin{eqnarray}\label{gg1} \left(\begin{array}{c}\hat{g}(\beta,\lambda;t)-g(\lambda;t)\\ h \{\hat{g}'(\beta,\lambda;t)-g'(\lambda;t)\} \end{array}\right)&=&S^{-1}(\lambda;t)\xi_n^*(\beta,\lambda;t)\\ &&+{1\over2}h^2\left(\begin{array}{c}\mu_{2}g''(\lambda;t)\\ {\mu_{3}\over \mu_2}g''(\lambda;t) \end{array}\right)+o_p(h^2+(nh)^{-1/2}).\nonumber \end{eqnarray} This together with (\ref{XIN}) proves Lemma \ref{CGR}. \textbf{Proof of Theorem \ref{AN}:} Assume $\beta(\lambda)$ is the true value based on the model $E[Y|\beta^T(\lambda) W_b(\lambda)]=g(\beta^T(\lambda) W_b(\lambda))$. Using Lemma \ref{CGR} and an argument similar to that of Theorem 1 of \citeasnoun{CXZ2010}, we have \begin{eqnarray*} \sqrt{n}\Big(\hat{\beta}_b(\lambda)-\beta(\lambda)\Big)=\sqrt{n}J_{\beta^{(r)}(\lambda)}A_n^{-1}(\beta(\lambda), \lambda)B_n(\beta(\lambda),\lambda)+o_p(1), \end{eqnarray*} where \begin{eqnarray*} A_n(\beta(\lambda),\lambda)={1\over n}\sum\limits_{i=1}^n\Big[g'\Big(\lambda; \beta^T(\lambda) W_{ib}(\lambda)\Big)\Big]^2J_{\beta^{(r)}(\lambda)}^T{\widetilde{W}}_{ib}(\lambda){\widetilde{W}}^T_{ib}(\lambda)J_{\beta^{(r)}(\lambda)} \end{eqnarray*} and \begin{eqnarray*} B_n(\beta(\lambda),\lambda)={1\over n}\sum\limits_{i=1}^n\epsilon_{ib}(\lambda)g'\Big(\lambda; \beta^T(\lambda) W_{ib}(\lambda)\Big)J_{\beta^{(r)}(\lambda)}^T{\widetilde{W}}_{ib}(\lambda) \end{eqnarray*} with $\epsilon_{ib}(\lambda)=Y_i-g\Big(\lambda; \beta^T(\lambda) W_{ib}(\lambda)\Big)$. The extrapolation step then yields \begin{eqnarray}\label{betala} \sqrt{n}\Big(\hat{\beta}(\lambda)-\beta(\lambda)\Big)=J_{\beta^{(r)}(\lambda)}\mathcal{A}^{-1}(\beta(\lambda),\lambda)n^{-{1\over 2}}\sum\limits_{i=1}^n\eta_{iB}(\beta(\lambda),\lambda)+o_p(1), \end{eqnarray} where $\eta_{iB}(\beta(\lambda),\lambda)=\displaystyle {1\over B}\sum\limits_{b=1}^B\epsilon_{ib}(\lambda)g'\Big(\lambda; \beta^T(\lambda) W_{ib}(\lambda)\Big)J_{\beta^{(r)}(\lambda)}^T{\widetilde{W}}_{ib}(\lambda).$ Then, using (\ref{betala}), the limiting distribution of $\sqrt{n}\Big(\hat{\beta}(\Lambda)-\beta(\Lambda)\Big)$ is a multivariate normal distribution with mean zero and covariance $\Sigma$. $\hat{\mathbf{\Gamma}}$ in the extrapolation step is obtained by minimizing $\{{\rm Res}(\mathbf{\Gamma})\}\{{\rm Res}(\mathbf{\Gamma})\}^T$. The estimating equation for $\hat{\mathbf{\Gamma}}$ is $0=s(\mathbf{\Gamma}){\rm Res}(\mathbf{\Gamma})$, where $s^T(\mathbf{\Gamma})=\{\partial/\partial (\mathbf{\Gamma})^T\}{\rm Res}(\mathbf{\Gamma})$. Then, we have $$\sqrt{n}(\hat{\mathbf{\Gamma}}-\mathbf{\Gamma})\stackrel{\mathcal {L}}{\longrightarrow}N\{0,\Sigma(\mathbf{\Gamma})\}.$$ Because $\hat{\beta}_{\rm SIMEX}=\mathcal{G}(-1,\hat{\mathbf{\Gamma}})$, the SIMEX estimator is asymptotically normal with asymptotic variance $$ \mathcal{G}_{\mathbf{\Gamma}}(-1,\mathbf{\Gamma})\Sigma(\mathbf{\Gamma})\{\mathcal{G}_{\mathbf{\Gamma}}(-1,\mathbf{\Gamma})\}^T. $$ \textbf{Proof of Theorem \ref{ANG}:} Note that $\|\hat{\beta}_{\rm SIMEX}-\beta\|=O_p(n^{-1/2})$. Similarly to the proof of (\ref{gg1}), we have \begin{eqnarray}\nonumber &&\hat{g}_b(\lambda; t_0)-g(\lambda;t_0)-{1\over 2}h_2^2\mu_2g''(\lambda; t_0)\\\label{hatgb} &=&[f_\lambda(t_0)]^{-1}\displaystyle {1\over n}\sum\limits_{i=1}^n\left\{[Y_i-g(\lambda;\beta^TW_{ib}(\lambda))]K_{h_2}(\beta^TW_{ib}(\lambda)-t_0)\right\}\\\nonumber &&+o_p\{h_2^2+(nh_2)^{-1/2}\}.
\end{eqnarray} Using (\ref{hatgb}) and the decomposition of \citeasnoun{CLKS1996}, since $B$ is fixed and $\hat{g}(\lambda; t_0) =B^{-1}\sum\limits_{b=1}^B \hat{g}_b(\lambda; t_0)$, we have \begin{eqnarray}\nonumber &&\hat{g}(\lambda; t_0)-g(\lambda;t_0)-{1\over 2}h_2^2\mu_2g''(\lambda; t_0)\\\label{hatgll} &=&[f_\lambda(t_0)]^{-1}\displaystyle {1\over n}\sum\limits_{i=1}^n\left\{B^{-1}\sum\limits_{b=1}^B[Y_i-g(\lambda;\beta^TW_{ib}(\lambda))]K_{h_2}(\beta^TW_{ib}(\lambda)-t_0)\right\}\\\nonumber &&+o_p\{h_2^2+(nh_2)^{-1/2}\}. \end{eqnarray} If $\lambda=0$, (\ref{hatgll}) becomes \begin{eqnarray*} &&\hat{g}(0; t_0)-g(0;t_0)-{1\over 2}h_2^2\mu_2g''(0; t_0)\\ &=&[f_0(t_0)]^{-1}\displaystyle {1\over n}\sum\limits_{i=1}^n[Y_i-g(0;\beta^TW_i)]K_{h_2}(\beta^TW_{i}-t_0) +o_p\{h_2^2+(nh_2)^{-1/2}\}, \end{eqnarray*} which has mean zero and the following asymptotic variance \begin{eqnarray}\label{vairance0} [nh_2f_0(t_0)]^{-1}{\rm var}(Y|\beta^TW=t_0)\nu_2. \end{eqnarray} For $\lambda>0$, using an argument similar to (A8) in \citeasnoun{CMR1999}, we have $${\rm var}(\hat{g}(\lambda; t_0))=O\{(nh_2B)^{-1}\}+O(n^{-1}),$$ while for $\lambda=0$, $${\rm var}(\hat{g}(\lambda; t_0))=O\{(nh_2)^{-1}\}.$$ Then, for $B$ sufficiently large, the variability of $\hat{g}(\lambda; \cdot)$ is negligible for $\lambda>0$ compared to $\lambda=0$. Hence, in what follows, we will ignore this variability by treating $B$ as if it were infinite. We obtain $\hat{\mathbb{A}}$ by solving the following equation \begin{eqnarray}\label{GEQ}0= \sum\limits_{\lambda\in\Lambda}\{\hat{g}(\lambda;t_0)-\mathcal{G}(\lambda,\mathbb{A})\}\gamma(\lambda,\mathbb{A}).\end{eqnarray} Applying a Taylor expansion to (\ref{GEQ}) around $\mathbb{A}$, we obtain $$0=\sum\limits_{\lambda\in\Lambda}\{\hat{g}(\lambda;t_0)-\mathcal{G}(\lambda,\mathbb{A})\}\gamma(\lambda,\mathbb{A})- \sum\limits_{\lambda\in\Lambda}\gamma(\lambda,\mathbb{A})\gamma^T(\lambda,\mathbb{A})(\hat{\mathbb{A}}-\mathbb{A}). $$ Hence, \begin{eqnarray}\label{hataa} \hat{\mathbb{A}}-\mathbb{A}=\left\{\sum\limits_{\lambda\in\Lambda}\gamma(\lambda,\mathbb{A})\gamma^T(\lambda,\mathbb{A})\right\}^{-1} \sum\limits_{\lambda\in\Lambda}\{\hat{g}(\lambda;t_0)-\mathcal{G}(\lambda,\mathbb{A})\}\gamma(\lambda,\mathbb{A}). \end{eqnarray} The left side of (\ref{hataa}) has approximate mean \begin{eqnarray*} \left\{\sum\limits_{\lambda\in\Lambda}\gamma(\lambda,\mathbb{A})\gamma^T(\lambda,\mathbb{A})\right\}^{-1} \sum\limits_{\lambda\in\Lambda}{1\over 2}h_2^2 \mu_2g''(\lambda;t_0)\gamma(\lambda,\mathbb{A}), \end{eqnarray*} and its approximate variance is given by $$[nh_2f_0(t_0)]^{-1}\nu_2 {\rm var}(Y|\beta^TW=t_0)\left\{\sum\limits_{\lambda\in\Lambda}\gamma(\lambda,\mathbb{A})\gamma^T(\lambda,\mathbb{A})\right\}^{-1}D \left\{\sum\limits_{\lambda\in\Lambda}\gamma(\lambda,\mathbb{A})\gamma^T(\lambda,\mathbb{A})\right\}^{-1}.$$ Because $\hat{g}_{\rm SIMEX}(t_0)=\mathcal{G}(-1,\hat{\mathbb{A}})$, its asymptotic bias is $$C(\Lambda,\mathbb{A})\sum\limits_{\lambda\in\Lambda}{1\over 2}h_2^2 \mu_2g''(\lambda;t_0)\gamma(\lambda,\mathbb{A}),$$ and its asymptotic variance is $$[nh_2f_0(t_0)]^{-1}\nu_2 {\rm var}(Y|\beta^TW=t_0)C(\Lambda,\mathbb{A})DC^T(\Lambda,\mathbb{A}).$$ This completes the proof. \bibliographystyle{dcu}
\section*{Abstract} While urban systems demonstrate high spatial heterogeneity, many urban planning, economic and political decisions heavily rely on a deep understanding of local neighborhood contexts. We show that the structure of 311 Service Requests enables one possible way of building a unique signature of the local urban context, thus being able to serve as a low-cost decision support tool for urban stakeholders. Considering examples of New York City, Boston and Chicago, we demonstrate how 311 Service Requests recorded and categorized by type in each neighborhood can be utilized to generate a meaningful classification of locations across the city, based on distinctive socioeconomic profiles. Moreover, the 311-based classification of urban neighborhoods can present sufficient information to model various socioeconomic features. Finally, we show that these characteristics are capable of predicting future trends in comparative local real estate prices. We demonstrate 311 Service Requests data can be used to monitor and predict socioeconomic performance of urban neighborhoods, allowing urban stakeholders to quantify the impacts of their interventions. \section*{Introduction} Cities can be seen as a complex system composed of multiple layers of activity and interactions across various urban domains; therefore, discovering a parsimonious description of urban function is quite difficult\cite{batty2008size,bettencourt2010urbscaling, bettencourt2013origins, arcaute2013citybound}. However, urban planners, policy makers and other types of urban stakeholders, including businesses and investors, could benefit from an intuitive proxy of neighborhood conditions across the city and over time\cite{const2015, maimon_datamining, townsend2013}. At the same time, such simple indicators could provide not only valuable information to support urban decision-making, but also to accelerate the scalability of successful approaches and practices across different neighborhood and cities, as urban scaling patterns have become an increasing topic of interest \cite{powell2007food, bettencourt2007growth, albeverio2008, sobolevsky2014mining, sobolevsky2015scaling}. As the volume and heterogeneity of urban data have increased, machine learning has become a viable tool for enhancing our knowledge of urban space and in developing predictive analytics to inform city management and policy\cite{nelder,bb2003, macqueen1967kmeans, rousseeuw1987}. The non-trivial challenge is to identify a consistent, quantifiable metric that provides comprehensive insights across multiple layers of urban operations and planning \cite{allwinkle2011} and to locate readily-available data to support its implementation across a range of cities. Fortunately, urban data collected by various agencies and companies provide an opportunity to respond to this challenge\cite{batty2012, bettencourt2014bigdata}. 
In the age of ubiquitous digital media, numerous aspects of human activity are being analyzed by means of their digital footprints, such as mobile call records \cite{ girardin2008digital, gonzalez2008uih, quercia2010rse, sobolevsky2013delineating, amini2014impact, reades2007, const2016}, vehicle GPS traces \cite{santi2014}, bank card transactions \cite{sobolevsky2016prism, shen2014, scholnick2013}, payment patterns\cite{boeschoten1998, bounie2006, hayhoe2000}, smart card usage \cite{bagchi2005, lathia2012, chan1999frauddetection, rysman2007}, or social media activity \cite{szell2013, frank2013happiness, hawelka2014, paldino2015, lenormand_bcards_2014}. Such data sets have been successfully applied to investigate urban \cite{louail2014citystruct} and regional structure \cite{ratti2010redrawing, sobolevsky2013delineating}, land use \cite{grauwin2014towards, pei2014new}, financial activities\cite{sobolevsky2014money}, mobility \cite{noulas2012tale, kung2014exploring}, or well-being \cite{lathia2012, sobolevsky2015predicting}. However, one of the major limitations to widespread adoption of such analytics in the practice of urban management and planning is the extreme heterogeneity of the data coverage: different types of data are available for different areas and periods of time, which undermine efforts to develop universal and reliable analytic approaches. Privacy considerations are another significant issue that create additional practical and legal obstacles, restricting data access and preventing their use out of a concern for confidentiality\cite{lane2014, christin2011, belanger2011, krontiris2010}. Increasingly, cities are introducing systems to collect service requests and complaints from their citizens. These data, commonly referred to as 311 requests, reflect a wide range of concerns raised by city residents and visitors, offering a unique indicator of local urban function, condition, and service level. In many cities, 311 requests are publicly available through city-managed open data platforms as part of a broader movement in local government to increase transparency and good governance \cite{walker2013}. Although potentially biased by the self-reported nature of the requests and complaints, these data provide a comparable measure of perceived local quality of life across space and time. In this article, we develop a method for classifying urban locations based on the categorical and temporal structure of 311 Service Requests for a given neighborhood, exploring whether these spatio-temporal patterns can reveal characteristic signatures of the area. For New York, Boston, and Chicago, we present applications of this new urban classifier for predicting socioeconomic and demographic characteristics of a neighborhood and estimating the economic performance and well-being of a defined spatial agglomeration. The paper begins with a discussion of the data and methodology, followed by specific use cases relating to demographics and real estate values, and concluding with opportunities for future research. \section{Materials and Resources} \subsection{The 311 data} { 311 service request and complaint data are being collected across more than 30 cities in the United States, including New York, Boston and Chicago. Through the 311 system, local government agencies offer non-emergency services to residents, visitors and businesses and respond to reported service disruptions, unsafe conditions, or quality-of-life disturbances. 
These 311 service requests and complaints cover a wide range of concerns, including noise, building heat outages, and rodent sightings, among others. Thus, these data serve as an extremely useful resource in understanding the delivery of critical city services and neighborhood conditions. We explore the 311 datasets from New York, Chicago, and Boston as major urban centers where 311 systems are in place and commonly used. We consider a time frame between 2012 and 2015 during which the data are available for all three cities selected. In Table 1, we provide descriptive statistics of the data. Note that the number of total requests has been increasing from 2012 to 2015 in each city. Conceivably, the number of requests in New York City (which now approaches 2 million per year) is higher than the others because of its population size. However, Boston has a substantially smaller number of requests compared to the similar-sized city of Chicago, which shows the discrepancies in the use of the system across cities. Unfortunately, each city uses a different complaint/request coding convention, thus there is little consistency in the classification of particular complaint types. This fact raises certain difficulties for analysis between cities, a common challenge in comparative urban analytics given the lack of data standardization. For example, in 2015, New York City's 311 data are categorized into 182 types, whereas Chicago has only 12. Even within a particular city, request categories are subject to change over time, especially in NYC where only approximately 70\% of the entire service request activity belongs to common categories present in all four years. Additional adjustments are needed to re-classify complaint types into standardized categories across the different cities and over the time period of the analysis. } The original data set provided by 311 Services contains one record for each customer's call. For most cities, these records include information such as the service request type, the open/close time and date, and the location (longitude and latitude). Therefore, for any given time period and area (census tract or zip code area), we can aggregate the 311 service requests and group them by type.
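To make this aggregation step concrete, the following is a minimal sketch (not the code used in this study) of how raw 311 records could be grouped into per-tract, per-category counts; the file name and column names are hypothetical placeholders for whichever city's export is being processed.
\begin{verbatim}
# Minimal sketch: aggregate raw 311 records into a tract-by-category
# count matrix. File and column names are hypothetical.
import pandas as pd

requests = pd.read_csv("requests_2015.csv",
                       usecols=["tract_id", "complaint_type", "created_date"],
                       parse_dates=["created_date"])

# One row per census tract, one column per request category,
# cell = number of requests of that category opened in the tract.
counts = (requests
          .groupby(["tract_id", "complaint_type"])
          .size()
          .unstack(fill_value=0))

print(counts.shape)  # (number of tracts, number of request categories)
\end{verbatim}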
\begin{table}[ht]
\centering
\begin{tabular}{|l|c|r|c|c|}
\hline
City & Year & Total Requests & Request Categories & Share of common categories' activity\\
\hline
New York City & 2012 & 1414392 & 165 & 0.69\\
 & 2013 & 1431729 & 162 & 0.69\\
 & 2014 & 1654913 & 179 & 0.73\\
 & 2015 & 1806560 & 182 & 0.73\\
\hline
Chicago & 2012 & 478532 & 13 & 0.85\\
 & 2013 & 507956 & 14 & 0.82\\
 & 2014 & 515258 & 14 & 0.82\\
 & 2015 & 568576 & 12 & 0.90\\
\hline
Boston & 2012 & 92855 & 155 & 1.00\\
 & 2013 & 112727 & 165 & 0.99\\
 & 2014 & 112785 & 183 & 0.96\\
 & 2015 & 161498 & 180 & 0.83\\
\hline
\end{tabular}
\caption{General properties of the 311 data for NYC, Chicago and Boston}
\end{table}
\subsection{Demographic and socio-economic data} As we are attempting to use 311 data as a proxy for the socioeconomic characteristics and real estate values of urban neighborhoods, ground-truth data are needed to train and validate our models. For socioeconomic and demographic features, we use data from the U.S. Census 2014 American Community Survey (ACS). For real estate values, we collect housing price data from the online real estate listing site Zillow. Both are described below. \subsubsection{2014 census data} The 2014 ACS contains survey data on a number of socioeconomic and demographic variables, at the spatial aggregation of the Census Block. For this analysis, we have selected common features representing important phenomena in population diversity, education, income and employment. Our selection covers population counts in the categories ``Non-Hispanic White'', ``African-American'', ``Asian'', ``High school degree'', ``College degree'' and ``Graduate degree'', the ``Uninsured ratio'', ``Unemployment ratio'' and ``Poverty ratio'', and the means of ``Income (all)'', ``Income of No Family'', ``Income of Families'' and ``Income of Households''.
One important consideration is the level of spatial aggregation for this analysis. Having considered zip code, census tract and census block, we decided to proceed with census tracts as providing the best trade-off between spatial granularity, in terms of having a sufficient number of sub-areas within each city, and having a statistically significant sample of 311 complaints for each areal unit. In Boston and Chicago, there are too few zip codes within each city to create a useful sample, and there is not a significant density of 311 complaints at the census block level (please refer to SI4 for details). In addition, given the survey methodology of the ACS data, census block data include non-trivial margins of error for each variable. \subsubsection{Zillow housing price data} One important indicator of local economic conditions is housing prices \cite{const2012}. We utilize Zillow housing price data that contain monthly average residential real estate sales prices by zip code. Although housing prices are a lagging indicator of neighborhood economic strength, since recorded sales occur as much as two to more than six months after a contract is signed, we use these values as one of the targets for our 311 predictions. Our spatial level of analysis will be the zip code, rather than census tracts, given the coverage area of the Zillow aggregate data. \subsubsection{Normalization method and some notation} In order to better compare various areas, the census data need to be normalized. Take income per capita and the population holding a bachelor's degree, for example. Firstly, these two features have different measurement units (dollars versus number of people). Secondly, raw counts are affected by an area's total population: a more populous area will tend to have more residents with a bachelor's degree simply because of its size. Therefore, the normalization process is important in order to compare different features and different areas with heterogeneous populations. For our analysis, we normalize our census tract data set in the following way. Let $p_i$ be the total population in census tract $i$, and let $v_i$ denote one feature recorded in the same census tract, for example, ``the total population holding a graduate degree in census tract $i$''. Next, we normalize it by defining $$y_i=\frac{v_i p_i}{\sum_{j\in{\Omega}}v_j p_j},$$ where $\Omega$ is the set of all census tracts in New York City. In Section 2, we use 311 complaint frequencies categorized by census tract to cluster locations and investigate the differences in the local socioeconomic features $y$. In Section 3, we use machine learning regression models to predict these features $y$ using normalized 311 data. \section{Classification based on 311 service categories} In order to get initial insights on the usage of 311 services across the considered cities, we define for each census tract a 311 service request signature: a vector of the relative frequencies of 311 requests of different types. Specifically, let the total number of service requests of each type $t$ within an area $a$ be $s(a,t)$ and let $s(a)=\sum_t s(a,t)$ be the total number of service requests in the area $a$. Then the vector $S(a)=(s(a,t)/s(a),\; t=1,\ldots,T)$, where $T$ is the total number of service request types, serves as a signature of the location's aggregated 311 service request behavior. The vector $S$ highlights the primary reasons for service requests or complaints in the specific area, as well as allowing for straightforward comparison across tracts and cities.
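As an illustration, the snippet below sketches how the signature vectors $S(a)$ could be computed from the tract-by-category count matrix \texttt{counts} of the previous snippet and fed into the $k$-means clustering applied in the next section; it is a simplified sketch under those assumptions rather than the exact pipeline used here.
\begin{verbatim}
# Minimal sketch: signature vectors S(a) = s(a,t)/s(a) and k-means
# clustering with many random restarts (scikit-learn keeps the run
# with the lowest within-cluster sum of squared distances).
from sklearn.cluster import KMeans

s_a = counts.sum(axis=1)               # s(a): total requests per tract
signatures = counts.div(s_a, axis=0)   # rows are the vectors S(a)
signatures = signatures[s_a > 0]       # drop tracts with no requests

km = KMeans(n_clusters=4, n_init=100, random_state=0)
labels = km.fit_predict(signatures.values)
\end{verbatim}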
Signatures $S(a)$ serve as unique characteristics of each location $a$, and we would expect similar spatio-temporal patterns to emerge in 311 service requests across a city or cities. Our hypothesis here is that these similarities also suggest similarities in the socioeconomic characteristics of the areas. In order to explore this further, we apply a $k$-means clustering approach to the set of multi-dimensional vectors $S(a)$. In order to ensure we obtain a near-optimal clustering, we run $k$-means 100 times, selecting the best solution in terms of the cumulative sum of squared distances from the centroids. One crucial step in this approach is to pick an appropriate number of clusters to consider. For that purpose we have evaluated the clustering model with both the Silhouette method and the Elbow method. While different methods give a slightly different optimal number of clusters for the cities in our sample, in most cases it is within a range of two to four clusters. Given the socioeconomic diversity across neighborhoods in the selected cities, we determine that a minimum of four clusters is an appropriate value. Readers can find more details in the SI. \begin{figure*}[h] \centering \includegraphics[width=13cm, height=13cm]{Fig0.pdf} \caption{\label{fig::category_classification} Classification of urban locations based on the categorical structure of the 311 requests.} \end{figure*} We consider NYC first. Figure 1 shows approximately 2,000 census tracts divided into four clusters based on our clustering results. Midtown Manhattan, downtown Brooklyn and several outliers such as JFK and LGA airports belong to cluster 1; Staten Island and eastern Brooklyn/Queens constitute cluster 2; Northern Manhattan, the Bronx, and central Brooklyn are included in cluster 3; and Southern Brooklyn, Flushing and some eastern parts of the Bronx comprise cluster 4. \begin{figure*}[h] \centering \includegraphics[width=14cm, height=7cm]{Fig2.eps} \caption{\label{fig::category_classification} Top 20 requests distribution among clusters.} \end{figure*} In order to evaluate how different each cluster is with respect to the nature of 311 service requests, we present the distribution of the top service requests over the four clusters (see Figure 2). We observe clear variation in this distribution. For example, requests within cluster 1 report noise concerns more often than the others, cluster 2 experiences more issues relating to residential heating, cluster 3 has the highest relative share of complaints about blocked driveways, while cluster 4 reports concerns about street conditions. Similarly, we repeat the same clustering process for Chicago and Boston; the clustering results for census tracts in those cities are shown in Figure 3. \begin{figure*}[h] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{Fig4.pdf} \caption{Chicago} \label{fig:sfig1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{Fig3.eps} \caption{Boston} \label{fig:sfig2} \end{subfigure} \caption{\label{fig::category_classification} Classification of urban locations based on the categorical structure of the 311 service requests for Chicago and Boston.} \label{fig:fig} \end{figure*} \section{Socioeconomic features among clusters} Given knowledge of the local spatial contexts for the analyzed cities, the clusters that emerge make certain intuitive sense.
However, in order to quantitatively address the hypothesis formulated in the previous section - that similarities in local 311 service request signatures also imply similarities in the socioeconomic profiles of those areas - here we summarize and analyze the socioeconomic characteristics for each of the discovered clusters. Recall that thus far the clustering results are obtained based on the 311 service requests frequency alone with no socioeconomic information considered. Next we summarize 14 socioeconomic features and compare the normalized mean level for each feature in each of the considered clusters. The results for our three cities are presented in the radar plots in Figures 4-6. From the output, we can see that the socioeconomic features among the defined clusters are quite distinctive. Take NYC for example: \begin{itemize} \item Education and Income: People with higher levels of education (with graduate degree and above) are found in cluster 1, which, as expected, also has highest income level. Cluster 3 appears to show the opposite results. \item Racial diversity: There are above average concentrations of Non-Hispanic Whites living in clusters 1 and 2, of Asian origin in cluster 4, and African-American populations in cluster 3. \end{itemize} Similarly we have (for both Chicago and Boston): \begin{itemize} \item Cluster 1 has the highest income and education level, while cluster 3 is the lowest. \item Cluster 2 is predominantly Asian and African-Americans, while Non-Hispanic Whites tend to live in clusters 1 and 4. \end{itemize} The observations above provide some evidence for our hypothesis, revealing links between socioeconomic features and 311 service request data structure. Indeed, while the clustering is performed based on the 311 data alone, the socioeconomic features happen to be quite distinctive among the clusters. Of course this only reveals the existence of a certain relation in principle, which might not be that practical. However this gives rise to another hypothesis - can one use 311 service request data to actually model socioeconomic features at the local scale? \begin{figure}[H] \begin{subfigure}{.5\textwidth} \centering \includegraphics[height=7cm]{Fig1.pdf} \label{fig:sfig2} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[height=6cm]{Fig5.eps} \label{fig:sfig2} \end{subfigure} \caption{Normalized ratio of socioeconomic features among clusters in New York} \begin{subfigure}{.5\textwidth} \centering \includegraphics[height=6cm]{Fig4.pdf} \label{fig:sfig2} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[height=6cm]{Fig6.eps} \label{fig:sfig2} \end{subfigure} \caption{Normalized ratio of socioeconomic features among clusters in Chicago} \begin{subfigure}{.5\textwidth} \centering \includegraphics[height=6cm]{Fig3.eps} \label{fig:sfig2} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[height=6cm]{Fig7.eps} \label{fig:sfig2} \end{subfigure} \caption{Normalized ratio of socioeconomic features among clusters in Boston} \label{fig:fig} \end{figure} \section{Modeling the socioeconomic features} We find that 311 service request signatures allow the city to be divided into clusters based on distinctive patterns of socioeconomic characteristics. Following this, we explore whether 311 service requests can be used to model these socioeconomic patterns. 
Such a model could be useful as socioeconomic data are often unavailable or inconsistent at a given spatio-temporal scale, and therefore having a proxy derived from a model based on regularly-updated open data could have considerable potential for city operations and neighborhood planning. We train regression models over the relative frequencies of 311 service requests of each type in each census tract in order to estimate the selected socioeconomic features described in subsection 1.2.1. The service request frequencies $s(a,t)/s(a)$ (components of the signature vectors) constitute our feature space, including 179 different features in the case of NYC, across about 2,000 census tracts following the data cleaning/filtering process. We consider six target variables including income per capita, percentage of residents with a graduate degree, percentage of unemployed residents, percentage of residents living below the poverty level, as well as demographic characteristics including the percentages of Non-Hispanic White and African-American populations. The objective of the modeling is to use partial information about the target variables defined in a certain part of the city to train the model so that it can explain the target variables over the rest of the city. For the purpose of a comprehensive model evaluation we use a cross-validation procedure. We try several models including Lasso\cite{hans2009}, Neural Networks with regularisation (NN)\cite{Haykin2009, Girosi1995, Jin2004}, Random Forests Regression (RF)\cite{liaw2002} and Extra Trees Regression (ETR)\cite{sklearn, Geurts2006}. We treat each distinct set of hyper-parameters as a separate model. For Neural Networks, we try 5, 10, 20 and 40 hidden units, and for each of these settings we try penalization parameter $\lambda$ values of 0.0005, 0.005, 0.05 and 0.5. For the learning process, we use a mini-batch size of 20 and the following (learning rate, epochs) pairs: (0.1,100), (0.05,200), (0.01,500), (0.005,1000). For RF and ETR, we use 500 trees (since increasing the number of trees does not help) and try maximum leaf node values of 10, 20, 30, \ldots, 100. In total we have 64 sets of hyper-parameters for NN, 10 for RF and 10 for ETR. More details on the model selection process are presented in SI 3. Generally speaking, we select the final model with suitable hyper-parameters with the help of cross-validation. We divide the data set into training and testing sets with a 7:3 ratio. As described above, we have 84 different models. For each model, we randomly divide our training set into training and validation sets, train the model on the training portion, and report the R-squared on the validation set. We repeat this process 20 times for each model and compute the average R-squared. We pick the best model and use it for prediction on the test set. Finally, we report the out-of-sample R-squared in Table 2. Overall, RF and ETR usually give us the best performance in terms of R-squared. The resulting out-of-sample R-squared values for the selected models are summarized in Table 2.
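The following sketch illustrates this selection loop for the tree-based regressors only (a hypothetical simplification of the full 84-model comparison); \texttt{X} denotes the tract-level 311 frequency matrix, \texttt{y} one normalized census target, and the validation split ratio is an assumption, since it is not specified above.
\begin{verbatim}
# Minimal sketch: rank hyper-parameters by average validation R-squared
# over repeated random splits, then score the winner on the test set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)          # 7:3 train/test split

candidates = [(cls, leaves)
              for cls in (RandomForestRegressor, ExtraTreesRegressor)
              for leaves in range(10, 101, 10)]

def avg_val_r2(cls, leaves, repeats=20):
    scores = []
    for seed in range(repeats):
        X_tr, X_val, y_tr, y_val = train_test_split(
            X_train, y_train, test_size=0.3, random_state=seed)
        model = cls(n_estimators=500, max_leaf_nodes=leaves,
                    random_state=seed).fit(X_tr, y_tr)
        scores.append(model.score(X_val, y_val))  # validation R-squared
    return np.mean(scores)

best_cls, best_leaves = max(candidates, key=lambda c: avg_val_r2(*c))
final = best_cls(n_estimators=500, max_leaf_nodes=best_leaves).fit(X_train, y_train)
print("out-of-sample R-squared:", final.score(X_test, y_test))
\end{verbatim}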
We consider this modeling result important because: \begin{itemize} \item it indicates that a relationship exists between 311 request signature and the local socioeconomic features of each area; \item it enables possible prediction and estimation of other local socioeconomic features by using 311 requests data, particularly those features for which data are collected at low temporal frequency, such as Census data; and, \item it can be easily scaled by geographic aggregation for various research, operational, or planning purposes. \end{itemize} \begin{table}[ht] \centering \resizebox{\textwidth}{!}{\begin{tabular}{rrrrrrrrr} \hline City & White/European & Afro-American & Graduate Degree & Income per cap&Below Poverty&Unemployment\\ \hline NYC & 0.54 & 0.50 & 0.48 & 0.70 & 0.44 & 0.26 \\ Chicago& 0.76 & 0.85 & 0.45 & 0.55 &0.52 & 0.65 \\ Boston & 0.54& 0.68 & 0.26& 0.62 &0.63 & 0.36 \\ \hline \end{tabular}} \caption{Out of Sample R-squared} \end{table} \section{Prediction of the real estate prices} Following our previous analysis, we attempt to understand the practical applicability of the prediction models. Although the findings above once again highlight a strong relation between 311 service request data and socioeconomic context of urban locations, this by itself has limited practical implications except for filling gaps in the data availability. In this section we show that 311 service request data could be also used to predict future socioeconomic variations, which may have more important practical implications for urban analytics. As an example, consider the annual average sale price of housing per square foot in different neighborhoods of NYC as the target variable for our prediction. Our housing price is reported by Zillow at the zip code level; therefore, we rescale our 311 service request frequencies to this spatial aggregation. To match available housing price data, we only include those 311 service categories that were recorded consistently between 2012 and 2015. New York City has 145 of such categories, covering about 70 percent of total service requests. The target variable is updated annually and is available for each year from 2012 to 2015. The Zillow data cover 112 of the 145 zip codes in New York City where the density and frequency of 311 requests is sufficient to satisfy the filtering procedure described in the Data section. Thus, our sample for this prediction is based on data from 112 zipcodes. We do not attempt to predict the absolute level of prices, but changes over time relative to the NYC mean. Our output therefore indicates how much more (less) expensive the housing price in a given zip code area is going to be compared to the average relative increase (decrease) in housing prices across NYC from the previous year. This way we define a new log-scale target variable $Y^t(z)$ in year $t$ as $$Y^t(z)=log(P^t(z)/P_{mean}^t)$$ where $P^t(z)$ is the average price per square foot in zip code $z$ during the year $t$, while $P_{mean}^t$ is the average price per square foot across the entire city during the year $t$, estimated as the mean of $P^t(z)$ for all the locations $z$ weighted by residential population of the locations used as a proxy for the locations' size. We begin by modeling the output variable $Y^{2015}$. We train the model using 2012 and 2013 data (both - features and output variable) over the entire NYC and use 2014 data for tuning hyper-parameters, then apply it to 2015 using the features defined based on 2015 service requests. 
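For concreteness, a minimal sketch of this target construction is given below; \texttt{price} (a table of mean sale prices per square foot by zip code and year) and \texttt{pop} (residential population per zip code) are hypothetical variable names for the Zillow and Census inputs described above.
\begin{verbatim}
# Minimal sketch: relative log-price target Y^t(z) = log(P^t(z)/P^t_mean),
# with the citywide mean weighted by residential population N(z).
import numpy as np

def relative_log_price(price_year, pop):
    p_mean = np.average(price_year, weights=pop.loc[price_year.index])
    return np.log(price_year / p_mean)

Y_2015 = relative_log_price(price["2015"], pop)
Y_2014 = relative_log_price(price["2014"], pop)
D = Y_2015 - Y_2014   # actual year-on-year relative price trend D(z)
\end{verbatim}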
To reiterate, the feature space as before consists of the relative service request frequencies $s(a,t)/s(a)$, but now including only 145 categories of service requests, while the number of observations is 112 zip codes. We subsequently train four different machine learning regression models as before: Lasso\cite{hans2009}, Neural Networks with regularisation (NN)\cite{Haykin2009, Girosi1995, Jin2004}, Random Forests Regression (RF)\cite{liaw2002} and Extra Trees Regression (ETR)\cite{sklearn, Geurts2006}. The results are reported in Table 3 (we also include Boston and Chicago here just for comparison, although the number of zip codes in these cities is much smaller and thus the model becomes less significant). \begin{table}[ht] \centering \resizebox{.5\textwidth}{!}{\begin{tabular}{||c c c c||} \hline Models & NYC & Chicago & Boston\\ \hline Lasso & 0.49 & 0.57 & 0.38 \\ NN(Regularized)& 0.70 & 0.65 & 0.68\\ RF& 0.78& 0.81 & 0.79 \\ ETR& 0.79& 0.90 & 0.83 \\ \hline \end{tabular}} \caption{Out of Sample R-squared} \end{table} As one can see from Table 3, we achieve reasonable predictive power, especially with RF and ETR approaching $R^2$ values of 0.80 for all three cities. However, note that modeling housing prices in 2015 is not our objective here, since a simplified model $Y^{2015}=Y^{2014}$ would achieve better results given the serial correlation in the time series and the relatively small year-to-year variation in price levels. Instead, we focus on the model's ability to predict the magnitude and direction of those fluctuations, forecasting price trends at the zip code level. Let $Y_P^t(z)$ be the predicted value of $Y^t(z)$. We define $D(z)=Y^{2015}(z)-Y^{2014}(z)$ as the actual tendency of relative real estate prices in the zip code $z$ and $D_P(z)=Y^{2015}_P(z)-Y^{2014}_P(z)$ as the predicted tendency of comparative housing prices. We classify the 112 zip codes of NYC into three groups based on the predicted tendency strength $D_P(z)$:\\ $G_{Positive}=\{z: D_P(z)>m\cdot \sigma(D_P)\}$: group of areas with a strong positive tendency;\\ $G_{Negative}=\{z: D_P(z)<-m\cdot \sigma(D_P)\}$: group of areas with a strong negative tendency;\\ $G_{Neutral}=\{z: -m\cdot\sigma(D_P)<D_P(z)<m\cdot \sigma(D_P)\}$: group of areas with a close to neutral tendency,\\ where $m$ is a certain threshold and $\sigma(D_P)$ denotes the standard deviation of $D_P(z)$. Additionally, we classify the zip codes based on the actual tendency strength, i.e. we introduce $G_{Positive}',G_{Negative}',G_{Neutral}'$ in the same way as above but replacing the estimated $D_P(z)$ with the actual $D(z)$ in the corresponding definitions. In this way, strong tendencies are defined by the actual values rather than the predictions, and we then test the performance of our model with the following indicators. For each group $G_{Positive},G_{Negative},G_{Neutral}$, we calculate the normalized, population-weighted average value of the actual $D(z)$ using the following formulae: $$ \overline{D}_{Positive}= \Big(\frac{\sum_{z\in G_{Positive}}D(z) \cdot N(z)}{\sum_{z=1}^{112}D(z)\cdot N(z)}\Big)/ \sigma(D), $$$$ \overline{D}_{Negative}= \Big(\frac{\sum_{z\in G_{Negative}}D(z) \cdot N(z)}{\sum_{z=1}^{112}D(z)\cdot N(z)}\Big)/ \sigma(D), $$$$ \overline{D}_{Neutral}= \Big(\frac{\sum_{z\in G_{Neutral}}D(z) \cdot N(z)}{\sum_{z=1}^{112}D(z)\cdot N(z)}\Big)/ \sigma(D), $$ where $N(z)$ is the population of the zip code $z$.
Similarly, for each of the groups $G_{Positive}',G_{Negative}',G_{Neutral}'$ we calculate the average prediction $$ \overline{D}_{Positive}'= \Big(\frac{\sum_{z\in G_{Positive}'}D_P(z) \cdot N(z)}{\sum_{z=1}^{112}D_P(z)\cdot N(z)}\Big)/ \sigma(D), $$$$ \overline{D}_{Negative}'= \Big(\frac{\sum_{z\in G_{Negative}'}D_P(z) \cdot N(z)}{\sum_{z=1}^{112}D_P(z)\cdot N(z)}\Big)/ \sigma(D), $$$$ \overline{D}_{Neutral}'= \Big(\frac{\sum_{z\in G_{Neutral}'}D_P(z) \cdot N(z)}{\sum_{z=1}^{112}D_P(z)\cdot N(z)}\Big)/ \sigma(D). $$ The values of those quantities for different values of the threshold ($m=0.15$, an example of a very loose threshold classifying most of the predictions as strong, and $m=0.35, 0.65, 1$) are reported in Tables 4 and 5, and we can see the consistent inequalities $$ \overline{D}_{Positive}>0>\overline{D}_{Negative} $$ and $$ \overline{D}_{Positive}'>0>\overline{D}_{Negative}' $$ holding for all the values of the threshold $m$, which means that our predicted trend directions are consistent with the actual trends on average. Moreover, we compare the signs of the predicted values $D_P(z)$ for the strong predicted trends $G_{Positive}\cup G_{Negative}$ versus the ground-truth $D(z)$, as well as the actual values $D(z)$ for the strong actual trends $G_{Positive}'\cup G_{Negative}'$, reporting the accuracy ratio of predicting the correct trend direction for strong actual trends and the accuracy ratio with which strong predicted trends reveal the correct trend directions ($D_P(z)D(z)>0$). Those indicators are listed in Table 4 and Table 5, demonstrating the model's performance. From Tables 4 and 5, we see that, for around 40 percent of the observations or predictions with the strongest tendencies ($m=0.65$), our prediction accuracy for the trend direction is higher than 80 percent, compared to naive baselines of around 43/62 (69\%) in Table 4 and 31/51 (60.7\%) in Table 5 (the share of the more frequent trend direction). Moreover, in Table 4, we see that when the threshold $m$ increases from 0.15 to 0.65, the accuracy ratio of the prediction goes up from 70 percent to 82 percent, meaning that the stronger the actual trend, the more likely the prediction is to be correct. In Table 5, we see that as $m$ increases from 0.15 to 1, the accuracy ratio of the prediction goes up from 72 percent to 90 percent, hence the stronger the predicted trend, the more accurately our prediction reflects reality.
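A compact sketch of this evaluation is shown below, assuming \texttt{D} and \texttt{D\_P} are arrays of the actual and predicted tendencies aligned by zip code; it reproduces the Table 5 style of grouping, in which strong trends are selected by the predicted values.
\begin{verbatim}
# Minimal sketch: group zip codes by predicted trend strength at
# threshold m and measure how often the predicted direction matches
# the actual one within the strong groups.
import numpy as np

def strong_trend_accuracy(D, D_P, m):
    strong = np.abs(D_P) > m * np.std(D_P)      # strong predicted trends
    agree = np.sign(D_P[strong]) == np.sign(D[strong])
    return strong.sum(), agree.mean()

for m in (0.15, 0.35, 0.65, 1.0):
    n_strong, acc = strong_trend_accuracy(D, D_P, m)
    print(f"m={m}: {n_strong} strong predictions, accuracy {acc:.2f}")
\end{verbatim}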
\begin{table}[ht] \centering \begin{tabular}{|*{13}{c|}} \hline \multicolumn{1}{|c}{Threshold} & \multicolumn{6}{|c|}{m=0.15}& \multicolumn{6}{|c|}{m=0.35}\\ \hline \multicolumn{1}{|c}{+/-:Strong Positive/Negative} & \multicolumn{2}{|c|}{+}& \multicolumn{2}{|c|}{-}& \multicolumn{2}{|c|}{Neutral}& \multicolumn{2}{|c|}{ +}& \multicolumn{2}{|c|}{-}& \multicolumn{2}{|c|}{Neutral} \\ \hline \multicolumn{1}{|c}{Number of Observations} & \multicolumn{2}{|c|}{23}& \multicolumn{2}{|c|}{75}& \multicolumn{2}{|c|}{14}& \multicolumn{2}{|c|}{ 20}& \multicolumn{2}{|c|}{62}& \multicolumn{2}{|c|}{30} \\ \hline \multicolumn{1}{|c}{$\overline{D}_{Positive}'/\overline{D}_{Negative}'/\overline{D}_{Neutral}'$} & \multicolumn{2}{|c|}{134.57}& \multicolumn{2}{|c|}{-84.28}& \multicolumn{2}{|c|}{-3.75}& \multicolumn{2}{|c|}{ 148.60}& \multicolumn{2}{|c|}{-95.41}& \multicolumn{2}{|c|}{-7.97} \\ \hline \multicolumn{1}{|c}{Accuracy for Strong P/N} & \multicolumn{6}{|c|}{ 0.7}& \multicolumn{6}{|c|}{0.72}\\ \hline \multicolumn{1}{|c}{Threshold} & \multicolumn{6}{|c|}{m=0.65}& \multicolumn{6}{|c|}{m=1}\\ \hline \multicolumn{1}{|c}{+/-:Strong Positive/Negative} & \multicolumn{2}{|c|}{+}& \multicolumn{2}{|c|}{-}& \multicolumn{2}{|c|}{Neutral}& \multicolumn{2}{|c|}{ +}& \multicolumn{2}{|c|}{-}& \multicolumn{2}{|c|}{Neutral} \\ \hline \multicolumn{1}{|c}{Number of Observations} & \multicolumn{2}{|c|}{19}& \multicolumn{2}{|c|}{43}& \multicolumn{2}{|c|}{50}& \multicolumn{2}{|c|}{14}& \multicolumn{2}{|c|}{24}& \multicolumn{2}{|c|}{74} \\ \hline \multicolumn{1}{|c}{$\overline{D}_{Positive}'/\overline{D}_{Negative}'/\overline{D}_{Neutral}'$} & \multicolumn{2}{|c|}{156.73}& \multicolumn{2}{|c|}{-114.82}& \multicolumn{2}{|c|}{-24.5}& \multicolumn{2}{|c|}{179.69}& \multicolumn{2}{|c|}{-137.11}& \multicolumn{2}{|c|}{-32.56}\\ \hline \multicolumn{1}{|c}{Accuracy for Strong P/N} & \multicolumn{6}{|c|}{ 0.82}& \multicolumn{6}{|c|}{0.77}\\ \hline \end{tabular} \caption{Accuracy of discovering actual strong relative real estate price trends by the predictive model} \end{table} \begin{table}[ht] \centering \begin{tabular}{|*{13}{c|}} \hline \multicolumn{1}{|c}{Threshold} & \multicolumn{6}{|c|}{m=0.15}& \multicolumn{6}{|c|}{m=0.35}\\ \hline \multicolumn{1}{|c}{+/-:Strong Positive/Negative} & \multicolumn{2}{|c|}{+}& \multicolumn{2}{|c|}{-}& \multicolumn{2}{|c|}{Neutral}& \multicolumn{2}{|c|}{ +}& \multicolumn{2}{|c|}{-}& \multicolumn{2}{|c|}{Neutral} \\ \hline \multicolumn{1}{|c}{Number of Observations} & \multicolumn{2}{|c|}{43}& \multicolumn{2}{|c|}{58}& \multicolumn{2}{|c|}{11}& \multicolumn{2}{|c|}{ 32}& \multicolumn{2}{|c|}{42}& \multicolumn{2}{|c|}{38} \\ \hline \multicolumn{1}{|c}{$\overline{D}_{Positive}/\overline{D}_{Negative}/\overline{D}_{Neutral}$} & \multicolumn{2}{|c|}{22.61}& \multicolumn{2}{|c|}{-75.99}& \multicolumn{2}{|c|}{-4.56}& \multicolumn{2}{|c|}{ 42.23}& \multicolumn{2}{|c|}{-71.18}& \multicolumn{2}{|c|}{-40.78} \\ \hline \multicolumn{1}{|c}{Accuracy for Strong P/N} & \multicolumn{6}{|c|}{ 0.72}& \multicolumn{6}{|c|}{0.77}\\ \hline \multicolumn{1}{|c}{Threshold} & \multicolumn{6}{|c|}{m=0.65}& \multicolumn{6}{|c|}{m=1}\\ \hline \multicolumn{1}{|c}{+/-:Strong Positive/Negative} & \multicolumn{2}{|c|}{+}& \multicolumn{2}{|c|}{-}& \multicolumn{2}{|c|}{Neutral}& \multicolumn{2}{|c|}{ +}& \multicolumn{2}{|c|}{-}& \multicolumn{2}{|c|}{Neutral} \\ \hline \multicolumn{1}{|c}{Number of Observations} & \multicolumn{2}{|c|}{20}& \multicolumn{2}{|c|}{31}& \multicolumn{2}{|c|}{61}& \multicolumn{2}{|c|}{15}& 
\multicolumn{2}{|c|}{12}& \multicolumn{2}{|c|}{85} \\ \hline \multicolumn{1}{|c}{$\overline{D}_{Positive}/\overline{D}_{Negative}/\overline{D}_{Neutral}$} & \multicolumn{2}{|c|}{44.93}& \multicolumn{2}{|c|}{-70.55}& \multicolumn{2}{|c|}{-29.83}& \multicolumn{2}{|c|}{110.80}& \multicolumn{2}{|c|}{-76.29}& \multicolumn{2}{|c|}{-41.17}\\ \hline \multicolumn{1}{|c}{Accuracy for Strong P/N} & \multicolumn{6}{|c|}{ 0.83}& \multicolumn{6}{|c|}{0.90}\\ \hline \end{tabular} \caption{Accuracy of the correspondence of the predicted strong relative real estate price trends to the actual ones} \end{table} The results presented in this section demonstrate that the 311-based model can indeed predict future fluctuations of socio-economic characteristics, including real estate price trends. This serves as an initial proof of concept for multiple potential urban applications using 311 data as a proxy for local socio-economic conditions. \section*{Conclusions} A quantitative understanding of urban neighborhoods can be quite challenging for urban planners and policy-makers given significant gaps in the spatial and temporal resolution of data and data collection modalities. However, this subject is crucial for urban planning and decision making, as well as for the study of urban economic and neighborhood change. In this paper, we provide an approach to quantify local signatures of urban function via 311 service request data collected in various cities across the US. These datasets, which can be easily scaled by spatial (zip code, census tracts/blocks, etc.) and temporal level of aggregation, are open to the public and updated regularly. Importantly, we demonstrate consistent relationships between socioeconomic features of urban neighborhoods and their 311 service requests. For all three cities analyzed - New York City, Boston and Chicago - we demonstrate how clustering of census tracts by the relative frequency vectors of different types of 311 requests reveal distinctive socioeconomic patterns across the city. Moreover, those frequency vectors allow us to train and cross-validate regression models successfully explaining selected socioeconomic features, such as education level, income, unemployment and racial composition of urban neighborhoods. For example, the accuracy of the model explaining local average income in NYC is characterized by a R-squared value of 0.7, while Extra Trees Regression results in a 0.9 out of sample R-squared in explaining housing prices in Chicago (although this must be considered with respect to the smaller sample size). Finally, we illustrate the predictive capacity of the approach by training and validating the model to detect comparative average real estate price trends for zip codes in New York City. In the nascent field of urban science and more traditional disciplines of economics and urban planning, there is increasing attention on how data collected by cities can be combined with novel machine learning approaches to yield insight for researchers and policy-makers. It is possible that such data can be used to better understand the dynamics of local areas in cities, and support more informed decision-making. In addition, it is conceivable that a set of efficient instrumental variables based on widely-available 311 data can be used to replace survey-based socioeconomic statistics at spatio-temporal scale where such official survey data is non-existent or inconsistent, thus broadening opportunities for urban analytics. 
\section*{Acknowledgments} The authors would like to thank Brendan Reilly and other colleagues at the Center For Urban Science And Progress at NYU for stimulating discussions that helped to further shape this research and manuscript. \newpage
\subsection{Baselines} \paragraph{Sentence-Concat:} To demonstrate the difference between sentence-level and paragraph captions, this baseline samples and concatenates five sentence captions from a model~\cite{karpathy2015deep} trained on MS COCO captions~\cite{lin2014microsoft}. The first sentence uses beam search (beam size $=2$) and the rest are sampled. The motivation for this is as follows: the image captioning model first produces the sentence that best describes the image as a whole, and subsequent sentences use sampling in order to generate a diverse range of sentences, since the alternative is to repeat the same sentence from beam search. We have validated that this approach works better than using either only beam search or only sampling, as the intent is to make the strongest possible comparison at a task-level to standard image captioning. We also note that, while Sentence-Concat is trained on MS COCO, all images in our dataset are also in MS COCO, and our descriptions were also written by users on Amazon Mechanical Turk. \paragraph{Image-Flat:} This model uses a flat representation for both images and language, and is equivalent to the standard image captioning model NeuralTalk~\cite{karpathy2015deep}. It takes the whole image as input, and decodes into a paragraph token by token. We use the publically available implementation of \cite{karpathy2015deep}, which uses the 16-layer VGG network~\cite{simonyan2015very} to extract CNN features and projects them as input into an LSTM~\cite{hochreiter1997long}, training the whole model jointly end-to-end. \paragraph{Template:} This method represents a very different approach to generating paragraphs, similar in style to an open-world version of more classical methods like BabyTalk~\cite{kulkarni2011baby}, which converts a structured representation of an image into text via a handful of manually specified templates. The first step of our template-based baseline is to detect and describe many regions in a given target image using a pre-trained dense captioning model~\cite{johnson2016densecap}, which produces a set of region descriptions tied with bounding boxes and detection scores. The region descriptions are parsed into a set of subjects, verbs, objects, and various modifiers according to part of speech tagging and a handful of TokensRegex~\cite{chang2014tokensregex} rules, which we find suffice to parse the vast majority ($\geq 99\%$) of the fairly simplistic and short region-level descriptions. Each parsed word is scored by the sum of its detection score and the log probability of the generated tokens in the original region description. Words are then merged into a coherent graph representing the scene, where each node combines all words with the same text and overlapping bounding boxes. Finally, text is generated using the top $N = 25$ scored nodes, prioritizing \verb|subject-verb-object| triples first in generation, and representing all other nodes with existential ``there is/are'' statements. \paragraph{DenseCap-Concat:} This baseline is similar to Sentence-Concat, but instead concatenates DenseCap~\cite{johnson2016densecap} predictions as separate sentences in order to form a paragraph. The intent of analyzing this method is to disentangle two key parts of the Template method: captioning and detection (\ie DenseCap), and heuristic recombination into paragraphs. We combine the top $n=14$ outputs of DenseCap to form DenseCap-Concat's output based on validation CIDEr+METEOR. 
\paragraph{Other Baselines:} ``Regions-Flat-Scratch'' uses a flat language model for decoding and initializes it from scratch. The language model input is the projected and pooled region-level image features. ``Regions-Flat-Pretrained'' uses a pre-trained language model. These baselines are included to show the benefits of decomposing the image into regions and pre-training the language model. \subsection{Implementation Details} All baseline neural language models use two layers of LSTM~\cite{hochreiter1997long} units with 512 dimensions. The feature pooling dimension $P$ is 1024, and we set $\lambda_{sent}=5.0$ and $\lambda_{word}=1.0$ based on validation set performance. Training is done via stochastic gradient descent with Adam~\cite{kingma2015adam}, implemented in Torch. Of critical note is that model checkpoint selection is based on the best combined METEOR and CIDEr score on the validation set -- although models tend to minimize validation loss fairly quickly, it takes much longer training for METEOR and CIDEr scores to stop improving. \subsection{Main Results} \begin{figure*} \includegraphics[width=0.90\textwidth]{ResultsFigure.pdf} \caption{ Example paragraph generation results for our model (Regions-Hierarchical) and the Sentence-Concat and Template baselines. The first three rows are positive results and the last row is a failure case.} \label{fig:qualitative} \end{figure*} We present our main results at generating paragraphs in Tab.~\ref{tab:main_results}, which are evaluated across six language metrics: CIDEr~\cite{vedantam2015cider}, METEOR~\cite{denkowski2014meteor}, and BLEU-\{1,2,3,4\}~\cite{papineni2002bleu}. The Sentence-Concat method performs poorly, achieving the lowest scores across all metrics. Its lackluster performance provides further evidence of the stark differences between single-sentence captioning and paragraph generation. Surprisingly, the hard-coded template-based approach performs reasonably well, particularly on CIDEr, METEOR, and BLEU-1, where it is competitive with some of the neural approaches. This makes sense: the template approach is provided with a strong prior about image content since it receives region-level captions~\cite{johnson2016densecap} as input, and the many expletive ``there is/are'' statements it makes, though uninteresting, are safe, resulting in decent scores. However, its relatively poor performance on BLEU-3 and BLEU-4 highlights the limitation of reasoning about regions in isolation -- it is unable to produce much text relating regions to one another, and further suffers from a lack of ``connective tissue'' that transforms paragraphs from a series of disconnected thoughts into a coherent whole. DenseCap-Concat scores worse than Template on all metrics except CIDEr, illustrating the necessity of Template's caption parsing and recombination. Image-Flat, trained on the task of paragraph generation, outperforms Sentence-Concat, and the region-based reasoning of Regions-Flat-Scratch improves results further on all metrics. Pre-training results in improvements on all metrics, and our full model, Regions-Hierarchical, achieves the highest scores among all methods on every metric except BLEU-4. One hypothesis for the mild superiority of Regions-Flat-Pretrained on BLEU-4 is that it is better able to reproduce words immediately at the end and beginning of sentences more exactly due to their non-hierarchical structure, providing a slight boost in BLEU scores. 
To make these metrics more interpretable, we performed a human evaluation by collecting an additional paragraph for 500 randomly chosen images, with results in the last row of Tab.~\ref{tab:main_results}. As expected, humans produce superior descriptions to any automatic method, performing better on all language metrics considered. Of particular note is the large gap between humans and our best model on CIDEr and METEOR, which are both designed to correlate well with human judgment~\cite{vedantam2015cider,denkowski2014meteor}. Finally, we note that we have also tried the SPICE evaluation metric~\cite{anderson2016spice}, which has been shown to correlate well with human judgment for sentence-level image captioning. Unfortunately, SPICE does not seem well-suited for evaluating long paragraph descriptions -- it does not handle coreference or distinguish between different instances of the same object category. These are reasonable design decisions for sentence-level captioning, but they are less applicable to paragraphs. In fact, human paragraphs achieved a considerably lower SPICE score than automated methods. \subsection{Qualitative Results} \vspace{-0mm} We present qualitative results from our model and the Sentence-Concat and Template baselines in Fig.~\ref{fig:qualitative}. Some interesting properties of our model's predictions include its use of coreference in the first example (``a bus'', ``it'', ``the bus'') and its ability to capture relationships between objects in the second example. Also of note is the order in which our model chooses to describe the image: the first sentence tends to be fairly high level, middle sentences give some details about scene elements mentioned earlier in the description, and the last sentence often describes something in the background, which other methods are not able to capture. Anecdotally, we observed that this follows the same order in which most humans tended to describe images. The failure case in the last row highlights another interesting phenomenon: even though our model was wrong about the semantics of the image, calling the girl ``a woman'', it has learned that ``woman'' is consistently associated with female pronouns (``she'', ``she'', ``her hand'', ``behind her''). It is also worth noting the general behavior of the two baselines. Paragraphs from Sentence-Concat tend to be repetitive in sentence structure and are often simply inaccurate due to the sampling required to generate multiple sentences. On the other hand, the Template baseline is largely accurate, but has uninteresting language and lacks the ability to determine which things are most important to describe. In contrast, Regions-Hierarchical stays relevant and furthermore exhibits more interesting patterns of language. \begin{table*}[t] \begin{tabular}{lccccccc} \hline & \multiline{Average\\Length} & \multiline{Std. Dev.\\Length} & Diversity & Nouns & Verbs & Pronouns & \multiline{Vocab\\Size} \\ \hline Sentence-Concat & 56.18 & 4.74 & 34.23 & 32.53 & 9.74 & 0.95 & 2993 \\ Template & 60.81 & 7.01 & 45.42 & 23.23 & 11.83 & 0.00 & 422 \\ Regions-Hierarchical\hspace{-2mm} & 70.47 & 17.67 & 40.95 & 24.77 & 13.53 & 2.13 & 1989 \\ Human & 67.51 & 25.95 & 69.92 & 25.91 & 14.57 & 2.42 & 4137 \\ \hline \end{tabular} \caption{Language statistics of test set predictions. Part of speech statistics are given as percentages, and diversity is calculated as in Section~\ref{sec:paragraphs}.
``Vocab Size'' indicates the number of unique tokens output across the entire test set, and human numbers are calculated from ground truth. Note that the diversity score for humans differs slightly from the score in Tab.~\ref{tab:data_stats}, which is calculated on the entire dataset. } \label{tab:output_language_stats} \end{table*} \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{RegionFigure.pdf} \caption{Examples of paragraph generation from only a few regions. Since only a small number of regions are used, this data is extremely out of sample for the model, but it is still able to focus on the regions of interest while ignoring the rest of the image. \vspace{-4mm}} \label{fig:regions} \end{figure*} \subsection{Paragraph Language Analysis} To shed a quantitative light on the linguistic phenomena generated, in Tab.~\ref{tab:output_language_stats} we show statistics of the language produced by a representative spread of methods. Our hierarchical approach generates text of similar average length and variance as human descriptions, with Sentence-Concat and the Template approach somewhat shorter and less varied in length. Sentence-Concat is also the least diverse method, though all automatic methods remain far less diverse than human sentences, indicating ample opportunity for improvement. According to this diversity metric, the Template approach is actually the most diverse automatic method, which may be attributed to how the method is hard-coded to sequentially describe each region in the scene in turn, regardless of importance or how interesting such an output may be (see Fig.~\ref{fig:qualitative}). While both our hierarchical approach and the Template method produce text with similar portions of nouns and verbs as human paragraphs, only our approach was able to generate a reasonable quantity of pronouns. Our hierarchical method also had a much wider vocabulary compared to the Template approach, though Sentence-Concat, trained on hundreds of thousands of MS COCO~\cite{lin2014microsoft} captions, is a bit larger. \subsection{Generating Paragraphs from Fewer Regions} As an exploratory experiment in order to highlight the interpretability of our model, we investigate generating paragraphs from a smaller number of regions than the $M=50$ used in the rest of this work. Instead, we only give our method access to the top few detected regions as input, with the hope that the generated paragraph focuses only on those particularly regions, preferring not to describe other parts of the image. The results for a handful of images are shown in Fig.~\ref{fig:regions}. Although the input is extremely out of sample compared to the training data, the results are still quite reasonable -- the model generates paragraphs describing the detected regions without much mention of objects or scenery outside of the detections. Taking the top-right image as an example, despite a few linguistic mistakes, the paragraph generated by our model mentions the batter, catcher, dirt, and grass, which all appear in the top detected regions, but does not pay heed to the pitcher or the umpire in the background. \subsection{Region Detector} The region detector receives an input image of size $3\times H\times W$, detects regions of interest, and produces a feature vector of dimension $D=4096$ for each region. 
Our region detector follows \cite{ren2015faster,johnson2016densecap}; we provide a summary here for completeness: The image is resized so that its longest edge is 720 pixels, and is then passed through a convolutional network initialized from the 16-layer VGG network~\cite{simonyan2015very}. The resulting feature map is processed by a region proposal network~\cite{ren2015faster}, which regresses from a set of anchors to propose regions of interest. These regions are projected onto the convolutional feature map, and the corresponding region of the feature map is reshaped to a fixed size using bilinear interpolation and processed by two fully-connected layers to give a vector of dimension $D$ for each region. Given a dataset of images and ground-truth regions of interest, the region detector can be trained in an end-to-end fashion as in \cite{ren2015faster} for object detection and \cite{johnson2016densecap} for dense captioning. Since paragraph descriptions do not have annotated groundings to regions of interest, we use a region detector trained for dense image captioning on the Visual Genome dataset~\cite{krishnavisualgenome}, using the publicly available implementation of \cite{johnson2016densecap}. This produces $M=50$ detected regions. One alternative worth noting is to use a region detector trained strictly for object detection, rather than dense captioning. Although such an approach would capture many salient objects in an image, its paragraphs would suffer: an ideal paragraph describes not only objects, but also scenery and relationships, which are better captured by dense captioning task that captures \emph{all} noteworthy elements of a scene. \subsection{Region Pooling} The region detector produces a set of vectors $v_1,\ldots,v_M\in\mathbb{R}^{D}$, each describing a different region in the input image. We wish to aggregate these vectors into a single pooled vector $v_p\in\mathbb{R}^P$ that compactly describes the content of the image. To this end, we learn a projection matrix $W_{pool}\in\mathbb{R}^{P\times D}$ and bias $b_{pool}\in\mathbb{R}^P$; the pooled vector $v_p$ is computed by projecting each region vector using $W_{pool}$ and taking an elementwise maximum, so that $v_p=\max_{i=1}^M(W_{pool}v_i + b_{pool})$. While alternative approaches for representing collections of regions, such as spatial attention~\cite{xu2015show}, may also be possible, we view these as complementary to the model proposed in this paper; furthermore we note recent work~\cite{qi2016pointnet} which has proven max pooling sufficient for representing any continuous set function, giving motivation that max pooling does not, in principle, sacrifice expressive power. \subsection{Hierarchical Recurrent Network} The pooled region vector $v_p\in\mathbb{R}^P$ is given as input to a hierarchical neural language model composed of two modules: a \emph{sentence RNN} and a \emph{word RNN}. The sentence RNN is responsible for deciding the number of sentences $S$ that should be in the generated paragraph and for producing a $P$-dimensional \emph{topic vector} for each of these sentences. Given a topic vector for a sentence, the word RNN generates the words of that sentence. We adopt the standard LSTM architecture~\cite{hochreiter1997long} for both the word RNN and sentence RNN. As an alternative to this hierarchical approach, one could instead use a non-hierarchical language model to directly generate the words of a paragraph, treating the end-of-sentence token as another word in the vocabulary. 
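Before turning to the advantages of the hierarchical decoder, the region pooling step described above can be illustrated with a short PyTorch-style sketch of the learned projection followed by an elementwise maximum over regions. The dimensions follow the text ($D=4096$, $P=1024$), but the module is an illustrative re-implementation rather than the authors' code.
\begin{verbatim}
import torch
import torch.nn as nn

class RegionPooling(nn.Module):
    """Project M region vectors (dim D) and take an elementwise max -> v_p (dim P)."""
    def __init__(self, D=4096, P=1024):
        super().__init__()
        self.proj = nn.Linear(D, P)        # plays the role of W_pool and b_pool

    def forward(self, regions):            # regions: (M, D)
        projected = self.proj(regions)     # (M, P)
        v_p, _ = projected.max(dim=0)      # elementwise maximum over regions
        return v_p                         # (P,)

# Example: pool M = 50 detected regions into a single image vector.
pooled = RegionPooling()(torch.randn(50, 4096))
print(pooled.shape)                        # torch.Size([1024])
\end{verbatim}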
Our hierarchical model is advantageous because it reduces the length of time over which the recurrent networks must reason. Our paragraphs contain an average of 67.5 words (Tab.~\ref{tab:data_stats}), so a non-hierarchical approach must reason over dozens of time steps, which is extremely difficult for language models. However, since our paragraphs contain an average of 5.7 sentences, each with an average of 11.9 words, both the sentence and word RNNs need only reason over much shorter time-scales, making learning an appropriate representation much more tractable. \vspace{-3mm} \paragraph{Sentence RNN} The sentence RNN is a single-layer LSTM with hidden size $H=512$ and initial hidden and cell states set to zero. At each time step, the sentence RNN receives the pooled region vector $v_p$ as input, and in turn produces a sequence of hidden states $h_1,\ldots,h_S\in\mathbb{R}^H$, one for each sentence in the paragraph. Each hidden state $h_i$ is used in two ways: First, a linear projection from $h_i$ and a logistic classifier produce a distribution $p_i$ over the two states $\{\texttt{CONTINUE}=0, \texttt{STOP}=1\}$ which determine whether the $i$th sentence is the last sentence in the paragraph. Second, the hidden state $h_i$ is fed through a two-layer fully-connected network to produce the topic vector $t_i\in\mathbb{R}^P$ for the $i$th sentence of the paragraph, which is the input to the word RNN. \paragraph{Word RNN} The word RNN is a two-layer LSTM with hidden size $H=512$, which, given a topic vector $t_i\in\mathbb{R}^{P}$ from the sentence RNN, is responsible for generating the words of a sentence. We follow the input formulation of~\cite{vinyals2015show}: the first and second inputs to the RNN are the topic vector and a special \texttt{START} token, and subsequent inputs are learned embedding vectors for the words of the sentence. At each timestep the hidden state of the last LSTM layer is used to predict a distribution over the words in the vocabulary, and a special \texttt{END} token signals the end of a sentence. After each word RNN has generated the words of its respective sentence, these sentences are finally concatenated to form the generated paragraph. \subsection{Training and Sampling} Training data consists of pairs $(x, y)$, with $x$ an image and $y$ a ground-truth paragraph description for that image, where $y$ has $S$ sentences, the $i$th sentence has $N_i$ words, and $y_{ij}$ is the $j$th word of the $i$th sentence. After computing the pooled region vector $v_p$ for the image, we unroll the sentence RNN for $S$ timesteps, giving a distribution $p_i$ over the $\{\texttt{CONTINUE}, \texttt{STOP}\}$ states for each sentence. We feed the sentence topic vectors to $S$ copies of the word RNN, unrolling the $i$th copy for $N_i$ timesteps, producing distributions $p_{ij}$ over each word of each sentence. Our training loss $\ell(x, y)$ for the example $(x, y)$ is a weighted sum of two cross-entropy terms: a \emph{sentence loss} $\ell_{sent}$ on the stopping distribution $p_i$, and a \emph{word loss} $\ell_{word}$ on the word distribution $p_{ij}$: \vspace{-5mm} \begin{align} \ell(x, y) = &\lambda_{sent}\sum_{i=1}^S \ell_{sent}(p_i, \mathbf{I}\left[i = S\right])\\ + &\lambda_{word}\sum_{i=1}^S\sum_{j=1}^{N_i} \ell_{word}(p_{ij}, y_{ij}) \end{align} To generate a paragraph for an image, we run the sentence RNN forward until the stopping probability $p_i(\texttt{STOP})$ exceeds a threshold $T_{\texttt{STOP}}$ or after $S_{MAX}$ sentences, whichever comes first.
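The weighted training loss above can be written compactly in code. The following Python sketch assumes that the per-sentence stopping logits and per-word logits have already been produced by the sentence and word RNNs; it illustrates the two cross-entropy terms with $\lambda_{sent}=5.0$ and $\lambda_{word}=1.0$, and is not the original Torch implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def paragraph_loss(stop_logits, word_logits, word_targets,
                   lambda_sent=5.0, lambda_word=1.0):
    """stop_logits:  (S, 2) CONTINUE/STOP scores from the sentence RNN
       word_logits:  list of S tensors, each (N_i, vocab_size)
       word_targets: list of S tensors, each (N_i,) of word indices"""
    S = stop_logits.size(0)
    # Sentence loss: only the last sentence is labelled STOP (= 1).
    stop_targets = torch.zeros(S, dtype=torch.long)
    stop_targets[-1] = 1
    l_sent = F.cross_entropy(stop_logits, stop_targets, reduction="sum")
    # Word loss: cross-entropy over every word of every sentence.
    l_word = sum(F.cross_entropy(logits, targets, reduction="sum")
                 for logits, targets in zip(word_logits, word_targets))
    return lambda_sent * l_sent + lambda_word * l_word
\end{verbatim}
At test time the same sentence and word RNNs are simply run forward rather than unrolled against ground truth, as described in the surrounding text.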
We then sample sentences from the word RNN, choosing the most likely word at each timestep and stopping after choosing the \texttt{STOP} token or after $N_{MAX}$ words. We set the parameters \mbox{$T_{\texttt{STOP}}=0.5$}, \mbox{$S_{MAX}=6$}, and \mbox{$N_{MAX}=50$} based on validation set performance. \subsection{Transfer Learning} \vspace{-1mm} Transfer learning has become pervasive in computer vision. For tasks such as object detection~\cite{ren2015faster} and image captioning~\cite{donahue2015long,karpathy2015deep,vinyals2015show,xu2015show}, it has become standard practice not only to process images with convolutional neural networks, but also to initialize the weights of these networks from weights that had been tuned for image classification, such as the 16-layer VGG network~\cite{simonyan2015very}. Initializing from a pre-trained convolutional network allows a form of knowledge transfer from large classification datasets, and is particularly effective on datasets of limited size. Might transfer learning also be useful for paragraph generation? We propose to utilize transfer learning in two ways. First, we initialize our region detection network from a model trained for dense image captioning~\cite{johnson2016densecap}; although our model is end-to-end differentiable, we keep this sub-network fixed during training both for efficiency and also to prevent overfitting. Second, we initialize the word embedding vectors, recurrent network weights, and output linear projection of the word RNN from a language model that had been trained on region-level captions~\cite{johnson2016densecap}, fine-tuning these parameters during training to be better suited for the task of paragraph generation. Parameters for tokens not present in the region model are initialized from the parameters for the \texttt{UNK} token. This initialization strategy allows our model to utilize linguistic knowledge learned on large-scale region caption datasets~\cite{krishnavisualgenome} to produce better paragraph descriptions, and we validate the efficacy of this strategy in our experiments. \section{Introduction} \label{sec:intro} \input{intro.tex} \section{Related Work} \label{sec:related} \input{related.tex} \section{Paragraphs are Different} \label{sec:paragraphs} \input{paragraphs.tex} \section{Method} \label{sec:method} \input{method.tex} \section{Experiments} \label{sec:experiments} \input{experiments.tex} \section{Conclusion} \label{sec:discussion} \input{discussion.tex} {\small \bibliographystyle{ieee}
\section{Introduction} \blfootnote{ \hspace{-0.65cm} This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: \url{http://creativecommons.org/licenses/by/4.0/} } Text classification is an essential component in many NLP applications, such as sentiment analysis \cite{socher2013recursive}, relation extraction \cite{zeng2014relation} and spam detection \cite{wang2010don}. Therefore, it has attracted considerable attention from many researchers, and various types of models have been proposed. As a traditional method, the bag-of-words (BoW) model treats texts as unordered sets of words \cite{wang2012baselines}. In this way, however, it fails to encode word order and syntactic features. Recently, order-sensitive models based on neural networks have achieved tremendous success for text classification, and shown more significant progress compared with BoW models. The challenge for textual modeling is how to capture features for different text units, such as phrases, sentences and documents. Benefiting from its recurrent structure, RNN, as an alternative type of neural networks, is very suitable for processing variable-length text. RNN can capitalize on distributed representations of words by first converting the tokens comprising each text into vectors, which form a matrix. This matrix includes two dimensions: the time-step dimension and the feature vector dimension, and it will be updated in the process of learning feature representation. Then RNN utilizes 1D max pooling operation \cite{lai2015recurrent} or attention-based operation \cite{zhou2016attention}, which extracts maximum values or generates a weighted representation over the time-step dimension of the matrix, to obtain a fixed-length vector. Both of the two operators ignore features on the feature vector dimension, which may be important for sentence representation; therefore, the use of 1D max pooling and attention-based operators may pose a serious limitation. Convolutional Neural Networks (CNN) \cite{kalchbrenner2014convolutional,kim2014convolutional} utilizes 1D convolution to perform the feature mapping, and then applies 1D max pooling operation over the time-step dimension to obtain a fixed-length output. However, since the elements in the matrix learned by RNN are not independent (RNN reads a sentence word by word), one can effectively treat the matrix as an 'image'. Unlike in NLP, CNN in image processing tasks \cite{lecun1998gradient,krizhevsky2012imagenet} applies 2D convolution and 2D pooling operation to get a representation of the input. It is a good choice to utilize 2D convolution and 2D pooling to sample more meaningful features on both the time-step dimension and the feature vector dimension for text classification. Building on these observations, this paper proposes Bidirectional Long Short-Term Memory Networks with Two-Dimensional Max Pooling (BLSTM-2DPooling) to capture features on both the time-step dimension and the feature vector dimension. It first utilizes Bidirectional Long Short-Term Memory Networks (BLSTM) to transform the text into vectors. Then the 2D max pooling operation is utilized to obtain a fixed-length vector. This paper also applies 2D convolution (BLSTM-2DCNN) to capture more meaningful features to represent the input text.
The contributions of this paper can be summarized as follows: \begin{itemize} \item This paper proposes a combined framework, which utilizes BLSTM to capture long-term sentence dependencies, and extracts features by 2D convolution and 2D max pooling operation for sequence modeling tasks. To the best of our knowledge, this work is the first example of using 2D convolution and 2D max pooling operation in NLP tasks. \item This work introduces two combined models BLSTM-2DPooling and BLSTM-2DCNN, and verifies them on six text classification tasks, including sentiment analysis, question classification, subjectivity classification, and newsgroups classification. Compared with the state-of-the-art models, BLSTM-2DCNN achieves excellent performance on $4$ out of $6$ tasks. Specifically, it achieves highest accuracy on Stanford Sentiment Treebank binary classification and fine-grained classification tasks. \item To better understand the effect of 2D convolution and 2D max pooling operation, this paper conducts experiments on Stanford Sentiment Treebank fine-grained task. It first depicts the performance of the proposed models on different length of sentences, and then conducts a sensitivity analysis of 2D filter and max pooling size. \end{itemize} The remainder of the paper is organized as follows. In Section 2, the related work about text classification is reviewed. Section 3 presents the BLSTM-2DCNN architectures for text classification in detail. Section 4 describes details about the setup of the experiments. Section 5 presents the experimental results. The conclusion is drawn in the section 6. \section{Related Work} Deep learning based neural network models have achieved great improvement on text classification tasks. These models generally consist of a projection layer that maps words of text to vectors. And then combine the vectors with different neural networks to make a fixed-length representation. According to the structure, they may divide into four categories: Recursive Neural Networks (RecNN\footnote{To avoid confusion with RNN, we named Recursive Neural Networks as RecNN.}), RNN, CNN and other neural networks. \textbf{Recursive Neural Networks}: RecNN is defined over recursive tree structures. In the type of recursive models, information from the leaf nodes of a tree and its internal nodes are combined in a bottom-up manner. \newcite{socher2013recursive} introduced recursive neural tensor network to build representations of phrases and sentences by combining neighbour constituents based on the parsing tree. \newcite{irsoy2014deep} proposed deep recursive neural network, which is constructed by stacking multiple recursive layers on top of each other, to modeling sentence. \textbf{Recurrent Neural Networks}: RNN has obtained much attention because of their superior ability to preserve sequence information over time. \newcite{tang2015target} developed target dependent Long Short-Term Memory Networks (LSTM \cite{hochreiter1997long}), where target information is automatically taken into account. \newcite{tai2015improved} generalized LSTM to Tree-LSTM where each LSTM unit gains information from its children units. \newcite{zhou2016attention} introduced BLSTM with attention mechanism to automatically select features that have a decisive effect on classification. \newcite{yang2016hierarchical} introduced a hierarchical network with two levels of attention mechanisms, which are word attention and sentence attention, for document classification. 
This paper also implements an attention-based model, BLSTM-Att, similar to the model in \newcite{zhou2016attention}. \textbf{Convolutional Neural Networks}: CNN \cite{lecun1998gradient} is a feedforward neural network with 2D convolution layers and 2D pooling layers, originally developed for image processing. Then CNN is applied to NLP tasks, such as sentence classification \cite{kalchbrenner2014convolutional,kim2014convolutional}, and relation classification \cite{zeng2014relation}. The difference is that the common CNN in NLP tasks is made up of 1D convolution layers and 1D pooling layers. \newcite{kim2014convolutional} defined a CNN architecture with two channels. \newcite{kalchbrenner2014convolutional} proposed a dynamic $k$-max pooling mechanism for sentence modeling. \cite{zhang2015sensitivity} conducted a sensitivity analysis of one-layer CNN to explore the effect of architecture components on model performance. \newcite{yin2016multichannel} introduced multichannel embeddings and unsupervised pretraining to improve classification accuracy. Usually there is a misunderstanding that the 1D convolutional filter in NLP tasks has one dimension. Actually, it has two dimensions $(k, d)$, where $k$, $d \in \mathbb{R}$. As $d$ is equal to the word embeddings size $d^w$, the window slides only on the time-step dimension, so the convolution is usually called 1D convolution. While $d$ in this paper varies from 2 to $d^w$, to avoid confusion with common CNN, the convolution in this work is referred to as 2D convolution. The details will be described in Section~\ref{cnn_detail}. \textbf{Other Neural Networks}: In addition to the models described above, lots of other neural networks have been proposed for text classification. \newcite{iyyer2015deep} introduced a deep averaging network, which fed an unweighted average of word embeddings through multiple hidden layers before classification. \newcite{zhou2015c} used CNN to extract a sequence of higher-level phrase representations, which were then fed into an LSTM to obtain the sentence representation. The proposed model BLSTM-2DCNN is most relevant to DSCNN \cite{zhang2016dependency} and RCNN \cite{wen2016learning}. The difference is that the former two utilize LSTM and bidirectional RNN respectively, while this work applies BLSTM to capture long-term sentence dependencies. After that, the former two both apply 1D convolution and 1D max pooling operation, while this paper uses 2D convolution and 2D max pooling operation to obtain the whole sentence representation. \section{Model} \begin{figure*}[t] \centering \includegraphics[width=1.\linewidth]{coling2.eps} \caption{A BLSTM-2DCNN for a seven-word input sentence. Word embeddings have size 3, and BLSTM has 5 hidden units. The height and width of convolution filters and max pooling operations are 2 and 2, respectively.} \end{figure*} As shown in Figure 1, the overall model consists of four parts: BLSTM Layer, Two-dimensional Convolution Layer, Two-dimensional Max Pooling Layer, and Output Layer. The details of different components are described in the following sections. \subsection{BLSTM Layer} LSTM was first proposed by \newcite{hochreiter1997long} to overcome the gradient vanishing problem of RNN.
The main idea is to introduce an adaptive gating mechanism, which decides the degree to keep the previous state and memorize the extracted features of the current data input. Given a sequence $S =\{x_1, x_2, \dots, x_l\}$, where $l$ is the length of input text, LSTM processes it word by word. At time-step $t$, the memory $c_t$ and the hidden state $h_t$ are updated with the following equations: \begin{equation} \begin{bmatrix} i_t\\f_t\\o_t\\\hat{c}_t \end{bmatrix} = \begin{bmatrix} \sigma\\ \sigma\\ \sigma\\ \tanh \end{bmatrix} W\cdot \lbrack h_{t-1}, x_t \rbrack\\ \end{equation} \begin{eqnarray} c_t & = & f_t \odot c_{t-1} + i_t \odot \hat{c}_t\\ h_t & = & o_t \odot \tanh(c_t) \end{eqnarray} where $x_t$ is the input at the current time-step, $i$, $f$ and $o$ is the input gate activation, forget gate activation and output gate activation respectively, $\hat{c}$ is the current cell state, $\sigma$ denotes the logistic sigmoid function and $\odot$ denotes element-wise multiplication. For the sequence modeling tasks, it is beneficial to have access to the past context as well as the future context. \newcite{schuster1997bidirectional} proposed BLSTM to extend the unidirectional LSTM by introducing a second hidden layer, where the hidden to hidden connections flow in opposite temporal order. Therefore, the model is able to exploit information from both the past and the future. In this paper, BLSTM is utilized to capture the past and the future information. As shown in Figure 1, the network contains two sub-networks for the forward and backward sequence context respectively. The output of the $i^{th}$ word is shown in the following equation: \begin{equation} h_i=[\overrightarrow{h_i} \oplus \overleftarrow{h_i}] \end{equation} Here, element-wise sum is used to combine the forward and backward pass outputs. \subsection{Convolutional Neural Networks} \label{cnn_detail} Since BLSTM has access to the future context as well as the past context, $h_i$ is related to all the other words in the text. One can effectively treat the matrix, which consists of feature vectors, as an 'image', so 2D convolution and 2D max pooling operation can be utilized to capture more meaningful information. \subsubsection{Two-dimensional Convolution Layer} A matrix $H = \{h_1, h_2, \dots, h_l\}$, $H \in \mathbb{R}^{l \times d^w}$, is obtained from BLSTM Layer, where $d^w$ is the size of word embeddings. Then narrow convolution is utilized \cite{kalchbrenner2014convolutional} to extract local features over $H$. A convolution operation involves a 2D filter $\mathbf{m} \in \mathbb{R}^{k \times d}$, which is applied to a window of k words and d feature vectors. For example, a feature $o_{i, j}$ is generated from a window of vectors $H_{i:i+k-1, \; j:j+d-1}$ by \begin{equation} o_{i, j} = f(\mathbf{m} \cdot H_{i:i+k-1, \; j:j+d-1} + b) \end{equation} where $i$ ranges from 1 to $(l-k+1)$, $j$ ranges from 1 to $(d^w-d+1)$, $\cdot$ represents dot product, $b \in \mathbb{R}$ is a bias and an $f$ is a non-linear function such as the hyperbolic tangent. This filter is applied to each possible window of the matrix $H$ to produce a feature map $O$: \begin{equation} O = [o_{1,1}, o_{1,2}, \cdots, o_{l-k+1, d^w-d+1}] \end{equation} with $O \in \mathbb{R}^{(l-k+1) \times (d^w-d+1)}$. It has described the process of one convolution filter. The convolution layer may have multiple filters for the same size filter to learn complementary features, or multiple kinds of filter with different size. 
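To illustrate the 2D convolution over the BLSTM output, the following PyTorch-style sketch treats the $l \times d^w$ matrix $H$ as a one-channel image and applies a narrow $(k, d)$ filter, as in the feature map equations above. The concrete sizes are placeholders chosen for illustration, not the settings used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

l, d_w = 20, 300              # sentence length and BLSTM output size
k, d, n_filters = 3, 3, 100   # 2D filter height/width, number of feature maps

H = torch.randn(l, d_w)                  # BLSTM outputs, one row per word
x = H.unsqueeze(0).unsqueeze(0)          # (batch=1, channel=1, l, d_w)

# Narrow ("valid") 2D convolution: each filter yields an
# (l-k+1) x (d_w-d+1) feature map, as in the equations above.
conv = nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=(k, d))
O = torch.tanh(conv(x))                  # non-linearity f = tanh
print(O.shape)                           # torch.Size([1, 100, 18, 298])
\end{verbatim}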
\subsubsection{Two-dimensional Max Pooling Layer} Then 2D max pooling operation is utilized to obtain a fixed length vector. For a 2D max pooling $p \in \mathbb{R}^{p_1 \times p_2}$, it is applied to each possible window of matrix O to extract the maximum value: \begin{equation} p_{i,j} = down{\left(O_{i:i+p_1, \; j:j+p_2}\right)} \end{equation} where $down(\cdot)$ represents the 2D max pooling function, $i= (1, 1+p_1, \cdots, 1+(l-k+1/p_1-1) \cdot p_1)$, and $j = (1, \; 1+p_2, \cdots, 1+(d^w-d+1/p_2-1) \cdot p_2)$. Then the pooling results are combined as follows: \begin{equation} h^* = [p_{1, 1}, p_{1, 1+p_2}, \cdots, p_{1+(l-k+1/p_1-1) \cdot p_1, 1+(d^w-d+1/p_2-1) \cdot p_2}] \end{equation} where $h^* \in \mathbb{R}$, and the length of $h^*$ is $\lfloor l-k+1/p_1 \rfloor \times \lfloor d^w-d+1/p_2 \rfloor$. \subsection{Output Layer} For text classification, the output $h^*$ of 2D Max Pooling Layer is the whole representation of the input text $S$. And then it is passed to a softmax classifier layer to predict the semantic relation label $\hat{y}$ from a discrete set of classes $\mathit{Y}$. The classifier takes the hidden state $h^*$ as input: \begin{eqnarray} \hat{p}\left(y | s\right) & = & softmax \left(W^{\left(s\right)} h^* + b^{\left(s\right)}\right)\\ \hat{y} & = & \arg \max_y \hat{p}\left(y | s\right) \end{eqnarray} A reasonable training objective to be minimized is the categorical cross-entropy loss. The loss is calculated as a regularized sum: \begin{equation} J\left(\theta\right) = -\frac{1}{m}\sum_{i=1}^{m}t_i\log(y_i) + \lambda{\Vert\theta\Vert}_F^2 \end{equation} where $\boldsymbol{t} \in \mathbb{R}^m$ is the one-hot represented ground truth, $\boldsymbol{y} \in \mathbb{R}^m$ is the estimated probability for each class by softmax, $m$ is the number of target classes, and $\lambda$ is an L2 regularization hyper-parameter. Training is done through stochastic gradient descent over shuffled mini-batches with the AdaDelta \cite{zeiler2012adadelta} update rule. Training details are further introduced in Section~\ref{hyper-parameter}. \begin{table*}[!t] \centering \begin{tabular}{c||c|c|c|c|c|c|c|c} \hline {\bf{Data}} & {c} & {l} & {m} & {train} & {dev} & {test} & {$|V|$} & {$|V_{pre}|$} \\ \hline SST-1 &5 &18 &51 &8544 &1101 &2210 &17836 &12745\\ SST-2 &2 &19 &51 &6920 &872 &1821 &16185 &11490\\ Subj &2 &23 &65 &10000 &- &\textbf{CV} &21057 &17671\\ TREC &6 &10 &33 &5452 &- &500 &9137 &5990\\ MR &2 &21 &59 &10662 &- &\textbf{CV} &20191 &16746\\ 20Ng &4 &276 &11468 &7520 &836 &5563 &51379 &30575\\ \hline \end{tabular} \caption{Summary statistics for the datasets. c: number of target classes, l: average sentence length, m: maximum sentence length, train/dev/test: train/development/test set size, $|V|$: vocabulary size, $|V_{pre}|$: number of words present in the set of pre-trained word embeddings, \textbf{CV}: 10-fold cross validation.}\label{tab:result} \end{table*} \section{Experimental Setup} \subsection{Datasets} The proposed models are tested on six datasets. Summary statistics of the datasets are in Table 1. \begin{itemize} \item \textbf{MR}\footnote{https://www.cs.cornell.edu/people/pabo/movie-review-data/}: Sentence polarity dataset from \newcite{pang2005seeing}. The task is to detect positive/negative reviews. \item \textbf{SST-1}\footnote{http://nlp.stanford.edu/sentiment/}: Stanford Sentiment Treebank is an extension of MR from \newcite{socher2013recursive}. 
The aim is to classify a review as fine-grained labels (very negative, negative, neutral, positive, very positive). \item \textbf{SST-2}: Same as SST-1 but with neutral reviews removed and binary labels (negative, positive). For both experiments, phrases and sentences are used to train the model, but only sentences are scored at test time \cite{socher2013recursive,le2014distributed}. Thus the training set is an order of magnitude larger than listed in table 1. \item \textbf{Subj}\footnote{http://www.cs.cornell.edu/people/pabo/movie-review-data/}: Subjectivity dataset \cite{pang2004sentimental}. The task is to classify a sentence as being subjective or objective. \item \textbf{TREC}\footnote{http://cogcomp.cs.illinois.edu/Data/QA/QC/}: Question classification dataset \cite{li2002learning}. The task involves classifying a question into 6 question types (abbreviation, description, entity, human, location, numeric value). \item \textbf{20Newsgroups}\footnote{http://web.ist.utl.pt/acardoso/datasets/}: The 20Ng dataset contains messages from twenty newsgroups. We use the bydate version preprocessed by \newcite{cachopo2007improving}. We select four major categories (comp, politics, rec and religion) followed by \newcite{hingmire2013document}. \end{itemize} \subsection{Word Embeddings} The word embeddings are pre-trained on much larger unannotated corpora to achieve better generalization given limited amount of training data \cite{turian2010word}. In particular, our experiments utilize the GloVe embeddings\footnote{http://nlp.stanford.edu/projects/glove/} trained by \newcite{pennington2014glove} on 6 billion tokens of Wikipedia 2014 and Gigaword 5. Words not present in the set of pre-trained words are initialized by randomly sampling from uniform distribution in $[-0.1, 0.1]$. The word embeddings are fine-tuned during training to improve the performance of classification. \subsection{Hyper-parameter Settings} \label{hyper-parameter} For datasets without a standard development set we randomly select $10\%$ of the training data as the development set. The evaluation metric of the 20Ng is the Macro-F1 measure followed by the state-of-the-art work and the other five datasets use accuracy as the metric. The final hyper-parameters are as follows. The dimension of word embeddings is 300, the hidden units of LSTM is 300. We use 100 convolutional filters each for window sizes of (3,3), 2D pooling size of (2,2). We set the mini-batch size as 10 and the learning rate of AdaDelta as the default value 1.0. For regularization, we employ Dropout operation \cite{hinton2012improving} with dropout rate of 0.5 for the word embeddings, 0.2 for the BLSTM layer and 0.4 for the penultimate layer, we also use l2 penalty with coefficient $10^{-5}$ over the parameters. These values are chosen via a grid search on the SST-1 development set. We only tune these hyper-parameters, and more finer tuning, such as using different numbers of hidden units of LSTM layer, or using wide convolution \cite{kalchbrenner2014convolutional}, may further improve the performance. 
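Putting the hyper-parameters above together, the following is a simplified PyTorch-style sketch of one forward pass of BLSTM-2DCNN (300-dimensional embeddings, 300 BLSTM hidden units combined by element-wise sum, 100 convolutional filters of size (3,3), and (2,2) 2D max pooling). It illustrates the architecture described above rather than the original implementation, and omits dropout and the $l2$ penalty.
\begin{verbatim}
import torch
import torch.nn as nn

class BLSTM2DCNN(nn.Module):
    def __init__(self, vocab_size, n_classes, emb_dim=300, hidden=300,
                 n_filters=100, k=3, d=3, pool=2):
        super().__init__()
        self.hidden = hidden
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.blstm = nn.LSTM(emb_dim, hidden, bidirectional=True,
                             batch_first=True)
        self.conv = nn.Conv2d(1, n_filters, kernel_size=(k, d))
        self.pool = nn.MaxPool2d(pool)
        self.out = nn.LazyLinear(n_classes)   # infers the flattened feature size

    def forward(self, tokens):                # tokens: (batch, seq_len)
        e = self.embed(tokens)                # (batch, seq_len, 300)
        h, _ = self.blstm(e)                  # (batch, seq_len, 600)
        # Element-wise sum of the forward and backward passes.
        h = h[..., :self.hidden] + h[..., self.hidden:]
        o = torch.tanh(self.conv(h.unsqueeze(1)))  # treat h as a 1-channel image
        p = self.pool(o).flatten(1)           # 2D max pooling, then flatten
        return self.out(p)                    # class logits for the softmax layer

logits = BLSTM2DCNN(vocab_size=10000, n_classes=5)(
    torch.randint(0, 10000, (2, 20)))
print(logits.shape)                           # torch.Size([2, 5])
\end{verbatim}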
\begin{table*}[!t] \centering \begin{tabular}{c||l|c|c|c|c|c|c} \hline {\bf{NN}} & {\bf{Model}} & {\bf{SST-1}} & {\bf{SST-2}} & {\bf{Subj}} & {\bf{TREC}} & {\bf{MR}} & {\bf{20Ng}} \\ \hline \multirow{2}{*}{ReNN} &RNTN \cite{socher2013recursive} &45.7 &85.4 &- &- &- &-\\ &DRNN \cite{irsoy2014deep} &49.8 &86.6 &- &- &- &-\\ \hline \multirow{8}{*}{CNN} &DCNN \cite{kalchbrenner2014convolutional} &48.5 &86.8 &- &93.0 &- &-\\ &CNN-non-static \cite{kim2014convolutional} &48.0 &87.2 &93.4 &93.6 &- &-\\ &CNN-MC \cite{kim2014convolutional} &47.4 &88.1 &93.2 &92 &- &-\\ &TBCNN\cite{mou2015discriminative} &51.4 &87.9 &- &96.0 &- &-\\ &Molding-CNN \cite{lei2015molding} &51.2 &88.6 &- &- &- &-\\ &CNN-Ana \cite{zhang2015sensitivity} &45.98 &85.45 &93.66 &91.37 &81.02 &-\\ &MVCNN \cite{yin2016multichannel} &49.6 &89.4 &93.9 &- &- &-\\ \hline \multirow{7}{*}{RNN} &RCNN \cite{lai2015recurrent} &47.21 &- &- &- &- &96.49\\ &S-LSTM \cite{zhu2015long} &- &81.9 &- &- &- &-\\ &LSTM \cite{tai2015improved} &46.4 &84.9 &- &- &- &-\\ &BLSTM \cite{tai2015improved} &49.1 &87.5 &- &- &- &-\\ &Tree-LSTM \cite{tai2015improved} &51.0 &88.0 &- &- &- &-\\ &LSTMN \cite{cheng2016long} &49.3 &87.3 &- &- &- &-\\ &Multi-Task \cite{liu2016recurrent} &49.6 &87.9 &94.1 &- &- &-\\ \hline \multirow{7}{*}{Other} &PV \cite{le2014distributed} &48.7 &87.8 &- &- &- &-\\ &DAN \cite{iyyer2015deep} &48.2 &86.8 &- &- &- &-\\ &combine-skip \cite{kiros2015skip} &- &- &93.6 &92.2 &76.5 &-\\ &AdaSent \cite{zhao2015self} &- &- &\textbf{95.5} &92.4 &\textbf{83.1} &-\\ &LSTM-RNN \cite{le2015compositional} &49.9 &88.0 &- &- &- &-\\ &C-LSTM \cite{zhou2015c} &49.2 &87.8 &- &94.6 &- &-\\ &DSCNN \cite{zhang2016dependency} &49.7 &89.1 &93.2 &95.4 &81.5 &-\\ \hline \multirow{4}{*}{ours} &BLSTM &49.1 &87.6 &92.1 &93.0 &80.0 &94.0\\ &BLSTM-Att &49.8 &88.2 &93.5 &93.8 &81.0 &94.6\\ &BLSTM-2DPooling &50.5 &88.3 &93.7 &94.8 &81.5 &95.5\\ &BLSTM-2DCNN &\textbf{52.4} &\textbf{89.5} &94.0 &\textbf{96.1} &82.3 &\textbf{96.5}\\ \hline \end{tabular} \caption{Classification results on several standard benchmarks. \textbf{RNTN}: Recursive deep models for semantic compositionality over a sentiment treebank \protect\cite{socher2013recursive}. \textbf{DRNN}: Deep recursive neural networks for compositionality in language \protect\cite{irsoy2014deep}. \textbf{DCNN}: A convolutional neural network for modeling sentences \protect\cite{kalchbrenner2014convolutional}. \textbf{CNN-nonstatic/MC}: Convolutional neural networks for sentence classification \protect\cite{kim2014convolutional}. \textbf{TBCNN}: Discriminative neural sentence modeling by tree-based convolution \protect\cite{mou2015discriminative}. \textbf{Molding-CNN}: Molding CNNs for text: non-linear, non-consecutive convolutions \protect\cite{lei2015molding}. \textbf{CNN-Ana}: A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification \protect\cite{zhang2015sensitivity}. \textbf{MVCNN}: Multichannel variable-size convolution for sentence classification \protect\cite{yin2016multichannel}. \textbf{RCNN}: Recurrent Convolutional Neural Networks for Text Classification \protect\cite{lai2015recurrent}. \textbf{S-LSTM}: Long short-term memory over recursive structures \protect\cite{zhu2015long}. \textbf{LSTM/BLSTM/Tree-LSTM}: Improved semantic representations from tree-structured long short-term memory networks \protect\cite{tai2015improved}. \textbf{LSTMN}: Long short-term memory-networks for machine reading \protect\cite{cheng2016long}. 
\textbf{Multi-Task}: Recurrent Neural Network for Text Classification with Multi-Task Learning \protect\cite{liu2016recurrent}. \textbf{PV}: Distributed representations of sentences and documents \protect\cite{le2014distributed}. \textbf{DAN}: Deep unordered composition rivals syntactic methods for text classification \protect\cite{iyyer2015deep}. \textbf{combine-skip}: skip-thought vectors \protect\cite{kiros2015skip}. \textbf{AdaSent}: Self-adaptive hierarchical sentence model \protect\cite{zhao2015self}. \textbf{LSTM-RNN}: Compositional distributional semantics with long short term memory \protect\cite{le2015compositional}. \textbf{C-LSTM}: A C-LSTM Neural Network for Text Classification \protect\cite{zhou2015c}. \textbf{DSCNN}: Dependency Sensitive Convolutional Neural Networks for Modeling Sentences and Documents \protect\cite{zhang2016dependency}.} \end{table*} \section{Results} \subsection{Overall Performance} This work implements four models, BLSTM, BLSTM-Att, BLSTM-2DPooling, and BLSTM-2DCNN. Table 2 presents the performance of the four models and other state-of-the-art models on six classification tasks. The BLSTM-2DCNN model achieves excellent performance on 4 out of 6 tasks. In particular, it achieves $52.4\%$ and $89.5\%$ test accuracies on SST-1 and SST-2 respectively. BLSTM-2DPooling performs worse than the state-of-the-art models. While we expect performance gains through the use of 2D convolution, we are surprised at the magnitude of the gains. BLSTM-2DCNN beats all baselines on SST-1, SST-2, and TREC datasets. As for Subj and MR datasets, BLSTM-2DCNN obtains the second highest accuracies. Some of the previous techniques only work on sentences, but not paragraphs/documents with several sentences. Our question becomes whether it is possible to use our models for datasets that have a substantial number of words, such as 20Ng, where the content consists of many different topics. For that purpose, this paper tests the four models on the document-level dataset 20Ng, by treating the document as a long sentence. Compared with RCNN \cite{lai2015recurrent}, BLSTM-2DCNN achieves a comparable result. Besides, this paper also compares with ReNN, RNN, CNN and other neural networks: \begin{itemize} \item{Compared with ReNN, the proposed two models do not depend on external language-specific features such as dependency parse trees.} \item{CNN extracts features from word embeddings of the input text, while BLSTM-2DPooling and BLSTM-2DCNN capture features from the output of the BLSTM layer, which has already extracted features from the original input text.} \item{BLSTM-2DCNN is an extension of BLSTM-2DPooling, and the results show that BLSTM-2DCNN can capture more dependencies in text.} \item{AdaSent utilizes a more complicated model to form a hierarchy of representations, and it outperforms BLSTM-2DCNN on Subj and MR datasets. Compared with DSCNN \cite{zhang2016dependency}, BLSTM-2DCNN outperforms it on five datasets.} \end{itemize} Compared with these results, 2D convolution and 2D max pooling operation are more effective for modeling sentences, and even documents. To better understand the effect of 2D operations, this work conducts a sensitivity analysis on SST-1 dataset.
\begin{figure} \begin{tabular}{lr} \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[scale=0.4]{r1.eps} \caption{Fine-grained sentiment classification accuracy $vs.$ sentence length.} \label{pool} \end{minipage} \hspace{0.2in} \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[scale=0.4]{pooling_final1.eps} \caption{Prediction accuracy with different size of 2D filter and 2D max pooling.} \label{pool} \end{minipage} \end{tabular} \end{figure} \subsection{Effect of Sentence Length} Figure 2 depicts the performance of the four models on different lengths of sentences. In the figure, the x-axis represents sentence lengths and the y-axis is accuracy. The sentences collected in the test set are no longer than 45 words. The accuracy here is the average value of the sentences with length in the window $[l-2, l+2]$. Each data point is a mean score over 5 runs, and error bars have been omitted for clarity. It is found that both BLSTM-2DPooling and BLSTM-2DCNN outperform the other two models. This suggests that both 2D convolution and 2D max pooling operation are able to encode semantically-useful structural information. At the same time, it shows that the accuracies decline as the length of sentences increases. In future work, we would like to investigate neural mechanisms to preserve long-term dependencies of text. \subsection{Effect of 2D Convolutional Filter and 2D Max Pooling Size} We are interested in which 2D filter and max pooling sizes give the best performance. We conduct experiments on the SST-1 dataset with BLSTM-2DCNN and set the number of feature maps to 100. To make it simple, we set these two dimensions to the same values, thus both the filter and the pooling are square matrices. For the horizontal axis, c means 2D convolutional filter size, and the five different color bar charts on each c represent different 2D max pooling sizes from 2 to 6. Figure 3 shows that different sizes of filter and pooling lead to different accuracies. The best accuracy is 52.6 with 2D filter size (5,5) and 2D max pooling size (5,5), which shows that finer tuning can further improve the performance reported here. If a larger filter is used, the convolution can detect more features, and the performance may be improved as well. However, the networks will take up more storage space, and consume more time. \section{Conclusion} This paper introduces two combined models, BLSTM-2DPooling and BLSTM-2DCNN; the latter can be seen as an extension of BLSTM-2DPooling. Both models can capture information not only on the time-step dimension but also on the feature vector dimension. The experiments are conducted on six text classification tasks. The experimental results demonstrate that BLSTM-2DCNN not only outperforms RecNN, RNN and CNN models, but also works better than BLSTM-2DPooling and DSCNN \cite{zhang2016dependency}. In particular, BLSTM-2DCNN achieves the highest accuracy on SST-1 and SST-2 datasets. To better understand the effectiveness of the two proposed models, this work also conducts a sensitivity analysis on the SST-1 dataset. It is found that a larger filter can detect more features, and this may lead to performance improvement. \section*{Acknowledgements} We thank anonymous reviewers for their constructive comments. This research was funded by the National High Technology Research and Development Program of China (No.2015AA015402), and the National Natural Science Foundation of China (No.
61602479), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB02070005). \bibliographystyle{acl}
\section{Introduction} Predicting pedestrian trajectory is an essential task for accomplishing mobility-based jobs in naturalistic environments, such as robotics navigation in crowded areas, safe autonomous driving and many other applications that require foreseeing motion in an interactive dynamical system. Existing literature has focused so far on modeling the social interactions between pedestrians by discretizing the environment as a grid of local spatial neighborhoods \cite{alahi2016social}, taking global scope of the whole scene \cite{vemula2018social} or inferring relationships between pedestrians pairwise \cite{choi2019looking}. So far, spatial neighborhoods have been the fundamental basis for considering pedestrians influence on each other, accounting only for the positional and higher-order motion features. Other methods have used additional features such as head pose \cite{hasan2018mx,hasan2019forecasting,hasan2018seeing} to attain the visual field of attention in pedestrians in order to assess how they are related to each other. Classical approaches \cite{hasan2018seeing,helbing1995social,yamaguchi2011you} resorted to a hardcoded quantitative threshold to define the proximal distance for neighborhoods. The use of deep learning improves the neighborhood concept by learning a custom threshold for each pedestrian \cite{kipf2018nri}. MX-LSTM \cite{hasan2018mx} determines neighborhoods around pedestrians visually based on the fixed visual horizon of pedestrian. Within this imaginary area, pedestrians spatial states get pooled. To this end, MX-LSTM only considers the correlation between head pose and speed. In the aforementioned works, generalization can be problematic as the models rely on fixed assumptions that pertain to specific scenes. Recent works introduced additional features such as the looking angle of pedestrians \cite{hasan2018mx}. These approaches and GAN-based approaches resorted to dedicating several pooling layers for treating multiple features \cite{lisotto2019social,ridel2019scene} or multiple trajectories states \cite{amirian2019social,sadeghian2018sophie,zhang2019seabig} which will require additional transformation and pooling layers or assigning separate LSTM to each pedestrian in order to treat all the features together. \begin{figure} \centering \includegraphics[width=6cm, height=4cm, trim={0 0 0 2cm }]{images/intro_figure.png} \caption{Illustration of adaptive deep neighborhoods selection process. The process is comprised of two grids. To the left, the static neighborhood grid $f_O$ segments the scene image into several local regions. It takes pedestrians looking angle $v_1, v_2$ (shaped as yellow cones) to stem their awareness of the static surroundings. The dynamic grid $f_S$ takes pedestrians trajectories $x_1, x_2$ along with their looking angle to stem their social interactions. The end goal is to predict future trajectories $\Tilde{x}^1_{t+l}, \Tilde{x}^2_{t+l}$ accurately given the estimation of future potential neighborhoods and the best modeling of social interactions. To the right, the output static grid has few highlighted areas, which indicates future neighborhoods where pedestrians would walk. 
Also note the leaning links connecting pedestrians, which indicate the existence of social influence between them.} \label{fig:intro_fig_visuospatial_ngh} \end{figure}{} In order to overcome the limitations of existing approaches, we estimate the neighborhood given the static context around each pedestrian and a mixture of social cues that define pedestrian situational awareness and social interactions. More concretely, we define an adaptive neighborhood that relies on visual locus and spatial constraints, supported by the existential correlation that associates the head pose with walking direction and intention. This is based on the observation that although people often do not articulately plan their walking trajectories, they keep their attention focused within coarse path boundaries. They also do not strictly define their neighborhoods. The neighborhood is a virtual concept to discretize the scene and depends on a mixture of social cues that indicate social interaction state and pedestrian awareness. Our approach updates the neighborhood definition for a multi-cued context, where multiple features are encoded for pedestrians. We coin the term Visuospatial neighborhoods for this, as such neighborhoods characterize the spatial distance around pedestrians as well as their visual attention boundaries. Figure \ref{fig:intro_fig_visuospatial_ngh} shows that neighborhood $\nu_4$ is highlighted with a yellowish shade. This is perceived as a future neighborhood for pedestrians that are related to each other. Hence their states are to be pooled together as they share the same future neighborhood. Anticipating some region as a potential neighborhood relies on where pedestrians look and on their walking paths. We propose Graph-to-Kernel LSTM (G2K LSTM) that combines the spatial physical boundaries and the visual attention scope to shape each pedestrian neighborhood. G2K LSTM is an LSTM-based approach that transforms a spatio-temporal graph into a kernel to estimate the correlations between pedestrians. This correlation represents the importance of a relationship between two people, stemming from the natural correlation between their visual angle and their relative distances. In sociological and psychological proxemics \cite{hall1966hidden,bera2017sociosense}, humans maintain a specific distance from others as their personal zone, within which they feel comfortable and evade collisions. This zone is maintained by what they observe and pay attention to. Attaining accurate estimation of such neighborhoods is a non-trivial task, due to the stochasticity pertaining to the ground-truth definition. Following a stochastic optimization, neighborhood formation is bounded to minimize the trajectory prediction errors, such that the underlying graph structure improves the social interaction modeling. We develop a neighborhood modeling mechanism based on Grid-LSTM \cite{kalchbrenner2015grid}, which is a grid mask that segments the environment into a regular-shaped grid. It encompasses the sharing mechanism between adjacent neighborhoods in the grid. This sharing determines how pedestrians social interaction is modeled in deep networks. Predicting pedestrians future trajectory can be used to further define future neighborhoods, thereby leading to a better understanding of how pedestrians are related and influenced by each other. In summary, this paper delivers the following contributions: \begin{itemize} \item We introduce Grid-LSTM into pedestrian trajectory prediction to encode multiple social cues altogether.
Seeding from the natural correlation found between the visual and the spatial cues mixture, the network learns a soft-attention mechanism for evaluating the importance of a given social interaction from within the provided data. The correlation between head pose and walking direction emphasizes pedestrian walking intention, and in this work we combine head pose with walking trajectories to improve social and contextual modeling of pedestrians interactions. \item We present a deep neighborhood selection method for estimating the influence of social relationships between pedestrians. It guides the message-passing process according to a relational inference in the graph-based network that decides the routes for sharing the attention weights between pedestrian nodes. \item Due to the aforementioned data-driven mechanisms, our approach yields state-of-the-art results on widely-tested datasets. Our models also produce consistent results across the datasets, which indicates their generalization capability to various crowd contexts. \end{itemize} \section{Related Work} \paragraph{The visual field and the situational awareness.} Existing works have shown the benefits of combining head pose with positional trajectory for prediction. The head pose is used as a substitute for the gaze direction to determine the Visual Field of Attention (VFOA) \cite{hasan2018seeing,yang2018my}. The head pose feature correlates with the walking direction and speed, which indicates pedestrian destinations as well as their awareness of the surrounding context. The visual field of attention in pedestrians has relied on assumptions that align head pose with gaze direction to fixate the attention region as the pedestrian is walking. Similar to \cite{yang2018my}, we argue that the width of the visual field and its shape should affect the representation of pedestrian visual awareness state and thereby their neighborhood perception. To a large extent, when pedestrians are walking, they only consider other pedestrians who are close and can pose a direct influence. This social situation can be captured when using a narrow visual angle, i.e., $30^{\circ}$. Nevertheless, pedestrians do not always look straight ahead; they tend to tilt their heads and therefore perceive more of the environment structure and other dynamical objects around them. Using a wider sight span for each pedestrian allows a better perception of pedestrian awareness and focal attention. \paragraph{Relational inference for neighborhood selection.} Extensive research has been conducted on relationship inference between data entities in image segmentation \cite{scarselli2008graph,battaglia2018relational} and graph recovery for interactive physical systems \cite{webb2019factorised,kipf2018nri}. Recently, researchers have engaged spatio-temporal graphs to model pedestrians relationships to each other \cite{jain2016structural,vemula2018social}. More advanced approaches such as \cite{choi2019looking,sadeghian2018sophie,xue2019location,fernando2018soft+} evaluate the relational importance between pedestrians using neural attention techniques. In addition to using attention, \cite{zhang2019sr} deploys a refinery process that iterates over variants of the neighborhood until it selects the best neighbors set for each pedestrian. In our work, we similarly target neighborhood selection problems but with fewer iterations and a faster recurrent cell called Grid-LSTM that requires fewer training epochs.
\paragraph{Social neighborhoods for human-human interaction.} In the literature, this area is extensively studied as two separate disciplines: Human-human interaction and Human-space interaction. Forming neighborhoods is found to be based on a single type or a combination of both interaction types and the objective is to determine a way for combining pedestrians and discovering their influence. The approaches that socially define the basis for neighborhoods \cite{alahi2016social,cheng2018pedestrian,xue2018ss} proposed pooling techniques to effectively combine spatially proximate pedestrians, while \cite{haddad2020self} proposed influence-ruled techniques to combine pedestrians that align their motion according to each other. However, even when modeling the spatial neighborhoods with the aid of context information, existing approaches generally involve hardcoded proxemic distance for outlining neighborhood boundaries as fixed grids. While these works achieve successful results, they fail to consider dynamic environments representation which may cause failure in cases that require adaptive neighborhoods or extra cues such as pedestrian visual sight span, to ascertain the conjunction between social motion and contextual restrictions. \section{Our Approach} \subsection{Problem Formulation} The problem of learning neighborhood boundaries is formulated as minimization of Euclidean errors between the predicted trajectory $\widetilde{X}$ and ground-truth trajectory $X$: \begin{equation} \label{eqn_costfn} \mathcal{L} = \underset{J_\theta}{argmin} ||\quad \widetilde{X} - X \quad||^2_2 \end{equation}{} Such that $J_\theta$ refers to the network trainable parameters that minimize the loss, $\mathcal{L}$, \textit{L2-norm} of $\widetilde{X}$ and $X$. Let $X$ be pedestrians trajectories, such that: $X$ = ${x_1, x_2, ... , x_n}$, with n pedestrians. $\widetilde{X}$ are future trajectories, $x^{i}_t$ is i-th pedestrian trajectory from time-step $t = 1$ until $t = t + obs$, given that $obs$ is observation length. We observe 8 steps of each pedestrian trajectory and predict for the next 12 steps. Each predicted step is added to the first predicted point, $\Tilde{x}^i_{t+pred} = \Tilde{x}^i_{t+1} + \Tilde{x}^i_{t+2} + ... + \Tilde{x}^i_{t+l}$ to maintain consistency and dependency between predicted steps. \subsection{Temporal Graphs} Given a set of pedestrians P, we represent their trajectories over time using a temporal graph $G_t$ at each time step $t$, containing \ensuremath{\mathcal{N}} nodes, such that each pedestrian is assigned a node $n_t$ to store their position and temporal edge $e_t$ that links the node $n_{t-1}$ to $n_t$. Our approach Graph-to-Kernel (G2K), maps temporal graphs into a kernel of fixed dimension, $K$, such that the kernel generates predicted trajectories $\widetilde{X}$ and adjacency states between the pedestrian nodes. Adjacency models the social interactions. So $J_\theta$ in Eq. \ref{eqn_costfn} refers to pedestrians associations to each others: \begin{equation}\label{eqn:kernel} \widetilde{X} , J_\theta = K(G_{(\mathcal{N},\nu)}) \end{equation}{} \subsection{Social Relational Inference (SRI)} \label{sec:sri} In this section, we explain the SRI unit which is the proposed kernel. We detail the steps for the mechanism in which the kernel $K$ performs deep relational inference between pedestrians. SRI unit starts estimating the adjacency state for each pedestrian. It forms the adaptively-shaped social neighborhoods. 
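As a small numerical illustration of the objective in Eq.~\ref{eqn_costfn} and of the cumulative prediction scheme described above (8 observed steps, 12 predicted steps, each predicted step added to the previously predicted point), consider the following Python sketch; the random data are placeholders rather than real trajectories.
\begin{verbatim}
import numpy as np

pred_len = 12                                   # 8 steps are observed, 12 predicted
rng = np.random.default_rng(0)

gt_future = rng.normal(size=(pred_len, 2))      # ground-truth future positions
pred_offsets = rng.normal(size=(pred_len, 2))   # predicted per-step displacements
last_observed = np.array([1.0, 2.0])            # position at the end of observation

# Each predicted step is added to the previously predicted point,
# keeping consistency and dependency between predicted steps.
pred_future = last_observed + np.cumsum(pred_offsets, axis=0)

# Squared L2 prediction error, as in the training objective.
l2_error = np.sum((pred_future - gt_future) ** 2)
print(l2_error)
\end{verbatim}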
The kernel has a customized design depending on the feature set included in each of our model versions. As defined earlier in Eq. \ref{eqn:kernel}, the kernel $K$ generates the social interaction states and the future positional predictions; for the $MC$ model and the $MCR^*$ model set, Eq. \ref{eqn:kernel} is updated as follows:
\begin{equation} \label{mc_mcr_K} \widetilde{X}, J_\theta = K(f_S, V, h) \end{equation}
A single Grid-LSTM cell $NLSTM_{\nu}$ is used as the encoder of the social human-human interactions. It assumes that motion happens over a uniformly-divided square grid where each neighborhood is tagged by $\nu$. SRI casts $NLSTM_{\nu}$ over the temporal graph $G_t$; it takes the pedestrian trajectories $X$ and an initial hidden state $h$, and generates embeddings of spatial features $f_S$. To formulate the multi-cued trajectory set $T$, we first encode the positional trajectories $X$ using a two-stage transformation function $\phi$ as follows:
\begin{equation} \mathcal{X} = \phi (X) \end{equation}
\begin{equation} \phi (X) = W_{ii} * (W_i * X) \end{equation}
In our work, we coin a new term for multi-cued neighborhoods, the "Visuospatial" neighborhood, which is a combination of the pedestrians' spatial whereabouts and their visual sight span. To represent the Visuospatial neighborhoods, we encode the 2D head pose annotations, also called Vislets, $V$ using a single-stage transformation function $\phi$ as follows:
\begin{equation} \mathcal{V} = \phi (V) \end{equation}
\begin{equation} \phi (V) = W_i * V \end{equation}
Then we concatenate the embedded cues $\mathcal{V}$ and $\mathcal{X}$ and feed them into the Grid-LSTM in Eq. \ref{eqn:ngh_glstm}:
\begin{equation} T = (\mathcal{X}, \mathcal{V}) \end{equation}
\begin{equation} \label{eqn:ngh_glstm} f_S, h_S = NLSTM_{\nu}(T, h_S) \end{equation}
We developed several models, starting with the simplest model comprising a single Grid-LSTM cell with only positional trajectories; we term it the G-LSTM model. According to the G-LSTM model, the kernel $K$ of Eq. \ref{eqn:kernel} is updated as follows:
\begin{equation} \label{glstm_K} \widetilde{X}, J_\theta = K(f_S, h) \end{equation}
\begin{figure} \centering \includegraphics[width=0.85\linewidth, height=6cm]{images/gated_neighborhood_network.png} \caption{Full pipeline of the G2K kernel. The SRI network encodes Vislets $\mathcal{V}$ and positional trajectories $\mathcal{X}$ for each pedestrian trajectory, then maps them into the social grid mask $f_S$ using $NLSTM_\nu$. The GNN network discretizes the static context using $NLSTM_O$ into 'Visuospatial' neighborhoods and stores the pedestrian contextual awareness in $f_O$. At the subsequent step, SRI takes $f_O$ and $f_S$ and maps them into the weighted adjacency matrix. This generates the edge set $\nu$, completing the graph $G_t$ at time-step $t$.} \label{fig:sri} \end{figure}
After that, we develop two versions of the Multi-Cued Relational inference (MCR) model: $MCR_n$ \& $MCR_{mp}$. The sharing of hidden states between pedestrians is guided by a deep relational mechanism that involves learning the effect of social interaction. This effect is manifested through soft-attention that evaluates the importance of pedestrian relationships and chooses the set of relationships that is more influential than other associations with a weaker impact on the pedestrian trajectory. Consequently, the hidden states become messages passed between pedestrians based on a deep selective process that determines who the neighbors of each pedestrian are, as displayed in Figure \ref{fig:sri}.
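The embedding and encoding steps above can be summarized in the following minimal sketch. It is an illustration under stated assumptions rather than the released implementation: a standard \texttt{LSTMCell} stands in for the Grid-LSTM cell $NLSTM_\nu$, and the input and embedding dimensions are assumed.
\begin{verbatim}
import torch
import torch.nn as nn

class MultiCueEncoder(nn.Module):
    """Sketch of the multi-cue encoding step: a two-stage embedding phi for the
    positional trajectories X, a one-stage embedding for the Vislets V, and a
    recurrent cell over the concatenated cues T. A plain LSTMCell stands in
    for NLSTM_nu; all layer sizes are illustrative choices."""

    def __init__(self, embed_dim=32, hidden_dim=128):
        super().__init__()
        self.W_i = nn.Linear(2, embed_dim)            # first stage of phi(X)
        self.W_ii = nn.Linear(embed_dim, embed_dim)   # second stage of phi(X)
        self.W_v = nn.Linear(2, embed_dim)            # single-stage phi(V)
        self.cell = nn.LSTMCell(2 * embed_dim, hidden_dim)

    def forward(self, x_t, v_t, state):
        """x_t, v_t: (num_peds, 2) positions and head-pose vectors at one step."""
        x_emb = self.W_ii(self.W_i(x_t))              # two-stage transformation
        v_emb = self.W_v(v_t)                         # single-stage transformation
        cues = torch.cat([x_emb, v_emb], dim=-1)      # multi-cued input T
        h, c = self.cell(cues, state)                 # spatial features f_S = h
        return h, (h, c)

# Example: 12 pedestrians observed at a single time-step.
enc = MultiCueEncoder()
h0, c0 = torch.zeros(12, 128), torch.zeros(12, 128)
f_S, state = enc(torch.randn(12, 2), torch.randn(12, 2), (h0, c0))
\end{verbatim}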
As part of the SRI network, we run the GNN network to produce the final static features $\mathcal{F}$. SRI takes this output and feeds it through another transformation function $\phi$, which embeds the feature set output by the GNN unit and yields $\mathcal{F}'$:
\begin{equation} \mathcal{F}' = \phi(\mathcal{F}) = C * ( (W_v * [f_S, \mathcal{V}] + b_v) * (W_r * \mathcal{F}) ) \end{equation}
In $MCR_{mpc}$, $\mathcal{F}$ stores the enhanced modeling, comprised of the physical spatial constraints $C$ and the social relative features $f_S$. In $MCR_n$, $\mathcal{F}$ only takes the social relative features $f_S$. The SRI unit tunes the influence of each region by deploying a scaled self-attention mechanism \cite{velivckovic2017graph} (as defined in Eq. \ref{eqn:soft_attn}). The attention coefficients $a$ consider the human-human interaction features and the human-space interaction features:
\begin{equation} \label{eqn:soft_attn} a = \mathrm{Softmax}(\mathcal{F}') = \frac{\exp{(\mathcal{F}')}}{\Sigma \exp{(\mathcal{F}')}} \end{equation}
Finally, the $MCR_n$ kernel evaluates the hidden states by passing them through a Softmax layer, which transforms the pedestrian node states into the continuous interval $[0,1]$:
\begin{equation} \label{eqn:softmax_fnri} Softmax(H) \in [0,1] \end{equation}
Given the $i$th pedestrian trajectory features, the neighborhood $\nu^i_t$ of pedestrian $i$ is defined as follows:
\begin{equation} \nu^i_t = \{(n_i, n_j)\}^+; \quad |\nu^i_t| \le |\mathcal{N}| \end{equation}
Moreover, the neighborhood boundaries at pedestrian $i$ are defined by the set of edge pairs that connect the pedestrian node to other nodes in the graph. Note here that $\nu$ is the Greek symbol (\textit{Nu}) and is different from the symbol $V$, which was earlier assigned to the Vislets. Inspired by the neural factorization technique \cite{webb2019factorised} for factorizing spatio-temporal graph edges, we establish a deep mechanism for neighborhoods that is aware of the static and the social constraints. Since the best setting for neighborhoods is unknown, we estimate pedestrian relationships using their relative visual attention and spatial motion features to learn their neighborhoods. In the $MCR_{mp}$ kernel, the model directly factorizes the relationships between pedestrians by passing the importance weights of Eq. \ref{eqn:soft_attn} as a message with the hidden states matrix and thresholding the strong and weak relationships:
\begin{equation} H = Softmax (f'_{\mathcal{O}_{t+1}}*H) \end{equation}
For accomplishing the social relational inference of pedestrian interactions, the last two steps are common among the models $MCR_n$, $MCR_{mp}$ and $MCR_{mpc}$. Using a normally-initialized linear transformation matrix $W$, the new hidden states are mapped into an adjacency matrix that determines the edges in the graph, which manifest pedestrian relationships with various degrees of strength:
\begin{equation} A = W*H \end{equation}
Eventually, $J_\theta$ is the updated pedestrian adjacency state $H^*_t$, which is fed again at the following time-step to the social neighborhood encoder in Eq. \ref{eqn:ngh_glstm}:
\begin{equation} H^*_{t} = A*H \end{equation}
The resulting adjacency-based representation $H^*_{t}$ is evaluated by whether the newly proposed edge set generates a better modeling than the edge set of the previous time-step. This is set as the objective cost function $J_\theta$.
\subsection{Gated Neighborhood Network (GNN)} \label{sec:gnn}
In this section, we explain our proposed mechanism for deep neighborhood formation.
The GNN unit involves a Grid-LSTM \cite{kalchbrenner2015grid} cell $NLSTM_O$ to encode the contextual interactions. It treats the scene as a fixed-shaped grid of shape $[n \times n]$.
\begin{figure*} \centering \includegraphics[width=\linewidth, height=6cm]{images/mpc_kernel_3.png} \caption{Gated Neighborhood Network pipeline. At the beginning, $2DCONV$ encodes a static image of the scene and forwards the features into the NLSTM cell, which discretizes the environment into a virtual grid.} \label{fig:gnn} \end{figure*}
Figure \ref{fig:gnn} illustrates the pipeline of the GNN. $2DCONV$ is a 2D convolutional layer used for encoding the contextual interaction. It runs a grid mask filter $M$ and assumes that the static space neighborhoods are discretized into a square grid. The mask filter is a normally-initialized matrix indicating that, initially, all local regions have an equal influence on pedestrian motion.
\begin{equation} C = 2DCONV(f_S, h_\mathcal{O}, M) \end{equation}
Compared to approaches in the literature, the Social-Grid model \cite{cheng2018pedestrian} dedicates a separate Grid-LSTM \cite{kalchbrenner2015grid} to each pedestrian. In our models, we use only a single Grid-LSTM to encode a set of features for all pedestrians. MX-LSTM \cite{hasan2018mx} applies the concept of the Visual Field of Attention (VFOA) to determine pedestrians' attention to each other. They hard-code the pedestrians' VFOA and rely only on the head pose, which to some extent can validate the looking angle. The social pooling and the visual pooling proposed in \cite{alahi2016social,hasan2018mx}, respectively, select one feature modality for learning about pedestrian interactions by pooling their states into pre-determined neighborhoods. In our work we encode the head pose $V$ with the static features $C$ to formulate the "Visuospatial" neighborhood representation as a means of capturing pedestrian attention to the static context, using a single Grid-LSTM cell $NLSTM_O$:
\begin{equation} \label{eqn:visuospatial} f_\mathcal{O}, h_\mathcal{O} = NLSTM_O(C, V, h_\mathcal{O}) \end{equation}
Eq. \ref{eqn:visuospatial} introduces the multi-modality concept by combining the visual awareness state $V$ and the physical spatial constraints $C$ to capture the pedestrians' attention to the physical context. In other words, the encoding of the pedestrians' interaction with the static context is stored in $f_\mathcal{O}$. In Section \ref{sec:sri}, $f_\mathcal{O}$ is passed together with the social interaction features $f_{S}$ to calculate the adjacency state of each pedestrian. The static grid features $f_{\mathcal{O}}$ in Eq. \ref{eqn:visuospatial} are regularized by a constant factor $\lambda$ so that the static neighborhood regions start with equal importance:
\begin{equation} f_{\mathcal{O}}^\ensuremath{\prime} = f_{\mathcal{O}} * f_{\lambda} \end{equation}
Formally, given a frame $f$, we discretize it into a uniformly-shaped grid $G$ of $k$ neighborhoods. Initially, the coarse neighborhoods in the grid $f_\mathcal{O}$ are assigned equal importance, which is then adapted through the training batches. A final neighborhood mask $\mathcal{F}$ contains a combination of the fixed and relative pedestrian trajectory features $f_\mathcal{O}$ and $f_S$.
$\mathcal{F}$ can vary depending on the input features to the static mask:
\begin{equation} \mathcal{F} = f_S * f_{\mathcal{O}}^\ensuremath{\prime} \end{equation}
The static features $f_\mathcal{O}$ are weighted by the soft-attention coefficient $a$ using the scaled soft-attention mechanism \cite{bahdanau2014neural} before being evaluated in Eq. \ref{eqn:soft_attn}:
\begin{equation} f'_{\mathcal{O}_{t+1}} = a * f'_{\mathcal{O}_{t+1}} * h_\mathcal{O} \end{equation}
\section{Experiments} \label{sec:exp}
\subsection{Accuracy Metrics}
We use the same average Euclidean errors as in \cite{hasan2018mx,alahi2016social}, namely:
\begin{itemize}
\item Average Displacement Error (ADE): measures the prediction error over all time-steps between the predicted trajectory and the ground-truth trajectory as follows:
\begin{equation} \frac{\sum_{i = 1}^N \sum_{j = 1}^l ||(\widetilde{X_i^j} - X_i^j)||_2 } {N \cdot l} , \end{equation}
\item Final Displacement Error (FDE): measures the prediction error at the final time-step between the predicted trajectory and the ground-truth trajectory as follows:
\begin{equation} \frac{ \sum_{i = 1}^N ||(\widetilde{X}^{T-1}_i - X^{T-1}_i)||_2 } {N} \quad . \end{equation}
\end{itemize}
\subsection{Implementation Details}
During training, we set the batch size to 16, the grid size to 4 and the $\lambda$ parameter to 0.0005. We set the hidden size of the Grid-LSTM to 128, the number of frequency blocks to 4, the frequency skip to 4 and the number of cell units to 2. Each frame is segmented into 8 virtual zones as indicated by the grid size, and the global neighborhood size is set to 32. In all experiments over our models, the features extracted from the static grid and the social neighborhood vector were mapped to hidden states of fixed length 10. With the Grid-LSTM we set the number of training epochs to 10, as we noticed that the LSTM reached convergence within that number of iterations, without any major improvement in the learning curve beyond it.
\subsection{Baseline Models and Proposed Models}
\begin{itemize}
\item \textit{S-LSTM} \cite{alahi2016social}. Dedicates an LSTM to every pedestrian and pools their states before predicting future steps. The method only combines features of pedestrians who occupy a common neighborhood space. The neighborhood and occupancy grid sizes are set empirically to attain the best results over the ETH and UCY datasets.
\item \textit{MX-LSTM}. A multi-cued model that encodes Vislets and Tracklets to estimate pedestrian trajectories using an LSTM. It determines pedestrian relationships using a fixed visual frustum that indicates their visual attention state.
\item \textit{SR-LSTM} \cite{zhang2019sr}. An LSTM-based network inspired by message-passing graph neural networks \cite{gilmer2017neural}, which improves the pedestrian motion representation and social neighborhoods for predicting future trajectories over fully-connected spatio-temporal graphs.
\item \textit{G-LSTM}. A single Grid-LSTM cell that encodes pedestrian walking trajectories without additional cues. It does not include the components of Sections \ref{sec:gnn} and \ref{sec:sri} in its pipeline.
\item $MC$ Multi-Cue inference. This model combines Vislets and Tracklets to predict pedestrian trajectories. It uses Vislets to incorporate the pedestrian situational awareness into the motion modeling; however, it does not differentiate the pedestrians' influence.
\item $MCR_n$ Multi-Cue Relational inference.
Built upon the $MC$ model, it capitalizes on the understanding of situational awareness to differentiate the importance pedestrians have on each other, for a more accurate modeling of social interactions.
\item $MCR_{mp}$ Multi-Cue Relational inference. Uses a message-passing mechanism through the virtual static neighborhood grid that directly passes importance values reflecting the pedestrians' social interaction.
\item $MCR_{mpc}$. In addition to the previous design components, this model encodes the visual static features of the scene using a 2D convolutional layer to account for the contextual interaction between pedestrians and the scene structure.
\end{itemize}
\subsection{Quantitative Analysis}
\begin{table*}[t]
\begin{center}
\begin{tabular}{ccccccc}
\hline
Metric & Model & Zara1 & Zara2 & UCY-Univ & TownCentre & AVG\\
\hline
ADE & S-LSTM \cite{alahi2016social} & 0.68 & 0.63 & 0.62 & 1.96 & 0.64 \\
 & MX-LSTM \cite{hasan2018mx} & 0.59 & 0.35 & 0.49 & 1.15 & 0.48 \\
 & SR-LSTM \cite{zhang2019sr} & \textbf{0.41} & \textbf{0.32} & 0.51 & 1.36 & 0.65 \\
 & $MC$ & 0.46 & 0.38 & 0.47 & 0.43 & 0.44 \\
 & $MCR_n$ & 0.47 & 0.38 & 0.47 & 0.39 & 0.43\\
 & $MCR_{mp}$ & 0.45 & 0.38 & \textbf{0.45} & \textbf{0.34} & \textbf{0.41}\\
 & $MCR_{mpc}$ & 0.45 & 0.38 & 0.47 & 0.39 & 0.42\\\hline
Metric & Model & Zara1 & Zara2 & UCY-Univ & TownCentre & AVG \\\hline
FDE & S-LSTM \cite{alahi2016social} & 1.53 & 1.43 & 1.40 & 3.96 & 1.45 \\
 & MX-LSTM \cite{hasan2018mx} & 1.31 & 0.79 & 1.12 & 2.30 & 1.38 \\
 & SR-LSTM \cite{zhang2019sr} & \textbf{0.90} & \textbf{0.70} & 1.10 & 2.47 & 1.30\\
 & $MC$ & 0.98 & 0.84 & \textbf{0.96} & 1.06 & 0.96\\
 & $MCR_n$ & 1.00 & 0.82 & 0.97 & 0.95 & 0.94\\
 & $MCR_{mp}$ & 1.00 & 0.93 & 1.01 & \textbf{0.80} & \textbf{0.94} \\
 & $MCR_{mpc}$ & 0.95 & 0.83 & 1.03 & 1.00 & 0.95 \\\hline
\end{tabular}
\end{center}
\caption{Euclidean errors in trajectory prediction over the Crowd \& TownCentre datasets, in meters. The observation length is 8 steps (3.2 seconds) and the prediction length is 12 steps (4.8 seconds).}
\label{tab:quant_table}
\end{table*}
Table \ref{tab:quant_table} summarizes our experimental evaluation over the Zara and UCY datasets. Our models achieve state-of-the-art results in the pedestrian trajectory prediction task. The closest model, $MC$, resembles the MX-LSTM design in its use of a mixture of pedestrian features, yet we are able to improve upon MX-LSTM \cite{hasan2018mx} by margins of 8\% and 12\% with $MC$, and 10\% and 13\% with $MCR_{mp}$, over the average ADE and FDE errors, respectively. With $MCR_{mp}$, we achieve results comparable to SR-LSTM \cite{zhang2019sr}. SR-LSTM relies on sharing states between pedestrians using a message-passing mechanism that carries the social information between pedestrian nodes in a global manner, meaning that it assumes a fully-connected graph to refine the social interaction modeling. In Table \ref{tab:quant_table}, comparing SR-LSTM to our message-passing model $MCR_{mp}$, we observe that both achieve similar prediction errors, yet SR-LSTM incurs a higher message-passing complexity due to using a fully-connected graph and performing message passes pedestrian-wise to refine the social feature representation. On the other hand, $MCR_{mp}$ encodes the pedestrians' awareness states into their social interaction using multiplication, as previously discussed in Section \ref{sec:sri}, without the need for iterative message-passing throughout the nodes. The $MCR_{mp}$ model also performs better than the SR-LSTM method on the Univ subset.
The density of social interactions is the highest in the Univ scenario, with less impact from the static context on pedestrian paths. This is where the addition of social cues shows an improvement in a highly interactive crowd, in comparison with less interactive crowds such as those in the Zara sets. The crowd in Zara is significantly influenced by the shop facade, and therefore encoding the physical context features was more beneficial on these two subsets, as we observe from the $MCR_{mpc}$ performance. In general, the crowd in TownCentre exhibits a rather linear walking pattern with less socialization and less attention paid to the surrounding context. The best results were obtained with $MCR_{mp}$, as it focuses on relational inference given the Tracklet and Vislet cues. This shows that our relational inference mechanism succeeds in inferring interactions even in this less interactive situation, in addition to its performance in UCY-Univ, where the crowd behaves like transient groups whose walks are interrupted by conversations.
\subsection{Ablation Studies}
We performed additional ablative experiments to quantitatively illustrate the importance of the main grid components and attention mechanisms, by retraining the model each time without specific layer(s). We tested the impact of two components: the Grid-LSTM handling the static grid, shown earlier in Figure \ref{fig:gnn}, and the soft-attention mechanism of Eq. \ref{eqn:soft_attn}. We followed the same k-fold cross-validation used for training the original MCR model on the same datasets: Zara and UCY. Table \ref{tab:glstm_abl} clearly illustrates the improvement of our basic model, G-LSTM, upon the NoFrustum model \cite{hasan2018mx}. Our basic model deploys only one Grid-LSTM cell and relies on the pedestrians' positional trajectories. This design complexity resembles the NoFrustum model, a simpler variant of the MX-LSTM work that only uses pedestrian positions with an LSTM cell. The usage of the Grid-LSTM attains better results over both Zara and UCY. We picked the NoFrustum variant as a baseline to compare with our basic model design G-LSTM, which excludes the GNN and SRI units from its pipeline and only takes positional trajectories into a single Grid-LSTM cell. We tested the neighborhood size with two values, 32 and 64, in the grid $G$. In our implementation, a value of 64 creates a virtual neighborhood that covers a smaller area around the pedestrian, while 32 indicates that the neighborhoods become coarser in size. Through the experiments, we observed that the smaller the neighborhood, the higher the prediction error becomes. The errors increased by up to 15\% and 20\% in UCY-Univ over ADE and FDE, respectively. This increase has a specific cause related to how our model benefits from encoding more pedestrian states together and understanding that their cues are relevant to each other. However, in TownCentre, the pedestrians are far less communicative with each other; there, we noticed that the ADE and FDE errors decreased by 10\% and 25\%, respectively.
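For reference, the ADE and FDE values reported in the comparisons above (Tables \ref{tab:quant_table} and \ref{tab:glstm_abl}) follow the definitions of Section \ref{sec:exp}; a minimal sketch, assuming predicted and ground-truth trajectories stored as arrays of shape (num\_peds, pred\_len, 2) in meters, is given below.
\begin{verbatim}
import numpy as np

def displacement_errors(pred, gt):
    """Average and Final Displacement Error.

    pred, gt : arrays of shape (num_peds, pred_len, 2), coordinates in meters.
    Returns (ADE, FDE): ADE averages the per-step Euclidean error over all
    pedestrians and predicted steps; FDE uses only the last predicted step.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    step_err = np.linalg.norm(pred - gt, axis=-1)  # (num_peds, pred_len)
    ade = step_err.mean()
    fde = step_err[:, -1].mean()
    return ade, fde
\end{verbatim}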
\begin{table}[t]
\begin{center}
\begin{tabular}{ccccccc}
\hline
Metric & Model & Zara1 & Zara2 & UCY-Univ & TownCentre & AVG\\
\hline
ADE & NoFrustum \cite{hasan2018mx} & 0.63 & \textbf{0.36} & 0.51 & 1.70 & 0.50\\
 & SR-LSTM$_{\{ID:2\}}$ & 0.47 & 0.38 & 0.56 & 1.36 & 0.77\\
 & G-LSTM & \textbf{0.46} & 0.38 & \textbf{0.48} & \textbf{0.38} & \textbf{0.43}\\ \hline
FDE & NoFrustum \cite{hasan2018mx} & 1.40 & 0.84 & 1.15 & 3.40 & 1.13 \\
 & SR-LSTM$_{\{ID:2\}}$ & 1.07 & 0.85 & 1.27 & 2.47 & 1.42 \\
 & G-LSTM & \textbf{0.95} & \textbf{0.83} & \textbf{1.01} & \textbf{0.89} & \textbf{0.92} \\\hline
\end{tabular}
\end{center}
\caption{Comparison of trajectory prediction errors between the MX-LSTM NoFrustum model, SR-LSTM$_{\{ID:2\}}$ and our basic Grid-LSTM model, G-LSTM. Errors are given in meters. SR-LSTM$_{\{ID:2\}}$ is a model configuration that excludes social attention and selective message-passing routes between pedestrians.}
\label{tab:glstm_abl}
\end{table}
\paragraph{Component Impact Analysis} Before encoding the static context features in $MCR_{mpc}$, the virtual grid $G$ was merely an additional encoding layer that transforms the multiple features together with respect to the social interaction, without any reference to the contextual interaction. Hence, it showed low effectiveness, and using the static grid appeared unnecessary for achieving noticeable improvements. Accounting for the semantic features in $G$ yields a reduction in FDE across all datasets, which shows the positive impact of this feature on predicting the final point of a pedestrian trajectory while being aware of the static surrounding settings. The addition of this component only increases the running time by 0.30 seconds compared to our socially-focused models.
\subsection{Qualitative Analysis}
Figure \ref{zara_htmap} is a visualization of the attention weights generated by $MCR_{mp}$. The attention weights, in this case, are distributed over the active part of the scene, which includes the navigable space and the social interaction between pedestrians. The dark squares cover the least active areas, where an obstacle is blocking motion. Moreover, the heatmap of the adjacency matrix is a depiction of each neighborhood. The neighborhoods are assigned weighted values to show their different impact on the crowd motion. This is manifested through the zones highlighted with lighter shades of color, such as the Zara shop entrance and the walkway on the right side. The lighter shade means that, throughout the observed frames, these two regions were busy and had a specific influence that causes changes in pedestrian direction or dynamics. Figure \ref{zara_7815_maxadjmat} presents a failure case of estimating an active neighborhood under $MCR_n$. $MCR_{mpc}$, however, relates attention to the physical space features. Figure \ref{zara_8015_maxadjmat} illustrates that the maximum attention is given to the area through which the pedestrian's future path is passing. This illustrates that our model is able to effectively recognize future neighborhoods for pedestrians, as these neighborhoods represent target points that were regularly visited in the scene.
\begin{figure} \subfloat[][Attention weights from $MCR_{mp}$ over the virtual grid in the Zara1 scene]{\label{zara_htmap} \includegraphics[width=0.3\linewidth, height=0.3\linewidth]{images/zara1_6017_htmap_1.png} } \subfloat[][Maximum attention weight assigned to the neighborhood highlighted by the white patch, in frame 8015 of the Zara2 dataset.]{\label{zara_8015_maxadjmat} \includegraphics[width=0.3\linewidth, height=0.3\linewidth]{images/Zara2_8015_maxwght_adj_bg_1.png}} \subfloat[][Failure case of assigning maximum importance to neighborhoods in the scene]{\label{zara_7815_maxadjmat} \includegraphics[width=0.3\linewidth, height=0.3\linewidth]{images/zara2_7815_max_adjmat_bg_1.png}} \caption{Snapshots taken from our models to visualize the perceived relative importance of neighborhoods to pedestrians.} \label{fig:qualit_mcr} \end{figure}
\section{Conclusion}
In this work, we introduced a novel perspective on modeling social and contextual interactions as virtual neighborhoods, together with an estimation of their future relative importance values, through the Gated Neighborhood Network (GNN). We developed the Social Relational Inference (SRI) unit, a data-driven inference mechanism, to model the social interaction on graphs without relying on any metric assumption. Our approach outperforms state-of-the-art approaches that conduct pedestrian trajectory prediction with the help of several hand-engineered settings for modeling the crowd motion. In future work, we plan to further improve the alignment between the GNN and SRI components, so as to generate a more realistic perception of neighborhood importance. Additionally, the current approach takes 3 to 4 seconds to run predictions over a moderately crowded scene of around 12 pedestrians. We will investigate methods to improve the training process in deep recurrent models so that the complexity of the network components can be reduced.
\bibliographystyle{splncs}
\section{$h$-Principle and Existence of $K$-Isocontact Immersions} \label{sec:application} We shall now obtain the $h$-principle for $\Omega$-regular, $K$-isocontact immersions $(\Sigma, K)\to (M,\calD)$, where $\calD$ will be a degree $2$ fat distribution or a quaternionic contact structure. \subsection{Isocontact Immersions into Degree $2$ Fat Distribution} Throughout this section, $\calD$ is a degree $2$ fat distribution on $M$ and $K$ is a contact structure on $\Sigma$. \label{sec:hPrinIntoFat:isocontact} \begin{theorem}\label{thm:hPrinIsocontactDeg2} $\relICont$ satisfies the $C^0$-dense $h$-principle, provided $\rk\calD \ge 2\rk K + 4$. \end{theorem} \begin{proof} We embed $(\Sigma,K)$ in $(\tilde\Sigma, \tilde K)$ where $\tilde \Sigma = \Sigma \times \bbR$, $\tilde K = d\pi^{-1} K = K \times \bbR$, and $\pi : \tilde \Sigma\to \Sigma$ is the projection. We have the associated relation $\relIContTilde\subset J^1(\tilde\Sigma,M)$. In view of \autoref{thm:hPrinGenExtn}, it is enough to show that the map $ev : \relIContTilde|_O\to \relICont|_O$ induces a surjective map on sections over any contractible $O\subset \Sigma$. We first show that $ev$ is fiberwise surjective. Suppose $(x,y,F)$ is a jet in $\relICont$ and let $V = F(K_x) \subset \calD_y$. Proceeding as in the proof of \autoref{prop:isocontactIsOmegaRegular}, we can show $V$ is $\omega^1$-symplectic and $\omega^2$-isotropic, with respect to a suitable choice of trivializations. Hence, by \autoref{prop:fatTupleIsoContact} (\hyperref[prop:fatTupleIsoContact:2]{2}) $V^\Omega \cap {V^\Omega}^\Omega = 0$. Now, $V$ being an $\Omega$-regular subspace, the codimension of $V^\Omega$ in $\calD_y$ is $2\dim V = 2\rk K$ (\autoref{prop:fatTupleOmegaRegular}). Hence, it follows from the dimension condition that $\dim V^\Omega \ge 4$. So we can choose $0 \ne \tau \in V^\Omega$. Since $\tau \not \in {V^\Omega}^\Omega$, it follows from \autoref{prop:fatTupleExtension} that $V^\prime = V+\langle\tau\rangle$ is an $\Omega$-regular subspace of $\calD_y$. Define an extension $\hat{F} : T_x\Sigma \times \bbR\to T_y M$ of $F$ by $$\hat{F}(v, t) = F(v) + t\tau \; \text{for $t\in \bbR$ and $v\in T_x\Sigma$.}$$ It is then immediate that $\hat{F}^{-1}(\calD_y) = \tilde{K}_x$ and $\hat{F}$ is $\Omega$-regular. Furthermore, for $(v_i, t_i) \in \tilde{K}_x = K_x \oplus\bbR, i=1,2$, we have \begin{align*} \Omega\big( \hat F(v_1, t_1), \hat F(v_2,t_2)\big) &= \Omega(F(v_1), F(v_2)), \; \text{as $\tau \in V^{\Omega} = \big(F(K_x)\big)^{\Omega}$}\\ &= \tilde F \circ \Omega_{K_x}\big(v_1, v_2\big), \; \text{as $F^*\Omega|_K = \tilde F \circ \Omega_K$}\\ &= \tilde{\hat F} \circ \Omega_{\tilde K_x} \big((v_1, t_1), (v_2, t_2)\big), \end{align*} where $\tilde{\hat F} : T\tilde \Sigma/\tilde K|_{(x,t)} \to TM/\calD|_y$ is the map induced by $\hat F$ and $\Omega_{\tilde{K}}$ is the curvature form of $\tilde{K}$. Note that after identifying $T\tilde{\Sigma}/\tilde{K} = \pi^*(T\Sigma/K)$, we get $\Omega_{\tilde{K}} = \pi^*\Omega_K$. In other words, $\hat{F}$ satisfies the curvature condition relative to $\Omega_{\tilde K}$ and $\Omega$. Now suppose $(F,u) : T\Sigma \to TM$ is a bundle map representing a section of $\relICont$, with $u = \bs F : \Sigma \to M$ being the base map of $F$. 
It follows from the above discussion that we have two vector subbundles of $u^*TM$ defined as follows: \[T\Sigma^\Omega := \bigcup_{\sigma \in \Sigma} \big(F(K_\sigma)\big)^\Omega \quad \text{and} \quad {T\Sigma^\Omega}^\Omega := \bigcup_{\sigma \in \Sigma} {\big(F(K_\sigma)\big)^\Omega}^\Omega,\] such that $T\Sigma^\Omega \cap {T\Sigma^\Omega}^\Omega = 0$ and $\rk T\Sigma^\Omega \ge 4$, as discussed in the previous paragraph. Then, using a local vector field $\tau$ in $T\Sigma^\Omega$, we can extend $F$ to a bundle monomorphism $\hat F: T(O\times \bbR) \to TM$, over an arbitrary contractible open set $O\subset \Sigma$. Clearly $\hat F$ is a section of $\relIContTilde|_O$. Thus, $ev : \Gamma \relIContTilde \to \Gamma \relICont$ is surjective on such $O$. The proof then follows by a direct application of \autoref{thm:hPrinGenExtn}. \end{proof} \subsubsection{Existence of Isocontact Immersions} Let us first note the following. \begin{prop}\label{prop:isocontactIsOmegaRegular} Any formal isocontact immersion $F:(T\Sigma,K)\to (TM,\calD)$ satisfying the curvature condition is $\Omega$-regular. \end{prop} \ifarxiv \begin{proof} Let $x\in \Sigma$ and $F_x : T_x\Sigma \to T_y M$. We choose some trivializations of $T\Sigma/K$ and $TM/\calD$ near $x$ and $y$, respectively, such that $\tilde F_x$ is the canonical injection $\bbR \to \bbR\times \{0\} \subset \bbR^2$. Hence, there exist local $2$-forms $\eta, \omega^1,\omega^2$ such that $\Omega_K \underset{loc.}{=} \eta$ and $\Omega \underset{loc.}{=}(\omega^1,\omega^2)$ with respect to the trivializations, and the curvature condition $F^*\Omega|_K = \tilde F\circ\Omega_K$ translates into $$F^*\omega^1|_K = \eta, \quad F^*\omega^2|_K = 0.$$ Since $K$ is contact, $\eta$ is nondegenerate. Hence, $V = F(K)$ is $\omega^1$-symplectic and $\omega^2$-isotropic. By \autoref{prop:fatTupleIsoContact} (\ref{prop:fatTupleIsoContact:1}) $V$ is $(\omega^1,\omega^2)$-regular and hence, $F$ is $\Omega$-regular. \end{proof} \fi In view of the above proposition we then have the simpler description $$\relICont = \Big\{ (x,y, F) \; \Big|\; \text{$F$ is injective, \; $F^{-1}\calD_y = K_x$, \; $F^*\Omega|_{K_x} = \tilde F\circ \Omega_K|_x$} \Big\}.$$ In order to prove the existence of a $K$-isocontact immersion, we thus need to produce a monomorphism $F:T\Sigma \to TM$ such that $F^{-1}\calD = K$ and $F^*\Omega|_K = \tilde F \circ \Omega_K$. Existence of $F$ implies the existence of a monomorphism $G : T\Sigma/K \to TM/\calD$. Conversely, given such a $G$ we can produce an $F$ as above, with $\tilde{F} = G$, under the condition $\rk \calD \ge 3\rk K - 2$. Suppose $G$ covers the map $u : \Sigma \to M$. We construct a subbundle $\calF\subset \hom( K, u^*\calD)$, where the fibers are given by $$\calF_x = \Big\{F: K_x \to \calD_{u(x)} \;\Big|\; \text{$F$ is injective and $F^*\Omega|_{ K_x} = G_x\circ \Omega_K$}\Big\}, \quad\text{for $x\in \Sigma$.}$$ We wish to get a global section of the bundle $\calF$. Towards this end, we need to determine the connectivity of the fibers $\calF_x$. We consider the following linear algebraic set up. Let $(D,\omega^1,\omega^2)$ be a degree $2$ fat tuple with $\dim D = d$ and $A:D\to D$ be the connecting automorphism for the pair $(\omega^1,\omega^2)$. 
Define the subspace $R(k)\subset V_{2k}(D)$ as follows: $$R(k) = \Big\{b=(u_1,v_1,\ldots,u_k,v_k) \in V_{2k}(D) \;\Big|\; \substack{\text{$b$ is a symplectic basis for $\omega^1|_V$ and $V$ is $\omega^2$-isotropic},\\ \text{where $V = \big \langle u_i,v_i, \; i=1,\ldots,k \big\rangle$}}\Big\}.$$ We can identify the fiber $\calF_x$ with $R(k)$, by fixing a symplectic basis of $K_x$. \begin{lemma}\label{lemma:connectivityOfR(k)IsocontactDeg2Fat} The space $R(k)$ is $d-4k + 2$-connected. \end{lemma} \begin{proof} We proceed by induction on $k$. For $k=1$, $$R(1) = \Big\{(u,v) \in V_2(D) \;\Big|\; \omega^1(u,v) = 1\text{ and }\omega^2(u,v) = 0\Big\}.$$ For fixed $u \in D$, consider the linear map \begin{align*} S_u : \langle u\rangle^{\perp_2} &\to \bbR \\ v &\mapsto \omega^1(u,v) \end{align*} so that we have $R(1) = \bigcup_{u\in D\setminus 0} \{u\}\times S_u^{-1}(1)$. As $(D,\omega^1,\omega^2)$ is a fat tuple, every non-zero $u$ is $(\omega^1,\omega^2)$-regular and hence $\ker S_u = \langle u\rangle^{\perp_1} \cap \langle u\rangle^{\perp_2} = \langle u\rangle^{\Omega}$ is a codimension $1$ hyperplane in $\langle u\rangle^{\perp_2}$. Therefore, $S_u^{-1}(1)$ is an affine hyperplane. Thus, $R(1)$ is homotopically equivalent to the space of nonzero vectors $u$ in $D$ and so $R(1)$ is $(d-2)$-connected. Note that $d - 2 = d - 4.1 + 2$. Let us now assume that $R(k-1)$ is $d - 4(k-1) + 2 = d - 4k + 6$-connected for some $k\ge 2$. Observe that the projection map $p : V_{2k}(D)\to V_{2k-2}(D)$ maps $R(k)$ into $R(k-1)$. For a fixed tuple $b=(u_1,v_1,\ldots,u_{k-1},v_{k-1})\in R(k-1)$, the span $V = \langle u_1,\ldots,v_{k-1}\rangle$ is $\omega^1$-symplectic and $\omega^2$-isotropic. By an application of \autoref{prop:fatTupleIsoContact} (\hyperref[prop:fatTupleIsoContact:3]{3}) we have $V^\Omega$ is symplectic with respect to both $\omega^1$ and $\omega^2$. Moreover, since $A$ has degree $2$ minimal polynomial, we have $A(V+AV) = V+AV$. Consequently, it follows from \autoref{obs:fatTupleBasic} that, $$A(V^\Omega) = A\big((V+AV)^{\perp_2}\big) = \big(A(V+AV)\big)^{\perp_1} = (V+AV)^{\perp_1} = V^\Omega.$$ Hence, $(\hat D, \omega^1|_{\hat D}, \omega^2|_{\hat D})$ is again a degree $2$ fat tuple, where $\hat D = V^\Omega$. Now, if we choose any $(u,v)\in V_2(\hat D)$, satisfying $\omega^1(u,v) =1$ and $\omega^2(u,v)=0$, it follows that $(u_1,\ldots,v_{k-1},u,v)\in R(k)$. In fact, we may identify the fiber $p^{-1}(b)$ with the space $$\big\{(u,v)\in V_2(\hat D) \;\big|\; \text{$\omega^1(u,v)=1,\omega^2(u,v)=0$}\big\},$$ which is $(\dim V^\Omega -2)$-connected as it has been already noted above. Since $V$ is $\Omega$-regular, we get that $$\dim V^\Omega = \dim (V+AV)^{\perp_1} = \dim D - 2\dim V = d - 4(k-1) = d - 4k + 4.$$ Thus, $p^{-1}(b)$ is $\dim V^\Omega - 2 = (d - 4k + 4) - 2 = d - 4k + 2$-connected. An application of the homotopy long exact sequence to the bundle $p : R(k)\to R(k-1)$ then gives us that $$\pi_i\big(R(k)\big) = \pi_i \big(R(k-1)\big),\quad\text{for $i\le d - 4k + 2$.}$$ By induction hypothesis we have $$\pi_i\big(R(k)\big) = \pi_i \big(R(k-1)\big) = 0, \quad\text{for $i\le d - 4k + 2$.}$$ Hence, $R(k)$ is $d-4k + 2$-connected. This concludes the proof. \end{proof} \begin{remark}\label{rmk:existenceGermIsoCont} It follows from the proof of the above theorem that R(k) is non-empty for $\dim D \ge 4k$. 
This implies, from the local $h$-principle for $\relICont$\ifarxiv (\autoref{corr:relContLocalWHE})\fi, the existence of germs of $K$-isocontact immersions into a degree $2$ fat distribution $\calD$, provided $K$ is contact and $\rk \calD \ge 2 \rk K$. \end{remark}
\begin{theorem}\label{thm:existenceIsocontact} Any map $u:\Sigma\to M$ can be homotoped to an isocontact immersion $(\Sigma, K)\to (M,\calD)$ provided $\rk\calD \ge \max\{2\rk K + 4, \; 3\rk K - 2\}$, and one of the following two conditions holds true: \begin{itemize} \item both $K$ and $\calD$ are cotrivializable. \item $H^2(\Sigma) = 0$. \end{itemize} Furthermore, the base level homotopy can be made arbitrarily $C^0$-close to $u$. \end{theorem}
\begin{proof} Suppose $u:\Sigma\to M$ is any given map. We first observe the implication of the second part of the hypothesis. If both $K$ and $\calD$ are given to be cotrivializable, then there exists an injective bundle morphism $G: T\Sigma/ K \to u^*TM/\calD$. In general, the obstruction to the existence of such a map $G$ lies in $H^2(\Sigma)$ (\cite{husemollerFiberBundle}). Hence, with $H^2(\Sigma) = 0$, we have the required bundle map. Now, for a fixed monomorphism $G$, we construct the fiber bundle $\calF=\calF(u,G)\subset \hom( K, u^*TM)$ as discussed above. By \autoref{lemma:connectivityOfR(k)IsocontactDeg2Fat}, the fibers of $\calF$ are $(d-4k+2)$-connected, where $\rk\calD= d$ and $\rk K = 2k$. From the hypothesis we have $$\rk\calD \ge 3\rk K - 2 = 6k - 2 \;\Leftrightarrow\; d - 4k + 2 \ge 2k = \dim \Sigma -1.$$ Hence we have a global section $\hat F\in\Gamma\calF$, which defines a formal $K$-isocontact immersion $F : T\Sigma\to u^*TM$ covering $u$, satisfying $F|_{K}= \hat F$ and $\tilde{F} = G$. The proof now follows from a direct application of \autoref{thm:hPrinIsocontactDeg2}, since $\rk \calD\ge 2 \rk K + 4$ by the hypothesis. \end{proof}
\autoref{thm:mainTheoremIsoContDeg2}, stated in the introduction, is a restatement of \autoref{thm:hPrinIsocontactDeg2} and \autoref{thm:existenceIsocontact}.\medskip
\subsection{Horizontal Immersions into Degree $2$ Fat Distribution}~ \label{sec:hPrinIntoFat:horiz}
\begin{theorem}\label{thm:hPrinHorizImmFatDeg2} Suppose $\calD\subset TM$ is a degree $2$ fat distribution on a manifold $M$ and $\Sigma$ is an arbitrary manifold. Then $\relHor$ satisfies the $C^0$-dense $h$-principle provided $\rk\calD \ge 4\dim\Sigma$. \end{theorem}
\begin{proof} Given $\Sigma$, we consider the manifold $\tilde \Sigma = J^1(\Sigma, \bbR)$, which admits a canonical contact structure $K$. Note that $\dim\tilde\Sigma = 2\dim\Sigma +1$ and $\Sigma$ is canonically embedded as a Legendrian submanifold of $\tilde \Sigma$. Hence, any $K$-isocontact immersion $\tilde \Sigma \to M$ restricts to a horizontal immersion $\Sigma \to M$. We consider the relation $\relIContTilde \subset J^1(\tilde \Sigma, M)$ consisting of formal maps $T\tilde \Sigma \to TM$ inducing $K$ and satisfying the curvature condition, which are $\Omega$-regular by \autoref{prop:isocontactIsOmegaRegular}. In view of \autoref{thm:hPrinGenExtnContact}, it is enough to show that the map $ev : \relIContTilde|_O \to \relHor|_O$ is surjective for contractible open sets $O \subset \Sigma$. Fix some contractible chart $O\subset \Sigma$ along with coordinates $\{x^i\}$.
Then, we have a canonical choice of coordinates $\{x^i, p_i, z\}$ on $\tilde O = J^1(O,\bbR) \subset \tilde \Sigma$ so that the contact structure is given as \[K|_{\tilde O} = \ker\big(\theta := dz - p_i dx^i\big) = \Span \big\langle \partial_{p_i}, \; \partial_{x^i} + p_i \partial_z \big\rangle.\] Next, fix a coordinate chart $U\subset M$ and suppose $(F,u):TO \to TU$ is a bundle map representing a section of $\relHor$, with $u = \bs F$. Choose some trivialization $TM/\calD|_U = \Span\langle e_1, e_2\rangle$ and write $\lambda : TM \to TM/\calD$ as $\lambda = \lambda^1 \otimes e_1 + \lambda^2 \otimes e_2$. Denote $V = \im F\subset \calD$. Since $F$ is $\Omega$-regular, the codimension of $V^\Omega = V^{\perp_1} \cap V^{\perp_2}$ in $V^{\perp_2}$ equals $\dim V$. Hence, for any subspace $V^\prime\subset V^{\perp_2}$ which is a complement to $V^\Omega$, we see that $\big(S := V\oplus V^\prime, d\lambda^1|_{S}\big)$ is a symplectic bundle. Our goal is to get a complement $V^\prime$ such that $S = V\oplus V^\prime$ is $\omega^2 = d\lambda^2|_\calD$-isotropic. First, we get an almost complex structure $J : \calD \to \calD$ so that \[g : (u,v) \mapsto \omega^2(u,Jv),\quad u,v\in\calD\] is a nondegenerate symmetric tensor. Such a compatible $J$ always exists and then $\omega^2$ is $J$-invariant. Since $\omega^2(V^\Omega, {V^\Omega}^\Omega) = 0$, we have $\calD = V^\Omega \oplus_g J\big({V^\Omega}^\Omega\big)$ by \autoref{prop:fatTupleVAV}. Take $V^{\prime} = V^{\perp_2} \cap J\big({V^\Omega}^\Omega\big)$. A dimension counting argument then gives us $V^{\perp_2} = V^\Omega \oplus V^{\prime}$. Since both $V$ and $V^\prime$ are $\omega^2$-isotropic and also $\omega^2(V, V^\prime) = 0$, we have $S := V \oplus V^\prime$ is $\omega^2$-isotropic. Now, $V\subset S$ is $d\lambda^1$-Lagrangian. Consider the frame $V = \Span \langle X_i := F(\partial_{x^i}) \rangle$ and extend it to a symplectic frame $\langle X_i, Y_i \rangle$ of $(S, d\lambda^1|_S)$ so that the following holds: \[d\lambda^1(X_i, X_j) = 0 = d\lambda^1(Y_i, Y_j), \quad d\lambda^1(X_i, Y_j) = \delta_{ij}.\] Define the extension map $\tilde F : T\tilde O \to TM$ as follows: \[\tilde F(\partial_{x^i} + p_i \partial_z) = F(\partial_{x^i}) = X_i, \quad \tilde F(\partial_{p_i}) = Y_i, \quad \tilde F(\partial_z) = e_1.\] Clearly, $\tilde F$ induces $K$ from $\calD$ and satisfies the curvature condition \[\tilde F^*d\lambda^1|_K = d\theta|_K, \quad \tilde F^*d\lambda^2|_K = 0.\] But then by \autoref{prop:isocontactIsOmegaRegular}, $\tilde F$ defines a section of $\relIContTilde$ over $\tilde O$. Thus, $ev : \Gamma \relIContTilde|_\Sigma \to \Gamma \relHor$ satisfies the local extension property. The $h$-principle now follows from \autoref{thm:hPrinGenExtn}. \end{proof}
\begin{remark}\label{rmk:horizontalOptimalRange} As noted in \autoref{rmk:horizontalNecessaryDimension}, we necessarily need $\rk \calD \ge 4 \dim \Sigma$ for the existence of $\Omega$-regular $\calD$-horizontal immersions $\Sigma \to M$. Thus, the above $h$-principle is in the optimal range. \end{remark}
\subsubsection{Existence of Regular Horizontal Immersions}
\begin{theorem} \label{thm:existenceHorizImm} Suppose $\calD\subset TM$ is a degree $2$ fat distribution. Then any $u:\Sigma\to M$ can be $C^0$-approximated by an $\Omega$-regular, $\calD$-horizontal map provided $\rk\calD \ge \max\big\{ 4\dim\Sigma,\; 5\dim\Sigma - 3 \big\}$.
\end{theorem} In order to prove the above existence theorem, it is enough to obtain a formal $\Omega$-regular, $\calD$-horizontal immersion, covering a given smooth map $u:\Sigma\to M$. Consider the subbundle $\calF\subset\hom(T\Sigma, u^*TM)$, where the fibers are given by $$\calF_x =\Big\{ F:T_x\Sigma\to \calD_{u(x)} \quad \Big|\quad \text{$F$ is injective, $\Omega$-regular and $\Omega$-isotropic} \Big\}, \quad x\in\Sigma.$$ We need to show that $\calF$ has a global section. Suppose $(D,\omega^1,\omega^2)$ is a degree $2$ fat tuple with $\dim D = d$ and let $V_k(D)$ denote the space of $k$-frames in $D$. Note that the fibers $\calF_x$ can be identified with the subset $R(k)$ of $V_k(D)$ defined by $$R(k) = \Big\{(v_1,\ldots,v_k) \in V_k(D)\;\Big|\; \text{the span $\langle v_1,\ldots,v_k\rangle$ is $\Omega$-regular and $\Omega$-isotropic}\Big\}.$$ \begin{lemma} \label{lemma:connectivityOfHorizR(k)} The space $R(k)$ is $d-4k+2$-connected. \end{lemma} \begin{proof} The proof is by induction over $k$. For $k=1$, we have $$R(1) = \big\{v \in D \;\big|\; \text{$v\ne 0$ and $\langle v\rangle$ is $\Omega$-regular, $\Omega$-isotropic}\big\}.$$ Since $(D,\omega^1,\omega^2)$ is a fat tuple, \emph{every} $1$-dimensional subspace of $D$ is $\Omega$-regular as well as $\Omega$-isotropic. Thus, $R(1) \equiv D\setminus \{0\} \simeq S^{d - 1}$ and hence, $R(1)$ is $d-2$-connected. Note that, $d - 2 = d - 4.1 + 2$. Let $k\ge 2$ and assume that $R(k-1)$ is $d - 4(k-1) + 2 = d - 4k + 6$-connected. Observe that the projection map $p:V_{k}(D)\to V_{k-1}(D)$ given by $p(v_1,\ldots,v_{k})=(v_1,\ldots,v_{k-1})$ maps $R(k)$ into $R(k-1)$. To identify the fibers of $p:R(k)\to R(k-1)$, let $b=(v_1,\ldots,v_{k-1})\in R(k-1)$ so that $V=\langle v_1,\ldots,v_{k-1}\rangle$ is $\Omega$-regular and $\Omega$-isotropic. Clearly, ${V^\Omega}^\Omega \subset V^\Omega$ since $V$ is $\Omega$-isotropic. Also, it follows from \autoref{prop:fatTupleExtension} that a tuples $(v_1,\ldots,v_{k-1},\tau) \in R(k)$ if and only if $\tau \in V^\Omega \setminus {V^\Omega}^{\Omega}$. Note that, $\dim {V^\Omega}^\Omega = 2\dim V = 2(k-1)$ and $$\codim V^\Omega = 2\dim V \;\Rightarrow\; \dim V^\Omega = d - 2(k-1).$$ We have thus identified the fiber of $p$ over $b$: $$F(k) := p^{-1}(b) \equiv V^\Omega \setminus {V^\Omega}^\Omega \equiv \bbR^{d-2k + 2} \setminus \bbR^{2(k-1)},$$ which is $d - 4k+ 2$-connected. Next, consider the fibration long exact sequence associated to $p : R(k)\to R(k-1)$, $$\cdots\rightarrow\pi_i(F(k)) \rightarrow \pi_i(R(k)) \rightarrow \pi_i (R(k-1)) \rightarrow \pi_{i-1}(F(k))\rightarrow\cdots$$ Since $\pi_i(F(k)) = 0$ for $i\le d - 4k + 2$, we get the following isomorphisms: $$\pi_i(R(k)) \cong \pi_i(R(k-1)), \quad\text{for $i\le d-4k+2$.}$$ But from the induction hypothesis, $\pi_i(R(k-1)) = 0$ for $i\le d - 4k + 6$. Hence, $\pi_i(R(k)) = 0$ for $i\le d - 4k + 2$. This concludes the induction step and hence the lemma is proved. \end{proof} \begin{remark}\label{rmk:existenceGermHorizDeg2} It is clear from the above proof that $R(k) \ne \emptyset$ if $d \ge 4k$. Consequently, from the local $h$-principle for $\relHor$\ifarxiv (\autoref{corr:relContLocalWHE})\fi, we can conclude the existence of \emph{germs} of horizontal $k$-submanifolds to a degree $2$ fat distribution $\calD$ provided $\rk \calD \ge 4 \dim\Sigma$. \end{remark} \begin{proof}[Proof of \autoref{thm:existenceHorizImm}] Since $\rk \calD \ge 5\dim\Sigma - 3$, we have a global section $F$ of $\calF$ by \autoref{lemma:connectivityOfHorizR(k)}. 
Since $\rk \calD \ge 4\dim \Sigma$ as well, \autoref{thm:hPrinHorizImmFatDeg2} implies the existence of a horizontal immersion $\Sigma \to M$. \end{proof}
\autoref{thm:mainTheoremHorizDeg2} (stated in the introduction) now follows from \autoref{thm:hPrinHorizImmFatDeg2} and \autoref{thm:existenceHorizImm}.
\begin{corr} Given a corank $2$ fat distribution $\calD$ on a $6$-dimensional manifold $M$, any map $S^1 \to M$ can be homotoped to a $\calD$-horizontal immersion. \end{corr}
\begin{proof} As noted in \autoref{obs:fatDegree} (\ref{obs:fatDegree:3}), every corank $2$ fat distribution is of degree $2$ and a formal horizontal map $S^1 \to M$ is $\Omega$-regular. The proof then follows directly from \autoref{thm:existenceHorizImm}. \end{proof}
If $\calD$ is the underlying real distribution of a holomorphic contact structure $\Xi$ on a complex manifold $(M, J)$, where $J$ is the (integrable) almost complex structure, then in view of \autoref{prop:fatTupleOmegaRegular} (\ref{prop:fatTupleOmegaRegular:2}), $\Omega$-regular immersions into $\calD$ are the same as totally real immersions. Hence, we get the following corollary to \autoref{thm:hPrinHorizImmFatDeg2}.
\begin{corr} Given a holomorphic contact structure $\Xi$ on $M$, there exists a totally real $\Xi$-horizontal immersion $\Sigma \to M$ provided $\rk_\bbR \Xi \ge \max\{4\dim\Sigma, 5\dim \Sigma - 3\}$. \end{corr}
\subsection{Horizontal Immersions into Quaternionic Contact Manifolds}\label{sec:hPrinIntoFat:quaternion}
We recall the following observation by Pansu.
\begin{prop}[\cite{pansuCarnotManifold}] \label{prop:qContOmegaRegular} If $\calD$ is a quaternionic contact structure, then any $\Omega$-isotropic subspace of $\calD_x$ is $\Omega$-regular. Hence every horizontal immersion is $\Omega$-regular. \end{prop}
Then any map $u:\Sigma\to M$ can be homotoped to a $\calD$-horizontal immersion provided, $\rk\calD \ge \max \{4\dim\Sigma + 4, \;5\dim\Sigma - 3\}$. Furthermore, the homotopy can be made arbitrarily $C^0$-small. \end{theorem} \begin{proof} The proof is similar to that of \autoref{thm:existenceHorizImm}; in fact it is simpler since $\Omega$-regularity is automatic by \autoref{prop:qContOmegaRegular}. Given a map $u:\Sigma \to M$, we consider the subbundle $\calF \subset \hom(T\Sigma, u^*TM)$ with the fibers given as $$\calF_x = \Big\{F:T_x \Sigma \to \calD_{u(x)} \;\Big|\; \text{$F$ is injective and $\Omega$-isotropic}\Big\}, \quad x\in\Sigma.$$ Clearly, a global section of $\calF$ is precisely a formal $\calD$-horizontal immersion covering $u$. A choice of a frame of $T_x\Sigma$ lets us identify $\calF_x$ with the space $$R(k) = \Big\{(v_1,\ldots,v_k) \in V_k(\calD_x) \;\Big|\; \text{the span $\langle v_1,\ldots,v_k \rangle \subset \calD_x $ is $\Omega_x$- isotropic}\Big\},$$ where $V_k(D)$ is the space of $k$-frames in a vector space $D$. A very similar argument as in \autoref{lemma:connectivityOfHorizR(k)} gives us that the space $R(k)$, and consequently the fiber $\calF_x$, is $\rk \calD - 4k + 2$-connected. The proof \autoref{thm:existenceHorizImmQCont} then follows exactly as in \autoref{thm:existenceHorizImm}. \end{proof} We can now prove \autoref{thm:mainTheoremHorizQuat} from \autoref{thm:hPrinHorizImmQCont} and \autoref{thm:existenceHorizImmQCont}. \begin{remark}\label{rmk:existenceGermHorizQuatContact} As in the previous two cases, we can deduce the existence of \emph{germs} of horizontal $k$-submanifolds to a given quaternionic contact structure $\calD$ provided $\rk \calD \ge 4\dim\Sigma$. \end{remark} \subsection{Isocontact Immersions into Quaternionic Contact Manifolds} \begin{theorem}\label{thm:hPrinIsocontactQCont} Suppose $\calD$ is a quaternionic contact structure on a manifold $M$ and $K$ is a contact structure on $\Sigma$. Then, $\relICont$ satisfies the $C^0$-dense $h$-principle provided $\rk \calD \ge 4\rk K + 4$. \end{theorem} \begin{proof} The proof is very similar to that of \autoref{thm:hPrinIsocontactDeg2}. Suppose $F:T_x\Sigma \to T_y M$ represents a jet in $\relICont$. Suitably choosing trivializations near $x$ and $y$, we may assume that the induced map $\tilde{F} :T\Sigma/K \hookrightarrow TM/\calD$ is the canonical injection $\bbR \to \bbR\times \{0\} \subset \bbR^3$. In particular, there exists local $2$-forms $\eta, \omega^i, i=1,2,3$ so that \[\Omega_K \underset{loc.}{=} \eta \quad \text{and} \quad \Omega \underset{loc.}{=} (\omega^1,\omega^2,\omega^3).\] and furthermore, we have a quaternionic structure $(J_1, J_2, J_3)$ so that $g(J_i u, v) = \omega^i(u,v)$ for all $u,v\in\calD$ and for each $i=1,2,3$. Here $g$ is a Riemannian metric on the quaternionic contact structure $\calD$. The curvature condition $F^*\Omega|_{K} = \tilde{F} \circ \Omega_K$ translates into \[F^*\omega^1|_K = \eta, \quad F^*\omega^2|_K = 0 = F^*\omega^3|_K.\] Since $K$ is contact, $\eta$ is nondegenerate. Consequently, $V = F(K)$ is $\omega^1$-symplectic and $\omega^2,\omega^3$-isotropic. Now, for any subspace $W\subset \calD$, we have $\calD = W^\Omega \oplus_g \big(\sum_{i=1}^3 J_i W\big)$, indeed, \[g(z, \sum J_i w_i) = \sum \omega^i(z,w_i) = 0, \quad \forall z\in W^\Omega, \; J_i w_i \in J_i W.\] Consequently, $W\subset \calD$ is $\Omega$-regular if and only if $\sum J_i W$ is a direct sum. 
Also observe that, \[\omega^2(u, -J_1v) = g(J_2J_1 v,u) = -g(J_3v,u) = \omega^3(u,v), \quad u,v\in\calD,\] and so $(\calD_y, \omega^2|_y, \omega^3|_y)$ is a degree $2$ fat tuple with the connecting automorphism $A = -J_1$. As $V$ is $(\omega^2,\omega^3)$-isotropic and is $(\omega^2,\omega^3)$-regular, we get from \autoref{prop:fatTupleOmegaRegular} and \autoref{prop:fatTupleVAV} that \[V\oplus J_1V \subset V^{\perp_2}\cap V^{\perp_3} \quad \text{and} \quad \codim\big(V^{\perp_2} \cap V^{\perp_3}\big) = 2\dim V.\] Also, $\calD = \big(V^{\perp_2} \cap V^{\perp_3}\big) \oplus_g \big(J_2 V + J_3 V\big)$ and hence, $\big(V^{\perp_2}\cap V^{\perp_3}\big) \cap\big(V+ J_1 V+ J_2V + J_3 V\big) = V+ J_1 V$. But then, \[V^\Omega \cap \big(V+\sum J_i V\big) = V^{\perp_1} \cap \Big(V^{\perp_2} \cap V^{\perp_3} \cap \big(V + \sum J_i V\big)\Big) = V^{\perp_1} \cap (V+ J_1 V).\] Since $V$ is $\omega^1$-symplectic, we have $\calD= V^{\perp_1} \oplus V = V^{\perp_1} + (V\oplus J_1 V)$. A dimension counting argument then gives us $\dim \big(V^\Omega \cap (V+\sum J_i V)\big) = \dim V$. But then from the hypothesis $\rk \calD \ge 4 \dim V + 4$, we get that the intersection has codimension $\ge 4$ in $V^\Omega$. Pick $\tau \in V^\Omega \setminus \big(V + \sum J_i V\big)$. We claim that $V^\prime = V+\langle\tau\rangle$ is $\Omega$-regular. We only need to show that $\big(\sum J_i V\big) \cap \langle J_1\tau, J_2\tau, J_3 \tau\rangle = 0$. Suppose $z = \sum a_i J_i \tau$ is in the intersection for some $a_i\in\bbR$. Note that $ J_s\big(\sum J_i V\big) \subset V + \sum J_i V$ for each $s = 1,2,3$. If $(a_1, a_2, a_3) \ne 0$, we have \[\tau = \big(\sum a_i J_i\big)^{-1} z = \frac{-\sum a_i J_i}{\sum a_i^2} z \in \frac{-\sum a_i J_i}{\sum a_i^2} \big(\sum J_i V\big) \subset V + \sum J_i V.\] This contradicts our choice of $\tau \not\in V+\sum J_i V$. Hence, $z = 0$ and we have $V^\prime$ is indeed $\Omega$-regular. We can now finish the proof just as in \autoref{thm:hPrinIsocontactDeg2}. \end{proof}
\subsubsection{Existence of Regular Isocontact Immersions}
\begin{theorem}\label{thm:existenceIsocontactQCont} Suppose $\calD$ is a quaternionic contact structure on a manifold $M$ and $K$ is a contact structure on $\Sigma$. Assume that both $K$ and $\calD$ are cotrivializable. Then, any map $u : \Sigma \to M$ can be homotoped to an $\Omega$-regular $K$-isocontact immersion $(\Sigma,K) \to (M,\calD)$ provided $\rk \calD \ge \max \{4 \rk K + 4, 6 \rk K - 2\}$. \end{theorem}
\begin{proof} Given $u : \Sigma \to M$, we can get a monomorphism $G : T\Sigma/ K \hookrightarrow u^*TM/\calD$, since $K$ and $\calD$ are cotrivializable. Next, we consider the bundle $\calF \subset \hom(K, u^*\calD)$ with fibers \[\calF_x = \Big\{F:K_x \to \calD_{u(x)} \;\Big|\; \text{$F$ is injective, $\Omega$-regular and $F^*\Omega|_{K_x} = G_x \circ \Omega_K$}\Big\}.\] Assume $\rk K = 2k$ and $\rk \calD=d$. Then, suitably choosing trivializations, we can identify $\calF_x$ with the subset $R(k) \subset V_{2k}(\calD_y)$, where $y = u(x)$: \[R(k) = \Big\{b=(u_1,v_1,\ldots,u_k,v_k) \in V_{2k}(\calD_y) \;\Big|\; \substack{\text{$b$ is an $\omega^1$-symplectic basis for $V:=\Span\langle u_i,v_i\rangle$,} \\ \text{$V$ is $\omega^2,\omega^3$-isotropic and $\Omega$-regular.}} \Big\}.\] We can check via an inductive argument similar to \autoref{lemma:connectivityOfR(k)IsocontactDeg2Fat} that $R(k)$ is $(\rk\calD - 4\rk K + 2)$-connected\ifarxiv ~(\autoref{lemma:connectivityOfIsocontactQContR(k)})\fi.
Since $\rk \calD \ge 6\rk K - 2$, the fibers of $\calF$ are $(\dim \Sigma-1)$-connected and hence, we get a global section of $\calF$. We conclude the proof by an application of \autoref{thm:hPrinIsocontactQCont}, since $\rk \calD \ge 4 \rk K + 4$ as well. \end{proof}
\autoref{thm:hPrinIsocontactQCont} and \autoref{thm:existenceIsocontactQCont} imply \autoref{thm:mainTheoremIsocontactQCont}.
\ifarxiv
\begin{lemma}\label{lemma:connectivityOfIsocontactQContR(k)} $R(k)$ in the above theorem is $(d - 8k + 2)$-connected, where $\rk \calD = d$ and $\rk K = 2k$. \end{lemma}
\begin{proof} We have \[R(1) = \Big\{(u,v)\in V_2 (\calD_y) \;\Big|\; \omega^1(u,v)=1,\; \omega^2(u,v) = 0 = \omega^3(u,v), \; \text{$\langle u,v\rangle$ is $\Omega$-regular}\Big\}.\] For each $0 \ne u \in \calD_y$ consider the map \begin{align*} S_u : u^{\perp_2} \cap u^{\perp_3} &\to \bbR \\ v &\mapsto \omega^1(u,v) \end{align*} As argued in \autoref{thm:hPrinIsocontactQCont}, for some $v \in S_u^{-1}(1)$, the subspace $V = \langle u,v\rangle$ is $\Omega$-regular if and only if $V + J_1 V$ is a direct sum, which is equivalent to having $v \in S_u^{-1}(1) \setminus \langle u, J_1 u\rangle$. Thus, we have identified \[R(1) \equiv \bigcup_{u \in \calD_y \setminus 0} \{u\} \times \Big(S_u^{-1}(1) \setminus \langle u,J_1 u\rangle\Big).\] Now, $S_u^{-1}(1)$ is a codimension $1$ affine plane in $u^{\perp_2} \cap u^{\perp_3}$ and $\langle u, J_1 u\rangle \subset u^{\perp_2} \cap u^{\perp_3}$ is transverse to $S_u^{-1}(1)$. Hence, we find out the codimension of the affine plane $S_u^{-1}(1) \cap \langle u, J_1 u\rangle$ in $u^{\perp_2} \cap u^{\perp_3}$: \[\codim \Big(S_u^{-1}(1) \cap \langle u,J_1 u\rangle\Big) = 1 + (\dim u^{\perp_2}\cap u^{\perp_3} - 2) = (d-2) - 1 = d - 3.\] But then the connectivity of $S_u^{-1}(1) \setminus \langle u, J_1 u\rangle$ is $(d-3) - (d - 2 - (d-3)) - 2 = d - 6$. Since $\calD_y \setminus 0$ is $(d - 2)$-connected, we get $R(1)$ is $(d-6)$-connected by an application of the homotopy long exact sequence. Note that $d - 6= d- 8\cdot 1 + 2$. Let us now assume that $R(k-1)$ is $d-8(k-1) + 2 = d - 8k + 10$-connected. Now, consider the projection map $p : V_{2k}(\calD_y) \to V_{2(k-1)}(\calD_y)$ which maps $R(k)$ into $R(k-1)$. Say $b = (u_1,v_1,\ldots,u_{k-1},v_{k-1}) \in R(k-1)$ and $V = \Span\langle u_i,v_i\rangle$. We show $p^{-1}(b)$ is nonempty and find out its connectivity. As in \autoref{thm:hPrinIsocontactQCont}, we must first pick $\tau \in V^\Omega \setminus (V + \sum J_i V)$. For any such $\tau$ fixed, we set $V_\tau = V + \langle \tau \rangle$ and then choose $\eta \in (V_\tau^{\perp_2} \cap V_\tau^{\perp_3}) \setminus (V_\tau + J_1 V_\tau)$, satisfying $\omega^1(\tau , \eta) = 1$. We can check that $(u_1,v_1,\ldots,u_{k-1},v_{k-1}, \tau, \eta) \in p^{-1}(b)$. Now, let us consider the map \begin{align*} S_\tau : V_\tau^{\perp_2} \cap V_\tau^{\perp_3} &\to \bbR \\ \eta &\mapsto \omega^1(\tau, \eta) \end{align*} Then, we have in fact identified \[p^{-1}(b) = \bigcup_{\tau \in V^\Omega \setminus (V+\sum J_i V)} \{\tau\} \times \Big(S_\tau^{-1}(1) \setminus (V_\tau + J_1 V_\tau)\Big).\] Since $\dim \big(V^\Omega \cap (V+\sum J_i V)\big) = \dim V$, we get the connectivity of the space of $\tau$ as \[(d - 6(k-1)) - 2(k-1) - 2 = d-8(k-1) - 2 = d - 8k + 6.\] On the other hand, the codimension $1$ hyperplane $S_\tau^{-1}(1)$ and $V_\tau + J_1V_\tau \subset V_\tau^{\perp_2} \cap V_\tau^{\perp_3}$ are transverse to each other.
Hence, the codimension of $S_\tau^{-1}(1) \cap (V_\tau + J_1 V_\tau)$ in $V_\tau^{\perp_2} \cap V_\tau^{\perp_3}$ is \[1 + \big(\dim(V_\tau^{\perp_2} \cap V_\tau^{\perp_3}) - \dim(V_\tau + J_1 V_\tau)\big) = 1 + \big((d - 2(2k - 1)) - 2(2k - 1)\big) = d - 4(2k-1) + 1.\] Consequently, $S_\tau^{-1}(1) \cap (V_\tau + J_1 V_\tau) \equiv \bbR^{d-2(2k-1) - (d - 4(2k-1) + 1)} = \bbR^{2(2k-1) - 1}$. We get the connectivity of $S_\tau^{-1}(1) \setminus (V_\tau + J_1 V_\tau)$: \[(d - 2(2k - 1) - 1) - (2(2k-1) - 1) - 2 = d - 4(2k-1) - 2 = d - 8k + 2.\] A homotopy long exact sequence argument then gives the connectivity of $p^{-1}(b)$ as $\min\big\{d-8k + 2, d - 8k + 6\big\} = d- 8k + 2$. Then, again appealing to the exact sequence for $p : R(k) \to R(k-1)$, we get the connectivity of $R(k)$ as \[\min\{d - 8k + 2, d-8k + 10\} = d-8k + 2.\] This concludes the proof. \end{proof} \fi \subsection{Applications in Symplectic Geometry} A 1-form $\mu$ on a manifold $N$ is said to be a Liouville form if $d\mu$ is symplectic. Any such form defines a contact form $\theta$ on the product manifold $N\times\mathbb R$ by $\theta=dz-\pi^*\mu$, where $\pi:N\times\mathbb R\to N$ is the projection onto the first factor and $z$ is the coordinate function on $\bbR$. This construction can be extended to a $p$-tuple of Liouville forms $(\mu^1,\dots,\mu^p)$ on $N$ to obtain a corank $p$ distribution $\calD$ on $N\times\mathbb R^p$. If we denote by $(z_1,\dots,z_p)$ the global coordinate system on $\mathbb R^p$, then $\calD=\cap_{i=1}^p\ker \lambda^i$, where $\lambda^i = dz_i-\pi^*\mu^i$ and $\pi:N\times\mathbb R^p\to N$ is the projection map. We note that the curvature form of $\calD$ is given as $$\Omega = \big(d\lambda^i|_\calD\big) = \big(\pi^*d\mu^i|_\calD\big).$$ The derivative of the projection map $\pi$ restricts to an isomorphism $\pi_* : \calD_{(x,z)}\to T_x N$ for all $(x,z)\in N\times\mathbb R^p$. Thus, it follows that if $(d\mu^1,\dots,d\mu^p)$ is a fat tuple on $T_x N$ for all $x\in N$, then $\calD$ is a fat distribution. Next, recall that given a manifold $N$ with a symplectic form $\omega$, an immersion $f: \Sigma\to N$ is called \emph{Lagrangian} if $f^*\omega = 0$. If $\omega = d\mu$ for some Liouville form $\mu$, then a Lagrangian immersion $f:\Sigma\to N$ is called \emph{exact} if the closed form $f^*\mu$ is exact. The homotopy type of the space of exact $d\mu$-Lagrangian immersions does not depend on the primitive $\mu$; we refer to \cite{gromovBook,eliashbergBook} for the $h$-principle for exact Lagrangian immersions. Extending this notion to $p$-tuples $(\mu^1,\ldots,\mu^p)$ of Liouville forms on $N$: if $f:\Sigma\to N$ is exact Lagrangian with respect to each $d\mu^i$, $i=1,\dots,p$, then there exist smooth functions $\phi^i$ such that $f^*\mu^i=d\phi^i$. It is easy to check that $(f,\phi^1,\dots,\phi^p) : \Sigma \to M = N \times \bbR^p$ is then a $\calD$-horizontal immersion. Conversely, every $\calD$-horizontal immersion $\Sigma \to M$ projects to an immersion $\Sigma\to N$ which is exact Lagrangian with respect to each $d\mu^i$. \paragraph{\bfseries Regularity:} For immersions $f:\Sigma\to N$, we have a similar notion of $(d\mu^i)$-regularity. A subspace $V\subset T_x N$ is called \emph{$(d\mu^i)$-regular} if the map, \begin{align*} \psi : T_x N &\to \hom(V,\bbR^p)\\ \partial &\mapsto \big(\iota_\partial d\mu^1|_V, \ldots, \iota_\partial d\mu^p|_V\big) \end{align*} is surjective (compare \autoref{defn:contOmegaRegular}). 
An immersion $f:\Sigma\to N$ is called \emph{$(d\mu^i)$-regular} if $V=\im df_\sigma$ is $(d\mu^i)$-regular for each $\sigma\in\Sigma$. \begin{defn}\label{defn:formalBiLagrangian} A monomorphism $F:T\Sigma\to TN$ with base map $f = \bs F$ is said to be a \emph{formal} regular, $(d\mu^i)$-Lagrangian if for each $\sigma\in \Sigma$, \begin{itemize} \item the subspace $V=\im F_\sigma\subset T_{f(\sigma)}N$ is a $(d\mu^i)$-regular subspace, and \item $F^*d\mu^i = 0$, that is, $V$ is $d\mu^i$-isotropic, for $i=1,\ldots,p$. \end{itemize} \end{defn} \begin{prop} \label{prop:regularExactLagrangian} Let $\Omega$ be the curvature of the distribution $\calD$ on $M = N\times \bbR^p$. Then, every formal regular, $(d\mu^i)$-Lagrangian immersion lifts to a formal $\Omega$-regular $\calD$-horizontal immersion. Conversely, any formal $\Omega$-regular $\calD$-horizontal immersion projects to a formal regular, exact $(d\mu^i)$-Lagrangian immersion. \end{prop} \ifarxiv \begin{proof} Suppose $(F,f):T\Sigma\to TN$ is a given formal, regular $(d\mu^i)$-Lagrangian map. Set $u= (f,\underbrace{0,\ldots,0}_{p}):\Sigma\to M$. Then we can get a canonical lift $H:T\Sigma\to TM$ covering $u$, by using the fact that $d\pi:\calD_{u(\sigma)}\to T_{f(\sigma)}N$ is an isomorphism. Therefore, $H$ is injective. We claim that $H$ is $\Omega$-regular and $(d\lambda^i)$-isotropic for $i=1,\ldots,p$ (in other words, $\Omega$-isotropic). The isotropy condition follows easily, since $$H^*d\lambda^i|_\calD = H^* \pi^*d\mu^i|_\calD = (d\pi|_\calD\circ H)^*d\mu^i = F^*d\mu^i = 0,\qquad i=1,\ldots,p.$$ To deduce the $\Omega$-regularity, observe that we have a commutative diagram, \[\begin{tikzcd} \calD_{u(\sigma)} \arrow{r}{\phi} \arrow{d}[swap]{d\pi|_{u(\sigma)}} &\hom(\im H_\sigma,\bbR^p) \\ T_{f(\sigma)}N \arrow{r}[swap]{\psi} & \hom(\im F_\sigma,\bbR^p) \arrow{u}[swap]{\big(d\pi|_{u(\sigma)}\big)^*} \end{tikzcd}\] where both the vertical maps are isomorphisms and the maps $\phi,\psi$ are given as $$\phi(v) = \big(\iota_v d\lambda^i|_{\im H}\big)_{i=1}^p, \; v\in \calD_{u(\sigma)}, \qquad\text{and}\qquad \psi(w) = \big(\iota_w d\mu^i|_{\im F}\big)_{i=1}^p, \; w\in T_{f(\sigma)}N.$$ Now, $(d\mu^i)$-regularity of $F$ is equivalent to surjectivity of $\psi$, which implies the surjectivity of $\phi$. Thus, the lift $H$ is a formal $\Omega$-regular isotropic $\calD$-horizontal map. A similar argument proves the converse statement as well. \end{proof} \fi In the case $p=2$, the forms $d\mu^1$ and $d\mu^2$ are related by a bundle isomorphism $A:TN\to TN$ as $d\mu^1(v,Aw)=d\mu^2(v,w)$. If for every $x\in N$, the operator $A_x$ has no real eigenvalue and the degree of the minimal polynomial of $A_x$ is $2$, then $\calD$ is a degree $2$ fat distribution. In particular, if $N$ is a holomorphic symplectic manifold, then $\calD$ is a holomorphic contact distribution on $N\times\mathbb R^2$. \begin{theorem} \label{thm:hPrinExactBiLagrangian} Let $(N,d\mu^1, d\mu^2)$ be as above. Then the exact $(d\mu^1,d\mu^2)$-Lagrangian immersions $\Sigma\to N$ satisfy the $C^0$-dense $h$-principle, provided $\dim N \ge 4\dim\Sigma$. \end{theorem} The proof is immediate from \autoref{thm:hPrinHorizImmFatDeg2} and \autoref{prop:regularExactLagrangian}. Furthermore, an obstruction-theoretic argument as in \autoref{thm:existenceHorizImm} gives us the following corollary. \begin{corr}\label{corr:existenceExactBiLagrangian} Suppose $(N,d\mu^1,d\mu^2)$ is as in \autoref{thm:hPrinExactBiLagrangian}. 
If $\dim N \ge \max \{4\dim\Sigma,\; 5\dim\Sigma-3\}$, then any $f:\Sigma\to N$ can be homotoped to a regular exact $(d\mu^1,d\mu^2)$-Lagrangian immersion, keeping the homotopy arbitrarily $C^0$-small. \end{corr} \begin{remark} The above corollary improves upon the result in \cite{dattaSymplecticPair}, where the author proved the existence of regular, exact $(d\mu^1,d\mu^2)$-Lagrangian immersions $\Sigma\to N$, under the condition $\dim N \ge 6\dim \Sigma$. \end{remark} In the case $p=3$, let us assume that we have a triple of symplectic forms $(d\mu^1,d\mu^2,d\mu^3)$ on a Riemannian manifold $(N,g)$. Then, we have the automorphisms $J_i : TN\to TN$ defined by $g(v, J_i w) = d\mu^i(v,w)$. If we assume that $\{J_1, J_2, J_3\}$ satisfies the quaternionic relations at each point of $N$, then $\calD$ is a quaternionic contact structure. In particular, if $N$ is hyperk\"ahler then $\calD$ is a quaternionic contact distribution on $N\times \bbR^3$ (\cite{boyerSasakianGeometry}). In view of \autoref{prop:regularExactLagrangian}, we have the following direct corollary to \autoref{thm:existenceHorizImmQCont}. \begin{corr}\label{corr:existenceTriLagrangian} Let $(N,g, d\mu^i, i=1,2,3)$ be as above. Then, there exists an exact $(d\mu^i)$-Lagrangian immersion $\Sigma\to N$, provided $\dim N \ge \max \{4\dim \Sigma + 4, \; 5\dim \Sigma - 3\}$. \end{corr} \section{Fat Distributions of Corank $2$ and their Degree}\label{sec:fatAndDegree} In this section we first recall the definition of fatness of a distribution and then introduce a notion called `degree' on the class of corank $2$ fat distributions. \subsection{Fat Distribution} \begin{defn} A distribution $\calD\subset TM$ is called \emph{fat} (or \emph{strongly bracket generating}) at $x\in M$ if for every nonzero local section $X$ of $\calD$ near $x$, the set \[\big\{[X,Y]_x \; \big|\; \text{$Y$ is a local section of $\calD$ near $x$}\big\}\] equals $T_x M$. The distribution is fat if it is fat at every point $x\in M$. \end{defn} In \cite{gromovCCMetric}, Gromov defines this as $1$-fatness. An important consequence of fatness is that for \emph{every} non-vanishing $1$-form $\alpha$ annihilating $\calD$, the $2$-form $d\alpha|_\calD$ is nondegenerate. There are many equivalent ways to describe a fat distribution. \begin{prop}[\cite{montTour}] The following are equivalent. \begin{itemize} \item $\calD$ is fat at $x \in M$. \item $\omega(\alpha)$ is a nondegenerate $2$-form on $\calD_x$ for every non-zero $\alpha$ in the annihilator bundle $\Ann(\calD)$, where $\omega : \Ann(\calD) \to \Lambda^2\calD^*$ is the dual curvature map. \item Every $1$-dimensional subspace of $\calD_x$ is $\Omega$-regular. \end{itemize} \end{prop} \ifarxiv Fat distributions are interesting in themselves and they have been studied in generality (\cite{geHorizontalCC, raynerFat}). Fatness puts strict numerical constraints on the rank and corank of the distribution. \begin{theorem}[\cite{raynerFat, montTour}] Suppose $\calD$ is a rank $k$ distribution on $M$ with $\dim M=n$. If $\calD$ is fat then the following numerical constraints hold: \begin{itemize} \item $k$ is divisible by $2$; and if $k < n-1$ then $k$ is divisible by $4$, \item $k\ge (n-k)+1$, \item the sphere $S^{k-1}$ admits $(n-k)$-many linearly independent vector fields. \end{itemize} Conversely, given any pair $(k,n)$ satisfying the above, there is a fat distribution germ of type $(k,n)$. \end{theorem} \else Fatness puts strict numerical constraints on the rank and corank of the distribution. 
For example, if $\calD$ is a fat distribution of rank $k$ on $M$ with $\dim M=n$, then $k$ is divisible by $2$; and if $k < n-1$ then $k$ is divisible by $4$ (\cite{raynerFat, montTour}). \fi When $\cork\calD=1$, a fat distribution must be of the type $(2n,2n+1)$. In fact, corank $1$ fat distributions are exactly the contact ones, and hence are generic. In general, fatness is not a generic property (\cite{zanetGenericRank4,montTour}). We now describe two important classes of fat distributions in corank $2$ and $3$. These are holomorphic and quaternionic counterparts of contact structures. \begin{example}\label{exmp:holomorphicContact} A \emph{holomorphic contact structure} on a complex manifold $M$ with $\dim_\bbC M = 2n + 1$ is a corank $1$ holomorphic subbundle of the holomorphic tangent bundle $T^{(1,0)}M$, which is locally given as the kernel of a holomorphic $1$-form $\Theta$ satisfying $\Theta \wedge (d\Theta)^n \ne 0$. By the holomorphic contact Darboux theorem (\cite{forstnericHoloLegendrianCurves}), $\Theta$ can be locally expressed as $\Theta = dz - \sum_{j=1}^n y_j dx_j$, where $(z,x_1,\ldots,x_n,y_1,\ldots,y_n)$ is a holomorphic coordinate system. Writing $z=z_1 + i z_2, x_j = x_{j1} + i x_{j2}, y_j = y_{j1} + i y_{j2}$, we get $\Theta = \lambda^1 + i \lambda^2$, where $$\lambda^1 = dz_1 - \sum_{j=1}^n \big(y_{j1} dx_{j1} - y_{j2}dx_{j2}\big),\quad \lambda^2 = dz_2 - \sum_{j=1}^n \big(y_{j2} dx_{j1} + y_{j1}dx_{j2}\big).$$ The distribution $\calD \underset{loc.}{=} \ker \lambda^1 \cap \ker\lambda^2$ is a corank $2$ fat distribution. \ifarxiv We can explicitly define a frame $\{X_{j1},X_{j2},Y_{j1},Y_{j2}\}$ for $\calD$ by $$X_{j1} = \partial_{x_{j1}} + y_{j1}\partial_{z_1} + y_{j2}\partial_{z_2}, \quad X_{j2} = \partial_{x_{j2}} - y_{j2}\partial_{z_1} + y_{j1}\partial_{z_2}, \quad Y_{j1} = \partial_{y_{j1}},\quad Y_{j2} = \partial_{y_{j2}}.$$ They generate a finite dimensional Lie algebra, known as the complex Heisenberg algebra.\fi \end{example} \begin{example} \label{exmp:quaternionicContact} A \emph{quaternionic contact structure}, as introduced by Biquard in \cite{biquardQuaternionContact}, on a manifold $M$ of dimension $4n+3$ is a corank $3$ distribution $\calD\subset TM$, given locally as the common kernel of $1$-forms $(\lambda^1,\lambda^2,\lambda^3)\in \Omega^1(M,\bbR^3)$ such that there exists a Riemannian metric $g$ on $\calD$ and a quaternionic structure $(J_i, i=1,2,3)$ on $\calD$ satisfying $d\lambda^i|_\calD= g(J_i\_,\_)$. By a quaternionic structure, we mean that the $J_i$ are (local) endomorphisms which satisfy the quaternionic relations: $J_1^2 = J_2^2 = J_3^2 = -1 = J_1J_2J_3$. Equivalently, there exists an $S^2$-bundle $Q\to M$ of triples of almost complex structures $(J_1,J_2,J_3)$ on $\calD$. It is easy to see that any linear combination of a (local) quaternionic structure $\{J_i\}$, say $S = \sum a_i J_i$, satisfies $S^2 = -(\sum a_i^2) I$. Hence, for any non-zero $1$-form $\lambda$ annihilating $\calD$, the $2$-form $d\lambda|_\calD$ is nondegenerate, proving fatness of the quaternionic contact structure. \end{example} \subsection{Corank $2$ Fat Distribution} We now focus on corank $2$ fat distributions, in particular, on a specific class of such (real) distributions locally modeled on holomorphic contact structures.\smallskip Given a corank $2$ distribution $\calD$, let us assume $\calD\underset{loc.}{=}\ker\lambda^1\cap \ker\lambda^2$. Further assume that $\omega^i = d\lambda^i|_\calD$ is nondegenerate. 
Then we can define a (local) automorphism $A:\calD\to \calD$ by the following property: $$\omega^1(u,Av) = \omega^2(u,v),\qquad \forall u,v\in\calD.$$ Explicitly, $A=-I_{\omega^1}^{-1}\circ I_{\omega^2}$, where $I_{\omega^i}:\calD\to\calD^*$ is defined by $I_{\omega^i}(v) = \iota_v\omega^i$ for all $v\in \calD$. We shall refer to $A$ as the connecting automorphism between $\omega^1$ and $\omega^2$. The following proposition characterizes corank $2$ fat distributions. \begin{prop}\label{prop:fatEigenValue} If $\calD$ is fat at $x\in M$, then for some (and hence every) local defining form, the induced automorphism $A_x:\calD_x\to \calD_x$ has no real eigenvalue. Conversely, if $A_x$ has no real eigenvalue, then $\calD$ is fat at $x\in M$. \end{prop} \ifarxiv \begin{proof} The distribution $\calD$ is fat at $x$ if and only if for any $0\ne v\in \calD_x$ the map (see \autoref{defn:contOmegaRegular}) $$\calD_x \ni u\longmapsto \Big(\omega^1(u,v), \; \omega^2(u,v)\Big) = \Big(\omega^1(u,v), \; \omega^1(u, Av)\Big) = -\Big(\iota_v \omega^1, \; \iota_{Av}\omega^1\Big)(u)$$ is surjective, which is equivalent to linear independence of $\{v, Av\}$ for all $0 \ne v \in \calD_x$. Hence the proof follows. \end{proof} \fi Now, given a corank $2$ fat distribution $\calD$ on $M$, we would like to assign an integer to each point $x\in M$. \begin{defn} \label{defn:degreeOfFatDistribution} Let $\calD$ be a corank $2$ fat distribution on $M$. Then, at each point $x\in M$, we associate a positive integer $\deg(x,\calD)$ by, $$\text{$\deg(x,\calD)$ := degree of the minimal polynomial of the automorphism $A_x:\calD_x\to\calD_x$,}$$ where $A$ is the connecting automorphism as above, for a pair of local $1$-forms defining $\calD$ about the point $x$. \end{defn} \ifarxiv We need to check that this notion of degree is indeed well-defined. Suppose $$\calD \underset{loc.}{=} \ker\lambda^1 \cap \ker\lambda^2 = \ker\mu^1\cap \ker\mu^2,$$ where $\lambda^i,\mu^i$ are local $1$-forms around $x\in M$. Then we can write $$\mu^1 = p\lambda^1 + q\lambda^2,\quad \mu^2 = r\lambda^1 + s\lambda^2,$$ for some local $p,q,r,s\in C^\infty(M)$ such that $\begin{pmatrix} p & q \\ r & s\end{pmatrix}$ is nonsingular. Note that, $$d\mu^1|_\calD = pd\lambda^1|_\calD + qd\lambda^2|_\calD, \quad d\mu^2|_\calD= rd\lambda^1|_\calD + s d\lambda^2|_\calD.$$ Since $\calD$ is fat, we get a pair of (local) automorphisms $A,B:\calD\to\calD$ defined by $$d\lambda^1(u,Av) = d\lambda^2(u,v),\quad\forall u,v\in\calD \qquad\text{ and }\qquad d\mu^1(u,Bv) = d\mu^2(u,v),\quad\forall u,v\in\calD.$$ It follows from \autoref{prop:fatEigenValue} that $A_x$ and $B_x$ have no real eigenvalue. \begin{prop}\label{prop:fatDegMinPoly} The minimal polynomials of $A_x$ and $B_x$ have the same degree. \end{prop} \begin{proof} For simplicity, we drop the suffix $x$ in the proof. Note that $(pI + q A) B = r I + s A$, which gives \[B = (p I + q A)^{-1} (r I + s A);\] here $pI + qA$ is invertible since $A$ has no real eigenvalue (and $p \ne 0$ whenever $q = 0$, by the nonsingularity of the coefficient matrix). Now, if $T : D \to D$ is invertible, then $T^{-1}$ can be written as a polynomial in $T$. Consequently, $B$ can be written as a polynomial in $A$. Similarly, $A$ can be written as a polynomial in $B$ as well. Next, recall that for a linear map $T:D\to D$, the degree of the minimal polynomial $\mu_T$ is given by $$\deg \mu_T = \dim \Span \{ T^i, \; i\ge 0\} := \dim \langle T^i, \; i\ge 0\rangle.$$ Now, suppose $S = \sum_{i=0}^k c_iT^i$ is some polynomial expression in $T$. 
But then for any $i \ge 0$ we have $S^i \in \langle T^i, \; i\ge 0 \rangle = \langle I, T,\ldots, T^{d-1} \rangle$, where $d = \deg \mu_T$. Hence, $\deg \mu_S= \dim \langle S^i,\; i\ge 0\rangle \le d = \deg \mu_T$. The proof then follows. \end{proof} In particular, we thus have that the notion of degree at a point is independent of the choice of $\lambda^1, \lambda^2$. \begin{prop} Given a corank $2$ fat distribution $\calD$ on $M$, the map $x\mapsto \deg (x,\calD)$ is lower semi-continuous. \end{prop} \begin{proof} Without loss of generality, we assume that $\calD=\ker\lambda^1\cap\ker\lambda^2$. Suppose $d=\deg(x,\calD)$. Consider the map, \begin{align*} \phi:\calD &\to \Lambda^d\calD\\ v &\mapsto v\wedge Av \wedge \ldots \wedge A^{d-1} v \end{align*} where $A:\calD\to\calD$ is the connecting automorphism associated to $\omega^1,\omega^2$, where $\omega^i = d\lambda^i|_\calD$. Clearly, $\phi$ is continuous and there exists $v_0\in\calD_x$ such that $\phi(v_0)\ne 0$. Since $\phi$ is continuous and non-zero at $v_0$, it is non-vanishing on a neighborhood of $v_0$ in the total space of $\calD$; consequently, $\phi|_{\calD_y}$ is not identically zero for all $y$ in some neighborhood $U$ of $x$. Therefore, $\deg(y,\calD)\ge d$ for all $y\in U$. This proves the lower semi-continuity. \end{proof} \else The notion is independent of the choice of $\lambda^1, \lambda^2$. Furthermore, the map $x\mapsto \deg (x,\calD)$ is lower semi-continuous. \fi \begin{obs} \label{obs:fatDegree} We make a few observations about degree. \begin{enumerate} \item \label{obs:fatDegree:1} Since $A_x$ has no real eigenvalue, it follows that $\deg(x,\calD)$ is even for all $x$. \item \label{obs:fatDegree:2} Furthermore, $\deg(x,\calD) \le \frac{1}{2}\rk\calD$. Indeed, we note that the operators $A$ under consideration are \emph{skew-Hamiltonian} with respect to $\omega^1$. Recall that an operator $T:D\to D$ on a symplectic vector space $(D,\omega)$ is skew-Hamiltonian if $(u,v) \mapsto \omega(u, Tv)$ is a skew-symmetric tensor on $D$. The observation then follows from \cite{skewHamitonianStructure}. \item \label{obs:fatDegree:3} In particular, it then follows that a fat distribution of type $(4,6)$ is always of degree $2$. \end{enumerate} \end{obs} \begin{defn} A corank $2$ fat distribution $\calD$ on $M$ is said to have degree $d$, if $d = \deg(x,\calD)$ for every $x\in M$. \end{defn} \begin{example} \label{exmp:holomorphicContactDeg2} In the example of holomorphic contact structure \autoref{exmp:holomorphicContact}, the $2$-forms $d\lambda^1|_\calD$ and $d\lambda^2|_\calD$ are related by $d\lambda^1(u, Jv) = - d\lambda^2(u,v)$ for $u, v\in \calD$, where $J$ is the (integrable) almost complex structure on $TM$. Hence, the underlying real distribution is degree $2$ fat. \end{example} \begin{remark} We would like to remark here that there exist degree $2$ fat distribution germs which are not equivalent to holomorphic contact structures. The minimum dimension of a manifold admitting a corank $2$ fat distribution is $6$, in which case all fat distributions are of degree $2$. Hence, degree $2$ fat distribution germs form an open set. By a result of Montgomery (\cite{montGeneric}), generic distribution germs in this dimension do not admit a local frame which generates a finite dimensional Lie algebra. Hence, there are germs of degree $2$ fat distributions which are not equivalent to germs of holomorphic contact distributions. We also refer to a result of Cap and Eastwood (\cite{capEastwoodSpecialDim6}). \end{remark} \subsection{Linear Algebraic Interlude} We now study fatness from an algebraic viewpoint. 
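As a guiding model for the discussion that follows (this example is purely illustrative and is not used in any of the proofs below), one may keep in mind $D = \bbC^{2n}$, viewed as a real vector space with complex structure $J$, equipped with the real and imaginary parts of the standard complex symplectic form $\omega_\bbC = \sum_{j=1}^{n} dz_j\wedge dw_j$, that is, $\omega^1 = \Re\,\omega_\bbC$ and $\omega^2 = \Im\,\omega_\bbC$. Since $\omega_\bbC$ is complex bilinear, \[\omega^1(u, -Jv) = \Re\big(\omega_\bbC(u,-Jv)\big) = \Re\big(-i\,\omega_\bbC(u,v)\big) = \Im\,\omega_\bbC(u,v) = \omega^2(u,v),\] so the automorphism relating $\omega^1$ and $\omega^2$ (defined below) is $A = -J$, whose minimal polynomial is $t^2 + 1$; compare \autoref{exmp:holomorphicContactDeg2}.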
Consider a tuple $(D,\omega^1,\omega^2)$, where $D$ is a vector space equipped with a pair of linear symplectic forms $\omega^1,\omega^2$. We shall denote the pair $(\omega^1,\omega^2)$ by $\Omega$. Next, define an isomorphism $A:D\to D$ by $$\omega^1(u,Av) = \omega^2(u,v), \quad \text{for $u,v\in D$.}$$ For any subspace $V\subset D$ denote, $$V^{\perp_i} = \big\{w\in D \; \big|\; \omega^i(v,w) = 0,\forall v\in V\big\},\quad i=1,2, \qquad V^\Omega = V^{\perp_1}\cap V^{\perp_2}.$$ \begin{obs}\label{obs:fatTupleBasic} For any subspace $V\subset D$ we have the following. \begin{enumerate} \item \label{obs:fatTupleBasic:1} $V^{\perp_2} = \big(AV\big)^{\perp_1}, \quad V^{\perp_1} = A\big(V^{\perp_2}\big)$. \item \label{obs:fatTupleBasic:2} $V^\Omega = (V+AV)^{\perp_1} = (V+A^{-1}V)^{\perp_2}$. \item \label{obs:fatTupleBasic:3} The subspace $V^\Omega$ only depends on the linear span of the $2$-forms $\omega^1,\omega^2$. \end{enumerate} \end{obs} \begin{defn} A subspace $V\subset D$ is called $\Omega$-\emph{regular} if the linear map, \begin{align*} D &\to \hom(V,\bbR^2)\\ \xi &\mapsto \big(\iota_\xi \omega^1|_V, \iota_\xi \omega^2|_V\big) \end{align*} is surjective. $V$ is called $\Omega$-\emph{isotropic} if $V\subset V^\Omega$. \end{defn} We have the following characterization of regularity. \begin{prop}\label{prop:fatTupleOmegaRegular} For a subspace $V\subset D$, the following statements are equivalent. \begin{enumerate} \item \label{prop:fatTupleOmegaRegular:1} $V$ is $\Omega$-regular. \item \label{prop:fatTupleOmegaRegular:2} $V\cap AV = \{0\}$, i.e, $V+AV$ is a direct sum. \item \label{prop:fatTupleOmegaRegular:3} $\codim V^\Omega = 2\dim V$. \end{enumerate} \end{prop} \ifarxiv \begin{proof} From the definition, it is clear that $V$ is $\Omega$-regular if and only if $\codim V^\Omega = 2\dim V$. This proves (\ref{prop:fatTupleOmegaRegular:1}) $\Leftrightarrow$ (\ref{prop:fatTupleOmegaRegular:3}). To prove (\ref{prop:fatTupleOmegaRegular:2}) $\Leftrightarrow$ (\ref{prop:fatTupleOmegaRegular:3}), note that $$\codim V^\Omega = \codim (V+AV)^{\perp_1} = \dim (V+AV),$$ as $\omega^1$ is nondegenerate. Hence, $\codim V^\Omega = 2\dim V$ if and only if $V+AV$ is a direct sum. \end{proof} \fi It is clear from the above proposition that $\Omega$-regularity of a subspace only depends on the span of the $2$-forms $\omega^1,\omega^2$. \begin{defn} A tuple $(D,\omega^1,\omega^2)$ is called \emph{fat} if every one dimensional subspace of $D$ is $\Omega=(\omega^1,\omega^2)$-regular. \end{defn} Clearly, fatness is equivalent to saying that $A$ has no real eigenvalue, where $A:D\to D$ is the connecting automorphism (\autoref{prop:fatEigenValue}). \begin{defn} A fat tuple $(D,\omega^1,\omega^2)$ is said to have \emph{degree $d$} if the minimal polynomial of $A$ has degree $d$. \end{defn} \begin{prop}\label{prop:fatTupleVAV} Let $(D,\omega^1,\omega^2)$ be a degree $2$ fat tuple. Then, for any subspace $V$ of $D$, \begin{enumerate} \item \label{prop:fatTupleVAV:1} $V+AV=V+A^{-1}V$. \item \label{prop:fatTupleVAV:2} $V^\Omega = (V+AV)^{\perp_1} = (V+AV)^{\perp_2} = (V+AV)^\Omega$. \item \label{prop:fatTupleVAV:3} $(V^\Omega)^\Omega = V+AV$. \item \label{prop:fatTupleVAV:4} If $V$ is $\Omega$-isotropic then $(V^\Omega)^\Omega$ is $\Omega$-isotropic. \end{enumerate} \end{prop} \begin{proof} Since the minimal polynomial of $A$ is of degree 2, it follows that $A^{-1}=\lambda I-\mu A$ for some real numbers $\lambda, \mu$ with $\mu \ne 0$. 
Hence $V+AV=V+A^{-1}V$ for any subspace $V$ of $D$, proving (\ref{prop:fatTupleVAV:1}). Proof of (\ref{prop:fatTupleVAV:2}) now follows directly from \autoref{obs:fatTupleBasic} (\ref{obs:fatTupleBasic:2}). Furthermore, (\ref{prop:fatTupleVAV:2}) implies (\ref{prop:fatTupleVAV:3}): $$(V^\Omega)^\Omega = (V^\Omega)^{\perp_1}\cap (V^\Omega)^{\perp_2} = V+AV.$$ To prove (\ref{prop:fatTupleVAV:4}), let $V$ be $\Omega$-isotropic, i.e, $V\subset V^\Omega$, which implies $({V^\Omega})^\Omega \subset V^\Omega$. On the other hand, by (\ref{prop:fatTupleVAV:2}) and (\ref{prop:fatTupleVAV:3}) we have $V^\Omega=(V+AV)^\Omega$ and $V+AV=(V^\Omega)^\Omega$. This proves that $(V^\Omega)^\Omega$ is $\Omega$-isotropic. \end{proof} \begin{remark}\label{rmk:horizontalNecessaryDimension} If $V \subset D$ is both $\Omega$-regular and $\Omega$-isotropic, then we conclude from \autoref{prop:fatTupleOmegaRegular} and \autoref{prop:fatTupleVAV} that $\dim V \le \frac{1}{4} \dim D$. \end{remark} The following results will be useful later in \autoref{sec:application} when we shall discuss $h$-principle results for $K$-contact immersions in degree $2$ fat distributions. \begin{prop}\label{prop:fatTupleExtension} Let $(D,\omega^1,\omega^2)$ be a degree $2$ fat tuple. Then, for any $(\omega^1,\omega^2)$-regular subspace $V\subset D$ and for any $\tau \not \in (V^\Omega)^\Omega$, the subspace $V^\prime = V+\langle \tau\rangle$ is again $(\omega^1,\omega^2)$-regular. \end{prop} \begin{proof} Let $V$ be $\Omega$-regular and $\tau \not \in (V^\Omega)^\Omega = V+AV$. We only need to show that $(V+AV)\cap \langle \tau, A\tau \rangle = 0$. Clearly, $\dim (V+AV) \cap \langle \tau, A\tau \rangle < 2$. Since the minimal polynomial of $A$ has degree $2$, both the subspaces $V+AV$ and $\langle \tau, A\tau\rangle$ are invariant under $A$. Consequently, their intersection is also invariant under $A$. As $A$ has no real eigenvalue, this intersection \emph{cannot} be $1$-dimensional. This concludes the proof. \end{proof} \begin{prop}\label{prop:fatTupleIsoContact} Let $(D,\omega^1,\omega^2)$ be a fat tuple. Suppose $V\subset D$ is symplectic with respect to $\omega^1$ and isotropic with respect to $\omega^2$. Then, \begin{enumerate} \item \label{prop:fatTupleIsoContact:1} $V$ is $(\omega^1,\omega^2)$-regular. \end{enumerate} If $(D,\omega^1,\omega^2)$ is of degree $2$, then \begin{enumerate} \item[(2)] \label{prop:fatTupleIsoContact:2} $V^\Omega \cap(V^\Omega)^\Omega = 0$. \item[(3)] \label{prop:fatTupleIsoContact:3} $V^\Omega$ and $(V^\Omega)^\Omega$ are symplectic with respect to both $\omega^1,\omega^2$. \end{enumerate} \end{prop} \begin{proof} To prove (\ref{prop:fatTupleIsoContact:1}), we need to show that $V\cap AV=0$, where $A$ is the automorphism defined by $\omega^1(u,Av) = \omega^2(u,v)$. Let $z \in V\cap AV$. Then there exists a $v\in V$ such that $z = Av$. Now, for any $u \in V$ we have $\omega^1(u,z) = \omega^1(u,Av) = \omega^2(u,v) = 0$, as $V$ is $\omega^2$-isotropic. Since $V$ is $\omega^1$-symplectic, we conclude that $z = 0$. Hence, $V\cap AV = 0$ and thus, $V$ is $(\omega^1,\omega^2)$-regular. Since $V$ is $\omega^2$-isotropic, $V$ and $AV$ are $\omega^1$-orthogonal, that is $\omega^1(V, AV)=0$. On the other hand, $AV$ is also an $\omega^1$-symplectic subspace: writing the minimal polynomial of $A$ as $t^2 + bt + c$ with $c \ne 0$, for $u,v\in V$ we have $\omega^1(Au, Av) = \omega^2(Au, v) = -\omega^1(v, A^2 u) = b\,\omega^2(v,u) + c\,\omega^1(v,u) = -c\,\omega^1(u,v)$, using the $\omega^2$-isotropy of $V$; since $V$ is $\omega^1$-symplectic, so is $AV$. These two observations imply that $V+AV$ is $\omega^1$-symplectic. By \autoref{prop:fatTupleVAV} (\ref{prop:fatTupleVAV:2}) we then conclude that $(V+AV)$ is also $\omega^2$-symplectic. 
This proves (\hyperref[prop:fatTupleIsoContact:2]{2}) and (\hyperref[prop:fatTupleIsoContact:3]{3}) by recalling \autoref{prop:fatTupleVAV} again. \end{proof} \section{Appendix: Preliminaries of $h$-principle} \label{sec:generalHPrinciple} In this section we briefly recall certain techniques in the theory of $h$-principle. We refer to \cite{gromovBook} for a detailed discussion on this theory. Let $p:X\to V$ be a smooth fibration and $X^{(r)}\to V$ be the $r$-jet bundle associated with $p$. The space $\Gamma X$ consisting of smooth sections of $X$ has the $C^\infty$-compact open topology, whereas $\Gamma X^{(r)}$ has the $C^0$-compact open topology. Any differential condition on sections of the fibration defines a subset in the jet space $X^{(r)}$, for some integer $r \ge 0$. Hence, in the language of $h$-principle, a \textit{differential relation} is by definition a subset $\calR\subset X^{(r)}$, for some $r\geq 0$. A section $x$ of $X$ is said to be a \textit{solution} of the differential relation $\calR$ if its $r$-jet prolongation $j^r_x : V \to X^{(r)}$ maps $V$ into $\calR$. Let $\Sol \calR$ denote the space of smooth solutions of $\calR$ and let $\Gamma \calR$ denote the space of sections of the jet bundle $X^{(r)}$ having their images in $\calR$. The $r$-jet map then takes $\Sol \calR$ into $\Gamma \calR$; in fact, this is an injective map, so that $\Sol \calR$ may be viewed as a subset of $\Gamma \calR$. Any section in the image of this map is called a \emph{holonomic} section of $\calR$. \begin{defn} If every section of $\calR$ can be homotoped to a solution of $\calR$ then we say that $\calR$ satisfies the \textit{ordinary $h$-principle} (or simply, \emph{$h$-principle}). \end{defn} \begin{defn} We say that $\calR$ satisfies the \textit{parametric $h$-principle} if $j^r:\Sol \calR\to \Gamma \calR$ is a weak homotopy equivalence; this means that the solution space of $\calR$ is classified by the space $\Gamma \calR$. $\calR$ satisfies the \emph{local} (parametric) $h$-principle if $j^r$ is a local weak homotopy equivalence. \end{defn} \begin{defn} $\calR$ is said to satisfy the \emph{$C^0$-dense} $h$-principle if for every $F_0 \in \Gamma\calR$ with base map $f_0 = \bs F_0$ and for any neighborhood $U$ of $\im f_0$ in $X$, there exists a homotopy $F_t \in \Gamma\calR$ joining $F_0$ to a holonomic $F_1 = j^r_{f_1}$ such that the base map $f_t = \bs F_t$ satisfies $\im f_t \subset U$ for all $t\in [0,1]$. \end{defn} We shall now state the main results of the sheaf technique and the analytic technique, the combination of which gives the global $h$-principle for many interesting relations, including closed relations arising from partial differential equations. \subsection{Sheaf Technique in $h$-Principle}\label{sec:generalHPrinciple:sheaf} We begin with some terminology of topological sheaves $\Phi$ on a manifold $V$. For any arbitrary set $C\subset V$, we denote by $\Phi(C)$ the collection of sections of $\Phi$ defined on some arbitrary open neighborhood $\Op C$. \begin{defn} \label{defn:sheafMicroFlexible} A topological sheaf $\Phi$ is called \emph{flexible} (resp. \emph{microflexible}) if for every pair of compact sets $A\subset B\subset V$, the restriction map $\rho_{B,A}:\Phi(B)\to\Phi(A)$ is a Serre fibration (resp. microfibration). 
Recall that $\rho_{B,A}$ is a microfibration if every homotopy lifting problem $(F,\tilde{F}_0)$, where $F:P\times I\to \Phi(A)$ and $\tilde{F}_0:P\to \Phi(B)$ are (quasi)continuous maps, admits a partial lift $\tilde{F}:P\times [0,\varepsilon]\to \Phi(B)$ for some $\varepsilon>0$. \end{defn} \begin{defn} Given two sheaves $\Phi,\Psi$ on $V$, a sheaf morphism $\alpha : \Phi \to \Psi$ is called a \emph{weak homotopy equivalence} if, for each open $U\subset V$, $\alpha(U):\Phi(U)\to \Psi(U)$ is a weak homotopy equivalence. The map $\alpha$ is a \emph{local} weak homotopy equivalence if, for each $v\in V$, the induced map $\alpha _v : \Phi(v) \to \Psi(v)$ on the stalk is a weak homotopy equivalence. \end{defn} We now quote a general result from the theory of topological sheaves. \begin{theorem}[Sheaf Homomorphism Theorem]\label{thm:sheafHomoTheorem} Every local weak homotopy equivalence $\alpha:\Phi\to\Psi$ between flexible sheaves $\Phi,\Psi$ is a weak homotopy equivalence. \end{theorem} Now, suppose $\Phi$ is the sheaf of solutions of a relation $\calR\subset X^{(r)}$, and $\Psi$ is the sheaf of sections of $\calR$. Then we have the obvious sheaf homomorphism given by the $r$-jet map, $J = j^r :\Phi \to \Psi$. In this case, the sheaf $\Psi$ is always flexible. Hence, if $\Phi$ is flexible and $J$ is a local weak homotopy equivalence, then the relation $\calR$ satisfies the parametric $h$-principle. But in general $\Phi$ fails to be flexible, though the solution sheaves for many relations do satisfy the micro-flexibility property. The following theorem gives a sufficient condition for flexibility of the solution sheaf when restricted to a submanifold of positive codimension. For any manifold $V$, let $\diff(V)$ denote the pseudogroup of local diffeomorphisms of $V$. \begin{theorem}[Flexibility Theorem]\label{thm:mainFlexibilityTheorem} Let $\Phi$ be a microflexible sheaf and $V_0 \subset V$ be a submanifold of positive codimension. If $\Phi$ is invariant under the action of a certain subset of $\diff(V)$ which sharply moves $V_0$, then the restriction sheaf $\Phi|_{V_0}$ is flexible. (That is, for any compact sets $A, B \subset V_0$ with $A\subset B$, the restriction map $\rho_{B,A}$ is a fibration.) \end{theorem} We refer to \cite[pg. 82]{gromovBook} for the definition of sharply moving diffeotopies and also to \cite[pg. 139]{eliashbergBook} for the related notion of capacious subgroups. \begin{example}\label{exmp:sharplyMovingDiffeo} We mention two important classes of sharply moving diffeotopies here which will be of interest to us. \begin{enumerate} \item If $V= V_0\times \bbR$ with the natural projection $\pi : V\to V_0$, then we can identify a subpseudogroup $\diff(V,\pi)\subset\diff(V)$, which consists of fiber preserving local diffeomorphisms of $V$, i.e, the ones commuting with the projection $\pi$. It follows that $\diff(V,\pi)$ sharply moves $V_0$. \item Let $K$ be a contact structure on $V$. Then, the collection of contact diffeotopies of $V$ sharply moves any submanifold $V_0 \subset V$ (\cite[pg. 339]{gromovBook}). \end{enumerate} \end{example} As a consequence of the above theorem we get the following result. \begin{theorem} \label{thm:openManifoldHPrin} Let $V_0 \subset V$ be a submanifold of positive codimension. A relation $\calR$ satisfies the parametric $h$-principle near $V_0$ provided the following conditions hold: \begin{enumerate} \item $\calR$ satisfies the local $h$-principle. 
\item the solution sheaf of $\calR$ satisfies the hypothesis of \autoref{thm:mainFlexibilityTheorem}. \end{enumerate} \end{theorem} It can be easily seen that any \emph{open} relation satisfies the local $h$-principle and its solution sheaf is microflexible. A large class of \emph{non-open} relations also enjoys the same properties, as we shall discuss below.\smallskip Let $J^r(\Sigma,M)$ denote the $r$-jet space associated with smooth maps from a manifold $\Sigma$ to $M$ (which may be considered as sections of a trivial fiber bundle). In order to study the $h$-principle for a relation $\calR \subset J^r(\Sigma,M)$ on an arbitrary manifold $\Sigma$, the general idea is to first embed $\Sigma$ in a higher dimensional manifold $\tilde{\Sigma}$. The natural restriction morphism $C^\infty(\tilde{\Sigma},M) \to C^\infty(\Sigma, M)$ then induces a map $\rho:J^r(\tilde{\Sigma},M)|_\Sigma \to J^r(\Sigma,M)$. A relation $\tilde{\calR}$ on $\tilde{\Sigma}$ is called an \textit{extension} of $\calR$ if $\rho$ maps $\tilde{\calR}|_\Sigma$ into $\calR$. \begin{defn}\label{defn:microextension} An extension $\tilde{\calR}$ is called a \emph{microextension} of $\calR$ if the induced maps $\rho_* : \Gamma(\tilde \calR|_O) \to \Gamma(\calR|_O)$ are surjective for contractible open sets $O \subset \Sigma$. \end{defn} \subsection{Analytic Technique in $h$-Principle}\label{sec:generalHPrinciple:analytic} Suppose $X\to V$ is a fibration and $G\to V$ is a vector bundle. Let us consider a $C^\infty$-differential operator $\frD: \Gamma X\to \Gamma G$ of order $r$, given by the $C^\infty$-bundle map $\Delta : X^{(r)}\to G$, known as the \emph{symbol} of the operator, satisfying $$\Delta\circ j^r_x = \frD(x), \quad \text{for $x \in \Gamma X$.}$$ Suppose that $\frD$ is \emph{infinitesimally invertible} over a subset $\calS \subset \Gamma X$, where $\calS$ consists of all $C^\infty$-solutions of a $d$-th order \emph{open} relation $S\subset X^{(d)}$, for some $d\ge r$. Roughly speaking, this means that there exists an integer $s\geq 0$ such that for each $x\in \calS$, the linearization of $\frD$ at $x$ admits a right inverse, which is a linear differential operator of order $s$. The integer $s$ is called the \emph{order} of the inversion, while $d$ is called the \emph{defect}. The elements of $\calS$ are referred to as \emph{$S$-regular} (or simply, \emph{regular}) maps. It follows from the Nash-Gromov Implicit Function Theorem (\cite[pg. 117]{gromovBook}) for smooth differential operators that $\frD$ restricted to $\calS$ is an open map with respect to the fine $C^\infty$-topologies, if the operator is infinitesimally invertible on $\calS$. In particular, it implies that $\frD$ is locally invertible at $S$-regular maps. Explicitly, if $x_0\in\calS$ and $\frD(x_0)=g_0$, then there exists a neighborhood $\calV_0$ of the zero section in $\Gamma G$ and an operator $\frD_{x_0}^{-1}:\calV_0\to \calS$ such that for all $g\in\calV_0$ we have $\frD(\frD_{x_0}^{-1}(g))=g_0+g$. We shall call $\frD_{x_0}^{-1}$ a local inverse of $\frD$ at $x_0$. \begin{defn} Fix some $g\in\Gamma G$. A germ $x_0\in S$ at a point $v \in V$ is called an \emph{infinitesimal solution} of $\frD(x)=g$ of order $\alpha$ if $j^\alpha_{\frD(x_0) - g}(v) = 0$. \end{defn} Let $\calR^\alpha(\frD, g)\subset X^{(r+\alpha)}$ denote the relation consisting of jets represented by \emph{infinitesimal solutions} of $\frD(x)=g$ of order $\alpha$, at points of $V$. 
For $\alpha\geq d-r$, define the relations $\calR_\alpha$ as follows: $$\calR_\alpha :=\calR^{\alpha}\cap (p_d^{r + \alpha})^{-1}S,$$ where $p^{r+\alpha}_d:X^{(r+\alpha)}\to X^{(d)}$ is the canonical projection of the jet spaces. Then, for all $\alpha\ge d-r$, the relations $\calR_\alpha$ have the same set of $C^\infty$-solutions, namely, the $S$-regular $C^\infty$-solutions of $\frD(x)=g$. Denote the sheaf of solutions of any such $\calR_\alpha$ by $\Phi$ and let $\Psi_\alpha$ denote the sheaf of sections of $\calR_\alpha$. \begin{theorem}\label{thm:microflexibleLocalWHESheafTheorem} Suppose $\frD$ is a smooth differential operator of order $r$, which admits an infinitesimal inversion of order $s$ and defect $d$ on an open subset $S \subset X^{(d)}$, where $d \ge r$. Then for $\alpha\ge \max\{d+s, 2r+2s\}$ the jet map $j^{r+\alpha} : \Phi\to\Psi_\alpha$ is a local weak homotopy equivalence. Also, $\Phi$ is a microflexible sheaf. \end{theorem} We end this section with a theorem on the Cauchy initial value problem associated with the equation $\frD(x) = g$. \begin{theorem}\cite[pg. $144$]{gromovBook} \label{thm:consistenInversion} Suppose $\frD$ is a differential operator of order $r$, admitting an infinitesimal inversion of order $s$ and defect $d$ over $\calS$. Let $x_0\in \calS$ and $g_0 = \frD(x_0)$. Suppose $V_0\subset V$ is a codimension $1$ submanifold without boundary and $g\in \Gamma G$ satisfies $$j^{l}_g|_{V_0} = j^{l}_{g_0}|_{V_0} \text{ for some }l \ge 2r + 3s + \max\{d, 2r + s\}.$$ Then, there exists an $x\in\calS$ such that $\frD(x)= g$ on ${\Op V_0}$ and $$j^{2r+s-1}_x|_{V_0} = j^{2r+s-1}_{x_0}|_{V_0}.$$ \end{theorem} The above result follows from a stronger version of the Implicit Function Theorem. \section{Introduction} A \emph{distribution} on a manifold $M$ is a subbundle $\calD$ of the tangent bundle $TM$. The sections of $\calD$ constitute a distinguished subspace $\Gamma(\calD)$ in the space of all vector fields on $M$. On one end there are \emph{involutive} distributions for which $\Gamma(\calD)$ is closed under the Lie bracket operation, while at the polar opposite there are \emph{bracket-generating distributions} for which the local sections of $\calD$ generate the whole tangent bundle under successive Lie bracket operations. A celebrated theorem due to Chow says that if $\calD$ is a bracket-generating distribution on a manifold $M$, then any two points of $M$ can be joined by a $C^\infty$-path horizontal (that is, tangential) to $\calD$ (\cite{chowBracketGenerating}). This is the starting point of the study of subriemannian geometry. Chow's theorem is clearly not true for involutive distributions since by the Frobenius theorem the set of points that can be reached by horizontal paths from a given point is an (integral) submanifold of dimension equal to the rank of $\calD$. Given an arbitrary manifold $\Sigma$ there is a distinguished class of maps $u:\Sigma\to (M,\calD)$ such that $T\Sigma$ is mapped into $\calD$ under the derivative map of $u$. Such maps are called $\calD$-horizontal maps or simply horizontal maps. If $u$ is an embedding then the image of $u$ is called a \emph{horizontal submanifold} of the given distribution $\calD$. An immediate question that arises after Chow's theorem is the following: For a given distribution $\calD$ on $M$ and a given point $x\in M$, what is the maximum dimension of a (local) horizontal submanifold through $x$? 
More generally, can we classify $\calD$-horizontal immersions (or embeddings) of a given manifold into $(M,\calD)$ up to homotopy? These questions have been studied in generality by Gromov, and the answers can be given in the language of $h$-principle. Horizontal immersions of a manifold $\Sigma$ in $(M,\calD)$ can be realized as solutions to a first order partial differential equation associated with a differential operator $\frD$ defined on $C^\infty(\Sigma, M)$ and taking values in the space of $TM/\calD$-valued $1$-forms. If $\calD$ is globally defined as the common kernel of independent $1$-forms $\lambda^i$ on $M$ for $i=1,\ldots, p$, then the operator can be expressed as $$\frD : u \mapsto \big(u^*\lambda^1,\ldots, u^*\lambda^p\big).$$ This operator is infinitesimally invertible on \emph{$\Omega$-regular} horizontal immersions (\autoref{defn:contOmegaRegular}), where $\Omega$ is the curvature form of $\calD$. It follows from an application of the Nash-Gromov Implicit Function Theorem that $\frD$ is locally invertible on $\Omega$-regular immersions. An integrable distribution $\calD$ has vanishing curvature form; as a consequence there are no $\Omega$-regular immersions. In order to have an $\Omega$-regular horizontal immersion of a $k$-dimensional manifold $\Sigma$, it is necessary that $k(p+1)\leq \rk\calD$. Gromov proves that for \emph{generic} distribution germs this is also sufficient. Moreover, with sheaf theoretic techniques, he obtains the $h$-principle for horizontal immersions satisfying the `overregularity' condition, which demands that $(k+1)(p+1)\leq \rk\calD$. Gromov, however, conjectures an $h$-principle for $\Omega$-regular horizontal immersions under the condition $k(p+1)< \rk\calD$, the operator being underdetermined in this range. Among all the bracket-generating distributions, the \emph{contact structures} have been studied most extensively (\cite{geigesBook}). These are corank $1$ distributions on odd-dimensional manifolds, which are maximally non-integrable. In other words, a contact structure $\xi$ is locally given by a $1$-form $\alpha$ such that $\alpha\wedge (d\alpha)^n$ is non-vanishing, where the dimension of the manifold is $2n+1$. Since $d\alpha$ is non-degenerate on $\xi$, the maximal dimension of a horizontal submanifold of $\xi$ as above must be $n$. The $n$-dimensional horizontal submanifolds of a contact structure are called \emph{Legendrians}. Locally, there are plenty of $n$-dimensional horizontal (Legendrian) submanifolds due to Darboux charts. Globally, the Legendrian immersions and the `loose' Legendrian embeddings are completely understood in terms of $h$-principle (\cite{gromovBook,duchampLegendre,murphyLooseLegendrianThesis}). Any horizontal immersion in a contact structure is $\Omega$-regular by default. Moreover, one does not require the overregularity condition to obtain the $h$-principle. Beyond the corank $1$ situation, very few cases are completely understood. \emph{Engel structures}, which are certain rank $2$ distributions on $4$-dimensional manifolds (\cite{engelOriginal}), have been studied in depth in recent years, and the question of existence and classification of horizontal loops in a given Engel structure has been solved (\cite{adachiEngelLoops,casalsPinoEngelKnot}). We also refer to \cite{dAmbraSubbundle} for horizontal immersions in products of contact manifolds. The simplest invariant for distribution germs is given by a pair of integers $(n,p)$ where $n=\dim M$ and $p=\text{corank }\calD$. The germs of contact and Engel structures are generic in their respective classes. 
They also admit local frames generating finite dimensional Lie algebra structures. The only other distributions that have the same properties are the class of even contact structures and the $1$-dimensional distributions. All of these lie in the range $p(n-p) \le n$. But in the range $p(n-p) > n$, generic distribution germs do not admit local frames which generate a finite dimensional Lie algebra, due to the presence of function moduli (\cite{montGeneric}). The contact distributions are the simplest kind of \emph{strongly} bracket generating distribution. A distribution $\calD$ is called strongly bracket generating if every non-vanishing vector field in $\calD$, about a point $x\in M$, Lie bracket generates the tangent space $T_x M$ in $1$-step. Strongly bracket generating distributions are also referred to as \emph{fat distributions} in the literature. In fact, corank 1 fat distributions are the same as the contact ones. The germs of fat distributions in higher corank are far from being generic (\cite{raynerFat}). However, they are interesting in their own right and have been well-studied (\cite{geHorizontalCC, montTour}). The notion of contact structures can be extended verbatim to complex manifolds (\autoref{exmp:holomorphicContact}). There is also a quaternionic analogue of contact structures, which are corank 3 distributions on $(4n+3)$-manifolds (\autoref{exmp:quaternionicContact}). Both the distributions mentioned above enjoy the fatness property. Moreover, these distributions admit local frames generating finite dimensional Lie algebras, namely, the complex Heisenberg Lie algebra and the quaternionic Heisenberg Lie algebra in place of the real Heisenberg algebra for contact structures (\cite{montTour}). Hence they are not generic.\medskip In this article we mainly focus on the existence of smooth horizontal immersions in a certain class of corank $2$ fat distributions, which are referred to here as `degree $2$ fat distributions' (\autoref{defn:degreeOfFatDistribution}) and which include the underlying real bundles of holomorphic contact structures. The main results may be stated as follows. \begin{theoremX} [\autoref{thm:hPrinHorizImmFatDeg2}, \autoref{thm:existenceHorizImm}] \label{thm:mainTheoremHorizDeg2} Let $\calD$ be a degree 2 fat distribution on a manifold $M$. Then $\calD$-horizontal $\Omega$-regular immersions $\Sigma\to (M,\calD)$ satisfy the $C^0$-dense $h$-principle provided $\rk \calD \ge 4\dim \Sigma$. Furthermore, any map $\Sigma\to M$ can be $C^0$-approximated by an $\Omega$-regular $\calD$-horizontal immersion provided $\rk \calD \ge \max \{4\dim \Sigma, 5\dim \Sigma - 3\}$. \end{theoremX} The holomorphic Legendrian embeddings of an open Riemann surface into $\bbC^{2n+1}$, with the standard holomorphic contact structure, are known to satisfy the Oka principle (\cite{forstnericLegendrianCurves, forstnericLegendrianCurvesProjectivised}). We would also like to emphasize that the $h$-principle obtained in \autoref{thm:mainTheoremHorizDeg2} is in the optimal range (\autoref{rmk:horizontalOptimalRange}) and we do not need any overregularity condition on the immersions. More generally, we can consider immersions $u : \Sigma \to M$ which induce a specific distribution $K$ on the domain, i.e, $K = du^{-1}\calD$. These are called $K$-isocontact immersions. The $h$-principle in \autoref{thm:mainTheoremHorizDeg2} is derived as a consequence of the following. 
\begin{theoremX} [\autoref{thm:hPrinIsocontactDeg2}, \autoref{thm:existenceIsocontact}] \label{thm:mainTheoremIsoContDeg2} Let $K$ be a given contact structure on $\Sigma$ and $\calD$ be a degree $2$ fat distribution on $M$. Then, $K$-isocontact immersions $\Sigma \to M$ satisfy the $C^0$-dense $h$-principle provided $\rk \calD \ge 2\rk K + 4$. Furthermore, if $K,\calD$ are cotrivial, then any map $\Sigma \to M$ can be $C^0$-approximated by a $K$-isocontact immersion provided $\rk \calD\ge \max \{2 \rk K + 4, 3\rk K - 2\}$. \end{theoremX} We also obtain $h$-principles for horizontal and isocontact immersions into quaternionic contact distributions. \begin{theoremX} [\autoref{thm:hPrinHorizImmQCont}, \autoref{thm:existenceHorizImmQCont}] \label{thm:mainTheoremHorizQuat} Let $\calD$ be a quaternionic contact structure on $M$. Then, $\calD$-horizontal immersions $\Sigma\to M$ satisfy the $C^0$-dense $h$-principle provided $\rk \calD \ge 4\dim\Sigma + 4$. Furthermore, any map $\Sigma\to M$ can be $C^0$-approximated by a $\calD$-horizontal immersion provided $\rk \calD \ge \max\{4 \dim \Sigma + 4, 5\dim\Sigma - 3\}$. \end{theoremX} \begin{theoremX} [\autoref{thm:hPrinIsocontactQCont}, \autoref{thm:existenceIsocontactQCont}] \label{thm:mainTheoremIsocontactQCont} Let $K$ be a given contact structure on $\Sigma$ and $\calD$ be a quaternionic contact structure on $M$. Then, $\Omega$-regular $K$-isocontact immersions $\Sigma \to M$ satisfy the $C^0$-dense $h$-principle provided $\rk \calD \ge 4\rk K + 4$. Furthermore, if $K,\calD$ are cotrivial, then any map $\Sigma \to M$ can be $C^0$-approximated by an $\Omega$-regular, $K$-isocontact immersion provided $\rk \calD\ge \max \{4 \rk K + 4, 6\rk K - 2\}$. \end{theoremX} The results follow from the general theory of $h$-principle by applying the sheaf theoretic and analytic techniques. The article is organized as follows. In \autoref{sec:revisitKContact}, we discuss in detail the $h$-principle of $\Omega$-regular $K$-contact immersions and revisit Gromov's Approximation Theorem for overregular immersions. Next, in \autoref{sec:fatAndDegree} we introduce the notion of `degree' on corank $2$ fat distributions and study their algebraic properties. In \autoref{sec:application} we apply the general results of \autoref{sec:revisitKContact} to prove the main theorems and then discuss some implications of these theorems in symplectic geometry. In the Appendix (\autoref{sec:generalHPrinciple}), we recall briefly the language and the main theorems of $h$-principle in order to make the article self-contained. \subsection{Proof of \autoref{lemma:jetLiftingIsoCont}} \label{sec:jetLiftingLemmaProof} To simplify the notation, we assume that $K=T\Sigma$, i.e, we prove the statement for the relation $\relHor$. The argument for a general $K$ is similar, albeit cumbersome. As the lemma is local in nature, without loss of generality we assume $\calD$ is cotrivializable and hence let us write $\calD=\bigcap_{s=1}^p\ker\lambda^s$ for $1$-forms $\lambda^1,\ldots,\lambda^p$ on $M$. 
We denote the tuples $$\lambda = (\lambda^s)\in\Omega^1(M,\bbR^p) \text{ and } d\lambda = (d\lambda^s)\in\Omega^2(M,\bbR^p).$$ We need to consider the three operators: $$u \mapsto u^*\lambda, \qquad u\mapsto u^*d\lambda, \qquad\text{the exterior derivative operator, $d : \Omega^1(\Sigma,\bbR^p) \to \Omega^2(\Sigma,\bbR^p)$.}$$ We have their respective symbols: \begin{itemize} \item We have the bundle map $\Delta_\lambda : J^1(\Sigma,M) \to \Omega^1(\Sigma, \bbR^p)$ so that, $\Delta_\lambda\big(j^1_u\big) = u^*\lambda = \big(u^*\lambda^s\big)$. Explicitly, $$\Delta_\lambda(x,y, F:T_x\Sigma\to T_y M) = \big(x, \lambda|_y\circ F\big).$$ \item We have the bundle map $\Delta_{d \lambda} : J^1(\Sigma, M) \to \Omega^2(\Sigma, \bbR^p)$ so that, $\Delta_{d\lambda}\big(j^1_u\big) = u^*d\lambda = \big(u^*d\lambda^s\big)$. Explicitly, $$\Delta_{d\lambda}(x,y, F:T_x\Sigma\to T_y M) = \big(x, F^*d\lambda|_y\big).$$ \item We have the bundle map $\Delta_d : \Omega^1(\Sigma,\bbR^p)^{(1)} \to \Omega^2(\Sigma,\bbR^p)$ so that, $\Delta_d(j^1_\alpha) = d\alpha$. Explicitly, $$\Delta_d\big(x,\alpha, F:T_x\Sigma\to \hom(T_x\Sigma,\bbR^p)\big) = \big(x, (X\wedge Y) \mapsto F(X)(Y) - F(Y)(X)\big).$$ \end{itemize} \paragraph{\bfseries Jet Prolongation of Symbols:} Recall that given some arbitrary $r^\text{th}$-order operator $\frD:\Gamma X\to \Gamma G$ represented by the symbol $\Delta : X^{(r)} \to G$ as, $\Delta(j^r_u) = \frD(u)$, we have the $\alpha$-jet prolongation, $\Delta^{(\alpha)} : X^{(r+\alpha)} \to G^{(\alpha)}$ defined as $$\Delta^{(\alpha)}(j^{r+\alpha}_u(x)) = j^\alpha_{\frD(u)}(x).$$ Then, for any $\alpha\ge \beta$ we have $p^{\alpha}_\beta \circ \Delta^{(\alpha)} = \Delta^{(\beta)} \circ p^{r + \alpha}_{r+\beta}$. Let us observe the following interplay between the symbols of the operators introduced above. 
\begin{itemize} \item We have the commutative diagram \[\begin{tikzcd} J^{\alpha+1}(\Sigma, M) \arrow{r}{\Delta_\lambda^{(\alpha)}} \arrow{d}[swap]{p^{\alpha+1}_\alpha} & \Omega^1(\Sigma, \bbR^p)^{(\alpha)} \arrow{d}{\Delta_d^{(\alpha-1)}} \\ J^{\alpha}(\Sigma, M) \arrow{r}[swap]{\Delta_{d\lambda}^{(\alpha-1)}} & \Omega^2(\Sigma, \bbR^p)^{(\alpha-1)} \end{tikzcd}\] Indeed, we observe $$\Delta_d^{(\alpha-1)}\circ \Delta_\lambda^{(\alpha)}\big(j^{\alpha+1}_u(x)\big) = \Delta_d^{(\alpha-1)}\big(j^\alpha_{u^*\lambda}(x)\big) = j^{\alpha-1}_{d\big(u^*\lambda\big)}(x) = j^{\alpha-1}_{u^*d\lambda}(x) = \Delta_{d\lambda}^{(\alpha-1)}\big(j^{\alpha}_u(x)\big),$$ and hence, we get $$\Delta_d^{(\alpha-1)}\circ \Delta_\lambda^{(\alpha)} = \Delta_{d\lambda}^{(\alpha-1)}\circ p^{\alpha+1}_\alpha.$$ \item We have the two commutative diagrams \[\begin{tikzcd} J^{\alpha+1}(\Sigma,M) \arrow{r}{\Delta_\lambda^{(\alpha)}} \arrow{d}[swap]{p^{\alpha+1}_\alpha} & \Omega^1(\Sigma,\bbR^p)^{(\alpha)} \arrow{d}{p^\alpha_{\alpha-1}} \\ J^\alpha(\Sigma,M) \arrow{r}[swap]{\Delta_\lambda^{(\alpha-1)}} & \Omega^1(\Sigma,\bbR^p)^{(\alpha-1)} \end{tikzcd}\quad \text{and} \quad \begin{tikzcd} J^{\alpha+1}(\Sigma,M) \arrow{r}{\Delta_{d\lambda}^{(\alpha)}} \arrow{d}[swap]{p^{\alpha+1}_\alpha} & \Omega^2(\Sigma,\bbR^p)^{(\alpha)} \arrow{d}{p^\alpha_{\alpha-1}} \\ J^\alpha(\Sigma,M) \arrow{r}[swap]{\Delta_{d\lambda}^{(\alpha-1)}} & \Omega^2(\Sigma,\bbR^p)^{(\alpha-1)} \end{tikzcd}\] \end{itemize} Next, let us fix $\calR_{d\lambda} \subset J^1(\Sigma, M)$ representing the $(d\lambda^s)$-regular immersions $\Sigma \to M$, i.e, $$\calR_{d\lambda} = \Big\{(x,y,F:T_x\Sigma \to T_yM) \;\Big|\; \text{ $F$ is injective and $(d\lambda^s)$-regular} \Big\}.$$ Recall that $\calR_\alpha := \relHor_\alpha \subset J^{\alpha+1}(\Sigma,M)$ is given as, $$\calR_\alpha = \Big\{j^{\alpha+1}_u(x)\in J^{\alpha+1}(\Sigma,M)|_x \;\Big|\; \text{$j^\alpha_{u^*\lambda}(x) = 0$ and $u$ is $(d\lambda^s)$-regular}\Big\}.$$ Hence, we can identify $\calR_\alpha$ as $$\calR_\alpha = \ker\big(\Delta_\lambda^{(\alpha)}\big) \cap \big(p^{\alpha+1}_1\big)^{-1}(\calR_{d\lambda}) \subset J^{\alpha+1}(\Sigma,M),$$ where $p^{\alpha+1}_1 :J^{\alpha+1}(\Sigma,M)\to J^1(\Sigma,M)$ is the natural projection map. We denote a sub-relation, $$\bar\calR_\alpha = \calR_\alpha \cap \ker\big(\Delta_{d\lambda}^{(\alpha)}\big) \subset \calR_\alpha.$$ In particular, observe that $\bar\calR_0$ is then precisely $\relHor$, i.e, the relation of $\Omega$-regular, horizontal immersions $\Sigma\to M$. The proof of \autoref{lemma:jetLiftingIsoCont} follows from the next two results. \begin{sublemma}\label{lemma:jetLiftingIsoCont:Lambda} For any $\alpha\ge 0$, we have, $\bar\calR_\alpha = p^{\alpha+2}_{\alpha+1}\big(\calR_{\alpha+1}\big)$ and for each $(x,y)\in\Sigma\times M$, the fiber of $p^{\alpha+2}_{\alpha + 1} : \calR_{\alpha+1}|_{(x,y)}\to \bar \calR_\alpha|_{(x,y)}$ is affine. Furthermore, any section of $\bar\calR_{\alpha}|_O$, over some contractible charts $O\subset \Sigma$, can be lifted to a section of $\calR_{\alpha+1}|_O$ along $p^{\alpha+2}_{\alpha+1}$. \end{sublemma} \begin{sublemma}\label{lemma:jetLiftingIsoCont:OmegaRegularity} For any $\alpha\ge 0$, the map $p^{\alpha+2}_{\alpha +1}:\bar\calR_{\alpha+1}|_{(x,y)} \to \bar \calR_\alpha|_{(x,y)}$ is surjective, with affine fibers, for each $(x,y)\in\Sigma\times M$. 
Furthermore, any section of $\bar \calR_\alpha|_O$ over some contractible chart $O\subset\Sigma$ can be lifted to a section of $\bar\calR_{\alpha+1}|_O$ along $p^{\alpha+2}_{\alpha+1}$. \end{sublemma} \begin{proof}[Proof of \autoref{lemma:jetLiftingIsoCont}] We have the following ladder-like schematic representation of the proof. \[ \tikzset{% symbol/.style={ draw=none, every to/.append style={ edge node={node [sloped, allow upside down, auto=false]{$#1$}} }, }, } \begin{tikzcd} J^{\alpha+1}(\Sigma,M) \arrow{r}{p^{\alpha+1}_\alpha} & J^{\alpha}(\Sigma,M) \arrow{r} &\cdots \arrow{r} & J^2(\Sigma,M) \arrow{r}{p^2_1} & J^1(\Sigma,M)\\ \calR_{\alpha} \arrow[rdd, twoheadrightarrow, sloped, "p^{\alpha+1}_{\alpha}"] \arrow{r} \arrow[symbol=\subset]{u} & \calR_{\alpha-1} \arrow{r} \arrow[symbol=\subset]{u} & \cdots \arrow{r} & \calR_1 \arrow{r} \arrow[twoheadrightarrow]{rdd} \arrow[symbol=\subset]{u} & \calR_0 \arrow[symbol=\subset]{u} \\ \\ & \bar\calR_{\alpha-1} \arrow[hookrightarrow]{uu} \arrow[dashed, bend left=40, sloped]{uul}{\text{lift using}}[swap]{ \substack{\text{full rank of $\lambda$} \\ \text{(\autoref{lemma:jetLiftingIsoCont:Lambda})}} } &&& \relHor = \bar\calR_0 \arrow[uu, hookrightarrow, sloped] \arrow[dashed]{lll}[swap]{\text{lift inductively to $\bar\calR_{\alpha-1}$}}{\text{using $\Omega$-regularity (\autoref{lemma:jetLiftingIsoCont:OmegaRegularity})} } \end{tikzcd} \] For any $\alpha\ge 1$, we have $p^{\alpha+1}_1 = p^{\alpha}_1 \circ p^{\alpha+1}_\alpha = p^2_1 \circ \cdots \circ p^{\alpha+1}_\alpha$. From \autoref{lemma:jetLiftingIsoCont:Lambda} we have that $p^{\alpha+1}_{\alpha}$ maps $\calR_{\alpha}$ surjectively onto $\bar\calR_{\alpha-1}$. Also, using \autoref{lemma:jetLiftingIsoCont:OmegaRegularity} repeatedly, we have that $p^{\alpha}_1:\bar\calR_{\alpha-1}\to \relHor$ is a surjection as well. Combining the two, we have the claim. Since at each step we have contractible fiber, we see that the fiber of $p^{\alpha+1}_1$ is again contractible. In fact, we are easily able to get lifts of sections over contractible charts as well. This concludes the proof. \end{proof} We now prove the above sublemmas. \begin{proof}[Proof of \autoref{lemma:jetLiftingIsoCont:Lambda}] We have the following commutative diagram \[\begin{tikzcd} \calR_{\alpha+1}\arrow[hookrightarrow]{r} & J^{\alpha+2}(\Sigma,M) \arrow{rr}{\Delta_\lambda^{(\alpha+1)}} \arrow{d}[swap]{p^{\alpha+2}_{\alpha+1}} && \Omega^1(\Sigma, \bbR^p)^{(\alpha+1)} \arrow{d}[swap]{p^{\alpha+1}_\alpha} {\Delta_d^{(\alpha)}} \\ \bar \calR_{\alpha} \arrow[hookrightarrow]{r} & J^{\alpha+1}(\Sigma,M) \arrow{rr}[swap]{\Delta_\lambda^{(\alpha)}, \; \Delta_{d\lambda}^{(\alpha)}} && \Omega^1(\Sigma,\bbR^p)^{(\alpha)} \oplus \Omega^2(\Sigma, \bbR^p)^{(\alpha)} \end{tikzcd} \label{cd:jetLifting:surjectivityOfLambda}\tag{$*$} \] Since we have $\calR_{\alpha+1}\subset \ker \Delta_\lambda^{(\alpha+1)}$, we get $$p^{\alpha+2}_{\alpha+1}(\calR_{\alpha+1}) \subset \ker \Delta_\lambda^{(\alpha)} \cap \ker\Delta_{d\lambda}^{(\alpha)}.$$ Also, we have $$\calR_{\alpha+1}\subset \big(p^{\alpha+2}_1\big)^{-1}(\calR_{d\lambda}) \Rightarrow p^{\alpha+2}_{\alpha+1}\big(\calR_{\alpha+1}\big) \subset \big(p^{\alpha+1}_\alpha\big)^{-1}(\calR_{d\lambda}).$$ Hence we see that $\big(p^{\alpha+2}_{\alpha+1}\big)(\calR_{\alpha+1})\subset\bar \calR_\alpha$. 
Conversely, let us assume that we are given a jet $$\big(x,y, P_i :\Sym^i T_x\Sigma \to T_y M, \; i=1, \ldots, \alpha +1 \big) \in \bar \calR_\alpha|_{(x,y)}.$$ We wish to find $Q:\Sym^{\alpha+2}T_x\Sigma \to T_y M$ so that $$(x,y, P_i, Q)\in\calR_{\alpha+1}|_{(x,y)}.$$ Recall that $\Delta_\lambda(x,y,F:T_x\Sigma\to T_y M) = \big(x, \lambda|_y \circ F : T_x\Sigma\to \bbR^p\big)$. Then, we may write $$\Delta_\lambda^{(\alpha+1)}\big(x,y,P_i,Q\big) = \big(x, \lambda\circ F, R_i : \Sym^i T_x\Sigma \to \hom(T_x\Sigma, \bbR^p),\; i=1,\ldots,\alpha+1\big),$$ so that $R_{\alpha+1}: \Sym^{\alpha+1} T_x \Sigma \to \hom(T_x\Sigma, \bbR^p)$ is the \emph{only} symmetric tensor which involves $Q$. In fact, we observe that $R_{\alpha+1}$ is given explicitly as $$R_{\alpha+1}\big(X_1,\ldots,X_{\alpha+1}\big) (Y) = \lambda\circ Q\big(X_1,\ldots,X_{\alpha+1}, Y\big) + \text{terms involving $P_i$.}$$ Now, from the commutative diagram \hyperref[cd:jetLifting:surjectivityOfLambda]{$(*)$} we have \begin{align*} \big(x,\lambda\circ F, R_i,\; i=1, \ldots, \alpha\big) &= p^{\alpha+1}_\alpha\circ \Delta_\lambda^{(\alpha+1)}(x,y,P_i,Q) \\ &= \Delta_\lambda^{(\alpha)}\circ p^{\alpha+2}_{\alpha+1}(x,y,P_i,Q) \\ &= \Delta_\lambda^{(\alpha)}(x,y,P_i) \\ &= 0. \end{align*} That is, we have $R_i = 0$ for $i=1,\ldots,\alpha$. We need to find $Q$ so that $R_{\alpha+1}=0$ as well. We claim that the tensor $$R_{\alpha+1}^\prime : \big(X_1,\ldots,X_{\alpha+1},Y\big)\mapsto R_{\alpha+1}(X_1,\ldots,X_{\alpha+1})(Y)$$ is symmetric. Let us write $\Delta_d^{(\alpha)}(x,\lambda\circ F,R_i) = \big(x,\omega, S_i :\Sym^i T_x\Sigma\to\hom(\Lambda^2T_x\Sigma,\bbR^p), i=1,\ldots,\alpha\big)$, where the \emph{pure} $\alpha$-jet $S_\alpha$ is given as $$S_\alpha(X_1,\ldots,X_\alpha)(Y\wedge Z) = R_{\alpha+1}(X_1,\ldots,X_\alpha,Y) (Z) - R_{\alpha+1}(X_1,\ldots,X_\alpha,Z)(Y).$$ Again, going back to the commutative diagram \hyperref[cd:jetLifting:surjectivityOfLambda]{$(*)$}, we have $$\Delta_d^{(\alpha)}\big(x,\lambda\circ F, R_i\big) = \Delta_d^{(\alpha)}\circ\Delta_\lambda^{(\alpha+1)}(x,y,P_i,Q) = \Delta_{d\lambda}^{(\alpha)}\circ p^{\alpha+2}_{\alpha+1}(x,y,P_i,Q) = \Delta_{d\lambda}^{(\alpha)}(x,y,P_i) = 0,$$ and so in particular, $S_\alpha = 0$. But then we readily see that $R_{\alpha+1}^\prime$ is a symmetric tensor. Let us now fix some basis $\{\partial_1,\ldots,\partial_{k+1}\}$ of $T_x \Sigma$ so that $T_x\Sigma = \langle \partial_1, \ldots, \partial_{k+1} \rangle$, where $\dim\Sigma = k+1$. Then, we have the standard basis for the symmetric space $\Sym^{\alpha+2}T_x\Sigma$, so that $$\Sym^{\alpha+2} T_x\Sigma = \Big\langle \partial_J := \partial_{j_1}\odot \cdots\odot \partial_{j_{\alpha+2}} \;\Big|\; J=(1\le j_1 \le \cdots \le j_{\alpha+2} \le k+1) \Big\rangle.$$ Then for each tuple $J=(j_1,\ldots,j_{\alpha+2})$, we see that the \emph{only} equation involving $Q(\partial_J)$ is $$0 = R_{\alpha+1}(\partial_{j_1},\ldots,\partial_{j_{\alpha+1}})(\partial_{j_{\alpha+2}}) = \lambda \circ Q(\partial_J) + \text{terms with $P_i$.}$$ This is an affine equation in $Q(\partial_J)\in T_y M$, which admits a solution since $\lambda|_y:T_y M \to \bbR^p$ has full rank. Thus we have solved for $Q$. This concludes the proof that $p^{\alpha+2}_{\alpha+1}(\calR_{\alpha+1}) = \bar\calR_\alpha$. Since $Q$ is solved from an affine system of equations, it is immediate that the fiber $\big(p^{\alpha+2}_{\alpha+1}\big)^{-1}\big(x,y,P_i\big)$ is an affine subspace. In fact, we see that the projection is an affine fiber bundle. 
Furthermore, since $\lambda=(\lambda^s)$ has full rank at each point, we are able to get lifts of sections over a fixed contractible chart $O\subset \Sigma$, where we may choose some coordinate vector fields as the basis for $T\Sigma|_O$. \end{proof} \begin{proof}[Proof of \autoref{lemma:jetLiftingIsoCont:OmegaRegularity}] We have the following commutative diagram, \[\begin{tikzcd} \bar\calR_{\alpha+1}\arrow[hookrightarrow]{r}\arrow[dashed]{d} & J^{\alpha+2}(\Sigma,M) \arrow{rr}{\Delta_\lambda^{(\alpha+1)},\;\Delta_{d\lambda}^{(\alpha+1)}} \arrow{d}[swap]{p^{\alpha+2}_{\alpha+1}} && \Omega^1(\Sigma, \bbR^p)^{(\alpha+1)} \oplus \Omega^2(\Sigma,\bbR^p)^{(\alpha+1)} \arrow{d}{p^{\alpha+1}_\alpha}[swap] {p^{\alpha+1}_\alpha} \\ \bar \calR_{\alpha} \arrow[hookrightarrow]{r} & J^{\alpha+1}(\Sigma,M) \arrow{rr}[swap]{\Delta_\lambda^{(\alpha)}, \; \Delta_{d\lambda}^{(\alpha)}} && \Omega^1(\Sigma,\bbR^p)^{(\alpha)} \oplus \Omega^2(\Sigma, \bbR^p)^{(\alpha)} \end{tikzcd} \label{cd:jetLifting:omegaRegularity} \tag{$**$} \] We have already proved that $p^{\alpha+2}_{\alpha+1}$ maps $\calR_{\alpha+1}$ surjectively onto $\bar\calR_\alpha$; since $\bar\calR_{\alpha+1}\subset \calR_{\alpha+1}$ we have that $p^{\alpha+2}_{\alpha+1}$ maps $\bar\calR_{\alpha+1}$ into $\bar\calR_\alpha$. We show the surjectivity. Suppose $\sigma = \big(x,y, P_i : \Sym^i T_x\Sigma\to T_y M,\;i=1,\ldots,\alpha+1\big)\in\bar\calR_\alpha|_{(x,y)}$ is a given jet. We need to find out $Q:\Sym^{\alpha+2}T_x\Sigma \to T_y M$ such that, $(x,y,P_i,Q)\in\bar\calR_{\alpha+1}|_{(x,y)}$. We have seen that in order to find $Q$ so that $(x,y,P_i,Q)\in \calR_{\alpha + 1}|_{(x,y)}$, we must solve the affine system $$\lambda\circ Q = \text{terms with $P_i$,}$$ which is indeed solvable since $\lambda$ has full rank. Now in order to find $(x,y,P_i,Q)\in \bar \calR_{\alpha+1} = \bar \calR_\alpha \cap \ker\Delta_{d\lambda}^{(\alpha+1)}$, we need to figure out the equations involved in $\Delta_{d\lambda}^{(\alpha+1)}$. Let us write $$\Delta_{d\lambda}^{(\alpha+1)}(x,y,P_i,Q) = \big(x, P_1^*d\lambda, R_i : \Sym^i T_x\Sigma \to \hom(\Lambda^2 T_x\Sigma, \bbR^p), i=1,\ldots,\alpha+1\big).$$ Then the \emph{pure} $\alpha+1$-jet $R_{\alpha+1}:\Sym^{\alpha+1}T_x\Sigma\to \hom(\Lambda^2 T_x\Sigma, \bbR^p)$ is the only expression that involves $Q$. In fact we have that $R_{\alpha+1}$ is given as, \begin{align*} R_{\alpha+1}(X_1,\ldots,X_{\alpha+1})(Y\wedge Z) &= d\lambda\big(Q(X_1,\ldots,X_{\alpha+1},Y), P_1(Z)\big)\\ &\qquad\qquad + d\lambda\big(P_1(Y), Q(X_1,\ldots,X_{\alpha+1},Z)\big)\\ &\qquad\qquad + \text{terms involving $P_i$ with $i\ge 2$.} \end{align*} Now, looking at commutative diagram \hyperref[cd:jetLifting:omegaRegularity]{$(**)$}, we have \begin{align*} (x,y, P_1^*d\lambda, R_i, i=1,\ldots,\alpha) &= p^{(\alpha+1)}_{\alpha}\circ \Delta_{d\lambda}^{(\alpha+1)}(x,y,P_i,Q) \\ &= \Delta_{d\lambda}^{(\alpha)} \circ p^{\alpha+2}_{\alpha+1}(x,y,P_i,Q) \\ &= \Delta_{d\lambda}^{(\alpha)}(x,y,P_i) \\ &= 0. \end{align*} That is, we have $R_i=0$ for $i=1,\ldots,\alpha$. In order to find $Q$ such that $R_{\alpha+1}=0$, let us fix some basis $\{\partial_1,\ldots,\partial_{k+1}\}$ of $T_x\Sigma$, where $\dim\Sigma = k+1$. 
Then we have the standard basis for the symmetric space $\Sym^{\alpha+2} T_x\Sigma$, so that $$\Sym^{\alpha+2} T_x\Sigma = \Big\langle \partial_J = \partial_{j_1}\odot \cdots\odot \partial_{j_{\alpha+2}} \;\Big|\; J=(1\le j_1 \le \cdots \le j_{\alpha+2} \le k+1)\Big\rangle.$$ Now for any tuple $J$ and for any $1\le a < b \le k+1$, we have the equation involving the tensor $Q$ given as, \[0 = R_{\alpha+1}(\partial_J) (\partial_a\wedge\partial_b) = d\lambda\big(Q(\partial_{J+a}), P_1(\partial_b)\big) + d\lambda\big(P_1(\partial_a), Q(\partial_{J+b})\big) + \text{terms with $P_i$ for $i \ge 2$,}\] where $J+a$ is the tuple obtained by ordering $(j_1,\ldots,j_{\alpha+1},a)$. Now observe that $$a < b \Rightarrow J+a \prec J+b,$$ where $\prec$ is the lexicographic ordering on the set of all ordered $\alpha+2$-tuples. We then treat the above equation as $$\Big(\iota_{P_1(\partial_a)} d\lambda \Big) \circ Q(\partial_{J+b}) = \Big(\iota_{P_1(\partial_b)} d\lambda\Big) \circ Q(\partial_{J+a}) + \text{terms with $P_i$.}$$ Thus we have identified the defining system of equations for the tensor $Q$ given as follows: \[\left\{ \begin{aligned} \lambda\circ Q(\partial_I) &= \text{terms with $P_i$}, \quad \text{for each $\alpha+2$-tuple $I$}\\ \iota_{P_1(\partial_a)} d\lambda \circ Q(\partial_{J+b}) &= \iota_{P_1(\partial_b)} d\lambda \circ Q(\partial_{J+a}) + \text{terms with $P_i$},\\ &\qquad\qquad \text{for each $\alpha+1$-tuple $J$ and $1\le a < b\le k+1$} \end{aligned}\tag{$\dagger$} \label{eqn:omegaRegularSystem} \right.\] We claim that this system can be solved for each $Q(\partial_I) \in T_y M$ in a \emph{triangular} fashion, using the ordering $\prec$ on the tuples. Indeed, first observe that for the $\alpha+2$-tuple $\hat I = (1,\ldots,1)$, which is the \emph{least} element in the order $\prec$, the only subsystem involving $Q(\partial_{\hat I})$ in the system (\ref{eqn:omegaRegularSystem}) is $$\lambda \circ Q(\partial_{\hat I}) = \text{terms with $P_i$,}$$ which is solvable for $Q(\partial_{\hat I})$ as $\lambda$ has full rank. Next, for some $\alpha+2$-tuple $I$ with $\hat I \preceq I$, inductively assume that $Q(\partial_{I^\prime})$ is solved from (\ref{eqn:omegaRegularSystem}) for each $\alpha+2$-tuple $I^\prime \prec I$. Then, the subsystem involving $Q(\partial_I)$ in (\ref{eqn:omegaRegularSystem}) is given as \[\left\{ \begin{aligned} \lambda\circ Q(\partial_I) &= \text{terms with $P_i$},\\ \iota_{P_1(\partial_a)} d\lambda \circ Q(\partial_I) &= \text{terms with $P_i$ and $Q(\partial_{I^\prime})$ with $I^\prime \prec I$},\\ &\qquad\qquad \text{for $1 \le a < b \le k+1$, with $b\in I$.} \end{aligned}\tag{$\dagger_I$} \label{eqn:omegaRegularSystem:I} \right.\] From the induction hypothesis, the right hand side of this affine system consists of known terms. Now, it follows from the $\Omega$-regularity condition that for any collection of independent vectors $\{v_1,\ldots,v_r\}$ in $T_x\Sigma$, the collection of $1$-forms $$\big\{\iota_{P_1(v_i)} d\lambda^s|_{\calD_y}, \quad 1\le i \le r,\; 1\le s\le p\big\}$$ is linearly independent. 
As $\calD$ is given as the common kernel of $\lambda^1,\ldots,\lambda^p$, we see that this is equivalent to the following non-vanishing condition: $$\Big(\bigwedge_{s=1}^p \lambda^s\Big)\wedge \bigwedge_{i=1}^r\Big(\iota_{P_1(v_i)}d\lambda^1\wedge\ldots\wedge\iota_{P_1(v_i)}d\lambda^p\Big) \ne 0.$$ But then clearly, the subsystem (\ref{eqn:omegaRegularSystem:I}) is a \emph{full rank} affine system, allowing us to solve for $Q(\partial_I)$. Proceeding in this triangular fashion, we solve the tensor $Q$ from (\ref{eqn:omegaRegularSystem}). Clearly, the solution space for $Q$ is contractible since at each stage we have solved an affine system. We have thus proved that $p^{\alpha+2}_{\alpha+1}:\bar\calR_{\alpha+1}|_{(x,y)}\to\bar\calR_{\alpha}|_{(x,y)}$ is indeed surjective, with contractible fibers. In fact, the algorithmic nature of the solution shows that, if $O\subset\Sigma$ is a contractible chart, then we are able to obtain the lift of any section of $\bar\calR_\alpha|_O$ to $\bar\calR_{\alpha+1}$, along $p^{\alpha+2}_{\alpha+1}$. This concludes the proof. \end{proof} \begin{remark}\label{rmk:jetLiftingIsoContRegularityLowerDim} In the above proof of \autoref{lemma:jetLiftingIsoCont:OmegaRegularity}, the full strength of $\Omega$-regularity of $F$ has not been utilized. Note that, with our \emph{choice} of the ordered basis of $T_x\Sigma$, the vector $P_1(\partial_{k+1})$ does not appear in the left hand side of the above triangular system (\ref{eqn:omegaRegularSystem}). In fact, we can prove \autoref{lemma:jetLiftingIsoCont} under the milder assumption that $\im F$ contains a codimension one $\Omega$-regular subspace, which in our case is the subspace $\langle F(\partial_1),\ldots,F(\partial_k)\rangle \subset \im F$. This observation was used in \cite{bhowmickFat46} to prove the existence of germs of horizontal $2$-submanifolds in a certain class of fat distributions of type $(4,6)$. \end{remark} \section{Revisiting $h$-Principle of Regular $K$-Contact Immersions} \label{sec:revisitKContact} Throughout this section $\calD$ will denote an arbitrary corank $p$ distribution on a manifold $M$ and $\lambda:TM\to TM/\calD$ will denote the quotient map. For every pair of local sections $X,Y$ in $\calD$, $\lambda([X,Y])$ is a local section of the bundle $TM/\calD$. The map \begin{align*} \Gamma(\calD)\times\Gamma(\calD) &\to \Gamma(TM/\calD)\\ (X,Y) &\mapsto -\lambda([X,Y]) \end{align*} is $C^\infty(M)$-linear and hence induces a bundle map $\Omega:\Lambda^2\calD \to TM/\calD$, which is called the \emph{curvature form} of the distribution $\calD$. Any local trivialization of the bundle $TM/\calD$ defines local 1-forms $\lambda^i$, $i=1,\dots,p$, such that $\calD\underset{loc.}{=}\cap_{i=1}^p\ker\lambda^i$. Then $\Omega$ can be locally expressed as follows: $$\Omega \underset{loc.}{=} \big(d\lambda^1|_\calD, \,\ldots,\, d\lambda^p|_\calD\big).$$ The span $\langle d\lambda^1|_\calD,\ldots,d\lambda^p|_\calD\rangle$ is clearly independent of the choice of defining $1$-forms $\lambda^1,\ldots,\lambda^p$ for $\calD$. \begin{remark}\label{rmk:curvatureConnection} The quotient map $\lambda$ can be treated as a $TM/\calD$-valued $1$-form on $M$. If $\nabla$ is an arbitrary connection on the quotient bundle $TM/\calD$, then the curvature form $\Omega$ can be given as $\Omega = d_\nabla \lambda|_\calD$. \end{remark} \begin{defn} A smooth map $u:\Sigma\to M$ is \emph{$\calD$-horizontal} if the differential $du$ maps $T\Sigma$ into $\calD$. \end{defn} \begin{defn}\cite[pg. 
338]{gromovBook}\label{defn:contactMap} Given a subbundle $K\subset T\Sigma$, we say a map $u:\Sigma\to (M,\calD)$ is \emph{$K$-contact} if $$du(K_\sigma)\subset T_{u(\sigma)}\calD,\quad\text{for each $\sigma\in\Sigma$.}$$ A $K$-contact map $u:(\Sigma,K)\to (M,\calD)$ is called $K$-\emph{isocontact} (or, simply \emph{isocontact}) if we have $K = du^{-1}(\calD)$. \end{defn} In what follows below, $\Sigma$ will denote an arbitrary manifold and $K$ will denote an arbitrary but fixed subbundle of $T\Sigma$, unless mentioned otherwise. For any contact map $u:(\Sigma,K)\to (M,\calD)$, we have an induced bundle map \begin{align*} \tilde{du} : T\Sigma/K &\longrightarrow u^*TM/\calD\\ X \mod K &\longmapsto du(X) \mod\calD \end{align*} Clearly, a contact \emph{immersion} $u:(\Sigma,K)\to (M,\calD)$ is isocontact if and only if $\tilde{du}$ is a monomorphism. Hence, for an isocontact immersion $(\Sigma,K)\to (M,\calD)$ to exist, the following numerical constraints must necessarily be satisfied: $$\rk K \le \rk \calD \quad\text{and}\quad \cork K \le \cork\calD.$$ $K$-contactness automatically imposes a differential condition involving the curvatures of the two distributions. \begin{prop}\label{prop:isocontactCurvatureCondition} If $u : (\Sigma,K) \to (M,\calD)$ is a $K$-contact map, then \begin{equation}\label{eqn:curvatureEquation} u^*\Omega_\calD|_K = \tilde{du} \circ \Omega_K, \end{equation} where $\Omega_K,\Omega_\calD$ are the curvature forms of $K$ and $\calD$ respectively. Equivalently we have the following commutative diagram \[\begin{tikzcd} \Lambda^2K \arrow{d}[swap]{\Omega_K} \arrow{r}{du} & \Lambda^2 \calD \arrow{d}{\Omega_\calD}\\ T\Sigma/K \arrow{r}[swap]{\tilde{du}} & TM/\calD \end{tikzcd} \] \end{prop} If $K=T\Sigma$, then $\Omega_K = \Omega_{T\Sigma} = 0$. Hence, for a horizontal immersion $u:\Sigma\to M$ this gives the \emph{isotropy} condition, namely, $u^*\Omega_\calD = 0$. For simplicity, we assume that $\calD$ is globally defined as the common kernel of $\lambda^1,\ldots,\lambda^p$, and consider the differential operator \begin{align*} \opCont : C^\infty(\Sigma,M) &\to \Gamma \hom(K,\bbR^p) = \Omega^1(K,\bbR^p)\\ u &\mapsto \big(u^*\lambda^s|_K\big)_{s=1}^p. \end{align*} Clearly, $K$-contact maps are solutions of $\opCont(u) = 0$. Recall that the tangent space of $C^\infty(\Sigma,M)$ at some $u:\Sigma\to M$ can be identified with the space of vector fields of $M$ \emph{along the map $u$}, i.e, the space of sections of $u^*TM$. Any such vector field $\xi \in \Gamma u^*TM$ can be represented by a family of maps $u_t : \Sigma\to M$ such that $u_0 = u$ and $\xi_\sigma = \frac{d}{dt}_{|_{t=0}} u_t(\sigma)$ for $\sigma \in\Sigma$. Then, the linearization of $\opCont$ at $u$ is given by $$\linCont_u(\xi) = \frac{d}{dt}\Big|_{t=0} \opCont(u_t)$$ By Cartan formula we get \begin{align*} \linCont_u : \Gamma u^*TM &\to \Gamma \hom(K, \bbR^p)\\ \xi &\mapsto \Big(\iota_\xi d\lambda^s + d\big(\iota_\xi\lambda^s\big)\Big)\Big|_K. \end{align*} Restricting $\linCont_u : \Gamma u^*TM\to \Gamma\hom(K,\bbR^p)$ to the subspace $\Gamma u^*\calD$ we get a $C^\infty(M)$-linear operator \begin{align*} \resLinCont_u : \Gamma u^*\calD&\to \Gamma\hom(K,\bbR^p)\\ \xi &\mapsto \Big(\iota_\xi d\lambda^s\Big)\Big|_K. \end{align*} The associated bundle map will also be denoted by the same symbol. \begin{defn} A smooth immersion $u :\Sigma\to M$ is said to be \emph{$(d\lambda^s)$-regular} if $\resLinCont_u$ is an epimorphism. 
If we wish to study $K$-\emph{iso}contact immersions, then $u$ must also satisfy the rank condition $\rk(\lambda^s \circ du) \ge \cork K$. \end{defn} We shall denote the space of all $(d\lambda^s)$-regular immersions by $\calS$. Such maps $u$ are solutions to a first order \emph{open} relation $S\subset J^1(\Sigma,M)$. Hence, in view of the above discussion, $\opCont$ has an infinitesimal inversion of order $s=0$ over $\calS$ with defect $d=1$ (see \autoref{sec:generalHPrinciple:analytic}).\smallskip In general, $(d\lambda^s)$-regularity depends on the choice of $\lambda^s$, but the space of $(d\lambda^s)$-regular, $K$-contact immersions $(\Sigma,K)\to (M,\calD)$ is independent of any such choice. Indeed, if $du(K)\subset\calD$, then $$\resLinCont_u(\xi) = \iota_\xi\Omega \big|_K, \quad \text{for $\xi \in \Gamma u^*\calD$,}$$ where $\Omega$ is the curvature $2$-form of $\calD$. \ifarxiv \begin{remark} For a general distribution $\calD$, not necessarily cotrivializable, we look at the operator $$\opCont : u \mapsto u^*\lambda|_K \in \Gamma\hom(K, u^*TM/\calD), \quad \text{for any $u:\Sigma\to M$}.$$ To put this in a rigorous framework, consider the infinite dimensional space $\calB = C^\infty(\Sigma, M)$ and then consider the infinite dimensional vector bundle $\calE\to \calB$ with fibers $\calE_u = \Gamma \hom(K, u^*TM/\calD)$. Then, $\opCont$ can be seen as a section of this vector bundle. To identify the linearization operator, we choose any connection $\nabla$ on $TM/\calD$, which in turn induces a parallel transport on $\calE$. We then have that $\linCont_u (\xi) = \big(\iota_\xi d_\nabla \lambda + d_\nabla \iota_\xi \lambda\big)|_K$ for $\xi \in\Gamma u^*TM$. Restricting $\linCont_u$ to $\Gamma u^*\calD$, we get the $C^\infty(\Sigma)$-linear map $$\resLinCont_u : \xi \mapsto \iota_\xi d_\nabla \lambda|_K, \quad \xi \in \Gamma u^*\calD.$$ In view of \autoref{rmk:curvatureConnection}, $\resLinCont_u(\xi) = \iota_\xi\Omega|_K$ for a $K$-contact immersion $u : \Sigma \to M$, which matches our earlier description. \end{remark} \else This leads to the notion of $\Omega$-regular $K$-contact immersions. \fi \begin{defn}\label{defn:contOmegaRegular} A subspace $V\subset \calD_y$ is called \emph{$\Omega$-regular} if the map \begin{equation}\label{eqn:omegaRegular} \begin{aligned} \calD_y &\to \hom(V, TM/\calD|_y)\\ \xi &\mapsto \iota_\xi \Omega|_V \end{aligned} \end{equation} is surjective. A $K$-contact immersion $u:(\Sigma,K)\to (M,\calD)$ is called $\Omega$-\emph{regular} if $du_x(K_x) \subset \calD_{u(x)}$ is $\Omega$-regular for every $x\in \Sigma$, equivalently, if $\resLinCont_u$ is a bundle epimorphism. \end{defn} In order to study the $K$-contact immersions, let $\relCont_\alpha =$ $\relCont_\alpha(\opCont, 0, S) \subset J^{\alpha+1}(\Sigma,M)$ be the relation consisting of $S$-regular infinitesimal solutions of $\opCont = 0$ of order $\alpha$. Then $\relCont_\alpha$, for all $\alpha \ge d -r = 0$, have the same $C^\infty$-solution space, namely the $\Omega$-regular $K$-contact immersions. 
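Before introducing the relevant sheaves, let us record an illustrative special case; the computation below is only a sketch, uses nothing beyond the non-degeneracy of the curvature form, and is consistent with the discussion of contact distributions at the end of this section. Suppose $\calD = \ker\lambda$ is a contact structure, so that $p=1$ and $\Omega = d\lambda|_\calD$ is non-degenerate on each $\calD_y$. Then \emph{every} subspace $V\subset\calD_y$ is $\Omega$-regular, since the map \eqref{eqn:omegaRegular} factors as $$\calD_y \xrightarrow{\;\xi\,\mapsto\,\iota_\xi\Omega\;} \hom(\calD_y, TM/\calD|_y) \xrightarrow{\;\text{restriction to $V$}\;} \hom(V, TM/\calD|_y),$$ where the first arrow is an isomorphism by non-degeneracy and the second arrow is surjective. In particular, for a contact target every $K$-contact immersion is automatically $\Omega$-regular.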
We introduce the following notation for the solution sheaf and the sheaf of sections of $\relCont_\alpha$: $$\solCont = \Sol \relCont_\alpha, \quad \secCont_\alpha = \Gamma \relCont_\alpha.$$ \begin{obs}\label{obs:relContMicroflexibleLocalWHE} From \autoref{thm:microflexibleLocalWHESheafTheorem} we obtain that \begin{itemize} \item $\solCont$ is microflexible, and \item for $\alpha \ge \max\{d + s, 2r + 2s\} = 2$, $\relCont_\alpha$ satisfies the parametric \emph{local} $h$-principle, i.e, the jet map $j^{\alpha + 1} : \solCont \to \secCont_\alpha$ is a \emph{local} weak homotopy equivalence. \end{itemize} \end{obs} \ifarxiv In general, there is no natural $\diff(\Sigma)$ action on $\solCont$. However, when $K = T\Sigma$ then it is the sheaf of horizontal immersions for which we have the following results. \begin{theorem}[\cite{gromovBook}]\label{thm:hPrinHorizOpenAlpha} If $\Sigma$ is an open manifold, then the relation $\relHor_\alpha$ satisfies the parametric $h$-principle for $\alpha \ge 2$. \end{theorem} \begin{proof} We observe that the natural $\diff(\Sigma)$-action on $C^\infty(\Sigma, M)$ preserves $\calD$-horizontality and $\Omega$-regularity. Hence, $\diff(\Sigma)$ acts on $\solHor = \Sol\relHor_\alpha$ for $\alpha \ge 0$. Then a direct application of \autoref{thm:openManifoldHPrin} gives us that $j^{\alpha + 1} : \solHor \to \Gamma \relHor_\alpha$ is a weak homotopy equivalence for $\alpha \ge 2$. In other words, $\relHor_\alpha$ satisfies the parametric $h$-principle for $\alpha \ge 2$. \end{proof} \begin{theorem}\label{thm:hPrinContOpenAlpha} Let $K$ be a contact structure on $\Sigma$. Then the relation $\relCont_\alpha$ satisfies the parametric $h$-principle for $\alpha \ge 2$ near any positive codimensional submanifold $V_0 \subset \Sigma$. \end{theorem} \begin{proof} Since the group of contact diffeomorphisms sharply moves any submanifold of $\Sigma$ (\autoref{exmp:sharplyMovingDiffeo}), for any submanifold $V_0 \subset \Sigma$ of positive codimension, we have the $h$-principle via an application of \autoref{thm:mainFlexibilityTheorem}. \end{proof} \fi \subsection{The Relation $\relCont$} We now define a first order relation, taking into account the curvature condition (\autoref{eqn:curvatureEquation}). This relation will also have the same $C^\infty$-solution sheaf $\solCont$. \begin{defn}\label{defn:relCont} Let $\relCont\subset J^1(\Sigma,M)$ denote the relation consisting of $1$-jets $(x,y, F:T_x\Sigma\to T_y M)$ satisfying the following: \begin{enumerate} \item \label{defn:relCont:1} $F$ is injective and $F(K_x)\subset\calD_y$. \item \label{defn:relCont:2} $F$ is $\Omega$-regular\ifarxiv, i.e, the linear map \begin{align*} \calD_y &\to \hom(K_x, TM/\calD|_y)\\ \xi &\mapsto F^*(\iota_\xi\Omega)|_K = \big(X\mapsto \Omega(\xi, FX)\big) \end{align*} is surjective\fi~ (\autoref{eqn:omegaRegular}). \item \label{defn:relCont:3} $F$ abides by the curvature condition, $F^*\Omega|_{K_x} = \tilde{F}\circ \Omega_K|_x$, where $\tilde F : T\Sigma/K|_x \to TM/\calD|_y$ is the morphism induced by $F$ (\autoref{eqn:curvatureEquation}). \end{enumerate} The subrelation $\relICont\subset\relCont$ further satisfies the condition that \begin{itemize} \item[(4)] \label{defn:relCont:4} $\tilde{F}$ is injective. \end{itemize} If $K = T\Sigma$, we shall denote the corresponding relation by $\relHor$, whose solution space consists of $\Omega$-regular horizontal immersions. \end{defn} \ifarxiv It is immediate from the definition that $\solCont = \Sol \relCont$. 
We shall refer to a section of $\relCont$ as a \emph{formal} $\Omega$-regular, $K$-contact immersion $(\Sigma,K)\to (M,\calD)$. We have the following result, which will be needed later in the proof of \autoref{prop:relContExtensionHPrin}. \begin{lemma} \label{lemma:relContRetraction} The following holds true for the relation $\relCont$. \begin{enumerate} \item \label{lemma:relContRetraction:1} For each $(x,y)\in \Sigma\times M$, the subset $\relCont_{(x,y)}$ is a submanifold of $J^1_{(x,y)}(\Sigma,M)$. \item \label{lemma:relContRetraction:2} $\relCont$ is a submanifold of $J^1(\Sigma,M)$. \item \label{lemma:relContRetraction:3} The projection map $p = p^1_0:J^1(\Sigma,M)\to J^0(\Sigma,M)$ restricts to a submersion on $\relCont$. \end{enumerate} \end{lemma} \begin{proof} Note that $J^1(\Sigma,M)$ and $\hom(K,TM/\calD)$ are both vector bundles over $J^0(\Sigma,M)=\Sigma\times M$. Consider the bundle map \[\begin{tikzcd} \Xi_1 : J^1(\Sigma,M) \arrow{rd} \arrow{rr} &&\hom(K, TM/\calD) \arrow{ld} \\ &J^0(\Sigma,M) \end{tikzcd}\] defined over $J^0(\Sigma,M) = \Sigma\times M$ by \begin{align*} \Xi_1|_{(x,y)} : J^1_{(x,y)}(\Sigma,M) &\to \hom(K_x, TM/\calD|_y)\\ \big(x,y, F\big) &\mapsto F^*\lambda|_{K_x} = \lambda\circ F|_{K_x} \end{align*} Since $\lambda$ is an epimorphism, it is immediate that $\Xi_1$ is a bundle epimorphism and $\ker \Xi_1$ is a vector bundle over $J^0(\Sigma,M)$ given as $$\ker \Xi_1|_{(x,y)} = \big\{ (x,y, F) \;\big|\; F(K_x) \subset \calD_y \big\}.$$ Next, consider a fiber-preserving map $\Xi_2 : \ker\Xi_1 \to \hom(\Lambda^2 K, TM/\calD)$ over $J^0(\Sigma,M)$ given by \begin{align*} \Xi_2|_{(x,y)} : \ker\Xi_1|_{(x,y)} &\to \hom\big(\Lambda^2 K_x, TM/\calD|_y\big)\\ F &\mapsto F^*\Omega|_{K_x} - \tilde{F} \circ \Omega_{K_x} := \Big(X\wedge Y \mapsto \Omega(FX, FY) - \tilde{F}\circ\Omega_{K_x}(X,Y)\Big) \end{align*} where $\tilde{F} : T\Sigma/K|_x \to TM/\calD|_y$ is the induced map and $\Omega_K:\Lambda^2 K \to T\Sigma/K$ is the curvature $2$-form of $K$. Let $\calR_\Omega\subset J^1(\Sigma,M)$ be the space of jets satisfying (\ref{defn:relCont:1}) and (\ref{defn:relCont:2}) of \autoref{defn:relCont}. We note that $$\relCont_{(x,y)} = \Xi_2|_{(x,y)}^{-1}(0) \cap \underbrace{\{\text{$\Omega$-regular injective linear maps $T_x\Sigma\to T_yM$, mapping $K_x$ into $\calD_y$ }\}}_{\calR_\Omega|_{(x,y)} }.$$ We can verify that $\calR_\Omega|_{(x,y)}$ consists of regular points of $\Xi_2|_{(x,y)}$. Consequently, $\relCont_{(x,y)}$ is a submanifold of $J^1_{(x,y)}(\Sigma,M)$. Now, since $\Xi_2 : \ker\Xi_1 \to \hom(\Lambda^2 K, TM/\calD)$ is a fiber-preserving map, it follows that it is regular at all points of $\calR_\Omega$ and therefore, $$\relCont = \Xi_2^{-1} \big(\textbf{0}\big) \cap \calR_\Omega$$ is a submanifold of $J^1(\Sigma,M)$. Here $\textbf{0} = \textbf{0}_{\Sigma\times M}\hookrightarrow \hom(\Lambda^2 K, TM/\calD)$ is the $0$-section. Lastly, we consider the commutative diagram \[\begin{tikzcd} \calR_\Omega \subset \ker\Xi_1 \arrow{rr}{\Xi_2} \arrow{rd}[swap]{p^1_0|_{\relCont}} &&\hom(\Lambda^2 K, TM/\calD) \arrow{ld}{\pi}\\ &J^0(\Sigma,M) \end{tikzcd}\] Since $\Xi_2$ is a submersion on $\calR_\Omega$, $p^1_0|_{\relCont}$ is also a submersion. \end{proof} We end this section with the following lemma which relates $\relCont_\alpha$ with $\relCont$ for $\alpha\geq 1$. 
\begin{lemma}\label{lemma:jetLiftingIsoCont}\label{LEMMA:JETLIFTINGISOCONT} For any $\alpha\ge 1$, the jet projection map $p=p^{\alpha+1}_1:J^{\alpha+1}(\Sigma,M)\to J^1(\Sigma,M)$ maps the relation $\relCont_\alpha$ surjectively onto $\relCont$. Furthermore, for each $(x,y)\in\Sigma\times M$, the map $p:\relCont_\alpha|_{(x,y)}\to \relCont|_{(x,y)}$ has contractible fibers. Moreover, any section of $\relCont$ defined over a contractible chart in $\Sigma$ can be lifted to $\relCont_\alpha$ along $p$. \end{lemma} We postpone the proof of the above lemma to \autoref{sec:jetLiftingLemmaProof}. We get the following from \autoref{obs:relContMicroflexibleLocalWHE}. \begin{corr}\label{corr:relContLocalWHE} The induced sheaf map $j^1 : \Sol \relCont \to \Gamma\relCont$ is a local weak homotopy equivalence. \end{corr} \begin{proof} By an argument presented in \cite[pg. 77-78]{gromovBook}, \autoref{lemma:jetLiftingIsoCont} implies that the sheaf map $p : \Gamma\relCont_\alpha\to \Gamma \relCont$ is a \emph{local} weak homotopy equivalence. Then, in view of \autoref{obs:relContMicroflexibleLocalWHE}, $j^1 : \Sol \relCont \to \Gamma \relCont$ is a \emph{local} weak homotopy equivalence. \end{proof} Thus, the relations $\relCont$ (and hence $\relHor$), $\relICont\subset \relCont$ satisfy the \emph{local} parametric $h$-principle. The same is true for $\relICont\subset \relCont$ as well. We have the following corollary to \autoref{thm:hPrinHorizOpenAlpha}. \begin{corr}\label{corr:hPrinHorizImmOpen} If $\Sigma$ is an open manifold, then the relation $\relHor$ satisfies the parametric $h$-principle. \end{corr} \else It is immediate from the definition that $\solCont = \Sol \relCont$. We shall refer to a section of $\relCont$ as a \emph{formal} $\Omega$-regular, $K$-contact immersion $(\Sigma,K)\to (M,\calD)$. $\relCont$ is a submanifold of $J^1(\Sigma,M)$ and the projection map $p = p^1_0:J^1(\Sigma,M)\to J^0(\Sigma,M)$ restricts to a submersion on $\relCont$. Furthermore, for any $\alpha\ge 1$, the jet projection map $p=p^{\alpha+1}_1:J^{\alpha+1}(\Sigma,M)\to J^1(\Sigma,M)$ restricts to a surjective map $\relCont_\alpha\to \relCont$, having contractible fibers. Thus, the relations $\relCont$ (and hence $\relHor$), $\relICont\subset \relCont$ satisfy the \emph{local} parametric $h$-principle. Together with microflexibility we obtain the next theorem by an application of Flexibility theorem (\autoref{thm:mainFlexibilityTheorem}). \begin{theorem}\label{thm:hPrinImmersionNearSigma} Let $\Sigma_0$ be an arbitrary manifold with a distribution $K_0$. If $(\Sigma, K)$ is one of the following pairs: \begin{enumerate} \item $(\Sigma_0 \times \bbR, K_0\times \bbR)$, \item $(J^1(\Sigma_0,\bbR), \xi_{std})$, where $\xi_{std}$ is the standard contact structure on $J^1(\Sigma_0,\mathbb R)$, \end{enumerate} then $\relCont$ satisfies the parametric $h$-principle near $\Sigma_0 \times 0$. \end{theorem} \fi \subsection{$h$-principle for $\relCont$} In order to get the $h$-principle for $\relCont$ on an arbitrary manifold $\Sigma$, the general plan is to embed $\Sigma$ in a manifold $\tilde{\Sigma}$ with a distribution $\tilde{K}$ such that $\tilde{K}|_\Sigma \cap T\Sigma =K$. \begin{example}\label{exmp:extensions} If $(\Sigma, K)$ is an arbitrary pair then we can take $\tilde{\Sigma} = \Sigma \times \bbR$ and $\tilde{K}=K \times \bbR$. If $K=T\Sigma$ then we can take $\tilde{\Sigma}=J^1(\Sigma, \bbR)$ and $\tilde{K}=\xi_{std}$. 
\end{example} We define an operator $\opContTilde$ for the pair $(\tilde{\Sigma},\tilde{K})$ as we did in the case of $(\Sigma,K)$. Let us denote the associated relations on $\tilde{\Sigma}$ by $\relContTilde_\alpha$, $\alpha\geq 0$, and $\relContTilde \subset \relContTilde_0$. Let $\solContTilde$ be the sheaf of $\Omega$-regular, $\tilde{K}$-contact immersions. As noted earlier, $\solContTilde = \Sol(\relContTilde_\alpha)=\Sol(\relContTilde)$. Since $K = \tilde K|_\Sigma \cap T\Sigma$, the natural restriction morphism $C^\infty(\tilde{\Sigma},M) \to C^\infty(\Sigma, M)$ gives rise to a sheaf homomorphism \begin{align*} ev : \solContTilde|_\Sigma &\to \solCont\\ u &\mapsto u|_{\Sigma} \end{align*} which naturally induces a map $ev: \relContTilde|_{\Sigma} \to \relCont$. To keep the notation light, we have denoted the induced map by $ev$ as well. \ifarxiv \paragraph{\bfseries Notation: } For any subset $C\subset \Sigma$, we shall use $\Op(C)$ (resp. $\tilde \Op(C)$) to denote an unspecified open neighborhood of $C$ in $\Sigma$ (resp. in $\tilde{\Sigma}$). \begin{prop}\label{prop:relContExtensionHPrin} Let $O\subset \Sigma$ be a coordinate chart and $C\subset O$ be a compact subset. Suppose $U\subset M$ is an open subset such that $\calD|_U$ is trivial. Then given any $\Omega$-regular $K$-contact immersion $u:\Op C \to U\subset M$, the 1-jet map $$j^1 : ev^{-1}(u)\to ev^{-1}(F=j^1_u)$$ in the commutative diagram, \[\begin{tikzcd} ev^{-1}(u) \arrow[hookrightarrow]{r} \arrow[dashed]{d}[swap]{j^1} & \solContTilde|_{C\times 0}\arrow{d} \arrow{r} & \solCont|_{C} \arrow{d} & u \arrow[maps to]{d}\\ ev^{-1}(F) \arrow[hookrightarrow]{r} & \secContTilde|_{C \times 0} \arrow{r} & \secCont|_{C} & F = j^1_u \end{tikzcd}\] induces a surjection between the set of path components. \end{prop} \begin{proof} Recall the following sheaves: $$\solCont = \Sol\relCont, \quad \secCont=\Gamma\relCont, \qquad \solContTilde=\Sol\relContTilde,\quad \secContTilde = \Gamma \relContTilde.$$ Fix some neighborhood $V$ of $C$, with $C\subset V\subset O$, over which $u$ is defined. The proof now proceeds through the following steps. \begin{description}[leftmargin=*] \item[Step $1$] \label{prop:relContExtensionHPrin:step1} Given an arbitrary extension $\tilde{F} \in \secContTilde_{C\times 0}$ of $F$ along $ev$, we construct a regular solution $\bar u$ on $\tilde{Op}C$, so that $j^1_{\bar u}|_{\Op C} = \tilde{F}|_{\Op C}$. \item[Step $2$] \label{prop:relContExtensionHPrin:step2} We get a homotopy between $j^1_{\bar u}$ and $\tilde{F}$ in the affine bundle $J^1(W,U)$ which is constant on points of $C$. \item[Step $3$] \label{prop:relContExtensionHPrin:step3} We then push the homotopy obtained in \hyperref[prop:relContExtensionHPrin:step2]{Step $2$} inside $\relContTilde$, using \autoref{lemma:relContRetraction}. Thereby completing the proof. \end{description} \paragraph{\underline{Proof of \hyperref[prop:relContExtensionHPrin:step1]{Step $1$}}:} Suppose $\tilde{F}\in \secContTilde|_{C\times 0}$ is some arbitrary extension of $F$ along $ev$. Using \autoref{lemma:jetLiftingIsoCont}, we then get an arbitrary lift $\hat F\in \Gamma \relContTilde_\alpha|_C$ of $\tilde F$, for $\alpha$ sufficiently large (in fact, $\alpha\ge 4$ will suffice). The formal maps are represented in the following diagram. 
\[\begin{tikzcd} &\relContTilde_\alpha|_{\Op C} \arrow{d}{p^{\alpha+1}_1}\\ &\relContTilde|_{\Op C} \arrow{d}{ev}\\ \Op C \arrow{ruu}{\hat F} \arrow{ru}[swap]{\tilde{F}} \arrow{r}[swap]{F} &\relCont \end{tikzcd}\] We can now define a map $\hat u : \tilde{\Op}(C)\to U$ so that $j^{\alpha+1}_{\hat u}(p,0) = \hat F(p,0)$, by applying a Taylor series argument. In particular, we have $\hat u|_{C\times 0} = u$ and $\hat u$ is regular on points of $\Op(C)\times 0$. Since $C$ is a compact set and regularity is an open condition, we have that $\hat u$ is regular on some open set $W \subset \tilde\Sigma$ satisfying, $C\subset W\subset \bar W \subset \tilde \Op(C)$. Moreover, $\hat u$ is a regular infinitesimal solution along the set $W_0 = (V\times {0})\cap W\subset \tilde\Op(C)$ of order $$\alpha \ge 2.1 + 3.0 + \max\{1, 2.1 + 0\} = 4,$$ for the equation $\tilde{\frD} = 0$, where $\tilde\frD = \opContTilde : v\mapsto v^*\lambda^s|_{\tilde{K}}$ is defined over $C^\infty(W, U)$. Now, by applying \autoref{thm:consistenInversion} we get an $\Omega$-regular immersion $\bar u: V\to U$ such that, $\tilde\frD(\bar u) = 0$ and furthermore, $$j^1_{\bar u} = j^1_{\hat u} \quad \text{on points of $W_0$.}$$ In particular, $j^1_{\bar{u}}(p,0) = \tilde F(p,0)$ for $(p,0)\in W_0$ and so $u$ on $\Op C$ is extended to $\bar{u}$ on $W$. \smallskip \paragraph{\underline{Proof of \hyperref[prop:relContExtensionHPrin:step2]{Step $2$}}:} Let us denote $\tilde u = \bs \tilde{F}$ and define, $v_t(x,s) = \bar u (x, ts)$ for $(x,s)\in W$. Note that, $$v_0(x,s) = \bar u(x,0) = \hat u(x,0) = \tilde{u}(x, 0)$$ and so $v_t$ is a homotopy between the maps $\bar u$ and $\pi^*(\tilde{u}|_{\Op C})|_W$, where $\pi : \Sigma\times \bbR \to \Sigma$ is the projection. Now, with the help of some auxiliary choice of parallel transport on the vector bundle $J^1(W,U)$, we can get isomorphisms $$\varphi(x,s) : J^1_{((x,0), \bar u(x,0))}(W, U) \to J^1_{((x,s), \bar u(x,s))}(W, U), \quad \text{for $(x,s)\in W$, for $s$ sufficiently small,}$$ so that $\varphi(x,0) = \Id$. We then define the homotopy, $$G_t|_{(x,s)} = (1-t) \cdot \varphi(x,ts) \circ \tilde{F}|_{(x,0)} + t \cdot j^1_{\bar u}(x,ts) \; \in J^1_{((x,ts), \bar u(x,ts))}(W,U).$$ Clearly $G_t$ covers $v_t$; we have $$G_0|_{(x,s)} = \varphi(x,0) \circ \tilde{F}|_{(x,0)} = \tilde{F}|_{(x,0)} = \tilde{F}|_{(x,s)} \qquad\text{and}\qquad G_1|_{(x,s)} = j^1_{\bar u}(x,s).$$ Thus we have obtained a homotopy $G_t$ between $\pi^*(\tilde{F}|_{\Op C})|_{\tilde{Op} C}$ and $j^1_{\bar u}$. Similar argument produces a homotopy between $\tilde{F}$ and $\pi^*(\tilde{F}|_{\Op C})|_W$ as well. Concatenating the two homotopies, we have a homotopy $H_t$ between $\tilde{F}$ and $j^1_{\bar u}$, in the affine bundle $J^1(W, U)\to W\times U$. However, $H_t$ need not lie in $\relContTilde$.\smallskip \paragraph{\underline{Proof of \hyperref[prop:relContExtensionHPrin:step3]{Step $3$}}:} By \autoref{lemma:relContRetraction}, we get a tubular neighborhood $\calN\subset J^1(W,U)$ of $\relContTilde$ which \emph{fiber-wise} deformation retracts onto $\relContTilde$. Suppose $\rho : \calN\to \relContTilde$ is such a retraction. Now, note that on points of $C$ $$H_t|_{(x,0)} = (1-t) \cdot \tilde F|_{(x,0)} + t\cdot j^1_{\bar u}(x,0) = \tilde{F}|_{(x,0)}.$$ Since $C$ is compact, we may get a neighborhood $W'$, satisfying $C\subset W^\prime \subset W$, such that the homotopy $H_t|_{W^\prime}$ takes its values in the open neighborhood $\calN$ of $\im\tilde F$. 
Then composing with the retraction $\rho$, we can push this homotopy inside the relation $\relContTilde$, obtaining a homotopy $\tilde{F}_t\in\tilde\Psi|_C$ joining $\tilde F$ to $j^1_{\bar u}$. Observe that the homotopy remains constant on points of $C$. In particular, $ev(\tilde F_t) = F$ on points of $C$. This concludes the proof. \end{proof} \fi \begin{theorem}\label{thm:hPrinGenExtn} Suppose $(\Sigma, K)$ is an arbitrary pair and let $(\tilde{\Sigma},\tilde{K})=(\Sigma\times\mathbb R, K\times\mathbb R)$. If $\relContTilde$ is a microextension of $\relCont$ (\autoref{defn:microextension}), then the relation $\relCont$ satisfies the $C^0$-dense $h$-principle. \end{theorem} \ifarxiv The parametric $h$-principle \autoref{thm:hPrinGenExtn} can be derived from the following theorem by a standard argument as in \cite{eliashbergBook}. \begin{theorem}\label{thm:hPrinParametricRelative} Suppose, for any contractible open set $O\subset \Sigma$, the map $ev : \Gamma \relContTilde|_O \to \Gamma \relCont|_O$ is surjective. For a compact polyhedron $P$ along with a subpolyhedron $Q \subset P$, we are given the formal data $\{F_z\}_{z\in P} \in \Gamma \relCont$ such that $F_z$ is holonomic for $z \in \Op Q \subset P$. Then, there exists a homotopy $F_{z,t} \in \Gamma \relCont$ satisfying the following: \begin{itemize} \item $F_{z,0} = F_z$ for all $z \in P$. \item $F_{z,1}$ is holonomic for all $z \in P$. \item $F_{z, t} = F_z$ for all time $t$, if $z \in \Op Q$. \end{itemize} Furthermore, the homotopy can be chosen to be arbitrarily $C^0$-small in the base. \end{theorem} \begin{proof} The proof is essentially done via a cell-wise induction. Let us denote the base maps $u_z = \bs F_z$. \smallskip \paragraph{\underline{Setup}:} First, we fix a cover $\calU$ of $M$ by open balls, so that $\calD|_U$ is cotrivial for each $U\in \calU$. Next, fix a `good cover' $\calO$ of $\Sigma$ subordinate to the open cover $\{u_z^{-1}(U) \;|\; U\in\calU, \, z \in P\}$. By a \emph{good cover}, we mean that $\calO$ consists of contractible open charts of $\Sigma$, which is closed under finite (non-empty) intersections. Then, fix a triangulation $\{\Delta^\alpha\}$ of $\Sigma$ subordinate to $\calO$. For each top-dimensional simplex $\Delta^\alpha$ choose $O_\alpha \in \calO$ such that $\Delta^{\alpha}\subset O_{\alpha}$. For any other simplex $\Delta$ we denote $$O_\Delta = \bigcap_{\Delta\subset\Delta^\beta} O_\beta, \;\text{where $\Delta^\beta$ is top-dimensional.}$$ Since any $\Delta$ is contained in at most finitely many simplices, $O_\Delta\in\calO$ as it is a good cover. For each $z \in P$, let us also fix $U_{z,\Delta} \in \calU$ so that $O_\Delta \subset u_z^{-1}(U_{z,\Delta})$. Throughout the proof, for any fixed simplex $\Delta$ we shall assume $\Op\Delta \subset O_\Delta$. Furthermore, we shall assert that the base maps of any homotopy of $F_{z}$ near $\Delta$ has their value in $U_{z,\Delta}$. Thus, the $C^0$-smallness of the homotopy can be controlled by a priori choosing the open cover $\calU$ sufficiently small. \smallskip \paragraph{\underline{Induction Base Step}:} Fix a $0$-simplex $v\in \Sigma$. The map $j^1: \solCont \to \secCont$ is a \emph{local} weak homotopy equivalence by \autoref{corr:relContLocalWHE}. 
In particular, $j^1 : \solCont|_v \to \secCont|_v$ is a weak homotopy equivalence and consequently, we have a homotopy $F_{z,t}^v \in \secCont$ defined over $\Op(v)$ so that $$\text{$F_{z,0}^v =F_z$ and $F_{z,1}^v$ is holonomic on $\Op(v)$, for $z \in P$}.$$ Furthermore, we can arrange so that the map $j^1$ is a homotopy equivalence, so that we get $F_{z,t} = F_z$ for all $z \in \Op Q$ as well. Now, by a standard argument using cutoff function, we patch all these homotopies and get a homotopy $F_{z,t}^0 \in \secCont$ satisfying $$\text{$F_{z,1}^0$ is holonomic on $\Op\Sigma^{(0)}$ and $F_{z,t}^0 = F_z$ on $\Sigma\setminus \Op\Sigma^{(0)}$,}$$ where $\Sigma^{(0)}$ is the $0$-skeleton of $\Sigma$. Furthermore, $F_{z,t} = F_z$ for $z \in \Op Q$ by construction. \smallskip \paragraph{\underline{Induction Hypothesis}:} Suppose for $i \ge 0$, we have obtained the homotopy $F_{z,t}^i \in \secCont$ so that $$\text{$F_{z,0}^i = F_{z,1}^{i-1}$,\; $F_{z,1}^i$ is holonomic on $\Op\Sigma^{(i)}$ \; and \; $F_{z,t}^i = F_{z,1}^{i-1}$ on $\Sigma \setminus \Op\Sigma^{(i)}$ for $t\in[0,1], \; z \in P$,}$$ where $\Sigma^{(i)}$ is the $i$-skeleton of $\Sigma$. Furthermore, $F_{z,t}^i = F_z$ for $z \in \Op Q$ and the homotopies are arbitrarily $C^0$-small in the base maps. For notational convenience, we set $F_{z,1}^{-1} = F_z$. \smallskip \paragraph{\underline{Induction Step}:} Fix a $i+1$-simplex $\Delta$. By the hypothesis of the theorem, we first obtain some arbitrary lifts $\tilde F_z^\Delta \in \secContTilde|_\Delta$ of $F_{z,1}^i|_{\Op\Delta} \in\secCont|_\Delta$, along the map $ev$. The lifts can be chosen to be continuous with respect to the parameter space $z \in P$. Since $F_{z,1}^i|_{\Op\partial\Delta}$ is holonomic by the induction hypothesis, applying \autoref{prop:relContExtensionHPrin} for the compact set $C=\partial\Delta$, we obtain a homotopy $$\tilde G_{z,t}^{\partial\Delta}\in\secContTilde|_{\partial\Delta}$$ joining $\tilde F_z^\Delta|_{\Op\partial\Delta}$ to a \emph{holonomic} section $\tilde G_{z,1}^{\partial\Delta}\in \secContTilde|_{\partial\Delta}$. Furthermore, the homotopy satisfies $ev(\tilde G_{z,t}^{\partial \Delta}) = F_{z,1}^i|_{\Op \partial\Delta}$ for $t \in [0,1]$ and for $z \in P$. Using the flexibility of the sheaf $\secContTilde|_\Sigma$ we extend $\tilde G_{z,t}^{\partial\Delta}$ to a homotopy $\tilde G_{z,t}^\Delta \in \secContTilde|_\Delta$ defined on some $\tilde\Op\Delta$, so that $$\text{ $\tilde G_{z,1}^\Delta|_{\tilde\Op\partial\Delta} = \tilde G_{z,1}^{\partial\Delta}$ is holonomic, for $z \in P$.}$$ Furthermore, we can arrange so that $\tilde{G}^\Delta_{z,t} = \tilde{F}_{z}^\Delta$ for all time $t$, whenever $z \in \Op Q$. Denoting $\tilde G_{z,1}^\Delta|_{\tilde \Op \partial\Delta} = j^1_{\tilde u_z^{\partial\Delta}}$ for smooth maps $\tilde u_z^{\partial\Delta}$ defined on $\tilde\Op\partial\Delta$, we consider the map of \emph{fibrations} as follows. 
\[ \begin{tikzcd} \eta^{-1}\big(\tilde u_z^{\partial\Delta}\big) \arrow[hookrightarrow]{r} \arrow{d}[swap]{J} & \solContTilde|_{\Delta} \arrow{r}{\eta} \arrow{d}{J} & \solContTilde|_{\partial\Delta} \arrow{d}{J} & \tilde{u}_z^{\partial\Delta}\arrow[maps to]{d}\\ \chi^{-1}\big(\tilde{G}_{z,1}^\Delta|_{\tilde\Op\partial\Delta}\big) \arrow[hookrightarrow]{r} & \secContTilde|_{\Delta} \arrow{r}[swap]{\chi} & \secContTilde|_{\partial\Delta} & j^1_{\tilde u_z^{\partial\Delta}} = \tilde G^\Delta_{z,1}|_{\tilde\Op\partial\Delta} \end{tikzcd} \] Here $\eta$ is indeed a fibration, as $\solContTilde|_\Sigma$ is flexible by \autoref{thm:mainFlexibilityTheorem}. Now, the rightmost and the middle $J=j^1$ are \emph{local} weak homotopy equivalences by \autoref{thm:microflexibleLocalWHESheafTheorem} and \autoref{lemma:jetLiftingIsoCont}. Hence, they are in fact weak homotopy equivalences by an application of the sheaf homomorphism theorem (\autoref{thm:sheafHomoTheorem}). By the $5$-lemma argument, we then have that $$J:\eta^{-1}(\tilde u_z^{\partial\Delta}) \to \chi^{-1}\big(\tilde{G}_{z,1}^\Delta|_{\tilde\Op\partial\Delta}\big)$$ is a weak homotopy equivalence. Now, $\tilde{G}_{z,1}^\Delta \in \chi^{-1}\big(\tilde{G}_{z,1}^\Delta|_{\tilde\Op\partial\Delta}\big)$. Hence, we have a path $\tilde H_{z,t} \in \chi^{-1}\big(\tilde{G}_{z,1}^\Delta|_{\tilde\Op\partial\Delta}\big)$ joining $\tilde{G}_{z,1}^{\Delta}$ to some \emph{holonomic} section $\tilde H_{z,1}$. In particular, this homotopy is fixed on $\tilde\Op\partial\Delta$. Furthermore, we can make sure that this weak homotopy equivalence is in fact a homotopy equivalence and hence, we can assume $\tilde H_{z,t}$ is constant for $z \in \Op Q$. We get the concatenated homotopy $$\tilde{F}_{z,t} : \tilde F_z^\Delta \sim_{\tilde G_{z,t}^\Delta} \tilde G_{z,1}^\Delta \sim_{\tilde H_{z,t}} \tilde H_{z,1}, \quad z \in P,$$ and set $F_{z,t}^\Delta = ev(\tilde F_{z,t})$. Clearly, $F_{z,0}^\Delta = F_{z,1}^{i}$ on $\Op\partial\Delta$ and $F_{z,1}^\Delta$ is holonomic on $\Op\Delta$. Also, $F_{z,t} = F_z$ for $z \in \Op Q$. Using a standard cutoff function argument, we patch these homotopies together and get the homotopy $F_{z,t}^{i+1}\in\secCont$ satisfying $$\text{$F_{z,0}^{i+1} = F_{z,1}^i$, \; $F_{z,1}^{i+1}$ is holonomic on $\Op\Sigma^{(i+1)}$, \; and $F_{z,t}^{i+1}=F_{z,1}^{i}$ on $\Sigma \setminus \Op\Sigma^{(i+1)}$,}$$ where $\Sigma^{(i+1)}$ is the $i+1$-skeleton of $\Sigma$. Furthermore, $F_{z,t} = F_z$ for $z \in \Op Q$. \smallskip The induction terminates once we have obtained the homotopy $F_{z,t}^k$, where $k = \dim\Sigma$. We end up with a sequence of homotopies in $\secCont$. Concatenating all of them we have the homotopy $$F_{z,t} : F_z = F_{z,1}^{-1} \sim_{F_{z,t}^0} F_{z,1}^0 \sim_{F_{z,t}^1} F_{z,1}^1\sim\cdots\sim_{F_{z,t}^{k-1}} F_{z,1}^{k-1}\sim_{F_{z,t}^k} F_{z,1}^k.$$ Clearly $F_{z,t}\in\secCont$ is the desired homotopy joining $F_z$ to a \emph{holonomic} section $F_{z,1}=F_{z,1}^k\in\secCont$, where $F_{z,t} = F_z$ for $z \in \Op Q$. Since at each step the homotopy can be chosen to be arbitrarily $C^0$-small and since there are only finitely many steps, we see that $F_{z,t}$ can be made arbitrarily $C^0$-small in the base map as well. This concludes the proof. \end{proof} \else \begin{proof}[Sketch of Proof] The microextension hypothesis allows us to lift a section $\sigma$ of $\relCont$ to $\tilde{\sigma}_0 \in \Gamma \relContTilde$ locally. 
Using \autoref{thm:hPrinImmersionNearSigma} we get a homotopy $\tilde{\sigma}_t$ of $\tilde{\sigma}_0$ such that $\tilde{\sigma}_1$ is holonomic. Furthermore, the strong version of the Implicit Function Theorem (\autoref{thm:consistenInversion}) ensures that if $\sigma \in \Gamma \relCont$ is already holonomic on a neighborhood of a compact set $A \subset \Sigma$, then the homotopy $ev\circ \tilde{\sigma}_t$ can be kept constant on $\Op A\subset \Sigma$. This helps us to homotope $\sigma$ to a holonomic section of $\relCont$. \end{proof} \fi \begin{remark} For the special case of $\relHor$, \autoref{thm:hPrinGenExtn} can be compared with the $h$-principle for `overregular' maps (Approximation Theorem in \cite[pg. 258]{gromovCCMetric}). In general, $ev : \relHorTilde|_\Sigma\to \relHor$ fails to be surjective and \emph{overregular maps} are precisely the solutions to $ev\big(\relHorTilde|_\Sigma\big)$. \end{remark} If $\calD$ is a contact distribution then every horizontal immersion $\Sigma\to (M,\mathcal D)$ is regular. Moreover, one does not require the overregularity condition to obtain the $h$-principle for Legendrian immersions (\cite{duchampLegendre}). Indeed, by embedding $\Sigma$ in the contact manifold $\tilde{\Sigma}=J^1(\Sigma,\mathbb R)$ one observes that $\relContTilde$ is a microextension of $\relHor$ and hence $\relHor$ satisfies the $h$-principle. In general, we can prove the following. \begin{theorem}\label{thm:hPrinGenExtnContact} Suppose $\Sigma$ is an arbitrary manifold. Let $(\tilde{\Sigma},\tilde{K})=(J^1(\Sigma,\mathbb R), \xi_{std})$. If $\relContTilde$ is a microextension of $\relHor$ then the relation $\relHor$ satisfies the $C^0$-dense $h$-principle. \end{theorem} \ifarxiv \begin{remark} In \cite{duPlessisHPrinciple}, the author has obtained a similar $h$-principle for \emph{open} relations which admit $\diff$-invariant ``open extensions''. We also refer to \cite[pg. 127-128]{eliashbergBook} where the parametric $h$-principle is obtained under a stronger hypothesis. \end{remark} \fi \ifarxiv \input{sec_jet_lifting} \fi
\section{Introduction} The obstacle problem corresponding to an obstacle \;$f$ in \begin{equation}\label{11} W^{1,2}_g (\Omega) = \{ u \in W^{1,2} (\Omega) : \;\;u = g \, \;\;\;\textrm{on} \, \;\;\;\partial \Omega\} \end{equation} consists of minimizing the Dirichlet energy \[ \int_{\Omega} |Du(x)|^2 \, dx \] over the set \begin{equation}\label{eq:trace} \mathbb K^2_{f, g} = \{ u \in W^{1,2}_g (\Omega) : \;\;u(x) \geq f(x) \,\; \;\;\textrm{in} \ \, \Omega\} \end{equation} where \;$\Omega \subset \mathbb R^n$\; is a bounded and smooth domain, \;$Du$\; is the gradient of \;$u$, \; and \;$g \in tr(W^{1, 2}(\Omega))$ with \;$tr$\; the trace operator. In \eqref{11}, the equality \;$u= g \,\; \textrm{on} \,\; \partial \Omega$ \;is in the sense of trace. This problem is used to model the equilibrium position of an elastic membrane whose boundary is held fixed at $g$ and is forced to remain above a given obstacle $f.$ It is known that the obstacle problem admits a unique solution $v \in \mathbb{K}^2_{f, g} $. That is, there is a unique \;$v \in \mathbb{K}^2_{f, g} $ such that \[ \int_{\Omega} |Dv(x)|^2 \, dx \leq \int_{\Omega} |Du(x)|^2 \, dx, \;\;\;\;\forall u \in \mathbb{K}^2_{f, g}. \] \vspace{8pt} In \cite{ALY98} Adams, Lenhart and Yong introduced an optimal control problem for the obstacle problem by studying the minimizer of the functional \[ J_2(\psi) = \frac{1}{2} \int_{\Omega} (|T_2(\psi) - z|^2 + |D\psi|^2 )\, dx. \] In the above variational problem, following the terminology in control theory \cite{L71}, $\psi$ is called the control variable and $T_2(\psi)$ is the corresponding state. The control $\psi$ lies in the space $W^{1,2}_0(\Omega)$, the state $T_2(\psi)$ is the unique solution for the obstacle problem corresponding to the obstacle $\psi$ and the profile $z$ is in $L^2(\Omega).$ The authors proved that there exists a unique minimizer $\bar \psi \in W^{1,2}_0(\Omega)$ of the functional $J_2$. Furthermore, they showed that \;$T_2(\bar \psi) = \bar\psi.$ \vspace{8pt} Following suit, for \;$1 < p < \infty$, and $z \in L^p(\Omega)$, Lou in \cite{LH02} considered the variational problem of minimizing the functional \begin{equation}\label{eq:pp} \tag{$P_{p}$} \bar J_p(\psi) = \frac{1}{p} \int_{\Omega} |T_p(\psi) - z|^p + |D\psi|^p\, dx \end{equation} for $ \psi \in W^{1,p}_0(\Omega):=\{ u \in W^{1,p} (\Omega) : \;\;u=0 \;\;\; \textrm{on} \, \;\; \partial \Omega\} $ and established that the problem admits a minimizer $\bar \psi.$ Here $T_p(\psi)$ is the unique solution for the $p$-obstacle problem with obstacle \;$\psi\in W^{1,p}_0(\Omega)$, see \cite{ALS15} and references therein for discussions about the \;$p$-obstacle problem. We remind the reader that the $p$-obstacle problem with obstacle $f \in W^{1,p}_g (\Omega)$ refers to the problem of minimizing the $p$-Dirichlet energy \[ \int_{\Omega} |Du(x)|^p \, dx \] among all functions in the class \begin{equation*} \mathbb K^p_{f, g} = \{ u \in W^{1,p} (\Omega) : \;\;\;u \geq f \, \;\;\; \textrm{in} \ \,\;\; \Omega \, \;\;\; \textrm {and} \, \;\;u = g \, \;\;\; \textrm{on} \, \;\;\;\partial \Omega\}, \end{equation*} with \;$g\in tr(W^{1, p}(\Omega))$. It is further shown in \cite{LH02} that, as in the case of $p=2,$ $T_p(\bar \psi) = \bar \psi.$ \vspace{8pt} For the boundary data \;$g\in Lip(\partial \Omega)$, letting \;$p \to \infty$, one obtains a limiting variational problem of $L^{\infty}$-type which is referred to in the literature as the infinity obstacle problem or $\infty$-obstacle problem (see \cite{RTU15}). 
That is, given an obstacle \;$f \in W^{1, \infty}_g(\Omega)$ one considers the minimization problem: \begin{equation}\label{inftyobstacelvariational} \text{Finding }\;\;u_{\infty}\in \mathbb K^{\infty}_{f, g}:\;\;\; ||Du_{\infty}||_\infty=\inf_{u \in \mathbb K^{\infty}_{f, g}} ||Du||_\infty, \end{equation} where \begin{equation*}\label{eq:defkti} \mathbb K^{\infty}_{f, g} = \{ u \in W^{1,\infty}(\Omega) : \;\;u \geq f\;\;\; \textrm{in} \;\,\;\; \Omega \;\;\;\, \textrm{and} \;\; u = g \, \;\;\;\textrm{on} \, \;\;\; \partial \Omega\}, \;\;\text{and} \;\;\;||\cdot||_{\infty}:=ess\,sup\;|\cdot |. \end{equation*} It is established in \cite{RTU15} that the minimization problem \eqref{inftyobstacelvariational} has a solution \; \begin{equation}\label{c1} u_{\infty}:=u_{\infty}(f)\in \mathbb K^{\infty}_{f, g} \end{equation} which verifies \begin{equation}\label{infinityp} -\Delta_{\infty}u_{\infty}\geq 0 \;\;\; \text{in}\;\;\Omega\;\;\; \text{in a weak sense}. \end{equation} More importantly, the authors in \cite{RTU15} characterize \;$u_{\infty}$\; as the smallest infinity superharmonic function on $\Omega$ that is larger than the obstacle $f$ and equals $g$ on the boundary. Thus for a fixed \;$F\in Lip(\partial \Omega)$, this generates an obstacle-to-solution operator $$T_{\infty}: W^{1,\infty}_F(\Omega)\longrightarrow W^{1,\infty}_F(\Omega)$$ defined by \begin{equation}\label{eq:definitionti} T_{\infty}(f):=u_{\infty}(f)\in W^{1,\infty}_F(\Omega),\;\;\;\;f \in W^{1,\infty}_F(\Omega), \end{equation} where $$W^{1,\infty}_F(\Omega):= \{ u \in W^{1,\infty}(\Omega) : u = F\, \;\;\;\textrm{on} \, \;\;\; \partial \Omega\}.$$ \vspace{10pt} In this note, we consider a natural optimal control problem for the infinity obstacle problem. More precisely, for \;$F\in Lip (\partial \Omega)$\; and for \;$z \in L^{\infty}(\Omega)$\; fixed, we introduce the functional \begin{align*} J_{\infty} (\psi) = \max\{||T_{\infty}(\psi) - z||_{\infty} , \, ||D\psi||_{\infty} \},\;\;\;\psi\in W^{1, \infty}_F(\Omega) \end{align*} and study the problem of existence of \;$\psi_{\infty} \in W^{1,\infty}_F (\Omega)$\; such that: \begin{align} \tag{$P_{\infty}$} J_{\infty} (\psi_{\infty}) \leq J_{\infty} (\psi) ,\qquad\;\forall \quad \psi \in W^{1,\infty}_F (\Omega). \label{eq:inftycase} \end{align} Following the terminology of optimal control theory, a function \;$\psi_{\infty}$\; satisfying \eqref{eq:inftycase} is called an \emph{optimal control} and the state \;$T_{\infty}(\psi_{\infty})$\; is called an \emph{optimal state}. \vspace{10pt} Several variants of control problems where the control variable is the obstacle have been studied by different authors since the first of such works appeared in \cite{ALY98}. The literature is vast, but to mention a few, in \cite{AL03} the authors studied a generalization of \cite{ALY98} by adding a source term. In \cite{AL02} a similar problem is studied when the state is a solution to a parabolic variational inequality. In \cite{L01} the author studied regularity of the optimal state obtained in \cite{ALY98}. When the state is governed by a bilateral variational inequality, results are obtained in \cite{BL04}, \cite{QC00}, \cite{CY05} and \cite{CCT05}. Optimal control for higher order obstacle problems appears in \cite{AHL10} and \cite{GN17}. Related works where the control variable is the obstacle are also studied in \cite{DM15, SM12} and the references therein. \vspace{10pt} In this note, we prove that the optimal control problem \eqref{eq:inftycase} associated to \;$J_{\infty}$\; is solvable. 
Precisely, we show the following result: \begin{theorem}\label{eq:main} Assuming that \;$\Omega \subset \mathbb R^n$\; is a bounded and smooth domain, $F\in Lip (\partial \Omega )$, and \;$z \in L^{\infty}(\Omega)$, \;$J_{\infty}$\; admits an optimal control \;$u_{\infty}\in W^{1, \infty}_F(\Omega)$\; which is also an optimal state, i.e.,\; $$u_{\infty}=T_{\infty}(u_{\infty}).$$ \end{theorem} \vspace{10pt} Using arguments similar to the ones used in the proof of Theorem \ref{eq:main}, we also show the convergence of the minimal value of an optimal control problem associated to \;$\bar J_p$\; to the minimal value of the optimal control problem corresponding to \;$J_{\infty}$\; as \;$p$ \;tends to infinity. Indeed we prove the following result: \begin{theorem}\label{eq:convminima} Let \;$\Omega \subset \mathbb R^n$\; be a bounded and smooth domain, $F\in Lip (\partial \Omega )$, and \;$z \in L^{\infty}(\Omega)$. Then setting \[ J_p= (p\bar J_p)^{\frac{1}{p}}, \;\;C_p = \min_{\psi \in W^{1, p}_F(\Omega)}J_p(\psi)\;\;\text{for}\;\;\;1<p<\infty,\;\; \text{and}\;\;\;\;\;C_{\infty} = \min_{\psi \in W^{1,\infty}_F(\Omega)}J_{\infty}(\psi), \] where \;$\bar J_p$\; is as in \eqref{eq:pp}, we have \[ \lim_{p \to \infty}C_p =C_{\infty}. \] \end{theorem} \vspace{10pt} In the proofs of the above results, we use the \;$p$-approximation technique as in the study of the \;$\infty$-obstacle problem combined with the classical methods of weak convergence in Calculus of Variations. As in the study of the \;$\infty$-obstacle problem, here also the key analytical ingredients are the \;$L^q$-characterization of \;$L^{\infty}$\; and H\"older's inequality. The difficulty arises from the fact that the unicity question for the $\infty$-obstacle problem is still an open problem to the best of our knowledge. To overcome the latter issue, we make use of the characterization of the solution of the \;$\infty$-obstacle problem by Rossi-Teixeira-Urbano \cite{RTU15}. \section {Preliminaries} One of the most popular ways of approaching problems related to minimizing a functional of \;$L^{\infty}$-type is to follow the idea first introduced by Aronsson in \cite{GA65}, which involves interpreting an \;$L^{\infty}$-type minimization problem as a limit when \;$p \to \infty$\; of an \;$L^{p}$-type minimization problem. In this note, this \;$p$-approximation technique will be used to show existence of an optimal control for \;$J_{\infty}$. In order to prepare for our use of the \;$p$-approximation technique, we are going to start this section by discussing some related \;$L^p$-type variational problems. \vspace{10pt} Let \;$\Omega \subset \mathbb R^n$\; be a bounded and smooth domain and \;$g \in Lip(\partial\Omega).$ Moreover let \;$\psi \in W^{1,\infty}_g(\Omega)$\; be fixed and \;$1<p<\infty$. Then, as described earlier, the \;$p$-obstacle problem with obstacle \;$\psi$\; corresponds to finding a minimizer of the functional \begin{equation}\label{pobstaclevariationalform} I_p(v) = \int_{\Omega} |Dv(x)|^p dx \end{equation} over the space \;\;$\mathbb{K}^p_{\psi, g} = \{ v \in W^{1,p}(\Omega) : \;v \geq \psi, \;\;\; \textrm {and} \, \;\;\;v= g \,\;\;\; \textrm{on} \,\;\;\; \partial \Omega\}$. 
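Before recalling the equation satisfied by the minimizer, let us note, only for orientation, the standard first-order characterization of minimizers (a direct consequence of the convexity of the functional \;$I_p$\; and of the convexity of the set \;$\mathbb{K}^p_{\psi, g}$): a function \;$u \in \mathbb{K}^p_{\psi, g}$\; minimizes (\ref{pobstaclevariationalform}) over \;$\mathbb{K}^p_{\psi, g}$\; if and only if it satisfies the variational inequality \[ \int_{\Omega} |Du|^{p-2} Du \cdot D(v - u) \, dx \, \geq \, 0 \;\;\;\; \textrm{for all} \;\;\; v \in \mathbb{K}^p_{\psi, g}. \]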
The energy integral (\ref{pobstaclevariationalform}) admits a unique minimizer \;$u_p \in \mathbb{K}^p_{\psi, g}$. The minimizer \;$u_p$\; is not only \;$p$-superharmonic, i.e.\;$\Delta_p u_p \leq 0$, but is also a weak solution to the following system \begin{equation}\label{pobstacleproblem} \begin{cases} -\Delta_p u \geq 0 & \quad \quad \textrm{in} \quad \Omega\\ -\Delta_p u \, (u-\psi) = 0 & \quad \quad \textrm{in} \quad \Omega\\ u \geq \psi &\quad \quad \textrm{in} \quad \Omega \end{cases} \end{equation} where $\Delta_p$ is the $p$-Laplace operator given by \[\Delta_p u := div( |Du|^{p-2} Du).\] Moreover, it is known that the \;$p$-obstacle problem is equivalent to the system \eqref{pobstacleproblem} (see \cite{L71} or \cite{L15}), and hence we will refer to \eqref{pobstacleproblem} as the \;$p$-obstacle problem as well. On the other hand, by the equivalence of weak and viscosity solutions established in \cite{L15} (and \cite{JJ12}), \;$u_p$\; is also a viscosity solution of \eqref{pobstacleproblem} according to the following definition. \begin{definition} A function $u \in C(\Omega)$ is said to be a viscosity subsolution (supersolution) to \begin{equation} \begin{aligned}\label{NLPDE} F(x, u, Du, D^2 u) & = 0 \quad \textrm{in} \, \quad \Omega\\ u & = 0 \quad \textrm{on} \, \quad \partial \Omega \end{aligned} \end{equation} if for every $\phi \in C^2(\Omega)$ and every $x_0 \in \Omega$ such that $\phi-u$ attains a local minimum (resp. maximum) at \;$x_0$\; in \;$\Omega$, we have \[ F(x_0, u(x_0), D\phi(x_0), D^2 \phi(x_0)) \leq 0 \quad (\textrm{resp. } \; \geq 0). \] The function \;$u$\; is called a viscosity solution of \eqref{NLPDE} in \;$\Omega$\; if \;$u$\; is both a viscosity subsolution and a viscosity supersolution of \eqref{NLPDE} in \;$\Omega.$ \end{definition} \vspace{10pt} The asymptotic behavior of the sequence of minimizers \;$(u_p)_{p>1}$\; as \;$p$\; tends to infinity has been investigated in \cite{RTU15}. In fact, in \cite{RTU15}, it is established that for a fixed \;$\psi\in W^{1, \infty}_g(\Omega)$, there exists \;$u_{\infty}=u_{\infty}(\psi)\in \mathbb{K}^{\infty}_{\psi, g} = \{ v \in W^{1,\infty}_g(\Omega) : v \geq \psi\}$ \;such that \;$u_p \to u_\infty$\; locally uniformly in $\bar\Omega$, and that for every \;$q\geq 1$, \;$u_p$\; converges to \;$u_{\infty}$\; weakly in \;$W^{1,q}(\Omega).$ Furthermore, $u_\infty$\; is a solution to the \;$\infty$-obstacle problem \begin{equation}\label{inftyobstacelvariationalform} \min_{v \in \mathbb{K}^{\infty}_{\psi, g}} ||Dv||_\infty. \end{equation} For \;$\Omega$\; convex (see \cite{ACJ04}), the variational problem \eqref{inftyobstacelvariationalform} is equivalent to the minimization problem \begin{equation*}\label{inftyobstacelvariationalform1} \min_{v \in \mathbb{K}^{\infty}_{\psi, g}} \mathcal{L}(v), \end{equation*} where \[ \mathcal{L}(v) = \sup_{(x, y)\in \Omega^2, \;x \neq y}\dfrac{|v(x) - v(y)|}{|x - y|}. \] Moreover, in \cite{RTU15}, it is shown that \;$u_\infty$\; is a viscosity solution to the following system. \begin{equation*}\label{obstacleproblemforinftylaplacian} \begin{cases} -\Delta_\infty u \geq 0 &\quad \quad \textrm{in} \quad \Omega\\ -\Delta_\infty u \, ( u-\psi)= 0 &\quad \quad \textrm{in} \quad \Omega\\ u \geq \psi &\quad \quad \textrm{in} \quad \Omega \end{cases} \end{equation*} where \;$\Delta_\infty$\; is the \;$\infty$-Laplacian and is defined by \[ \Delta_\infty u = \langle D^2uDu, Du\rangle =\sum_{i=1}^n\sum_{j=1}^nu_{x_i}u_{x_j} u_{x_ix_j}.
\] \vspace{10pt} Recalling that \;$u$\; is said to be {\it infinity superharmonic} or \;$\infty$-{\it superharmonic} if $-\Delta_\infty u \geq 0$\; in the viscosity sense, we have the following characterization of \;$u_{\infty}$\; in terms of infinity superharmonic functions, which is proven in \cite{RTU15}. We would like to emphasize that this will play an important role in our arguments. \begin{lemma}\label{eq:infinfhar} Setting $$\mathcal{F}^+=\{v\in C(\Omega), \; \;-\Delta_\infty v\geq 0\;\;\text{in}\;\;\Omega \;\;\text{in the viscosity sense}\}$$\; and \;$$\mathcal{F}^+_\psi=\{v\in \mathcal{F}^+,\;\;v\geq\psi \;\;\text{in}\;\;\Omega,\;\;\text{and}\;\;v=\psi\;\;\text{on}\;\;\partial \Omega\},$$ we have \begin{equation}\label{eq:infinf} T_{\infty}(\psi)=u_{\infty}=\inf_{v\in \mathcal{F}^+_\psi} v, \end{equation} with \;$T_{\infty}$\; as defined earlier in \eqref{eq:definitionti}. \end{lemma} \vspace{10pt} Lemma \ref{eq:infinfhar} implies the following characterization of infinity superharmonic functions as fixed points of \;$T_{\infty}$. This characterization plays a key role in our \;$p$-approximation scheme for existence. \vspace{8pt} \begin{lemma}\label{eq:inftc} Assuming that \;$u\in W^{1, \infty}_g(\Omega)$, \;$u$\; being infinity superharmonic is equivalent to \;$u$\; being a fixed point of \;$T_{\infty}$, i.e.\;$$T_{\infty}(u) =u.$$ \end{lemma} \begin{proof} Let \;$u\in W^{1, \infty}_g(\Omega)$\; be an infinity superharmonic function and let \;$v$\; be defined by \;$v=T_{\infty}(u)$. Then clearly the definition of \;$v$\; and Lemma \ref{eq:infinfhar} imply \;$v \geq u$. On the other hand, since \;$u\in W^{1, \infty}_g(\Omega)$\; and is an infinity superharmonic function, we deduce from Lemma \ref{eq:infinfhar} that \;$u\geq T_{\infty}(u)= v$. Thus, we get \;$T_{\infty}(u)=u$. Now if \;$u=T_{\infty}(u)$, then using again Lemma \ref{eq:infinfhar} or \eqref{c1}-\eqref{eq:definitionti}, we obtain that \;$u$\; is an infinity superharmonic function. Hence the proof of the lemma is complete. \end{proof} \vspace{8pt} To run our \;$p$-approximation scheme for existence, another crucial ingredient that we will need is an appropriate characterization of the limit of a sequence of solutions \;$w_p$\; of the \;$p$-obstacle problem \eqref{pobstacleproblem} with obstacles \;$\psi_p$\; under uniform convergence of both \;$w_p$\; and \;$\psi_p$. Precisely, we will need the following lemma. \begin{lemma}\label{lem:solnofpobstacleconvergetoinftyobstacel} If \;$w_p$\; is a solution to the \;$p$-obstacle problem \eqref{pobstacleproblem} with obstacle \;$\psi_p$, that is, $w_p$ satisfies \begin{equation}\label{pobstacleproblem1} \begin{cases} -\Delta_p w_p \geq 0 & \quad \quad \textrm{in} \quad \Omega\\ -\Delta_p w_p \, (w_p-\psi_p)=0 & \quad \quad \textrm{in} \quad \Omega\\ w_p \geq \psi_p &\quad \quad \textrm{in} \quad \Omega \end{cases} \end{equation} in the viscosity sense, and if moreover \;$w_p \to w_{\infty}$\; and \;$\psi_p \to \psi_{\infty}$\; locally uniformly in \;$\overline{\Omega}$, then \;$w_{\infty}$\; is a solution in the viscosity sense of the following system \begin{equation}\label{obstacleproblemforinftylaplacian1} \begin{cases} -\Delta_\infty w_{\infty} \geq 0 &\quad \quad \textrm{in} \quad \Omega\\ -\Delta_\infty w_{\infty} \, (w_{\infty}-\psi_{\infty})=0 &\quad \quad \textrm{in} \quad \Omega\\ w_{\infty} \geq \psi_{\infty} &\quad \quad \textrm{in} \quad \Omega.
\end{cases} \end{equation} \end{lemma} \begin{proof} First of all, note that since \;$w_p \geq \psi_p$\; and \;$-\Delta_p w_p \geq 0$\; in the viscosity sense in \;$\Omega$\; for every \;$p$, \;$w_p \to w_{\infty}$\; and \;$\psi_p \to \psi_{\infty}$\; both locally uniformly in \;$\overline{\Omega}$, and \;$\overline{\Omega}$ is compact, we have \;$w_{\infty}\geq \psi_{\infty}$\; and \;$-\Delta_\infty w_{\infty} \geq 0$ in the viscosity sense in $\Omega.$ It thus remains to prove that $-\Delta_\infty w_{\infty} \, (w_{\infty}-\psi_{\infty}) =0 \quad \textrm{in} \quad \Omega$,\; which (because of \;$w_{\infty}\geq \psi_{\infty}$\; in $\Omega$) is equivalent to $-\Delta_\infty w_{\infty} =0\, \quad \textrm{in} \quad \{w_{\infty} >\psi_{\infty}\}:=\{x\in \Omega:\;\; w_{\infty} (x)>\psi_{\infty}(x)\}$. Thus, to conclude the proof, we are going to show \;$-\Delta_\infty w_{\infty} =0\, \quad \textrm{in} \quad \{w_{\infty} >\psi_{\infty}\}$. To that end, fix $y\in \{w_{\infty} >\psi_{\infty}\}.$ Then, by continuity there exist an open neighborhood \;$V$\; of \;$y$\; in $\Omega$ such that \;$\overline V$\; is a compact subset of $\Omega$, and a small real number \;$\delta>0$ such that $w_{\infty} > \delta > \psi_{\infty}$ in $\overline V$. Thus, from \;$w_p \to w_{\infty}$, \;$\psi_p \to \psi_{\infty}$\; locally uniformly in \;$\overline{\Omega}$, and \;$\overline V$\; a compact subset of \;$\Omega$, we infer that for sufficiently large \;$p$ \begin{equation}\label{eq:inside} w_p > \delta > \psi_p \quad \textrm{in} \quad \overline V. \end{equation} On the other hand, since \;$w_p$\; is a solution to the \;$p$-obstacle problem \eqref{pobstacleproblem} with obstacle \;$\psi_p$,\; clearly \;$-\Delta_p w_p = 0$\; in \;$\{w_p > \psi_p \}:=\{x\in \Omega:\;\; w_p(x) > \psi_p(x) \}$. Thus, \eqref{eq:inside} implies \;$-\Delta_p w_p = 0$\; in the sense of viscosity in \;$V$. Hence, recalling that \;$w_p \to w_{\infty}$\; locally uniformly in \;$\overline{\Omega}$\; and letting \;$p \to \infty,$ we obtain \[ -\Delta_\infty w_{\infty} = 0\;\;\;\text{in the sense of viscosity in}\;\; \;V. \] Thus, since \;$y\in V$\; is arbitrary in \;$\{w_{\infty}>\psi_{\infty}\}$, we arrive at \[ -\Delta_\infty w_{\infty} = 0\;\;\;\text{in the sense of viscosity in}\;\; \;\{w_{\infty}>\psi_{\infty}\}, \] thereby ending the proof of the lemma. \end{proof} \vspace{10pt} On the other hand, to show the convergence of the minimal values of \;$J_p$\; to that of \;$J_{\infty}$, we will make use of the following elementary results. \begin{lemma}\label{eq:liminfmax} Suppose \;$\{a_p\}$\; and \;$\{b_p\}$\; are nonnegative sequences with \;$$\liminf_{p \to \infty}a_p=a\; \;\;\text{and }\;\;\;\liminf_{p \to \infty} b_p = b.$$ Then \[ \liminf_{p \to \infty} \max\{a_p, b_p\} = \max\{a, b\}. \] \end{lemma} \begin{proof} Let $\{b_{p_k}\}$ be a subsequence converging to $b = \liminf_{p \to \infty} b_p.$ Then $$\lim_{k \to \infty} \max\{a_{p_k}, b_{p_k}\} = \max \{a, b\}.$$ Since the $\liminf$ is the smallest limit point, we have \begin{equation}\label{i1} \liminf_{p \to \infty} \max\{a_p, b_p\} \leq \max\{a, b\}. \end{equation} On the other hand, \;$$a_p, \, b_p \leq \max\{a_p, b_p\}\;\;\text{ for all}\;\;\; p.$$ Thus $$b= \liminf_{p \to \infty} b_p \leq \liminf_{p \to \infty} \max\{a_p, b_p\},$$ and likewise $$a \leq \liminf_{p \to \infty} \max\{a_p, b_p\}.$$ Consequently \begin{equation}\label{i2} \liminf_{p \to \infty} \max\{a_p, b_p\} \geq \max\{a, b\}.
\end{equation} Finally, \eqref{i1} and \eqref{i2} conclude the proof of the lemma. \end{proof} \begin{lemma}\label{limofsumofpowersofp} Suppose \;$\{a_p\}$\; and \;$\{b_p\}$\; are nonnegative sequences with \;$$\liminf_{p \to \infty}a_p=a\; \;\;\text{and }\;\;\;\liminf_{p \to \infty} b_p = b.$$ Then \[ \liminf_{p \to \infty} (a_p^p + b_p^p)^{1/p} =\max\{a, b\}. \] \end{lemma} \begin{proof} It follows directly from the trivial inequality \[ 2^{\frac{1}{p}}\max\{a_p, b_p\}\geq (a_p^p + b_p^p)^{1/p}\geq\max\{a_p, b_p\}, \;\;\forall\, p\geq 1, \] Lemma \ref{eq:liminfmax} and the fact that \;$\liminf_n(a_nb_n)=(\lim_n a_n)(\liminf_n b_n)$\; if \;$\lim_n a_n>0$. \end{proof} \section{Existence of optimal control for \;$J_{\infty}$\; and limit of \;$C_p$} In this section, we show the existence of an optimal control for \;$J_{\infty}$\; and show that \;$C_p$\; converges to \;$C_{\infty}$\; as \;$p\to \infty$. We divide it into two subsections. In the first one we show existence of an optimal control for \;$J_{\infty}$\; via the \;$p$-approximation technique, and in the second one we show that \;$C_p$\; converges to \;$C_{\infty}$\; as \;$p$\; tends to infinity. \subsection{Existence of optimal control}\label{limitofpproblem} In this subsection, we show the existence of a minimizer of \;$J_{\infty}$\; via the \;$p$-approximation technique, using solutions of the optimal control problem for \;$J_p$. To this end, we start by recalling some optimality facts about \;$J_p$\; inherited from \;$\bar J_p$\; (see \eqref{eq:pp} for its definition) and mentioned in the introduction. For \;$\Omega \subset \mathbb R^n$\; a bounded and smooth domain, $z \in L^{\infty}(\Omega)$, $F \in Lip(\partial \Omega)$, and \;$1< p < \infty$, we recall that the functional \;$J_p$\; is defined by the formula \begin{equation}\label{def} J_p(\psi) =\left[ \int_{\Omega} \left(|T_p(\psi) - z|^p + |D \psi|^p\right) dx \right] ^{1/p},\;\;\;\psi\in W^{1, p}_{ F}(\Omega) \end{equation} and that the optimal control problem for \;$J_p$\; is the variational problem of minimizing \;$J_p$, namely \begin{equation}\label{eq:pfnal} \inf_{\psi\in W^{1, p}_{ F}(\Omega)}J_p(\psi) \end{equation} over \;$W^{1, p}_{ F}(\Omega)$, where \[ W^{1, p}_{ F}(\Omega)= \{\psi \in W^{1,p} (\Omega) : \;\psi= F\, \;\;\;\; \textrm{on} \, \;\;\partial \Omega\}, \] and \;$T_p(\psi)$ is the solution to the \;$p$-obstacle problem with obstacle \;$\psi.$ Moreover, as for the functional \;$\bar J_p$, \;$J_p$\; also admits a minimizer \;$\psi_p\in W^{1, p}_{ F}(\Omega)$\; verifying \begin{equation}\label{cspn} T_p(\psi_p) = \psi_p. \end{equation} As mentioned in the introduction, for more details about the latter results, see \cite{ALY98} for \;$p =2$\; and \cite{LH02} for \;$p>2$. \vspace{10pt} To continue, let us pick \;$\eta \in W^{1, \infty}_F(\Omega)$. Since \;$\eta$\; competes in the minimization problem (\ref{eq:pfnal}), we have \[ \int_{\Omega} |D \psi_p|^p dx \leq J_p^p(\psi_p) \leq J_p^p(\eta) = \int_{\Omega} \left(|T_p(\eta) - z|^p + |D \eta|^p\right) dx.
\] Since \;$\overline \Omega$\; is compact and \;$T_p(\eta)\to T_{\infty}(\eta)$\; as \;$p\to \infty$\; locally uniformly on $\overline \Omega$ (which follows from the definition of \;$T_{\infty}(\eta)$), we deduce that for \;$p$\; very large \begin{equation}\label{estimate of gradient of psi p} \int_{\Omega} |D \psi_p|^p dx \leq M^p |\Omega| \end{equation} for some \;$M$\; which depends only on \;$||\eta||_{W^{1, \infty}},$ $\;||T_{\infty}(\eta)||_{C^{0}}$\; and \;$||z||_{\infty}.$ Furthermore, let us fix \;$1<q < p.$ Then, by using H\"older's inequality, we can write \begin{equation}\label{eq:holderonpsi_p} \int_{\Omega}|D \psi_p|^q dx \leq \left\{\int_{\Omega}(|D \psi_p|^q)^{p/q} dx \right\}^{q/p} |\Omega|^{\frac{p-q}{p}} \end{equation} and we obtain by using (\ref{estimate of gradient of psi p}) that for \;$p$\; very large \[ \int_{\Omega}|D \psi_p|^q dx\leq M^q |\Omega|^{\frac{q}{p}}|\Omega|^{\frac{p-q}{p}} \] and, raising both sides to the power $1/q$, we derive that for \;$p$\; very large, there holds \[ ||D \psi_p||_{L^q} \leq M |\Omega|^{1/q}, \] with \;$||\cdot ||_{L^q} $\; denoting the classical \;$L^q(\Omega)$-norm. This shows that the sequence \;$\{\psi_p\}$\; is bounded in \;$W^{1, q}_F(\Omega)$\; in the gradient norm for every \;$q$\; with a bound independent of \;$q$, and, by Poincar\'e's inequality, that for every \;$1<q<\infty$, the sequence \;$\{\psi_p\}$\; is bounded in \;$W^{1, q}_F(\Omega)$\; in the standard \;$W^{1,q}(\Omega)$-norm. Therefore, by classical weak compactness arguments, we have that, up to a subsequence, \begin{equation}\label{uniwe} \psi_p\ \longrightarrow \psi_{\infty}, \;\; \text{as}\; \;p \to \infty\;\;\;\text{locally uniformly in}\;\;\; \overline\Omega\;\; \text{and weakly in} \;\;\;W^{1,q}(\Omega)\;\;\forall\ \;1<q<\infty. \end{equation} Notice that consequently $||D\psi_{\infty}||_{L^q} \leq M |\Omega|^{1/q}$ \;\;for all\;\; $1<q<\infty.$ Thus, we deduce once again by Poincar\'e's inequality that \begin{equation}\label{inspace} \psi_{\infty} \in W^{1,\infty}_F(\Omega). \end{equation} \vspace{10pt} We now want to show that \;$\psi_{\infty}$\; is a minimizer of \;$J_{\infty}.$ To that end, we make the following observation, which is a consequence of Lemma \ref{lem:solnofpobstacleconvergetoinftyobstacel}. \begin{lemma}\label{lem:solnofTpconvergetoTinfty} The function \;$\psi_{\infty}$\; is a fixed point of \;$T_{\infty}$, namely $$ T_{\infty} (\psi_{\infty}) = \psi_{\infty},$$ and the solutions \;$T_p(\psi_p)$\; of the \;$p$-obstacle problem with obstacle \;$\psi_p$\; verify: as \;$p \to \infty$, $$ T_p(\psi_p)\ \longrightarrow T_{\infty}(\psi_{\infty})\;\;\;\text{locally uniformly in}\;\; \overline\Omega\;\; \text{and weakly in} \;\;W^{1,q}(\Omega)\;\;\forall\ \;1<q<\infty. $$ \end{lemma} \begin{proof} We know that \;$T_p(\psi_p) = \psi_p$ (see \eqref{cspn}). Thus, using \eqref{uniwe} and Lemma \ref{lem:solnofpobstacleconvergetoinftyobstacel} with obstacles \;$\psi_p$\; and solutions \;$w_p = T_p(\psi_p) = \psi_p$, we have that \;$T_p(\psi_p) \to \psi_{\infty}$\; locally uniformly in \;$\overline \Omega$, weakly in \;$W^{1,q}(\Omega)$\; for every \;$1<q<\infty$, and that \;$\psi_{\infty}$\; is infinity superharmonic. Thus, recalling \eqref{inspace}, Lemma \ref{eq:inftc} implies \;$T_{\infty} (\psi_{\infty}) = \psi_{\infty}$. Hence the proof of the lemma is complete.
\end{proof} \vspace{10pt} Now, with all the ingredients at hand, we are ready to show that \;$\psi_{\infty}$\; is a minimizer of \;$J_{\infty}.$ Indeed, we are going to show the following proposition: \begin{proposition} Let \;$\Omega \subset \mathbb R^n$\; be a bounded and smooth domain, $F\in Lip (\partial \Omega)$\; and \;$z \in L^{\infty}(\Omega).$ Then \;$\psi_{\infty}$\; is a minimizer of \;$J_{\infty}$\; on \;$W^{1,\infty}_F(\Omega)$. That is, \[ J_{\infty}(\psi_{\infty})= \min_{\eta\in W^{1,\infty}_F(\Omega)} J_{\infty}(\eta). \] \end{proposition} \begin{proof} We first introduce, for \;$n<p<\infty$\; and \;$\psi \in W^{1,p}_{F}(\Omega)$, \[ H_p(\psi) = \max\{||T_p(\psi) - z||_{\infty} , \; ||D \psi||_{\infty}\}, \] which is well defined by the Sobolev embedding theorem. Then for any \;$\eta \in W^{1, \infty}_F(\Omega)$ \[ \int_{\Omega} |D \psi_p|^p dx \leq J_p^p(\eta) = \int_{\Omega} \left(|T_p(\eta) - z|^p + |D \eta|^p \right)dx. \] Therefore, using the trivial inequality \;$(|a|^p+|b|^p)^{\frac{1}{p}}\leq 2^{\frac{1}{p}}\max\{|a|,\;|b|\}$, we get \[ \left( \int_{\Omega} |D \psi_p|^p dx \right)^{1/p} \leq 2^{1/p} |\Omega|^{1/p} H_p(\eta). \] If we now set \begin{equation}\label{eq:defnofip} I_p = \inf_{\eta \in W^{1, \infty}_F(\Omega)} H_p(\eta), \end{equation} we deduce that \[ \left( \int_{\Omega} |D \psi_p|^p dx \right)^{1/p} \leq 2^{1/p} |\Omega|^{1/p} I_p. \] Let us fix \;$q$\; such that \;$n<q<\infty$. Then for \;$q<p<\infty$,\; by proceeding as in \eqref{eq:holderonpsi_p}, we obtain \begin{equation*} ||D \psi_p||_{L^q} \leq 2^{1/p} I_p |\Omega|^{1/q}. \end{equation*} Similarly, \[ ||T_p(\psi_p) - z||_{L^q} \leq 2^{1/p} I_p |\Omega|^{1/q}. \] Thus \begin{equation}\label{eq: boundofqnormofpsip} \max \{||T_p(\psi_p) - z||_{L^q}, ||D \psi_p||_{L^q}\} \leq 2^{1/p} I_p |\Omega|^{1/q}. \end{equation} For any \;$\eta \in W^{1, \infty}_F(\Omega)$\; we also have \;$I_p \leq H_p(\eta)$\; and \;$\liminf_{p \to \infty} I_p \leq \liminf_{p \to \infty} H_p(\eta).$ Thus, since \;$\psi_p$\; converges weakly in \;$W^{1,q}(\Omega)$\; to \;$\psi_{\infty}$\; as \;$p\to\infty$\; and \eqref{eq: boundofqnormofpsip} holds, by weak lower semicontinuity we conclude that \[ ||D\psi_{\infty}||_{L^q} \leq \liminf_{p \to \infty} ||D\psi_p||_{L^q} \leq |\Omega|^{1/q} \liminf_{p \to \infty} H_p(\eta). \] Moreover, since \;$T_p(\eta)$\; converges locally uniformly on \;$\overline \Omega$\; to \;$T_{\infty}(\eta)$\; as \;$p\to\infty$\; and \;$\overline\Omega$\; is compact, clearly \[ \lim_{p \to \infty} H_p(\eta) = J_{\infty}(\eta) \] and hence \[ ||D\psi_{\infty}||_{L^q} \leq J_{\infty}(\eta) |\Omega|^{1/q}. \] Since this holds for any element \;$\eta$\; of \;$W^{1, \infty}_F(\Omega)$,\; we conclude, by taking the infimum over \;$W^{1, \infty}_F(\Omega)$\; and letting \;$q \to \infty$, that \begin{equation}\label{gradpsilessthaninfjinfty} ||D\psi_{\infty}||_{\infty} \leq \inf_{\eta \in W^{1, \infty}_F(\Omega)} J_{\infty}(\eta) \leq J_{\infty}(\psi_{\infty}). \end{equation} Using Lemma \ref{lem:solnofTpconvergetoTinfty} and equation \eqref{eq: boundofqnormofpsip}, combined with the Rellich compactness theorem or the continuous embedding of \;$L^{\infty}$\; into \;$L^q$, we conclude that \[ ||T_{\infty}(\psi_{\infty}) - z||_{L^q} = \lim_{p \to \infty} ||T_p(\psi_p) - z||_{L^q} \leq |\Omega|^{1/q} \liminf_{p \to \infty} H_p(\eta).
\] Thus, as above, letting \;$q$\; go to infinity and taking the infimum in \;$\eta$\; over \;$ W^{1, \infty}_F(\Omega)$, we also have \begin{equation}\label{Tinftypsilessthaninfjinfty} ||T_{\infty}(\psi_{\infty}) - z||_{\infty} \leq \inf_{\eta \in W^{1, \infty}_F(\Omega)} J_{\infty}(\eta) \leq J_{\infty}(\psi_{\infty}). \end{equation} Finally, from \eqref{inspace}, \eqref{gradpsilessthaninfjinfty} and \eqref{Tinftypsilessthaninfjinfty} we deduce \[ J_{\infty}(\psi_{\infty}) = \min_{\eta \in W^{1, \infty}_F(\Omega)} J_{\infty}(\eta), \] as desired. \end{proof} \subsection{Convergence of Minimum Values} In this subsection, we show the convergence of the minimal value of the optimal control problem for \;$J_p$\; to that of \;$J_{\infty}$\; as \;$p\to \infty$, namely Theorem \ref{eq:convminima}, via the following proposition: \begin{proposition} Let \;$\Omega \subset \mathbb R^n$ be a bounded and smooth domain, $F\in Lip (\partial \Omega)$\; and \;$1<p<\infty.$ Then, recalling that \begin{equation*} C_p= \min _{\psi \in W^{1,p}_F(\Omega)}J_p(\psi)\; \;\;\text{and}\;\;\;\;C_{\infty}=\min _{\psi \in W^{1, \infty}_F(\Omega)}J_{\infty}(\psi), \end{equation*} we have \[ \lim_{p \to \infty} C_p= C_{\infty}. \] \end{proposition} \begin{proof} Let \;$\psi_p \in W^{1,p}_F(\Omega)$\; and \;$\psi_{\infty} \in W^{1,\infty}_F(\Omega)$\; be as in subsection \ref{limitofpproblem}. Then they satisfy \;$J_p(\psi_p) = C_p$\; and \;$J_{\infty}(\psi_{\infty}) = C_{\infty}$. Moreover, up to a subsequence, we have that \;$\psi_p$\; and \;$\psi_{\infty}$\; verify \eqref{uniwe} and the conclusions of Lemma \ref{lem:solnofTpconvergetoTinfty}. On the other hand, by minimality and H\"older's inequality, we have \begin{equation*} J_p(\psi_p) \leq J_p(\psi_{\infty}) \leq 2^{1/p} |\Omega|^{1/p} \max \{||T_p(\psi_{\infty})-z||_{\infty}, ||D\psi_{\infty}||_{\infty} \}. \end{equation*} Thus, since \;$T_p(\psi_{\infty}) \to T_{\infty}(\psi_{\infty})$\; locally uniformly in \;$\overline\Omega$\; and \;$2^{1/p} |\Omega|^{1/p} \to 1$\; as \;$p \to \infty$, \begin{equation}\label{eq:limsup} \limsup_{p \to \infty} J_p(\psi_p) \leq J_{\infty}(\psi_{\infty}). \end{equation} Now we are going to show the following: \begin{equation}\label{minofinftyobstacleisless} J_{\infty} (\psi_{\infty}) \leq \liminf_{p \to \infty} J_p(\psi_p). \end{equation} To that end, observe that, by the definition of \;$J_{\infty}$, we have \begin{equation}\label{eq:apdef} J_{\infty}(\psi_{\infty}) =\max \{ ||T_{\infty} (\psi_{\infty}) - z||_{\infty}, \, ||D\psi_{\infty}||_{\infty}\}. \end{equation} Thus, using the \;$L^q$-characterization of \;$L^{\infty}$, we have that \eqref{eq:apdef} implies \begin{equation}\label{eq:aplq} J_{\infty}(\psi_{\infty}) = \max \{\lim_{q \to \infty} ||T_{\infty} (\psi_{\infty}) - z||_{L^q}, \, \lim_{q \to \infty} ||D\psi_{\infty}||_{L^q}\}, \end{equation} and by using Lemma \ref{eq:liminfmax}, we get \begin{equation}\label{eq:aplqs} J_{\infty}(\psi_{\infty}) = \lim_{q \to \infty}\max \{ ||T_{\infty} (\psi_{\infty}) - z||_{L^q}, \,||D\psi_{\infty}||_{L^q}\}. \end{equation} On the other hand, by weak lower semicontinuity and Lemma \ref{lem:solnofTpconvergetoTinfty}, we have \begin{equation}\label{eq:aplowsem} ||D\psi_{\infty}||_{L^q}\leq \liminf_{p\rightarrow \infty} \,||D\psi_{p}||_{L^q}. \end{equation} Now, combining \eqref{eq:aplqs} and \eqref{eq:aplowsem}, we obtain \begin{equation}\label{eq:aplqsn} J_{\infty}(\psi_{\infty}) \leq \liminf_{q \to \infty}\max \{ ||T_{\infty} (\psi_{\infty}) - z||_{L^q}, \liminf_{p\rightarrow \infty} \,||D\psi_p||_{L^q}\}.
\end{equation} Next, using lemma \ref{limofsumofpowersofp}, corollary \ref{lem:solnofTpconvergetoTinfty}, and \eqref{eq:aplqsn}, we get \begin{equation}\label{eq:aplqsnn} J_{\infty}(\psi_{\infty}) \leq \liminf_{q \to \infty}\liminf_{p \to \infty} \left\{ (||T_{p} (\psi_{p}) - z||_{L^q})^p + (||D\psi_{p}||_{L^q})^p\right\}^{1/p}. \end{equation} To continue, we are going to estimate the right hand side of \eqref{eq:aplqsnn}. Indeed, using H\"older's inequality, we have \begin{align*} (||T_{p} (\psi_{p}) - z||_{L^q})^p &= \left\{\int_{\Omega} |T_{p} (\psi_{p}) - z|^q dx \right\}^{p/q}\\ &\leq \left\{\int_{\Omega} |T_{p} (\psi_{p}) - z|^p dx \right\} |\Omega| ^{(1-q/p)p/q} \\ &= \left\{\int_{\Omega} |T_{p} (\psi_{p}) - z|^p dx \right\} |\Omega| ^{(1-q/p)p/q}.\\ \end{align*} Similarly, we obtain \;$$(||D\psi_{p}||_{L^q})^p \leq \left\{ \int_{\Omega} |D\psi_{p}|^p \, dx \right\} |\Omega| ^{(1-q/p)p/q}.$$ By using the latter two estimates in \eqref{eq:aplqsnn}, we get \begin{align*} J_{\infty}(\psi_{\infty}) &\leq \liminf_{q \to \infty}\liminf_{p \to \infty} \left[\left\{ \int_{\Omega}( |T_{p} (\psi_{p}) - z|^p + |D\psi_{p}|^p )\, dx \right\}^{1/p} |\Omega| ^{(1-q/p)p/q(1/p)}\right]\\ &= \liminf_{q \to \infty}\liminf_{p \to \infty} \left[\left\{ \int_{\Omega} (|T_{p} (\psi_{p}) - z|^p + |D\psi_{p}|^p )\, dx \right\}^{1/p} |\Omega| ^{\frac{1}{q} - \frac{1}{p} }\right]\\ & = \liminf_{q \to \infty}\left[|\Omega| ^{\frac{1}{q} }\liminf_{p \to \infty} J_p(\psi_p)\right] =\liminf_{p \to \infty} J_p(\psi_p) \numberthis \label{j-inftylessthanliminfj-p} \end{align*} proving claim \eqref{minofinftyobstacleisless}. Combining \eqref{eq:limsup} with \eqref{j-inftylessthanliminfj-p} we obtain \[ \lim_{p \to \infty} J_p(\psi_p)=J_{\infty}(u_{\infty}), \] and recalling that we were working with a possible subsequence, then we have that up to a subsequence \[ \lim_{p \to \infty} C_p = C_{\infty}. \] Hence, since the limit is independent of the subsequence, we have \[ \lim_{p \to \infty} C_p= C_{\infty} \] as required. \end{proof} \begin{bibdiv} \begin{biblist} \bib{AL02}{article}{ author={Adams, David R.}, author={Lenhart, Suzanne}, title={Optimal control of the obstacle for a parabolic variational inequality}, journal={J. Math. Anal. Appl.}, volume={268}, date={2002}, number={2}, pages={602--614}} \bib{AL03}{article}{ author={Adams, David R.}, author={Lenhart, Suzanne}, title={An obstacle control problem with a source term}, journal={Appl. Math. Optim.}, volume={47}, date={2003}, number={1}, pages={79--95}} \bib{ALY98}{article}{ author={Adams, D. R.}, author={Lenhart, S. M.}, author={Yong, J.}, title={Optimal control of the obstacle for an elliptic variational inequality}, journal={Appl. Math. Optim.}, volume={38}, date={1998}, number={2}, pages={121--140}} \bib{MR1941913}{article}{ author={Adams, David R.}, author={Lenhart, Suzanne}, title={An obstacle control problem with a source term}, journal={Appl. Math. Optim.}, volume={47}, date={2003}, number={1}, pages={79--95}, issn={0095-4616}} \bib{AHL10}{article}{ author={Adams, David R.}, author={Hrynkiv, Volodymyr}, author={Lenhart, Suzanne}, title={Optimal control of a biharmonic obstacle problem}, conference={ title={Around the research of Vladimir Maz'ya. III}, }, book={ series={Int. Math. Ser. (N. 
Y.)}, volume={13}, publisher={Springer, New York}, }, date={2010}, pages={1--24}} \bib{ALS15}{article}{ author={Andersson, John}, author={Lindgren, Erik}, author={Shahgholian, Henrik}, title={Optimal regularity for the obstacle problem for the $p$-Laplacian}, journal={J. Differential Equations}, volume={259}, date={2015}, number={6}, pages={2167--2179}} \bib{GA65}{article}{ author={Aronsson, Gunnar}, title={Minimization problems for the functional ${\rm sup}\sb{x}\,F(x,\,f(x),\,f\sp{\prime} (x))$}, journal={Ark. Mat.}, volume={6}, date={1965}, pages={33--53 (1965)}} \bib{ACJ04}{article}{ author={Aronsson, Gunnar}, author={Crandall, Michael G.}, author={Juutinen, Petri}, title={A tour of the theory of absolutely minimizing functions}, journal={Bull. Amer. Math. Soc. (N.S.)}, volume={41}, date={2004}, number={4}, pages={439--505}} \bib{BL04}{article}{ author={Bergounioux, Ma\"{\i}tine}, author={Lenhart, Suzanne}, title={Optimal control of bilateral obstacle problems}, journal={SIAM J. Control Optim.}, volume={43}, date={2004}, number={1}, pages={240--255}} \bib{QC00}{article}{ author={Chen, Qihong}, title={Optimal control of semilinear elliptic variational bilateral problem}, journal={Acta Math. Sin. (Engl. Ser.)}, volume={16}, date={2000}, number={1}, pages={123--140}} \bib{CY05}{article}{ author={Chen, Qihong}, author={Ye, Yuquan}, title={Bilateral obstacle optimal control for a quasilinear elliptic variational inequality}, journal={Numer. Funct. Anal. Optim.}, volume={26}, date={2005}, number={3}, pages={303--320}} \bib{CCT05}{article}{ author={Chen, Qihong}, author={Chu, Delin}, author={Tan, Roger C. E.}, title={Optimal control of obstacle for quasi-linear elliptic variational bilateral problems}, journal={SIAM J. Control Optim.}, volume={44}, date={2005}, number={3}, pages={1067--1080}} \bib{DM15}{article}{ author={Di Donato, Daniela}, author={Mugnai, Dimitri}, title={On a highly nonlinear self-obstacle optimal control problem}, journal={Appl. Math. Optim.}, volume={72}, date={2015}, number={2}, pages={261--290}} \bib{GN17}{article}{ author={Ghanem, Radouen}, author={Nouri, Ibtissam}, title={Optimal control of high-order elliptic obstacle problem}, journal={Appl. Math. Optim.}, volume={76}, date={2017}, number={3}, pages={465--500}} \bib{JJ12}{article}{ author={Julin, Vesa}, author={Juutinen, Petri}, title={A new proof for the equivalence of weak and viscosity solutions for the $p$-Laplace equation}, journal={Comm. Partial Differential Equations}, volume={37}, date={2012}, number={5}, pages={934--946}} \bib{L71}{book}{ author={Lions, J. L.}, title={Optimal Control of Systems Governed by Partial Differential Equations, 1st ed.}, journal={Springer}, volume={170}, date={1971}, number={Springer} } \bib{LH02}{article}{ author={Lou, Hongwei}, title={An optimal control problem governed by quasi-linear variational inequalities}, journal={SIAM J. Control Optim.}, volume={41}, date={2002}, number={4}, pages={1229--1253}} \bib{L01}{article}{ author={Lou, Hongwei}, title={On the regularity of an obstacle control problem}, journal={J. Math. Anal. Appl.}, volume={258}, date={2001}, number={1}, pages={32--51}} \bib{L15}{article}{ author={ Lindqvist, P.}, title={Notes on the Infinity Laplace Equation}, journal={.}, volume={17}, date={2015}} \bib{RTU15}{article}{ author={Rossi, J. D.}, author={Teixeira, E. V.}, author={Urbano, J. 
M.}, title={Optimal regularity at the free boundary for the infinity obstacle problem}, journal={Interfaces Free Bound.}, volume={17}, date={2015}, number={3}, pages={381--398}} \bib{SM12}{article}{ author={Str\"{o}mqvist, Martin H.}, title={Optimal control of the obstacle problem in a perforated domain}, journal={Appl. Math. Optim.}, volume={66}, date={2012}, number={2}, pages={239--255}} \end{biblist} \end{bibdiv} \end{document}
2024-02-18T23:40:27.940Z
2020-07-07T02:03:15.000Z
algebraic_stack_train_0000
2,475
6,697
proofpile-arXiv_065-12224
\section{Introduction} Single image super-resolution (SR) aims at recovering a high-resolution (HR) image from a given low-resolution (LR) one. Recent SR algorithms are mostly learning-based (or patch-based) methods~\cite{Dong2014,Dong2015,yang2013fast,Timofte2013,Timofte2014,Cui2014,Schulter2015,Wang2015} that learn a mapping between the LR and HR image spaces. Among them, the Super-Resolution Convolutional Neural Network (SRCNN)~\cite{Dong2014,Dong2015} has drawn considerable attention due to its simple network structure and excellent restoration quality. Though SRCNN is already faster than most previous learning-based methods, the processing speed on large images is still unsatisfactory. For example, to upsample a $240\times 240$ image by a factor of 3, the speed of the original SRCNN~\cite{Dong2014} is about 1.32 fps, which is far from real-time (24 fps). To approach real-time, we need to accelerate SRCNN by at least 17 times while keeping the previous performance. This sounds implausible at first glance, as accelerating by simply reducing the parameters will severely impact the performance. However, when we delve into the network structure, we find \textit{two inherent limitations} that restrict its running speed. \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{pic/timevspsnr3.pdf} \caption{The proposed FSRCNN networks achieve better super-resolution quality than existing methods, and are tens of times faster. In particular, the FSRCNN-s can run in real time ($>24$ fps) on a generic CPU. The chart is based on the Set14~\cite{Zeyde2012} results summarized in Tables~\ref{results91} and~\ref{results}. } \label{fig:runtime} \end{figure} First, as a pre-processing step, the original LR image needs to be upsampled to the desired size using bicubic interpolation to form the input. Thus the computation complexity of SRCNN grows quadratically with the spatial size of the HR image (not the original LR image). For the upscaling factor $n$, the computational cost of convolution with the interpolated LR image will be $n^2$ times that for the original LR one. This is also the restriction for most learning-based SR methods~\cite{Yang2010a,yang2013fast,Timofte2013,Timofte2014,Schulter2015,Wang2015}. If the network were learned directly from the original LR image, the acceleration would be significant, \emph{i.e.,}~about $n^2$ times faster. The second restriction lies in the costly non-linear mapping step. In SRCNN, input image patches are projected onto a high-dimensional LR feature space, followed by a complex mapping to another high-dimensional HR feature space. Dong~\emph{et al.}~\cite{Dong2015} show that the mapping accuracy can be substantially improved by adopting a wider mapping layer, but at the cost of the running time. For example, the large SRCNN (SRCNN-Ex)~\cite{Dong2015} has 57,184 parameters, which is six times larger than that of SRCNN (8,032 parameters). Then the question is how to shrink the network scale while still keeping the previous accuracy. According to the above observations, we investigate a more concise and efficient network structure for fast and accurate image SR. \textit{To solve the first problem}, we adopt a deconvolution layer to replace the bicubic interpolation. To further ease the computational burden, we place the deconvolution layer\footnote{We follow~\cite{Zeiler2014} to adopt the terminology `deconvolution'.
We note that it carries very different meaning in classic image processing, see~\cite{xu2014deep}.} at the end of the network, then the computational complexity is only proportional to the spatial size of the original LR image. It is worth noting that the deconvolution layer is not equal to a simple substitute of the conventional interpolation kernel like in FCN~\cite{Long2015}, or `unpooling+convolution' like~\cite{dosovitskiy2015learning}. Instead, it consists of diverse automatically learned upsampling kernels (see Figure~\ref{fig:filters}) that work jointly to generate the final HR output, and replacing these deconvolution filters with uniform interpolation kernels will result in a drastic PSNR drop (\emph{e.g.,}~at least 0.9 dB on the Set5 dataset~\cite{Bevilacqua2012} for $\times 3$). \textit{For the second problem}, we add a shrinking and an expanding layer at the beginning and the end of the mapping layer separately to restrict mapping in a low-dimensional feature space. Furthermore, we decompose a single wide mapping layer into several layers with a fixed filter size $3\times 3$. The overall shape of the new structure looks like an \textit{hourglass}, which is symmetrical on the whole, thick at the ends and thin in the middle. Experiments show that the proposed model, named as Fast Super-Resolution Convolutional Neural Networks (FSRCNN)~\footnote{The implementation is available on the project page \url{http://mmlab.ie.cuhk.edu.hk/projects/FSRCNN.html}.}, achieves a speed-up of more than $40\times$ with even superior performance than the SRCNN-Ex. In this work, we also present a small FSRCNN network (FSRCNN-s) that achieves similar restoration quality as SRCNN, but is $17.36$ times faster and can run in real time (24 fps) with a generic CPU. As shown in Figure~\ref{fig:runtime}, the FSRCNN networks are much faster than contemporary SR models yet achieving superior performance. Apart from the notable improvement in speed, the FSRCNN also has another appealing property that could facilitate fast training and testing across different upscaling factors. Specifically, in FSRCNN, all convolution layers (except the deconvolution layer) can be shared by networks of different upscaling factors. During training, with a well-trained network, we only need to fine-tune the deconvolution layer for another upscaling factor with almost no loss of mapping accuracy. During testing, we only need to do convolution operations once, and upsample an image to different scales using the corresponding deconvolution layer. Our contributions are three-fold: 1) We formulate a compact hourglass-shape CNN structure for fast image super-resolution. With the collaboration of a set of deconvolution filters, the network can learn an end-to-end mapping between the original LR and HR images with no pre-processing. 2) The proposed model achieves a speed up of at least $40\times$ than the SRCNN-Ex~\cite{Dong2015} while still keeping its exceptional performance. One of its small-size version can run in real-time ($>$24 fps) on a generic CPU with better restoration quality than SRCNN~\cite{Dong2014}. 3) We transfer the convolution layers of the proposed networks for fast training and testing across different upscaling factors, with no loss of restoration quality. \section{Related Work} \noindent \textbf{Deep learning for SR:} Recently, the deep learning techniques have been successfully applied on SR. 
The pioneering work is the Super-Resolution Convolutional Neural Network (SRCNN) proposed by Dong~\emph{et al.}~\cite{Dong2014,Dong2015}. Motivated by SRCNN, deep models for related problems such as face hallucination~\cite{Zhu2016} and depth map super-resolution~\cite{Hui2016} have achieved state-of-the-art results. Deeper structures have also been explored in~\cite{Kim2015a} and~\cite{Kim2015}. Different from the conventional learning-based methods, SRCNN directly learns an end-to-end mapping between LR and HR images, leading to fast and accurate inference. The inherent relationship between SRCNN and the sparse-coding-based methods ensures its good performance. Based on the same assumption, Wang~\emph{et al.}~\cite{Wang2015} further replace the mapping layer by a set of sparse coding sub-networks and propose a sparse coding based network (SCN). With the domain expertise of the conventional sparse-coding-based method, it outperforms SRCNN with a smaller model size. However, as it strictly mimics the sparse-coding solver, it is very hard to shrink the sparse coding sub-network with no loss of mapping accuracy. Furthermore, all these networks~\cite{Wang2015,Kim2015a,Kim2015} need to process the bicubic-upscaled LR images. The proposed FSRCNN not only performs on the original LR image, but also contains a simpler yet more efficient mapping layer. Furthermore, the previous methods have to train a totally different network for a specific upscaling factor, while the FSRCNN only requires a different deconvolution layer. This also provides us with a faster way to upscale an image to several different sizes. \noindent \textbf{CNN acceleration:} A number of studies have investigated the acceleration of CNNs. Denton~\emph{et al.}~\cite{Denton2014} first investigate the redundancy within the CNNs designed for object detection. Then Zhang~\emph{et al.}~\cite{Zhang2015} make attempts to accelerate very deep CNNs for image classification. They also take the non-linear units into account and reduce the accumulated error by asymmetric reconstruction. Our model also aims at accelerating CNNs, but in a different manner. First, they focus on approximating existing well-trained models, while we reformulate the previous model and achieve better performance. Second, the above methods are all designed for high-level vision problems (\emph{e.g.,}~image classification and object detection), while ours is for a low-level vision task. As the deep models for SR contain no fully-connected layers, the approximation of convolution filters will severely impact the performance. \section{Fast Super-Resolution by CNN} We first briefly describe the network structure of SRCNN~\cite{Dong2014,Dong2015}, and then we detail how we reformulate the network layer by layer. The differences between FSRCNN and SRCNN are presented at the end of this section. \subsection{SRCNN} SRCNN aims at learning an end-to-end mapping function $F$ between the bicubic-interpolated LR image $Y$ and the HR image $X$. The network consists entirely of convolution layers, thus the size of the output is the same as that of the input image. As depicted in Figure~\ref{fig:structure}, the overall structure consists of three parts that are analogous to the main steps of the sparse-coding-based methods~\cite{Yang2010a}. The patch extraction and representation part refers to the first layer, which extracts patches from the input and represents each patch as a high-dimensional feature vector.
The non-linear mapping part refers to the middle layer, which maps the feature vectors non-linearly to another set of feature vectors, or namely HR features. Then the last reconstruction part aggregates these features to form the final output image. The computation complexity of the network can be calculated as follows, \begin{equation} \label{eqn:computationSRCNN} O\{(f_1^2 n_1 + n_1 f_2^2 n_2 + n_2 f_3^2) S_{HR}\}, \end{equation} where $\{f_{i}\}_{i=1}^3$ and $\{n_{i}\}_{i=1}^3$ are the filter size and filter number of the three layers, respectively. $S_{HR}$ is the size of the HR image. We observe that the complexity is proportional to the size of the HR image, and the middle layer contributes most to the network parameters. In the next section, we present the FSRCNN by giving special attention to these two facets. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{pic/framework.pdf} \caption{This figure shows the network structures of the SRCNN and FSRCNN. The proposed FSRCNN is different from SRCNN mainly in three aspects. First, FSRCNN adopts the original low-resolution image as input without bicubic interpolation. A deconvolution layer is introduced at the end of the network to perform upsampling. Second, The non-linear mapping step in SRCNN is replaced by three steps in FSRCNN, namely the shrinking, mapping, and expanding step. Third, FSRCNN adopts smaller filter sizes and a deeper network structure. These improvements provide FSRCNN with better performance but lower computational cost than SRCNN.} \label{fig:structure} \end{figure*} \subsection{FSRCNN} As shown in Figure~\ref{fig:structure}, FSRCNN can be decomposed into five parts -- feature extraction, shrinking, mapping, expanding and deconvolution. The first four parts are convolution layers, while the last one is a deconvolution layer. For better understanding, we denote a convolution layer as $Conv(f_i,n_i,c_i)$, and a deconvolution layer as $DeConv(f_i,n_i,c_i)$, where the variables $f_i,n_i,c_i$ represent the filter size, the number of filters and the number of channels, respectively. As the whole network contains tens of variables (\emph{i.e.,}~$\{f_i,n_i,c_i\}_{i=1}^6$), it is impossible for us to investigate each of them. Thus we assign a reasonable value to the insensitive variables in advance, and leave the sensitive variables unset. We call a variable sensitive when a slight change of the variable could significantly influence the performance. These sensitive variables always represent some important influential factors in SR, which will be shown in the following descriptions. \noindent \textbf{Feature extraction:} This part is similar to the first part of SRCNN, but different on the input image. FSRCNN performs feature extraction on the original LR image without interpolation. To distinguish from SRCNN, we denote the small LR input as $Y_s$. By doing convolution with the first set of filters, each patch of the input (1-pixel overlapping) is represented as a high-dimensional feature vector. We refer to SRCNN on the choice of parameters -- $f_1,n_1,c_1$. In SRCNN, the filter size of the first layer is set to be 9. Note that these filters are performed on the upscaled image $Y$. As most pixels in $Y$ are interpolated from $Y_s$, a $5\times 5$ patch in $Y_s$ could cover almost all information of a $9\times 9$ patch in $Y$. Therefore, we can adopt a smaller filter size $f_1=5$ with little information loss. For the number of channels, we follow SRCNN to set $c_1=1$. 
Then we only need to determine the filter number $n_1$. From another perspective, $n_1$ can be regarded as the LR feature dimension, denoted as $d$ -- the first sensitive variable. Finally, the first layer can be represented as $Conv(5,d,1)$. \noindent \textbf{Shrinking:} In SRCNN, the mapping step generally follows the feature extraction step, and the high-dimensional LR features are then mapped directly to the HR feature space. However, as the LR feature dimension $d$ is usually very large, the computation complexity of the mapping step is rather high. This phenomenon is also observed in some deep models for high-level vision tasks. The authors in \cite{Lin2014} apply $1\times 1$ layers to save the computational cost. With the same consideration, we add a shrinking layer after the feature extraction layer to reduce the LR feature dimension $d$. We fix the filter size to be $f_2=1$, so that the filters perform like a linear combination within the LR features. By adopting a smaller filter number $n_2=s\ll d$, the LR feature dimension is reduced from $d$ to $s$. Here $s$ is the second sensitive variable that determines the level of shrinking, and the second layer can be represented as $Conv(1,s,d)$. This strategy greatly reduces the number of parameters (detailed computation in Section~\ref{sec:parameters}). \noindent \textbf{Non-linear mapping:} The non-linear mapping step is the most important part that affects the SR performance, and the most influential factors are the width (\emph{i.e.,}~the number of filters in a layer) and depth (\emph{i.e.,}~the number of layers) of the mapping layer. As indicated in SRCNN~\cite{Dong2015}, a $5\times 5$ layer achieves much better results than a $1\times 1$ layer. However, they lack experiments on very deep networks. The above observations help us to formulate a more efficient mapping layer for FSRCNN. First, as a trade-off between performance and network scale, we adopt a medium filter size $f_3=3$. Then, to maintain the same good performance as SRCNN, we use multiple $3\times 3$ layers to replace a single wide one. The number of mapping layers is another sensitive variable (denoted as $m$), which determines both the mapping accuracy and complexity. To be consistent, all mapping layers contain the same number of filters $n_3=s$. Then the non-linear mapping part can be represented as $m\times Conv(3, s, s)$. \noindent \textbf{Expanding:} The expanding layer acts like an inverse process of the shrinking layer. The shrinking operation reduces the LR feature dimension for the sake of computational efficiency. However, if we generate the HR image directly from these low-dimensional features, the final restoration quality will be poor. Therefore, we add an expanding layer after the mapping part to expand the HR feature dimension. To maintain consistency with the shrinking layer, we also adopt $1\times 1$ filters, the number of which is the same as that for the LR feature extraction layer. As opposed to the shrinking layer $Conv(1,s,d)$, the expanding layer is $Conv(1,d,s)$. Experiments show that without the expanding layer, the performance decreases by up to 0.3 dB on the Set5 test set~\cite{Bevilacqua2012}. \noindent \textbf{Deconvolution:} The last part is a deconvolution layer, which upsamples and aggregates the previous features with a set of deconvolution filters. The deconvolution can be regarded as an inverse operation of the convolution.
For convolution, the filter is convolved with the image with a stride $k$, and the output is $1/k$ times the size of the input. Conversely, if we exchange the position of the input and output, the output will be $k$ times the size of the input, as depicted in Figure~\ref{fig:deconvolution}. We take advantage of this property to set the stride $k=n$, which is the desired upscaling factor. Then the output is directly the reconstructed HR image. To determine the filter size of the deconvolution filters, we can look at the network from another perspective. Interestingly, the reversed network is like a downscaling operator that accepts an HR image and outputs the LR one. Then the deconvolution layer becomes a convolution layer with a stride $n$. As it extracts features from the HR image, we should adopt $9\times 9$ filters that are consistent with the first layer of SRCNN. Similarly, if we reverse back, the deconvolution filters should also have a spatial size $f_5=9$. Experiments also confirm this choice. Figure~\ref{fig:filters} shows the learned deconvolution filters, the patterns of which are very similar to those of the first-layer filters in SRCNN. Lastly, we can represent the deconvolution layer as $DeConv(9,1,d)$. Different from inserting traditional interpolation kernels (\emph{e.g.,}~bicubic or bilinear) in-network~\cite{Long2015} or having `unpooling+convolution'~\cite{dosovitskiy2015learning}, the deconvolution layer learns a set of upsampling kernels for the input feature maps. As shown in Figure~\ref{fig:filters}, these kernels are diverse and meaningful. If we force these kernels to be identical, the parameters will be used inefficiently (equivalent to summing up the input feature maps into one), and the performance will drop by at least 0.9 dB on the Set5 dataset. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{pic/filters.pdf} \caption{The learned deconvolution layer (56 channels) for the upscaling factor 3.} \label{fig:filters} \end{figure} \noindent \textbf{PReLU:} For the activation function after each convolution layer, we suggest the use of the Parametric Rectified Linear Unit (PReLU)~\cite{He2015} instead of the commonly-used Rectified Linear Unit (ReLU). They differ in the coefficient of the negative part. For ReLU and PReLU, we can define a general activation function as $f(x_i)=\max(x_i,0)+a_i\min(0,x_i)$, where $x_i$ is the input signal of the activation $f$ on the $i$-th channel, and $a_i$ is the coefficient of the negative part. The parameter $a_i$ is fixed to be zero for ReLU, but is learnable for PReLU. We choose PReLU mainly to avoid the ``dead features''~\cite{Zeiler2014} caused by zero gradients in ReLU. Then we can make full use of all parameters to test the maximum capacity of different network designs. Experiments show that the performance of the PReLU-activated networks is more stable, and can be seen as an upper bound of that for the ReLU-activated networks. \noindent \textbf{Overall structure:} We can connect the above five parts to form a complete FSRCNN network as $Conv(5,d,1)-PReLU-Conv(1,s,d)-PReLU-m\times Conv(3,s,s)-PReLU-Conv(1,d,s)-PReLU-DeConv(9,1,d)$. On the whole, there are three sensitive variables (\emph{i.e.,}~the LR feature dimension $d$, the number of shrinking filters $s$, and the mapping depth $m$) governing the performance and speed. For simplicity, we represent an FSRCNN network as $FSRCNN(d,s,m)$.
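For concreteness, the following is a minimal PyTorch-style sketch of this hourglass design (our illustration only: the original model was implemented in Caffe, so the module choices, padding settings and PReLU placement below are assumptions rather than the authors' code):
\begin{verbatim}
import torch.nn as nn

class FSRCNN(nn.Module):
    # Hourglass structure FSRCNN(d, s, m) for a single-channel (luminance) input.
    def __init__(self, d=56, s=12, m=4, scale=3):
        super().__init__()
        layers = [nn.Conv2d(1, d, 5, padding=2), nn.PReLU(d),   # feature extraction
                  nn.Conv2d(d, s, 1), nn.PReLU(s)]              # shrinking
        for _ in range(m):                                      # non-linear mapping
            layers += [nn.Conv2d(s, s, 3, padding=1), nn.PReLU(s)]
        layers += [nn.Conv2d(s, d, 1), nn.PReLU(d)]             # expanding
        self.body = nn.Sequential(*layers)
        # 9x9 deconvolution with stride n; with padding 4 the output size is
        # n*h - n + 1, mirroring the Caffe deconvolution behaviour described
        # in the implementation details below.
        self.deconv = nn.ConvTranspose2d(d, 1, 9, stride=scale, padding=4)

    def forward(self, y_s):  # y_s: the original LR image, shape (N, 1, h, w)
        return self.deconv(self.body(y_s))
\end{verbatim}
Note that in this sketch only \texttt{deconv} depends on the upscaling factor, which is exactly the property exploited later for transferring the convolution layers across factors.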
The computational complexity can be calculated as \begin{equation} \begin{array}{rcl} \label{eqn:computationFSRCNN} O\{(25d + sd + 9ms^2 + ds+ 81d)S_{LR}\} = O\{(9ms^2 + 2sd + 106d)S_{LR}\}. \end{array} \end{equation} We exclude the parameters of PReLU , which introduce negligible computational cost. Interestingly, the new structure looks like an \textit{hourglass}, which is symmetrical on the whole, thick at the ends, and thin in the middle. The three sensitive variables are just the controlling parameters for the appearance of the hourglass. Experiments show that this hourglass design is very effective for image super-resolution. \noindent \textbf{Cost function:} Following SRCNN, we adopt the mean square error (MSE) as the cost function. The optimization objective is represented as \begin{equation} \label{eqn:loss} \min_{\theta} \frac{1}{n}\sum\nolimits_{i=1}^n||F(Y_s^{i} ;\theta) - X^i||_2^2, \end{equation} where $Y_s^i$ and $X^i$ are the $i$-th LR and HR sub-image pair in the training data, and $F(Y_s^{i} ; \theta)$ is the network output for $Y_s^i$ with parameters $\theta$. All parameters are optimized using stochastic gradient descent with the standard backpropagation. \subsection{Differences against SRCNN: From SRCNN to FSRCNN} \label{sec:parameters} To better understand how we accelerate SRCNN, we transform the SRCNN-Ex to another FSRCNN (56,12,4) within three steps, and show how much acceleration and PSNR gain are obtained by each step. We use a representative upscaling factor $n=3$. The network configurations of SRCNN, FSRCNN and the two transition states are shown in Table~\ref{tab:transition}. We also show their performance (average PSNR on Set5) trained on the 91-image dataset~\cite{Yang2010a}. \begin{table}[t]\scriptsize \caption{The transitions from SRCNN to FSRCNN.}\label{tab:transition} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & SRCNN-Ex & Transition State 1 & Transition State 2 & FSRCNN (56,12,4)\\ \hline First part & Conv(9,64,1) & Conv(9,64,1) & Conv(9,64,1) & \textbf{Conv(5,56,1)} \\ \hline & & & \textbf{Conv(1,12,64)-} & \textbf{Conv(1,12,56)-} \\ Mid part & Conv(5,32,64)& Conv(5,32,64)& \textbf{4Conv(3,12,12)} & 4Conv(3,12,12) \\ & & & \textbf{-Conv(1,64,12)} & \textbf{-Conv(1,56,12)} \\ \hline Last part & Conv(5,1,32) & \textbf{DeConv(9,1,32)} & \textbf{DeConv(9,1,64)} & \textbf{DeConv(9,1,56)} \\ \hline Input size & $S_{HR}$ & $S_{LR}$ & $S_{LR}$ & $S_{LR}$ \\ \hline Parameters & 57184 & 58976 & 17088 & 12464 \\ \hline\hline Speedup &1$\times$ &8.7$\times$ &30.1$\times$ &41.3$\times$ \\ \hline PSNR (Set5) & 32.83 dB & 32.95 dB & 33.01 dB & 33.06 dB \\ \hline \end{tabular} \end{center} \end{table} First, we replace the last convolution layer of SRCNN-Ex with a deconvolution layer, then the whole network will perform on the original LR image and the computation complexity is proportional to $S_{LR}$ instead of $S_{HR}$. This step will enlarge the network scale but achieve a speedup of $8.7\times$ (\emph{i.e.,}~$57184/58976\times3^2$). As the learned deconvolution kernels are better than a single bicubic kernel, the performance increases roughly by 0.12 dB. Second, the single mapping layer is replaced with the combination of a shrinking layer, 4 mapping layers and an expanding layer. Overall, there are 5 more layers, but the parameters are decreased from 58,976 to 17,088. Also, the acceleration after this step is the most prominent -- $30.1\times$. It is widely observed that depth is the key factor that affects the performance. 
Here, we use four ``narrow'' layers to replace a single ``wide'' layer, thus achieving better results (33.01 dB) with far fewer parameters. Lastly, we adopt smaller filter sizes and fewer filters (\emph{e.g.,}~from $Conv(9,64,1)$ to $Conv(5,56,1)$), and obtain a final speedup of $41.3\times$. As we remove some redundant parameters, the network is trained more efficiently and achieves another 0.05 dB improvement. It is worth noting that this acceleration is NOT at the cost of performance degradation. On the contrary, the FSRCNN (56,12,4) outperforms SRCNN-Ex by a large margin (\emph{e.g.,}~0.23 dB on the Set5 dataset). The main reasons for the high performance have been presented in the above analysis. This is the main difference between our method and other CNN acceleration works~\cite{Denton2014,Zhang2015}. Nevertheless, with the guarantee of good performance, it is easier to combine our model with other acceleration methods to obtain an even faster one. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{pic/deconvolution.pdf} \caption{The FSRCNN consists of convolution layers and a deconvolution layer. The convolution layers can be shared for different upscaling factors. A specific deconvolution layer is trained for each upscaling factor.} \label{fig:deconvolution} \end{figure} \subsection{SR for Different Upscaling Factors} \label{subsec:across_factor} Another advantage of FSRCNN over the previous learning-based methods is that FSRCNN can achieve fast training and testing across different upscaling factors. Specifically, we find that the convolution layers, as a whole, act like a complex feature extractor of the LR image, and only the last deconvolution layer contains the information about the upscaling factor. This is also verified by experiments, in which the learned convolution filters are almost the same for different upscaling factors\footnote{Note that in SRCNN and SCN, the convolution filters differ a lot for different upscaling factors.}. With this property, we can transfer the convolution filters for fast training and testing. In practice, we train a model for an upscaling factor in advance. Then during training, we only fine-tune the deconvolution layer for another upscaling factor and leave the convolution layers unchanged. The fine-tuning is fast, and the performance is as good as training from scratch (see Section~\ref{sec:transfer}). During testing, we perform the convolution operations once, and upsample an image to different sizes with the corresponding deconvolution layer. If we need to apply several upscaling factors simultaneously, this property can lead to much faster testing (as illustrated in Figure~\ref{fig:deconvolution}). \section{Experiments} \subsection{Implementation Details} \noindent \textbf{Training dataset.} The 91-image dataset is widely used as the training set in learning-based SR methods~\cite{Yang2010a,Timofte2014,Dong2014}. As deep models generally benefit from big data, studies have found that 91 images are not enough to push a deep model to its best performance. Yang~\emph{et al.}~\cite{Yang2014} and Schulter~\emph{et al.}~\cite{Schulter2015} use the BSD500 dataset~\cite{martin2001database}. However, images in the BSD500 are in JPEG format, which is not optimal for the SR task. Therefore, we contribute a new General-100 dataset that contains 100 bmp-format images (with no compression)\footnote{We follow \cite{Huang2015} to introduce only 100 images in a new super-resolution dataset.
A larger dataset with more training images will be released on the project page.}. The size of the newly introduced 100 images ranges from $710\times 704$ (large) to $131\times 112$ (small). They are all of good quality with clear edges but fewer smooth regions (\emph{e.g.,}~sky and ocean), and are thus very suitable for SR training. In the following experiments, apart from using the 91-image dataset for training, we will also evaluate the applicability of the joint set of the General-100 dataset and the 91-image dataset to train our networks. To make full use of the data, we also adopt data augmentation as in \cite{Wang2015}. We augment the data in two ways. 1) Scaling: each image is downscaled with the factors 0.9, 0.8, 0.7 and 0.6. 2) Rotation: each image is rotated by 90, 180 and 270 degrees. Then we will have $5\times 4-1=19$ times more images for training.

\noindent
\textbf{Test and validation dataset.}
Following SRCNN and SCN, we use the Set5~\cite{Bevilacqua2012}, Set14 \cite{Zeyde2012} and BSD200~\cite{martin2001database} datasets for testing. Another 20 images from the validation set of the BSD500 dataset are selected for validation.

\noindent
\textbf{Training samples.}
To prepare the training data, we first downsample the original training images by the desired scaling factor $n$ to form the LR images. Then we crop the LR training images into a set of $f_{sub}\times f_{sub}$-pixel sub-images with a stride $k$. The corresponding HR sub-images (with size $(nf_{sub})^2$) are also cropped from the ground truth images. These LR/HR sub-image pairs are the primary training data. For the issue of padding, we empirically find that padding the input or output maps has little effect on the final performance. Thus we adopt zero padding in all layers according to the filter size. In this way, there is no need to change the sub-image size for different network designs. Another issue affecting the sub-image size is the deconvolution layer. As we train our models with the \textit{Caffe} package~\cite{Jia2014}, its deconvolution filters will generate the output with size $(nf_{sub}-n+1)^2$ instead of $(nf_{sub})^2$. So we also crop the $(n-1)$-pixel borders from the HR sub-images. Finally, for $\times$2, $\times$3 and $\times$4, we set the sizes of the LR/HR sub-images to be $10^2/19^2$, $7^2/19^2$ and $6^2/21^2$, respectively.

\noindent
\textbf{Training strategy.}
For a fair comparison with the state-of-the-art methods (Sec.~\ref{subsec:sota}), we adopt the 91-image dataset for training. In addition, we also explore a two-step training strategy. First, we train a network from scratch with the 91-image dataset. Then, when the training is saturated, we add the General-100 dataset for fine-tuning. With this strategy, the training converges much earlier than training with the two datasets from the beginning. When training with the 91-image dataset, the learning rate of the convolution layers is set to $10^{-3}$ and that of the deconvolution layer to $10^{-4}$. Then, during fine-tuning, the learning rate of all layers is reduced by half. For initialization, the weights of the convolution filters are initialized with the method designed for PReLU in \cite{He2015}. As we do not have activation functions at the end, the deconvolution filters are initialized in the same way as in SRCNN (\emph{i.e.,}~drawing randomly from a Gaussian distribution with zero mean and standard deviation 0.001).
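To make the sub-image preparation described in the \textbf{Training samples} paragraph above more concrete, the snippet below is a minimal sketch (our own illustration, not the released training code) of how LR/HR sub-image pairs could be cropped; the function name, the use of NumPy, and the choice to take the border crop from the bottom-right corner are assumptions on our part.
\begin{verbatim}
import numpy as np

def crop_pairs(hr_img, lr_img, n, f_sub, stride):
    # hr_img: HR image of shape (n*h, n*w); lr_img: LR image of shape (h, w),
    # obtained by downsampling the HR image by the scaling factor n.
    # Returns f_sub x f_sub LR sub-images and the matching HR sub-images,
    # shrunk to (n*f_sub - n + 1)^2 to match Caffe's deconvolution output size.
    lr_patches, hr_patches = [], []
    h, w = lr_img.shape
    out = n * f_sub - n + 1
    for y in range(0, h - f_sub + 1, stride):
        for x in range(0, w - f_sub + 1, stride):
            lr = lr_img[y:y + f_sub, x:x + f_sub]
            hr = hr_img[n * y:n * y + out, n * x:n * x + out]  # drop (n-1) border pixels
            lr_patches.append(lr)
            hr_patches.append(hr)
    return np.array(lr_patches), np.array(hr_patches)
\end{verbatim}
For example, with $n=3$ and $f_{sub}=7$ this pairs $7\times 7$ LR sub-images with $19\times 19$ HR sub-images, matching the sizes quoted above.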
\subsection{Investigation of Different Settings}
\label{sec:Investigation}

To test the properties of the FSRCNN structure, we design a set of controlled experiments with different values of the three sensitive variables -- the LR feature dimension $d$, the number of shrinking filters $s$, and the mapping depth $m$. Specifically, we choose $d=48,56$, $s=12,16$ and $m=2,3,4$, so we conduct a total of $2\times 2\times 3=12$ experiments with different combinations. The average PSNR values of these experiments on the Set5 dataset are shown in Table~\ref{tab:settings}.

We analyze the results in two directions,~\emph{i.e.,}~horizontally and vertically in the table. First, we fix $d,s$ and examine the influence of $m$. Obviously, $m=4$ leads to better results than $m=2$ and $m=3$. This trend can also be observed from the convergence curves shown in Figure~\ref{fig:com}(a). Second, we fix $m$ and examine the influence of $d$ and $s$. In general, a better result usually requires more parameters (\emph{e.g.,}~a larger $d$ or $s$), but more parameters do not always guarantee a better result. This trend is also reflected in Figure~\ref{fig:com}(b), where we see the three largest networks converge together. From all the results, we find the best trade-off between performance and parameters -- FSRCNN (56,12,4), which achieves one of the highest results with a moderate number of parameters.

It is worth noting that the smallest network FSRCNN (48,12,2) achieves an average PSNR of 32.87 dB, which is already higher than that of SRCNN-Ex (32.75 dB) reported in~\cite{Dong2015}. The FSRCNN (48,12,2) contains only 8,832 parameters, so the acceleration compared with SRCNN-Ex is $57184/8832\times 9=58.3$ times.

\begin{table}[t]
\caption{Comparison of PSNR (dB, Set5) and the number of parameters (in parentheses) for different settings.}\label{tab:settings}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Settings & $m=2$ & $m=3$ & $m=4$ \\
\hline
$d=48,s=12$ & 32.87 (8832) & 32.88 (10128) & 33.08 (11424)\\
\hline
$d=56,s=12$ & 33.00 (9872) & 32.97 (11168) & 33.16 (12464)\\
\hline
$d=48,s=16$ & 32.95 (11232) & 33.10 (13536) & 33.18 (15840)\\
\hline
$d=56,s=16$ & 33.01 (12336) & 33.12 (14640) & 33.17 (16944)\\
\hline
\end{tabular}
\end{center}
\vspace{-0.45cm}
\end{table}

\begin{figure}[t]\footnotesize
\centering
\includegraphics[width=\linewidth]{pic/FSRCNN_com.pdf}
\caption{Convergence curves of different network designs.}
\label{fig:com}
\end{figure}

\subsection{Towards Real-Time SR with FSRCNN}

Now we want to find a more concise FSRCNN network that can realize real-time SR while still keeping good performance. First, we calculate how many parameters a network can have while meeting the minimum requirement for a real-time implementation (24 fps). As mentioned in the introduction, the speed of SRCNN to upsample an image to the size $760\times 760$ is 1.32 fps. The upscaling factor is 3, and SRCNN has 8032 parameters. Then, according to Equations~\ref{eqn:computationSRCNN} and \ref{eqn:computationFSRCNN}, the desired FSRCNN network should have at most $8032\times 1.32/24\times 3^2\approx 3976$ parameters. To achieve this goal, we find an appropriate configuration -- FSRCNN (32,5,1) -- that contains 3937 parameters. With our C++ test code, the speed of FSRCNN (32,5,1) reaches 24.7 fps, satisfying the real-time requirement. Furthermore, the FSRCNN (32,5,1) even outperforms SRCNN (9-1-5)~\cite{Dong2014} (see Tables~\ref{results91} and~\ref{results}).
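As a quick sanity check of the parameter budgets discussed above, the per-layer weight counts of FSRCNN($d,s,m$) implied by Equation~\ref{eqn:computationFSRCNN} (ignoring biases and PReLU parameters) can be tallied as follows; this is only an illustrative snippet, not part of our released code.
\begin{verbatim}
def fsrcnn_params(d, s, m):
    # Conv(5,d,1) + Conv(1,s,d) + m x Conv(3,s,s) + Conv(1,d,s) + DeConv(9,1,d)
    return 25 * d + s * d + 9 * m * s * s + d * s + 81 * d

assert fsrcnn_params(56, 12, 4) == 12464  # FSRCNN
assert fsrcnn_params(48, 12, 2) == 8832   # smallest setting above
assert fsrcnn_params(32, 5, 1) == 3937    # FSRCNN-s, within the ~3976 real-time budget
\end{verbatim}
The real-time budget itself follows from scaling SRCNN's 8032 parameters by the measured 1.32 fps, the 24 fps target, and the $3^2$ saving obtained by operating on the LR image.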
\subsection{Experiments for Different Upscaling Factors}
\label{sec:transfer}

Unlike existing methods~\cite{Dong2014,Dong2015} that need to train a network from scratch for a different scaling factor, the proposed FSRCNN enjoys the flexibility of learning and testing across upscaling factors by transferring the convolution filters (Sec.~\ref{subsec:across_factor}). We demonstrate this flexibility in this section. We choose the FSRCNN (56,12,4) as the default network. As we have obtained a well-trained model under the upscaling factor 3 (in Section~\ref{sec:Investigation}), we then train the network for $\times$2 on the basis of that for $\times$3. To be specific, the parameters of all convolution filters in the well-trained model are transferred to the network of $\times$2. During training, we only fine-tune the deconvolution layer on the 91-image and General-100 datasets of $\times$2. For comparison, we train another network for $\times$2 from scratch. The convergence curves of these two networks are shown in Figure~\ref{fig:upscalingfactor}. Obviously, with the transferred parameters, the network converges very fast (in only a few hours), with the same good performance as that of training from scratch. In the following experiments, we only train the networks from scratch for $\times$3, and fine-tune the corresponding deconvolution layers for $\times$2 and $\times$4.

\begin{figure}[t]\footnotesize
\centering
\includegraphics[width=0.7\linewidth]{pic/FSRCNN_com3.pdf}
\caption{Convergence curves for different training strategies.}
\label{fig:upscalingfactor}
\end{figure}

\subsection{Comparison with State-of-the-Arts}
\label{subsec:sota}

\textbf{Compare using the same training set.} First, we compare our method with four state-of-the-art learning-based SR algorithms that rely on external databases, namely the super-resolution forest (SRF)~\cite{Schulter2015}, SRCNN~\cite{Dong2014}, SRCNN-Ex~\cite{Dong2015} and the sparse coding based network (SCN)~\cite{Wang2015}. The implementations of these methods are all based on their released source code. As they are written in different programming languages, the comparison of their test times may not be fair, but it still reflects the main trend. To have a fair comparison on restoration quality, all models are trained on the augmented 91-image dataset, so the results are slightly different from those in the corresponding papers. We select two representative FSRCNN networks -- FSRCNN (short for FSRCNN (56,12,4)) and FSRCNN-s (short for FSRCNN (32,5,1)). The inference time is tested with the C++ implementation on an Intel i7 CPU at 4.0 GHz. The quantitative results (PSNR and test time) for different upscaling factors are listed in Table~\ref{results91}. We first look at the test time, which is the main focus of our work. The proposed FSRCNN is undoubtedly the fastest method, being at least $40$ times faster than SRCNN-Ex, SRF and SCN (with the upscaling factor 3), while the fastest FSRCNN-s can achieve real-time performance ($>24$ fps) on almost all the test images. Moreover, the FSRCNN still outperforms the previous methods in PSNR, especially for $\times$2 and $\times$3. We also notice that the FSRCNN achieves slightly lower PSNR than SCN on factor 4. This is mainly because SCN adopts two models of $\times$2 to upsample an image by $\times$4. We have also tried this strategy and achieved comparable results. However, as we pay more attention to speed, we still present the results of a single network.
\noindent
\textbf{Compare using different training sets (following the literature).} To follow the literature, we also compare the best PSNR results reported in the corresponding papers, as shown in Table~\ref{results}. We also add two more competitive methods -- KK~\cite{Kim2010} and A+~\cite{Timofte2014} -- for comparison. Note that these results are obtained using different datasets, and our models are trained on the 91-image and General-100 datasets. From Table~\ref{results}, we can see that the proposed FSRCNN still outperforms the other methods on most upscaling factors and datasets. We have also done comprehensive comparisons in terms of SSIM and IFC~\cite{sheikh2005information} in Tables~\ref{result1} and~\ref{result2}, where we observe the same trend. The reconstructed images of FSRCNN (shown in Figures~\ref{fig:result1} and~\ref{fig:result2}; more examples can be found on the project page) are sharper and clearer than the other results. On the other hand, the restoration quality of the small models (FSRCNN-s and SRCNN) is slightly worse than that of the large models (SRCNN-Ex, SCN and FSRCNN). In Figure~\ref{fig:result1}, we can observe some ``jaggies'' or ringing effects in the results of FSRCNN-s and SRCNN.

\section{Conclusion}
Observing the limitations of current deep learning based SR models, we explore a more efficient network structure to achieve high running speed without loss of restoration quality. We approach this goal by re-designing the SRCNN structure, and achieve a final acceleration of more than 40 times. Extensive experiments suggest that the proposed method yields satisfactory SR performance while being superior in terms of running time. The proposed model can be adapted for real-time video SR, and may motivate fast deep models for other low-level vision tasks.

\noindent
\textbf{Acknowledgment}. This work is partially supported by SenseTime Group Limited.

\begin{table*}\scriptsize
\caption{The results of PSNR (dB) and test time (sec) on three test datasets. All models are trained on the 91-image dataset.
}\label{results91} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline test & upscaling & \multicolumn{2}{c|}{Bicubic} & \multicolumn{2}{c|}{SRF~\cite{Schulter2015}} & \multicolumn{2}{c|}{SRCNN~\cite{Dong2014}} & \multicolumn{2}{c|}{SRCNN-Ex~\cite{Dong2015}} &\multicolumn{2}{c|}{SCN~\cite{Wang2015}} & \multicolumn{2}{c|}{FSRCNN-s} & \multicolumn{2}{c|}{FSRCNN}\\ \cline{3-16} dataset & factor & PSNR&Time & PSNR&Time& PSNR&Time &PSNR&Time & PSNR&Time &PSNR&Time& PSNR&Time\\ \hline\hline Set5 & 2 & 33.66 & -& 36.84 & 2.1 & 36.33 & 0.18 & 36.67 & 1.3 &36.76 & 0.94 & 36.53 & \textbf{0.024} & \textbf{36.94} &0.068 \\ Set14 & 2 & 30.23 & -& 32.46 & 3.9 & 32.15 & 0.39 & 32.35 & 2.8 & 32.48 & 1.7 & 32.22 & \textbf{0.061} & \textbf{32.54} &0.16 \\ BSD200 & 2 & 29.70 &- & 31.57 & 3.1 & 31.34 & 0.23 & 31.53 & 1.7 & 31.63 & 1.1 & 31.44 &\textbf{0.033}& \textbf{31.73} &0.098 \\ \hline\hline Set5 & 3 & 30.39 & -& 32.73 & 1.7 & 32.45 & 0.18& 32.83 & 1.3 & 33.04 & 1.8 & 32.55 &\textbf{0.010}& \textbf{33.06} &0.027 \\ Set14 & 3 & 27.54 & -& 29.21 & 2.5 & 29.01 & 0.39& 29.26 & 2.8 &29.37 & 3.6 & 29.08 &\textbf{0.023}& \textbf{29.37} &0.061 \\ BSD200 & 3 & 27.26 &- & 28.40 & 2.0 & 28.27 & 0.23& 28.47 & 1.7 &28.54 & 2.4 & 28.32 &\textbf{0.013}& \textbf{28.55} &0.035 \\ \hline\hline Set5 & 4 & 28.42 & -& 30.35 & 1.5 & 30.15 & 0.18 & 30.45 & 1.3& \textbf{30.82} & 1.2 & 30.04 & \textbf{0.0052} & 30.55 &0.015 \\ Set14 & 4 & 26.00 & -& 27.41 & 2.1 & 27.21 & 0.39 & 27.44 & 2.8& \textbf{27.62} & 2.3 & 27.12 & \textbf{0.0099} & 27.50 & 0.029 \\ BSD200 & 4 & 25.97 &- & 26.85 & 1.7 & 26.72 & 0.23 & 26.88 & 1.7 & \textbf{27.02} & 1.4 & 26.73 & \textbf{0.0072} & 26.92 &0.019 \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}\scriptsize \caption{The results of PSNR (dB) on three test datasets. We present the best results reported in the corresponding paper. The proposed FSCNN and FSRCNN-s are trained on both 91-image and General-100 dataset. 
More comparisons with other methods on PSNR, SSIM and IFC~\cite{sheikh2005information} can be found in the supplementary file.} \label{results} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline test & upscaling & {Bicubic} & KK~\cite{Kim2010} & A+~\cite{Timofte2014} & {SRF~\cite{Schulter2015}} & {SRCNN~\cite{Dong2014}} & {SRCNN-Ex~\cite{Dong2015}} &{SCN~\cite{Wang2015}} & {FSRCNN-s} & {FSRCNN}\\ \cline{3-11} dataset & factor & PSNR & PSNR & PSNR &PSNR & PSNR &PSNR & PSNR &PSNR & PSNR \\ \hline\hline Set5 & 2 & 33.66 & 36.20 & 36.55 & 36.89 & 36.34 & 36.66 & 36.93 & 36.58 & \textbf{37.00} \\ Set14 & 2 & 30.23 & 32.11 & 32.28 & 32.52 & 32.18 & 32.45 & 32.56 & 32.28 & \textbf{32.63} \\ BSD200 & 2 & 29.70 & 31.30 & 31.44 & 31.66 & 31.38 & 31.63 & 31.63 & 31.48 & \textbf{31.80} \\ \hline\hline Set5 & 3 & 30.39 & 32.28 & 32.59 & 32.72 & 32.39 & 32.75 & 33.10 & 32.61 & \textbf{33.16} \\ Set14 & 3 & 27.54 & 28.94 & 29.13 & 29.23 & 29.00 & 29.30 & 29.41 & 29.13 & \textbf{29.43} \\ BSD200 & 3 & 27.26 & 28.19 & 28.36 & 28.45 & 28.28 & 28.48 & 28.54 & 28.32 & \textbf{28.60} \\ \hline\hline Set5 & 4 & 28.42 & 30.03 & 30.28 & 30.35 & 30.09 & 30.49 & \textbf{30.86} & 30.11 & 30.71 \\ Set14 & 4 & 26.00 & 27.14 & 27.32 & 27.41 & 27.20 & 27.50 & \textbf{27.64} & 27.19 & 27.59 \\ BSD200 & 4 & 25.97 & 26.68 & 26.83 & 26.89 & 26.73 & 26.92 & \textbf{27.02} & 26.75 & 26.98 \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}[!h]\scriptsize \caption{The results of PSNR (dB), SSIM and IFC~\cite{sheikh2005information} on the Set5~\cite{Sheikh2005}, Set14~\cite{Zeyde2012} and BSD200~\cite{martin2001database} datasets.}\label{result1} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline test & upscaling & Bicubic & KK~\cite{Kim2010} & ANR~\cite{Timofte2013} & A+~\cite{Timofte2013} & SRF~\cite{Schulter2015} \\ dataset & factor & PSNR/SSIM/IFC & PSNR/SSIM/IFC &PSNR/SSIM/IFC & PSNR/SSIM/IFC & PSNR/SSIM/IFC\\ \hline\hline Set5 & 2 & 33.66/0.9299/6.10 & 36.20/0.9511/6.87 & 35.83/0.9499/8.09 & 36.55/0.9544/8.48 & 36.87/0.9556/\textbf{8.63} \\ Set14 & 2 & 30.23/0.8687/6.09 & 32.11/0.9026/6.83 & 31.80/0.9004/7.81 & 32.28/0.9056/8.11 &32.51/0.9074/\textbf{8.22} \\ BSD200 & 2 & 29.70/0.8625/5.70 & 31.30/0.9000/6.26 & 31.02/0.8968/7.27 & 31.44/0.9031/7.49 & 31.65/0.9053/\textbf{7.60}\\ \hline\hline Set5 & 3 & 30.39/0.9299/6.10 & 32.28/0.9033/4.14 & 31.92/0.8968/4.52 & 32.59/0.9088/4.84 & 32.71/0.9098/\textbf{4.90} \\ Set14 & 3 & 27.54/0.7736/3.41 & 28.94/0.8132/3.83 & 28.65/0.8093/4.23 & 29.13/0.8188/4.45 & 29.23/0.8206/\textbf{4.49} \\ BSD200 & 3 & 27.26/0.7638/3.19 & 28.19/0.8016/3.49 & 28.02/0.7981/3.91 & 28.36/0.8078/4.07 & 28.45/0.8095/4.11 \\ \hline\hline Set5 & 4 & 28.42/0.8104/2.35 & 30.03/0.8541/2.81 & 29.69/0.8419/3.02 & 30.28/0.8603/\textbf{3.26} & 30.35/0.8600/3.26\\ Set14 & 4 & 26.00/0.7019/2.23 & 27.14/0.7419/2.57 & 26.85/0.7353/2.78 & 27.32/0.7471/2.74 & 27.41/0.7497/\textbf{2.94}\\ BSD200 & 4 & 25.97/0.6949/2.04 & 26.68/0.7282/2.22 & 26.56/0.7253/2.51 & 26.83/0.7359/2.62 & 26.89/0.7368/\textbf{2.62}\\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}[!h]\scriptsize \caption{The results of PSNR (dB), SSIM and IFC~\cite{sheikh2005information} on the Set5~\cite{Sheikh2005}, Set14~\cite{Zeyde2012} and BSD200~\cite{martin2001database} datasets.}\label{result2} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline test & upscaling & SRCNN~\cite{Dong2014} & SRCNN-Ex~\cite{Dong2015} & SCN~\cite{Wang2015} & FSRCNN-s & FSRCNN \\ dataset & factor & PSNR/SSIM/IFC& PSNR/SSIM/IFC 
&PSNR/SSIM/IFC & PSNR/SSIM/IFC & PSNR/SSIM/IFC\\ \hline\hline Set5 & 2 & 36.34/0.9521/7.54 & 36.66/0.9542/8.05 & 36.76/0.9545/7.32 & 36.58/0.9532/7.75 & \textbf{37.00}/\textbf{0.9558}/8.06 \\ Set14 & 2 & 32.18/0.9039/7.22 & 32.45/0.9067/7.76 & 32.48/0.9067/7.00 & 32.28/0.9052/7.47 & \textbf{32.63}/\textbf{0.9088}/7.71\\ BSD200 & 2 & 31.38/0.9287/6.80 & 31.63/0.9044/7.26 & 31.63/0.9048/6.45 & 31.48/0.9027/7.01 & \textbf{31.80}/\textbf{0.9074}/7.25\\ \hline\hline Set5 & 3 & 32.39/0.9033/4.25 & 32.75/0.9090/4.58 & 33.04/0.9136/4.37 & 32.54/0.9055/4.56 & \textbf{33.16}/\textbf{0.9140}/4.88\\ Set14 & 3 & 29.00/0.8145/3.96 & 29.30/0.8215/4.26 & 29.37/0.8226/3.99 & 29.08/0.8167/4.24 & \textbf{29.43}/\textbf{0.8242}/4.47 \\ BSD200 & 3 & 28.28/0.8038/3.67 & 28.48/0.8102/3.92 & 28.54/0.8119/3.59 & 28.32/0.8058/3.96 & \textbf{28.60}/\textbf{0.8137}/\textbf{4.11} \\ \hline\hline Set5 & 4 & 30.09/0.8530/2.86 & 30.49/0.8628/3.01 & \textbf{30.82}/\textbf{0.8728}/3.07 & 30.11/0.8499/2.76 & 30.71/0.8657/3.01\\ Set14 & 4 & 27.20/0.7413/2.60 & 27.50/0.7513/2.74 & \textbf{27.62}/\textbf{0.7571}/2.71 & 27.19/0.7423/2.55 & 27.59/0.7535/2.70 \\ BSD200 & 4 & 26.73/0.7291/2.37 & 26.92/0.7376/2.46 & \textbf{27.02}/\textbf{0.7434}/2.38 & 26.75/0.7312/2.32 & 26.98/0.7398/2.41\\ \hline \end{tabular} \end{center} \end{table*} \begin{figure*}[hp] \footnotesize \begin{center} \includegraphics[width=\linewidth]{pic/lenna1.pdf} \caption{The ``lenna" image from the Set14 dataset with an upscaling factor 3.} \label{fig:result1} \end{center} \end{figure*} \begin{figure*}[hp]\footnotesize \begin{center} \includegraphics[width=\linewidth]{pic/butterfly1.pdf} \caption{The ``butterfly" image from the Set5 dataset with an upscaling factor 3.} \label{fig:result2} \end{center} \end{figure*} \clearpage \bibliographystyle{splncs}
\section{Supplemental Material}

For our scheme, using a large enough sample size is important when tracking the polarization state, because the statistical fluctuation of the samples will weaken the reference tracking ability. Determining the required number of keys in a sample is therefore an important problem. The photons with an arbitrary polarization, before going through the PBS (we use the rectilinear basis in Fig. \ref{fig:EX1 setup} as an example), can be described by \cite{muga2011qber}
\begin{equation}
\left| \phi\right\rangle = \sin \theta \left| V\right\rangle + \cos \theta e^{i \phi} \left| H\right\rangle
\end{equation}
where $\left| V\right\rangle$ and $\left| H\right\rangle$ represent the states that exit through the vertical and horizontal ports of the PBS, respectively, $\theta$ is related to the polarization projection, and $\phi$ is the retardation between $\left| H\right\rangle$ and $\left| V\right\rangle$. We then use $A_1$ and $A_2$ to represent the fractions of photons that exit through the two ports of the PBS,
\begin{equation}
A_1=\left|\left\langle V| \phi \right\rangle \right| ^2= 1-\cos^2 \theta
\end{equation}
\begin{equation}
A_2=\left|\left\langle H| \phi\right\rangle \right| ^2= \cos^2 \theta
\end{equation}
so the probabilities for detectors $D1$ and $D2$ to register counts are \cite{ma2005practical}
\begin{equation}
P_1=1-e^{-\eta\left\langle n_1\right\rangle }
\label{1}
\end{equation}
\begin{equation}
P_2=1-e^{-\eta\left\langle n_2\right\rangle }
\label{2}
\end{equation}
where $\left\langle n_1\right\rangle = A_1 \left\langle n\right\rangle$ and $\left\langle n_2\right\rangle = A_2 \left\langle n\right\rangle$ are the mean numbers of data photons per pulse reaching $D1$ and $D2$ after the PBS, and $\left\langle n\right\rangle$ is the mean photon number per pulse of the light launched by Alice. $\eta$ is the overall transmission and detection efficiency between Alice and Bob,
\begin{equation}
\label{eta}
\eta=t_{ab}\eta_{Bob}=10^{-\alpha l/10}\eta_{Bob}
\end{equation}
where $\alpha$, $l$ and $\eta_{Bob}$ are the loss coefficient in dB/km, the length of the fiber in km, and the detection efficiency of Bob's detectors, respectively. For simplicity, we suppose the devices in our experiment are perfect, so we ignore the influence of dark counts in the SPDs.

Without loss of generality, we suppose Alice only sends one polarization state $\left| H\right\rangle$. We use $QBER_t$ to represent the true error rate in the fiber. Then, we have
\begin{equation}
QBER_t = \left\langle n_1\right\rangle/ (\left\langle n_1\right\rangle+\left\langle n_2\right\rangle)= 1-\cos^2 \theta
\end{equation}
In our scheme, we use $QBER_e$, obtained by consuming sifted keys, to estimate $QBER_t$ and to serve as feedback. The sample size of consumed sifted keys is $B$, and the counts for $D1$ and $D2$ are $M$ and $N$ after Alice sends $B$ pulses to Bob. Then
\begin{equation}\label{6}
QBER_e = M/(M+N) = M/B
\end{equation}
$M$ and $N$ follow binomial distributions,
\begin{equation}
P(M=m)=\binom{B}{m}P_1^m (1-P_1)^{B-m}
\label{3}
\end{equation}
\begin{equation}
P(N=n)=\binom{B}{n}P_2^n (1-P_2)^{B-n}
\label{4}
\end{equation}
If the sample size were infinite, we would have $QBER_e = QBER_t$, but in real circumstances there is a deviation between $QBER_e$ and $QBER_t$. According to probability theory, there is a $99.7\%$ probability for $QBER_e$ to lie in the interval $[E(QBER_e)-3\sigma_{QBER_e},\; E(QBER_e)+3\sigma_{QBER_e}]$, where $E(QBER_e)$ and $\sigma_{QBER_e}$ are the expectation and standard deviation of $QBER_e$.
Because $E(QBER_e) = QBER_t$ obviously holds, we use $\Delta_{QBER}=3\sigma_{QBER_e}$ as the maximum offset of $QBER_e$ from $QBER_t$. According to Eq.~\ref{6} and Refs.~\cite{johnson1999survival,struart1999kendall}, we have
\begin{equation}\begin{split}\label{5}
\sigma_{QBER_e} &=\sqrt{Var\left(\frac{M}{B}\right)}\\
&\approx \sqrt{\frac{1}{E^2(B)}Var(M) - 2\frac{E(M)}{E^3(B)}Cov(M,B) + \frac{E^2(M)}{E^4(B)}Var(B)}\\
\end{split}\end{equation}
where $Var$ denotes the variance and $Cov$ denotes the covariance. By substituting Eqs.~\ref{3} and \ref{4} into Eq.~\ref{5}, we get
\begin{equation}\label{8}
\Delta_{QBER} = 3\sigma_{QBER_e} \approx 3\sqrt{\frac{1}{B}\,\frac{P_1 P_2 (P_1+P_2-2P_1P_2)}{(P_1+P_2)^3}}
\end{equation}
The maximum offset $\Delta_{QBER}$ plotted versus the sample size is shown in Fig. \ref{fig:QBER}. The curves are computed from Eqs.~\ref{1}, \ref{2} and \ref{8} with $QBER_t=1\%, 2\%, 3\%$, respectively, and with $\eta=10\%$, $\left\langle n\right\rangle=0.1$ in all cases.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{figure35.png}
\caption{(Color online) The relationship between the number of consumed bits used to track the SOP and the standard deviation of $QBER_e$ for different values of $QBER_t$.}
\label{fig:QBER}
\end{figure}
According to the analysis above, and considering the low repetition rate of our QKD system, the sample size is chosen to be 2500 bits for one polarization basis. Under these circumstances, $\Delta_{QBER}$ is under 0.6$\%$ when $QBER_t$ is below $10\%$. If $QBER_t$ is greater than $10\%$, we use all the sifted keys to track the polarization reference misalignment, which is obviously enough for our scheme. Of course, when the repetition rate rises, we can sample more bits for polarization tracking, and $\Delta_{QBER}$ will obviously be smaller.
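For reference, the dependence of $\Delta_{QBER}$ on the sample size can be evaluated numerically from Eqs.~\ref{1}, \ref{2} and \ref{8}; the short script below is only an illustrative sketch with the parameter values assumed above ($\eta=10\%$, $\left\langle n\right\rangle=0.1$), not the code used to produce Fig.~\ref{fig:QBER}.
\begin{verbatim}
import numpy as np

def delta_qber(B, qber_t, eta=0.1, n_mean=0.1):
    # Detection probabilities P1, P2 for the two detectors, with mean photon
    # numbers <n1> = qber_t * <n> and <n2> = (1 - qber_t) * <n>.
    p1 = 1.0 - np.exp(-eta * qber_t * n_mean)
    p2 = 1.0 - np.exp(-eta * (1.0 - qber_t) * n_mean)
    # Maximum offset Delta_QBER = 3 * sigma_QBERe for sample size B.
    var = p1 * p2 * (p1 + p2 - 2.0 * p1 * p2) / (B * (p1 + p2) ** 3)
    return 3.0 * np.sqrt(var)

for q in (0.01, 0.02, 0.03):
    print(q, delta_qber(2500, q))
\end{verbatim}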
\chapter{Introduction}
\label{chap:1}

Explaining syntactic variation and universals, including the constraints on that variation across the languages of the world, is essential from both a theoretical and a practical point of view. It is in fact one of the main goals in linguistics \cite{Greenberg-1963,Dryer-1992,Evans2009}. In computational linguistics, these kinds of syntactic regularities and constraints could be utilized as prior knowledge about grammars, which would be valuable for improving the performance of various syntax-oriented systems such as parsers or grammar induction systems, e.g., by being encoded as {\it features} in a system \cite{collins-thesis1999,McDonald:2005:NDP:1220575.1220641}.

This thesis is about such syntactic universals. Our goal in this thesis is to identify a good syntactic constraint that fits natural language sentences well and thus could be exploited to improve the performance of syntax-oriented systems such as parsers. To this end, we pick a well-known linguistic phenomenon that might be universal across languages, empirically examine its universality across diverse languages using cross-linguistic datasets, and present computational experiments to demonstrate its utility in a real application. Along with this, we also define several computational algorithms that efficiently exploit the constraint during sentence processing. As an application, we show that our constraint helps in the task of unsupervised syntactic parsing, or grammar induction, where the goal is to find salient syntactic patterns without explicit supervision about grammars.

In linguistics, one pervasive hypothesis about the origin of such syntactic constraints is that they come from the limitations of the human cognitive mechanism and the pressures associated with language acquisition and use \cite{jaeger2011language-gsc,Fedzechkina30102012}. In other words, since language is a tool for communication, it is natural to assume that its shape has been formed to increase daily communication efficiency or learnability for language learners. The underlying commonalities in diverse languages are then understood as the outcome of such pressures that every language user might naturally suffer from. The constraint we focus on in this thesis also has its origin in a restriction on the human ability of comprehension observed in several psycholinguistic experiments, which we introduce next.

\paragraph{Center-embedding}
It is well known in the psycholinguistic literature that a nested, or center-embedded, structure is particularly difficult for comprehension:
\enumsentence{\# The reporter [who the senator [who Mary met] attacked] ignored the president.}
\label{sent:intro:embedding}
\noindent
This sentence is an instance of center-embedding, the syntactic construction indicated with brackets. This observation will be the starting point of the current study. The difficulty of center-embedded structures has been attested in a number of languages including English \cite{Gibson2000The-dependency-,Chen2005144} and Japanese \cite{COGS:COGS1067}. Compared to these behavioral studies, the current study aims to characterize the phenomenon of center-embedding from {\it computational} and quantitative perspectives. For instance, one contribution of the current study is to show that center-embedding is in fact a rarely observed syntactic phenomenon across a variety of languages. We verify this fact using syntactically annotated corpora, i.e., treebanks, of more than 20 languages.
\paragraph{Left-corner} Another important concept in this thesis is {\it left-corner} parsing. A left-corner parser parses an input sentence on a {\it stack}; the distinguished property of it is that its stack depth increases only when generating, or accepting center-embedded structures. These formal properties of left-corner parsers were studied more than 20 years ago \cite{abney91memory,conf/coling/Resnik92} although until now there exists little study concerning its empirical behaviors as well as its potential for a device to exploit syntactic regularities of languages as we investigate in this thesis. One previous attempt for utilizing a left-corner parser in a practical application is Johnson's linear-time tabular parsing by approximating the state space of a parser by a finite state machine. However, this trial was not successful \cite{conf/acl/Johnson98}.\footnote{The idea is that since a left-corner parser can recognize most of (English) sentences within a limited stack depth bound, e.g., 3, the number of possible stack configurations will be constant and we may construct a finite state machine for a given context-free grammar. However in practice, the grammar constant for this algorithm gets much larger, leading to $O(n^3)$ asymptotic runtime, the same as the ordinary parsing method, e.g., CKY.} \paragraph{Dependency} Our empirical examinations listed above will be based on the syntactic representation called {\it dependency} structures. In computational linguistics, constituent structures have long played a central role as a representation of natural language syntax \cite{Stolcke:1995:EPC:211190.211197,P97-1003,J98-4004,klein-manning:2003:ACL} although this situation has been changed and the recent trend in the parsing community has favored dependency-based representations, which are conceptually simpler and thus often lead to more efficient algorithms \cite{Nivre2003,Yamada03,McDonald:2005:NDP:1220575.1220641,GoldbergTDP13}. Another important reason for us to focus on this representation is that its {\it unsupervised} induction is more tractable than the constituent representation, such as phrase-structure grammars. In fact, significant research on unsupervised parsing has been done in this decade though much of it assumes dependency trees as the underlying structure \cite{klein-manning:2004:ACL,smith-eisner-2006-acl-sa,bergkirkpatrick-EtAl:2010:NAACLHLT,marevcek-vzabokrtsky:2012:EMNLP-CoNLL,spitkovsky-alshawi-jurafsky:2013:EMNLP}. We discuss this computational issue more in the next chapter (Section \ref{sec:2:unsupervised}). The last, and perhaps the most essential advantage of a dependency representation is its cross-linguistic suitability. For studying the empirical behavior of some system across a variety of languages, the resources for those languages are essential. Compared to constituent structures, dependency annotations are available in many corpora covering more than 20 languages across diverse language families. Each treebank typically contains thousands of sentences with manually parsed syntactic trees. We use such large datasets to examine our hypotheses about universal properties of languages. Though the concepts introduced above, center-embedding and left-corner parsing, are both originally defined on constituent structures, we describe in this thesis a method by which they can be extended to dependency structures via a close connection between two representations. 
\section{Tasks and Motivations}
\label{sec:1:problem}

More specifically, the tasks we tackle in this thesis can be divided into the following two categories, each of which is based on specific motivations.

\subsection{Examining language universality of center-embedding avoidance}

We first examine the hypothesis that center-embedding is a phenomenon that every language user tries to avoid {\it regardless} of language. A quantitative study of this question across diverse languages has not yet been performed. Two motivations exist for this analysis. One is rather scientific: we examine the explanatory power of center-embedding avoidance as a universal grammatical constraint. This is ambitious, though we put more weight on the second, more practical motivation: the possibility that avoidance of center-embedding is a good syntactic bias to restrict the space of possible tree structures of natural language sentences. These analyses are the main topic of Chapter \ref{chap:transition}.

\subsection{Unsupervised grammar induction}
\label{sec:intro:unsupervised}

We then consider applying the constraint on center-embedding to the application of {\it unsupervised grammar induction}. In this task, the input to the algorithm is a collection of sentences, from which the model tries to extract salient patterns as a grammar. This setting contrasts with the more familiar {\it supervised} parsing task, in which typically some machine learning algorithm learns the mapping from a sentence to its syntactic tree based on training examples, i.e., sentences paired with their corresponding parse trees. In the unsupervised setting, our goal is to obtain a model that can parse a sentence without access to the correct trees for the training sentences. This is a particularly hard problem, though we expect that the universal syntactic constraint may help improve the performance, since it can effectively restrict the possible search space for the model.

\paragraph{Motivations}
A typical reason to tackle this task is a purely engineering one: although the number of languages for which a resource (i.e., a treebank) is accessible keeps increasing, there are still many languages in the world for which little to no resources are available, since the creation of a new treebank from scratch is still very hard and time consuming. Unsupervised learning of grammars would be helpful in this situation, as it provides a cheap solution without requiring the manual efforts of linguistic experts. A more realistic setting might be to use the output of an unsupervised system as the initial annotation, which could then be corrected by experts later. In short, a better unsupervised system can reduce the effort of experts in preparing new treebanks. This motivation holds for any effort in unsupervised grammar induction. However, grammar induction with particular syntactic biases or constraints, as we pursue in this thesis, is also appealing for the following reasons:
\begin{itemize}
\item We can regard this task as a typical example of the broader problem of learning syntax without explicit supervision.
An example of such problem is a grounding task, in which the learner induces the model of (intermediate) tree structures that bridge an input sentence and its semantics, which may be represented in one of several different forms, depending on the task and corpus, e.g., the logical expression \cite{Zettlemoyer05learningto,kwiatkowksi-EtAl:2010:EMNLP} and the answer to the given question \cite{liang-jordan-klein:2011:ACL-HLT2011,berant-EtAl:2013:EMNLP,kwiatkowski-EtAl:2013:EMNLP,kushman-EtAl:2014:P14-1}. In these tasks, though some implicit supervision is provided, the search space is typically still very large. Obtaining a positive result for the current unsupervised learning, we argue, would present an important starting point for extending the current idea into such related grammar induction tasks. What type of supervision we should give for those tasks is also still an open problem; one possibility is that a good {\it prior} for general natural language syntax, as we investigate here, would reduce the amount of supervision necessary for successful learning. Finally, we claim that although the current study focuses on inducing dependency structures, the presented idea, avoiding center-embedding during learning, is general enough and not necessarily restricted to the dependency induction tasks. The main reason why we focus on dependency structures is rather computational (see Section \ref{sec:2:unsupervised}), but it may not hold in the grounded learning tasks in the previous works cited above. Moreover, recently more sophisticated grammars such as combinatory categorical grammars (CCGs) are shown to be learnable when appropriate light supervision is given as seed knowledge \cite{DBLP:journals/tacl/BiskH13,bisk-hockenmaier:2015:ACL-IJCNLP,AAAI159835}. We thus believe that the lesson from the current study will also shed light on those related learning tasks that do not assume dependency trees as the underlying structures. \item The final motivation is in the relevance to understanding of child language acquisition. Computational modeling of the language acquisition process, in particular using probabilistic models, has gained much attention in recent years \cite{Goldwater200921,kwiatkowski-EtAl:2012:EACL2012,johnson-demuth-frank:2012:ACL2012,doyle-levy:2013:NAACL-HLT}. Although many of those works cited above focus on modeling of relatively early acquisition problems, e.g., word segmentation from phonological inputs, some initial studies regarding acquisition mechanism of grammar also exist \cite{TACL504}. We argue here that our central motivation is {\it not} to get insights into the language acquisition mechanism although the structural constraint that we consider in this thesis (i.e., center-embedding) originally comes from observation of human sentence processing. This is because our problem setting is far from the real language acquisition scenario that a child may undergo. There exist many discrepancies between them; the most problematic one is found in the input to the learning algorithm. For resource reasons, the input sentences to our learning algorithm are largely written texts for adults, e.g, newswires, novels, and blogs. This contrasts with the relevant studies cited above on word segmentation in which the input for training is phonological forms of child directed speech, which is, however, available in only a few languages such as English. 
This poses a problem since our interest in this thesis is the language universality of the constraint, which needs many language treebanks to be evaluated. Another limitation of the current approach is that every model in this thesis assumes the part-of-speech (POS) of words in a sentence as its input rather than the surface form. This simplification makes the problem much simpler and is employed in most previous studies \cite{klein-manning:2004:ACL,smith-eisner-2006-acl-sa,bergkirkpatrick-EtAl:2010:NAACLHLT,DBLP:journals/tacl/BiskH13,grave-elhadad:2015:ACL-IJCNLP}, but it is of course an unrealistic assumption about inputs that children receive. Our main claim in this direction is that the success of the current approach would lead to the further study about the connection between the child language acquisition and computational modeling of the acquisition process. We leave the remaining discussion about this topic in the conclusion of this thesis. \end{itemize} \section{What this thesis is not about} \label{sec:intro:notabout} This thesis is not about psycholinguistic study, i.e., we do not attempt to reveal the mechanism of human sentence processing. Our main purpose in referring to the literature in psycholinguistics is to get insights for the syntactic patterns that every language might share to some extent and thus could be exploited from a system of computational linguistics. We would be pleased if our findings about the universal constraint affect the thinking of psycholinguists but this is not the main goal of the current thesis. \section{Contributions} Our contributions can be divided into the following four parts. The first two are our conceptual, or algorithmic contributions while the latter two are the contributions of our empirical study. \begin{description} \item[Left-corner dependency parsing algorithm] We show how the idea of left-corner parsing, which was originally developed for recognizing constituent structures, can be extended to dependency structures. We formalize this algorithm in the framework of transition-based parsing, a similar device to the pushdown automata often used to describe the parsing algorithm for dependency structures. The resulting algorithm has the property that its stack depth captures the degree of center-embedding of the recognizing structure. \item[Efficient dynamic programming] We extend this algorithm into the tabular method, i.e., chart parsing, which is necessary to combine the ideas of left-corner parsing and unsupervised grammar induction. In particular, we describe how the idea of head splitting \cite{Eisner:1999:EPB:1034678.1034748,eisner-2000-iwptbook}, a technique to reduce the time complexity of chart-based dependency parsing, can be applied in the current setting. \item[Evidence on the universality of center-embedding avoidance] We show that center-embedding is a rare construction across languages using treebanks of more than 20 languages. Such large-scale investigation has not been performed before in the literature. Our experiment is composed of two types of complementary analyses: a static, counting-based analysis of treebanks and a supervised parsing experiment to see the effect of the constraint when some amount of parse errors occurs. \item[Unsupervised parsing experiments with structural constraints] We finally show that our constraint does improve the performance of unsupervised induction of dependency grammars in many languages. 
\end{description} \section{Organization of the thesis} The following chapters are organized as follows. \begin{itemize} \item In Chapter \ref{chap:bg}, we summarize the backgrounds necessary to understand the following chapters of the thesis including several syntactic representations, the EM algorithm for acquiring grammars, and left-corner parsing. \item In Chapter \ref{chap:corpora}, we summarize the multilingual corpora we use in our experiments in the following chapters. \item Chapter \ref{chap:transition} covers the topics of the first and third contributions in the previous section. We first define a tool, i.e., a left-corner parsing algorithm for dependency structures, for our corpus analysis in the remainder of the chapter. \item Chapter \ref{chap:induction} covers the remaining, second and fourth contributions in the previous section. Our experiments on unsupervised parsing require the formulation of the EM algorithm, which relies on chart parsing for calculating sufficient statistics. We thus first develop a new dynamic programming algorithm and then apply it to the unsupervised learning task. \item Finally, in Chapter \ref{chap:conclusion}, we summarize the results obtained in this research and give directions for future studies. \end{itemize} \chapter{Background} \label{chap:bg} The topics covered in this chapter can be largely divided into four parts. Section \ref{sec:bg:representation} defines several important concepts for representing syntax, such as constituency and dependency, which become the basis of all topics discussed in this thesis. We then discuss left-corner parsing and related issues in Section \ref{sec:bg:left-corner}, such as the formal definition of center-embedding, which are in particular important to understand the contents in Chapter \ref{chap:transition}. The following two sections are more relevant to our application of unsupervised grammar induction discussed in Chapter \ref{chap:induction}. In Section \ref{sec:2:learning}, we describe the basis of learning probabilistic grammars, such as the EM algorithm. Finally in Section \ref{sec:2:unsupervised}, we provide the thorough survey of the unsupervised grammar induction, and make clear our motivation and standpoint for this task. \section{Syntax Representation} \label{sec:bg:representation} This section introduces several representations to describe the natural language syntax appearing in this thesis, namely context-free grammars, constituency, and dependency grammars, and discuss the connection between them. Though our focused representation in this thesis is dependency, the concepts of context-free grammars and constituency are also essential for us. For example, context-free grammars provide the basis for probabilistic modeling of tree structures as well as parameter estimation for it; We discuss how our dependency-based model can be represented as an instance of context-free grammars in Section \ref{sec:2:learning}. The connection between constituency and dependency appears many times in this thesis. For instance, the concept of {\it center-embedding} (Section \ref{sec:bg:left-corner}) is more naturally understood with constituency rather than with dependency. This section is about syntax representation or grammars and we do not discuss {\it parsing} but to see how the analysis with a grammar looks like, we mention a {\it parse} or a parse tree, which is the result of parsing for an input string (sentence). 
\subsection{Context-free grammars} \label{sec:2:cfg} A context-free grammar (CFG) is a useful model to describe the hierarchical syntactic structure of an input string (sentence). Formally a CFG is a quadruple $G=(N,\Sigma,P,S)$ where $N$ and $\Sigma$ are disjoint finite set of symbols called nonterminal and terminal symbols respectively. Terminal symbols are symbols that appear at leaf positions of a tree while nonterminal symbols appear at internal positions. $S \in N$ is the start symbol. $P$ is the set of rules of the form $A\rightarrow\beta$ where $A \in N$ and $\beta \in (N \cup \Sigma)^*$. Figure \ref{fig:2:cfg} shows an example of a CFG while Figure \ref{fig:2:cfg-deriv} shows an example of a parse with that CFG. On a parse tree {\it terminal nodes} refer to the nodes with terminal symbols (at leaf positions) while {\it nonterminal nodes} refer to other internal nodes with nonterminal symbols. {\it Preterminal nodes} are nodes that appear just above terminal nodes (e.g., VBD in Figure \ref{fig:2:cfg-deriv}). This model is useful because there is a well-known polynomial (cubic) time algorithm for parsing an input string with it, which also provides the basis for parameters estimation when we develop probabilistic models on CFGs (see Section \ref{sec:2:em}). \begin{figure}[t] \centering \begin{tabular}[t]{|rll|} \hline S &$\rightarrow$ &NP~~VP \\ VP&$\rightarrow$ &VBD~~NP \\ NP&$\rightarrow$ &DT~~NN \\ NP&$\rightarrow$ &Mary \\ VBD&$\rightarrow$&met \\ DT&$\rightarrow$&the \\ NN&$\rightarrow$&senator \\\hline \end{tabular} \caption{A set of rules in a CFG in which $N = \{$S, NP, VP, VBD, DT, NN$\}$, $\Sigma = \{$Mary, met, the, senator $\}$, and $S = $ S (the start symbol).} \label{fig:2:cfg} \end{figure} \begin{figure}[t] \centering \begin{tikzpicture}[sibling distance=10pt] \Tree [.S [.NP Mary ] [.VP [.VBD met ] [.NP [.DT the ] [.NN senator ] ] ] ] ] \end{tikzpicture} \caption{A parse tree with the CFG in Figure \ref{fig:2:cfg}.} \label{fig:2:cfg-deriv} \end{figure} \paragraph{Chomsky normal form} A CFG is said to be in Chomsky normal form (CNF) if every rule in $P$ has the form $A \rightarrow B~C$ or $A \rightarrow a$ where $A,B,C \in N$ and $a \in \Sigma$; that is, every rule is a binary rule or a unary rule and a unary rule is only allowed on a preterminal node. The CFG in Figure \ref{fig:2:cfg} is in CNF. We often restrict our attention to CNF as it is closely related to projective dependency grammars, our focused representation described in Section \ref{sec:2:dependency}. \subsection{Constituency} \label{sec:bg:constituency} The parse in Figure \ref{fig:2:cfg-deriv} also describes the {\it constituent} structure of the sentence. Each constituent is a group of consecutive words that function as a single cohesive unit. In the case of tree in Figure \ref{fig:2:cfg-deriv}, each constituent is a phrase spanned by some nonterminal symbol (e.g., ``the senator'' or ``met the senator''). We can see that the rules in Figure \ref{fig:2:cfg} define how a smaller constituents combine to form a larger constituent. This grammar is an example of {\it phrase-structure grammars}, in which each nonterminal symbol such as NP and VP describes the syntactic role of the constituent spanned by that nonterminal. For example, NP means the constituent is a noun phrase while VP means the one is a verb phrase. 
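To illustrate the cubic-time parsing claim above, a minimal CKY-style recognizer for the CNF grammar of Figure \ref{fig:2:cfg} can be written as follows. This Python sketch is our own illustration (the rule tables are hard-coded for the example grammar) and is not part of the implementation used in this thesis.
\begin{verbatim}
from itertools import product

# Rules of the example CNF grammar: binary rules and lexical (preterminal) rules.
BINARY = {("NP", "VP"): "S", ("VBD", "NP"): "VP", ("DT", "NN"): "NP"}
LEXICAL = {"Mary": {"NP"}, "met": {"VBD"}, "the": {"DT"}, "senator": {"NN"}}

def cky_recognize(words, start="S"):
    n = len(words)
    # chart[i][j]: set of nonterminals deriving words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] |= LEXICAL.get(w, set())
    for span in range(2, n + 1):          # cubic loop over (span, i, k)
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b, c in product(chart[i][k], chart[k][j]):
                    if (b, c) in BINARY:
                        chart[i][j].add(BINARY[(b, c)])
    return start in chart[0][n]

print(cky_recognize("Mary met the senator".split()))  # True
\end{verbatim}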
The phrase-structure grammar is often contrasted with dependency grammars, but we note that the concept of constituency is not restricted to phrase-structure grammars and plays an important role in dependency grammars as well, as we describe next. \subsection{Dependency grammars} \label{sec:2:dependency} \begin{figure}[t] \centering \begin{dependency}[theme=simple,label style={font=\sf}] \begin{deptext}[column sep=0.8cm] Mary \& met \& the \& senator \\ \end{deptext} \depedge{2}{1}{${\sf sbj}$} \depedge{2}{4}{${\sf obj}$} \depedge{4}{3}{${\sf nmod}$} \deproot[edge unit distance=2ex]{2}{${\sf root}$} \end{dependency} \caption{Example of labelled projective dependency tree.} \label{fig:2:projtree} \end{figure} \begin{figure}[t] \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.8cm] Mary \& met \& the \& senator \\ \end{deptext} \depedge{2}{1}{} \depedge{2}{4}{} \depedge{4}{3}{} \deproot[edge unit distance=2ex]{2}{} \end{dependency} \caption{Example of unlabelled projective dependency tree.} \label{fig:2:projtree-unlabelled} \end{figure} Dependency grammars analyze the syntactic structure of a sentence as a directed tree of word-to-word dependencies. Each dependency is represented as a directed arc from a {\it head} to a {\it dependent}, which is argument or adjunct and is modifying the head syntactically or semantically. Figure \ref{fig:2:projtree} shows an example of an analysis with a dependency grammar. We call these directed trees {\it dependency trees}. The question of what is the head is a matter of debate in linguistics. In many cases this decision is generally agreed but the analysis of certain cases is not settled, in particular those around function words \cite{zwicky199313}. For example although ``senator'' is the head of the dependency between ``the'' and ``senator'' in Figure \ref{fig:2:projtree} some linguists argue ``the'' should be the head \cite{abney1987english}. We discuss this problem more in Chapter \ref{chap:corpora} where we describe the assumed linguistic theory in each treebank used in our experiments. See also Section \ref{sec:ind:eval} where we discuss that such discrepancies in head definitions cause a problem in evaluation for unsupervised systems (and our solution for that). \paragraph{Labelled and unlabelled tree} If each dependency arc in a dependency tree is annotated with a label describing the syntactic role between two words as in Figure \ref{fig:2:projtree}, that tree is called a {\it labeled} dependency tree. For example the ${\sf sbj}$ label between ``Mary'' and ``met'' describes the subject-predicate relationship. A tree is called {\it unlabeled} if those labels are omitted, as in Figure \ref{fig:2:projtree-unlabelled}. In the remainder of this thesis, we only focus on {\it unlabeled} dependency trees although now most existing dependency-based treebanks provide labeled annotations of dependency trees. For our purpose, dependency labels do not play the essential role. For example, our analyses in Chapter \ref{chap:transition} are based only on the tree shape of dependency trees, which can be discussed with unlabeled trees. In the task of unsupervised grammar induction, our goal is to induce the unlabeled dependency structures as we discuss in detail in Section \ref{sec:2:unsupervised}. \paragraph{Constituents in dependency trees} The idea of constituency (Section \ref{sec:bg:constituency}) is not limited to phrase-structure grammars and we can identify the constituents in dependency trees as well. 
In dependency trees, a constituent is a phrase that comprises of a head and its descendants. For example, ``met the senator'' in Figure \ref{fig:2:projtree-unlabelled} is a constituent as it comprises of a head ``met'' and its descendants ``the senator''. Constituents in dependency trees may be more directly understood by considering a CFG for dependency grammars and the parses with it, which we describe in the following. \subsection{CFGs for dependency grammars and spurious ambiguity} \label{sec:bilexical} \begin{figure}[t] \centering \begin{tabular}[t]{rlll} \hline \multicolumn{3}{l}{Rewrite rule}& Semantics \\ \hline S &$\rightarrow$ &X[{\it a}] & Select {\it a} as the root word. \\ X[{\it a}] &$\rightarrow$ &X[{\it a}]~~X[{\it b}] & Select {\it b} as a right modifier of {\it a}. \\ X[{\it a}] &$\rightarrow$ &X[{\it b}]~~X[{\it a}] & Select {\it b} as a left modifier of {\it a}. \\ X[{\it a}] &$\rightarrow$ &{\it a} & Generate a terminal symbol. \\\hline \end{tabular} \caption{A set of template rules for converting dependency grammars into CFGs. {\it a} and {\it b} are lexical tokens (words) in the input sentence. X[{\it a}] is a nonterminal symbol indicating the head of the corresponding span is {\it a}.} \label{fig:2:bcfg} \end{figure} \begin{figure}[t] \centering \begin{tikzpicture}[sibling distance=10pt] \Tree [.S [.X[met] [.X[Mary] Mary ] [.X[met] [.X[met] met ] [.X[senator] [.X[the] the ] [.X[senator] senator ] ] ] ] ] \end{tikzpicture} \caption{A CFG parse that corresponds to the dependency tree in Figure \ref{fig:2:projtree-unlabelled}.} \label{fig:2:bcfg-deriv} \end{figure} Figure \ref{fig:2:bcfg-deriv} shows an example of a CFG parse, which corresponds to the dependency tree in Figure \ref{fig:2:projtree-unlabelled} but looks very much like the constituent structure in Figure \ref{fig:2:cfg-deriv}. With this representation, it is very clear that {\it the senator} or {\it met the senator} is a constituent in the tree. We often rewrite an original dependency tree in this CFG form to represent the underlying constituents explicitly, in particular when discussing the extension of the concept of center-embedding and left-corner algorithm, which have originally assumed (phrase-structure-like) constituent structure, to dependency. In this parse, every rewrite rule has one of the forms in Figure \ref{fig:2:bcfg}. Each rule specifies one dependency arc between a head and a dependent. For example, the rule X[senator] $\rightarrow$ X[the]~~X[senator] means that ``senator'' takes ``the'' as its left dependent. \paragraph{Spurious ambiguity} \begin{figure}[t] \centering \begin{tikzpicture}[sibling distance=10pt] \Tree [.S [.X[met] [.X[met] [.X[Mary] Mary ] [.X[met] met ] ] [.X[senator] [.X[the] the ] [.X[senator] senator ] ] ] ] \end{tikzpicture} \caption{Another CFG parse that corresponds to the dependency tree in Figure \ref{fig:2:projtree-unlabelled}.} \label{fig:2:bcfg-deriv2} \end{figure} On the tree in Figure \ref{fig:2:projtree-unlabelled}, we can identify ``'Mary met'' is also a constituent, which is although not a constituent in the parses in Figure \ref{fig:2:bcfg-deriv} and Figure \ref{fig:2:bcfg}. This divergence is related to the problem of {\it spurious ambiguity}, which indicates each dependency tree may correspond to more than one CFG parse. 
In fact, we can also build a CFG parse corresponding to the dependency tree in Figure \ref{fig:2:projtree-unlabelled} in which, contrary to Figure \ref{fig:2:cfg-deriv}, the constituent ``Mary met'' is explicitly represented with the nonterminal X[met] dominating ``Mary met'' (see Figure \ref{fig:2:bcfg-deriv2}). This ambiguity becomes a problem when we analyze the type of structure of a given dependency tree, e.g., whether a tree contains any center-embedded constructions. We will see the details and our solution to this problem later in Sections \ref{sec:oracle} and \ref{sec:memorycost}. Another related issue with this ambiguity is that it prevents us from using the EM algorithm for learning the models built on this CFG, which we discuss in detail in Section \ref{sec:2:sbg}.

\subsection{Projectivity}
\label{sec:bg:projective}

\begin{figure*}[t]
\centering
\begin{dependency}[theme=simple]
\begin{deptext}[column sep=0.6cm]
Mary \& met \& the \& senator \& yesterday \& who \& attacked \& the \& reporter \\
\end{deptext}
\depedge{2}{1}{}
\depedge{2}{4}{}
\depedge{4}{3}{}
\depedge{2}{5}{}
\depedge{4}{6}{}
\depedge{6}{7}{}
\depedge{7}{9}{}
\depedge{9}{8}{}
\deproot[edge unit distance=2.5ex]{2}{}
\end{dependency}
\caption{Example of non-projective dependency tree.}
\label{fig:2:nonprojtree-unlabelled}
\end{figure*}

A dependency tree is called {\it projective} if the tree does not contain any {\it crossing dependencies}. Every dependency tree that has appeared so far is projective. An example of a non-projective tree is shown in Figure \ref{fig:2:nonprojtree-unlabelled}. Though we have not mentioned it explicitly, the conversion method above can only handle projective dependency trees. If we allow non-projective structures in our analysis, then the model or the algorithm typically gets much more complex \cite{mcdonald-satta:2007:IWPT2007,journals/coling/Gomez-RodriguezCW11,DBLP:journals/coling/Kuhlmann13,TACL23}.\footnote{ The maximum spanning tree (MST) algorithm \cite{McDonald:2005:NDP:1220575.1220641} enables non-projective parsing in time complexity $O(n^2)$, which is more efficient than the ordinary CKY-based algorithm \cite{Eisner:1999:EPB:1034678.1034748}, though the model form (i.e., features or conditioning contexts) is restricted to be quite simple.}

Non-projective constructions are known to be relatively rare cross-linguistically \cite{nivre-EtAl:2007:EMNLP-CoNLL2007,DBLP:journals/coling/Kuhlmann13}. Thus, given the mathematical difficulty of handling them, dependency parsing algorithms are often restricted to dealing only with projective structures. For example, as we describe in Section \ref{sec:2:unsupervised}, most existing systems for unsupervised dependency induction restrict their attention to projective structures. Note that existing treebanks contain non-projective structures to varying degrees, so the convention is to restrict the model to generate only projective trees and to evaluate its quality against the (possibly) non-projective gold trees. We follow this convention in our experiments in Chapter \ref{chap:induction} and generally focus only on projective dependency trees in other chapters as well, if not mentioned explicitly.
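As a small illustration of the projectivity condition, the following sketch (our own, not code from the thesis) checks a dependency tree, given as a list of head positions, for crossing arcs; the two example head lists encode the trees in Figures \ref{fig:2:projtree-unlabelled} and \ref{fig:2:nonprojtree-unlabelled}.
\begin{verbatim}
def is_projective(heads):
    # heads[i-1] is the head position of word i (1-based); 0 denotes the root.
    arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads, start=1) if h != 0]
    for (i, j) in arcs:
        for (k, l) in arcs:
            # two arcs cross if exactly one endpoint of one lies strictly inside the other
            if i < k < j < l:
                return False
    return True

# "Mary met the senator": projective
print(is_projective([2, 0, 4, 2]))                 # True
# "Mary met the senator yesterday who attacked the reporter":
# the arc senator -> who crosses the arc met -> yesterday
print(is_projective([2, 0, 4, 2, 2, 4, 6, 9, 7]))  # False
\end{verbatim}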
\begin{figure*}[t] \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.6cm] Mary \& met \& the \& senator \& yesterday \& who \& attacked \& the \& reporter \\ \end{deptext} \depedge{2}{1}{} \depedge{2}{4}{} \depedge{4}{3}{} \depedge{2}{5}{} \depedge[thick,red]{2}{6}{} \depedge{6}{7}{} \depedge{7}{9}{} \depedge{9}{8}{} \deproot[edge unit distance=2.5ex]{2}{} \end{dependency} \caption{The result of pseudo-projectivization applied to the tree in Figure \ref{fig:2:nonprojtree-unlabelled}.} \label{fig:2:pseudo-unlabelled} \end{figure*}
\paragraph{Pseudo-projectivity}
There is a known technique called pseudo-projectivization \cite{nivre-nilsson:2005:ACL}, which converts any non-projective dependency tree into a projective tree with minimal modifications. The tree in Figure \ref{fig:2:pseudo-unlabelled} shows the result of applying this procedure to the non-projective tree in Figure \ref{fig:2:nonprojtree-unlabelled}.\footnote{ In the original formalization, pseudo-projectivization also performs label conversions. That is, the label on a (modified) dependency arc is changed to memorize the performed operations; with this memorization, the converted tree does not lose information. \newcite{nivre-nilsson:2005:ACL} show that non-projective dependency parsing is possible with this conversion and parsing algorithms that assume projectivity, by training and decoding with the converted forms and recovering the non-projective trees from the labeled (projective) outputs. Since our focus is basically only on unlabeled trees, we ignore those labels in Figure \ref{fig:2:pseudo-unlabelled}. } We perform this conversion on every tree when we analyze the structural properties of dependency trees in existing corpora in Chapter \ref{chap:transition}. See Section \ref{sec:analysis:settings} for details.
\section{Left-corner Parsing}
\label{sec:bg:left-corner}
In this section we describe left-corner parsing and summarize related issues, e.g., its relevance to psycholinguistic studies. Previous studies on left-corner parsing have focused only on (probabilistic) CFGs; we will extend the discussion in this section to dependency grammars in later chapters. In Chapter \ref{chap:transition}, we extend the idea to transition-based dependency parsing, while in Chapter \ref{chap:induction} we further extend the algorithm with efficient tabulation (dynamic programming). A somewhat confusing fact about left-corner parsing is that there exist two very different variants of the algorithm, called the arc-standard and arc-eager algorithms. Arc-standard left-corner parsing first appeared in the programming language literature \cite{4569645,Aho:1972:TPT:578789} and was later extended for natural language parsing, to improve efficiency \cite{Nederhof:1993:GLP:976744.976780} or to expand the contexts captured by the model \cite{manning2000,henderson:2004:ACL}. In the following we do {\it not} discuss these arc-standard algorithms, and focus only on the arc-eager algorithm, which has its origin in psycholinguistics \cite{Cognitive:MentalModels,abney91memory}\footnote{\newcite{Cognitive:MentalModels} introduced his left-corner parser as a cognitively plausible human parser, but it has been pointed out that his parser is actually not arc-eager but arc-standard \cite{conf/coling/Resnik92}, which is (at least) not relevant as a model of the human parser.} rather than in computer science.
\REVISE{ Left-corner parsing is closely related to the notion of center-embedding, a kind of branching pattern, which we characterize formally in Section \ref{sec:bg:embedding}. We then introduce the idea of left-corner parsing through a parsing strategy in Section \ref{chap:2:left-corner-strategy} to give intuition into parser behavior. In Sections \ref{sec:bg:left-corner-pda} -- \ref{sec:bg:anothervariant}, we discuss push-down automata (PDAs), a way of implementing the strategy as a parsing algorithm. While previous studies on arc-eager left-corner PDAs pay little attention to their theoretical properties beyond asymptotic behavior, in Section \ref{sec:bg:prop-left-corner} we present a detailed, thorough analysis of the properties of the presented PDA, as it plays an essential role in our exploration in the following chapters. Although we carefully design the left-corner PDA as a realization of the presented strategy, as we see later, this algorithm differs from the one previously presented as the left-corner PDA in the literature \cite{conf/coling/Resnik92,conf/acl/Johnson98}. This difference is important for us. In Section \ref{sec:bg:anothervariant} we discuss why this discrepancy occurs, as well as why we do not adopt the standard formalization. Finally, in Section \ref{sec:bg:psycho} we summarize the psycholinguistic relevance of the presented algorithms. }
\subsection{Center-embedding}
\label{sec:bg:embedding}
We first define some additional notation related to the CFGs that we introduced in Section \ref{sec:2:cfg}. Let us assume a CFG $G=(N,\Sigma,P,S)$. Then each symbol used below has the following meaning:
\begin{itemize} \item $A,B,C,\cdots$ are nonterminal symbols; \item $v,w,x,\cdots$ are strings of terminal symbols, e.g., $v \in \Sigma^*$; \item $\alpha,\beta,\gamma,\cdots$ are strings of terminal or nonterminal symbols, e.g., $\alpha \in (N \cup \Sigma)^*$. \end{itemize}
In the following, we define the notion of center-embedding using the left-most {\it derives} relation $\Rightarrow_{lm}$, though it is also possible to define it with the right-most one. $\Rightarrow_{lm}^*$ denotes derivation in zero or more steps while $\Rightarrow_{lm}^+$ denotes derivation in one or more steps. $\alpha \Rightarrow_{lm}^* \beta$ means $\beta$ can be derived from $\alpha$ by applying a list of rules in left-most order (always expanding the current left-most nonterminal). In this order, sentential forms in which a nonterminal symbol is followed by terminal symbols, i.e., $S \Rightarrow_{lm}^+ \alpha A v$, do not appear. For simplicity, we assume the CFG is in CNF. It is possible to define center-embedding for general CFGs, but the notation is more involved, and CNF is sufficient for discussing our extension to dependency grammars. Center-embedding can be characterized by a specific branching pattern found in a CFG parse, which we define precisely below. We note that the notion of center-embedding could be defined in a different way. In fact, as we describe later, the existence of several variants of arc-eager left-corner parsers is related to this arbitrariness in the definition of center-embedding. We postpone the discussion of this issue until Section \ref{sec:bg:anothervariant}.
\begin{newdef} \label{def:bg:embedding} A CFG parse involves center-embedding if the following derivation is found in it: \begin{equation*} S \Rightarrow_{lm}^* v \underline{A} \alpha \Rightarrow_{lm}^+ v w \underline{B} \alpha \Rightarrow_{lm} v w \underline{C} D \alpha \Rightarrow_{lm}^+ v w x D \alpha; ~~ |x| \geq 2, \end{equation*} where the underlined symbol indicates that that symbol is expanded by the following $\Rightarrow$. The condition $|x| \geq 2$ means the constituent rooted at $C$ must comprise more than one word. \end{newdef}
Figure \ref{fig:2:center-embedding-pattern} shows an example of the branching pattern. The pattern always begins with right-branching edges, which are indicated by $v \underline{A} \alpha \Rightarrow_{lm}^+ v w \underline{B} \alpha$. Center-embedding is then detected if some $B$ is found which has a left child (e.g., $C$) that constitutes a span larger than one word. The final condition on the span length means the embedded subtree (rooted at $C$) has a right child. This {\it right $\rightarrow$ left $\rightarrow$ right} pattern is the distinguishing branching pattern of center-embedding. By detecting a series of these zig-zag patterns recursively, we can measure the {\it degree} of center-embedding in a given parse. Formally,
\begin{newdef} \label{def:bg:embed-depth} If the following derivation is found in a CFG parse: \begin{align} S \Rightarrow_{lm}^* v \underline{A} \alpha &\Rightarrow_{lm}^+ v w_1 \underline{B_1} \alpha \Rightarrow_{lm}^+ v w_1 \underline{C_1} \beta_1 \alpha \nonumber\\ &\Rightarrow_{lm}^+ v w_1 w_2 \underline{B_2} \beta_1 \alpha \Rightarrow_{lm}^+ v w_1 w_2 \underline{C_2} \beta_2 \beta_1 \alpha \nonumber\\ &\Rightarrow_{lm}^+ \cdots \label{eq:bg:embed-depth} \\ &\Rightarrow_{lm}^+ v w_1 \cdots w_{m'} \underline{B_{m'}} \beta_{{m'}-1} \cdots \beta_1 \alpha \Rightarrow_{lm}^+ v w_1 \cdots w_{m'} \underline{C_{m'}} \beta_{m'} \beta_{{m'}-1} \cdots \beta_1 \alpha \nonumber\\ &\Rightarrow_{lm}^+ v w_1 \cdots w_{m'} x \beta_{m'} \beta_{{m'}-1} \cdots \beta_1 \alpha;~~|x| \geq 2, \nonumber \end{align} the degree of center-embedding in it is the maximum value $m$ among all possible values of $m'$ (i.e., $m \geq m'$). \end{newdef}
Each line in Eq. \ref{eq:bg:embed-depth} corresponds to the detection of an additional embedding, except the last line, which checks the length of the most embedded constituent. Figures \ref{fig:2:depth-1} and \ref{fig:2:depth-2} show examples of parses of degree one and two, respectively. These are the {\it minimal} parses for each degree, meaning that degree two occurs only when the sentence length is $\geq$ 6. Note that the form of the last transform in the first line (i.e., $v w_1 \underline{B_1} \alpha \Rightarrow_{lm}^+ v w_1 \underline{C_1} \beta_1 \alpha$) does not match the one in Definition \ref{def:bg:embedding} (i.e., $v w \underline{B} \alpha \Rightarrow_{lm} v w \underline{C} D \alpha$). This modification is necessary because the first left descendant of $B_1$ in Eq. \ref{eq:bg:embed-depth} is not always the starting point of further center-embedding.
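To make the definition concrete, the following is a minimal Python sketch of one way to compute the degree of center-embedding of a parse: it reads the zig-zag pattern of Definition \ref{def:bg:embed-depth} off the sequence of left/right branches on the path from the root to each node spanning more than one word. Representing a parse as nested pairs of strings (with preterminals collapsed) is our own simplification for illustration.
\begin{verbatim}
def center_embedding_degree(tree, path=()):
    """Degree of center-embedding of a binary parse.  Leaves are strings;
    internal nodes are (left, right) pairs and thus span >= 2 words."""
    if isinstance(tree, str):
        return 0                      # a single word embeds nothing
    # Collapse the root-to-node path into maximal runs of 'L'/'R' branches.
    runs = []
    for direction in path:
        if not runs or runs[-1] != direction:
            runs.append(direction)
    # This node can act as the innermost C_{m'} only if it was reached by a
    # left branch; every complete (right-run, left-run) pair at the end of
    # the path then contributes one level of embedding.
    here = len(runs) // 2 if runs and runs[-1] == 'L' else 0
    left, right = tree
    return max(here,
               center_embedding_degree(left, path + ('L',)),
               center_embedding_degree(right, path + ('R',)))

# Minimal degree-one parse of Figure fig:2:depth-1 over "a b c d" ...
print(center_embedding_degree(('a', (('b', 'c'), 'd'))))   # 1
# ... and its mirror image (Figure fig:2:not-center-embedding).
print(center_embedding_degree((('a', ('b', 'c')), 'd')))   # 0
\end{verbatim}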
\label{sec:bg:center-embed} \begin{figure}[t] \centering \begin{minipage}[t]{.4\linewidth} \centering \begin{tikzpicture}[sibling distance=12pt] \tikzset{level distance=25pt} \Tree [.$S$ \edge[densely dashed]; [.$A$ $w$ [.$B$ [.$C$ \edge[roof]; {~~~~~~~} ] $D$ ] ] ] \end{tikzpicture} \subcaption{Pattern of center-embedding} \label{fig:2:center-embedding-pattern} \end{minipage} \begin{minipage}[t]{.4\linewidth} \centering \begin{tikzpicture}[sibling distance=12pt] \tikzset{level distance=25pt} \Tree [.$S$ [.$A'$ $a$ ] [.$B$ [.$C$ [.$B'$ $b$ ] [.$C'$ $c$ ] ] [.$D$ $d$ ] ] ] \end{tikzpicture} \subcaption{Degree one parse} \label{fig:2:depth-1} \end{minipage} \begin{minipage}[t]{.4\linewidth} \centering \vspace{10pt} \begin{tikzpicture}[sibling distance=12pt] \tikzset{level distance=25pt} \Tree [.$S$ [.$A''$ $a'$ ] [.$B_1$ [.$C_1$ [.$A'$ $a$ ] [.$B_2$ [.$C_2$ [.$B'$ $b$ ] [.$C'$ $c$ ] ] [.$D_2$ $d$ ] ] ] [.$D_1$ $d'$ ] ] ] \end{tikzpicture} \subcaption{Degree two parse} \label{fig:2:depth-2} \end{minipage} \begin{minipage}[t]{.4\linewidth} \centering \vspace{58pt} \begin{tikzpicture}[sibling distance=12pt] \tikzset{level distance=25pt} \Tree [.$S$ [.$B$ [.$A'$ $a$ ] [.$C$ [.$B'$ $b$ ] [.$C'$ $c$ ] ] ] [.$D$ $d$ ] ] \end{tikzpicture} \subcaption{Not center-embedding} \label{fig:2:not-center-embedding} \end{minipage} \caption{A parse involves center-embedding if the pattern in (a) is found in it. (b) and (c) are the minimal patterns with degree one and two respectively. (d) is the symmetry of (b) but we regard this as not center-embedding. } \label{fig:2:define-center-embedding} \end{figure} Other notes about center-embedding are summarized as follows: \begin{itemize} \item It is possible to calculate the depth $m$ by just traversing every node in a parse once in a left-to-right, depth-first manner. The important observation for this algorithm is that the value $m'$ in Eq. \ref{eq:bg:embed-depth} is deterministic for each node, suggesting that we can fill the depth of each node top-down. We then pick up the maximum depth $m$ among the values found at terminal nodes. \item In the definitions above, the pattern of center-embedding always starts with a right directed edge. This means the similar, but opposite pattern found in Figure \ref{fig:2:not-center-embedding} is not center-embedding. Left-corner parsing that we will introduce next also distinguish only the pattern in Figure \ref{fig:2:depth-1}, not Figure \ref{fig:2:not-center-embedding}. 
\end{itemize} \subsection{Left-corner parsing strategy} \label{chap:2:left-corner-strategy} \begin{figure}[t] \centering \newcommand{\enum}[1]{{\small {\color{blue}{#1}}}} \newcommand{\mynode}[2]{\hspace{-8pt}{\enum{#1}} \hspace{-2pt}#2} \definecolor{g70}{gray}{0.7} \begin{minipage}[t]{.4\linewidth} \centering \begin{tikzpicture}[sibling distance=12pt] \tikzset{level distance=25pt} \Tree [.\mynode{10}{$S$} \edge node[left, pos=.3]{\enum{11}}; [.\mynode{6}{$E$} \edge node[left, pos=.3]{\enum{7}}; [.\mynode{2}{$F$} \edge node[left, pos=.3]{\enum{3}}; [.\mynode{1}{$A$} {$a$} ] \edge node[right, pos=.3]{\enum{5}}; [.\mynode{4}{$B$} {$b$} ] ] \edge node[right, pos=.3]{\enum{9}}; [.\mynode{8}{$C$} {$c$} ] ] \edge node[right, pos=.3]{\enum{13}}; [.\mynode{12}{$D$} {$d$} ] ] \end{tikzpicture} \subcaption{Left-branching}\label{subfig:left-branching} \end{minipage} \begin{minipage}[t]{.4\linewidth} \centering \begin{tikzpicture}[sibling distance=12pt] \tikzset{level distance=25pt} \Tree [.\mynode{2}{$S$} \edge node[left, pos=.3]{\enum{3}}; [.\mynode{1}{$A$} {$a$} ] \edge node[right, pos=.3]{\enum{7}}; [.\mynode{5}{$E$} \edge node[left, pos=.3]{\enum{6}}; [.\mynode{4}{$B$} {$b$} ] \edge node[right, pos=.3]{\enum{11}}; [.\mynode{9}{$F$} \edge node[left, pos=.3]{\enum{10}}; [.\mynode{8}{$C$} {$c$} ] \edge node[right, pos=.3]{\enum{13}}; [.\mynode{12}{$D$} {$d$} ] ] ] ] \end{tikzpicture} \subcaption{Right-branching}\label{subfig:right-branching} \end{minipage} \begin{minipage}[t]{.4\linewidth} \centering \vspace{10pt} \begin{tikzpicture}[sibling distance=12pt] \tikzset{level distance=25pt} \Tree [.\mynode{2}{$S$} \edge node[left, pos=.3]{\enum{3}}; [.\mynode{1}{$A$} {$a$} ] \edge node[right, pos=.3]{\enum{11}}; [.\mynode{9}{$E$} \edge node[left, pos=.3]{\enum{10}}; [.\mynode{5}{$F$} \edge node[left, pos=.3]{\enum{6}}; [.\mynode{4}{$B$} {$b$} ] \edge node[right, pos=.3]{\enum{8}}; [.\mynode{7}{$C$} {$c$} ] ] \edge node[right, pos=.3]{\enum{13}}; [.\mynode{12}{$D$} {$d$} ] ] ] \end{tikzpicture} \subcaption{Center-embedding}\label{subfig:embedding} \end{minipage} \begin{minipage}[t]{.4\linewidth} \centering \vspace{10pt} \begin{tikzpicture}[sibling distance=12pt] \tikzset{level distance=25pt} \Tree [.{$S$} \edge node[left, pos=.3]{}; [.$A$ {$a$} ] \edge[g70] node[right, pos=.3]{}; [.{\color{g70}{$E$}} \edge[g70] node[left, pos=.3]{}; [.{$F$} \edge node[left, pos=.3]{}; [.$B$ {$b$} ] \edge[g70] node[right, pos=.3]{}; [.\color{g70}{$C$} \edge[g70]; \color{g70}{$c$} ] ] \edge[g70] node[right, pos=.3]{}; [.\color{g70}{$D$} \edge[g70]; \color{g70}{$d$} ] ] ] \end{tikzpicture} \subcaption{Connected subtrees}\label{subfig:connected} \end{minipage} \caption{(a)--(c) Three kinds of branching structures with numbers on symbols and arcs showing the order of recognition with a left-corner strategy. (d) a partial parse of (c) using a left-corner strategy just after reading symbol $b$, with gray edges and symbols showing elements not yet recognized; The number of connected subtrees here is 2.}\label{fig:structures} \end{figure} A parsing strategy is a useful abstract notion for characterizing the properties of a parser and gaining intuition into parser behavior. Formally, it can be understood as a particular mapping from a CFG to the push-down automata that generate the same language \cite{nederhof-satta:2004:ACL1}. Here we follow \newcite{abney91memory} and consider a parsing strategy as a specification of the order that each node and arc on a parse is recognized during parsing. 
The corresponding push-down automata can then be understood as the device that provides the operational specification to realize such specific order of recognition, as we describe in Section \ref{sec:bg:left-corner-pda}. We first characterize left-corner parsing with a parsing strategy to discuss its notable behavior for center-embedded structures. The left-corner parsing strategy is defined by the following order of recognizing nodes and arcs on a parse tree: \begin{enumerate} \item A node is recognized when the subtree of its first (left most) child has been recognized. \label{enum:bg:strategy-node} \item An arc is recognized when two nodes it connects have been recognized. \label{enum:bg:strategy-arc} \end{enumerate} We discuss the property of the left-corner strategy based on its behavior on three kinds of distinguished tree structures called left-branching, right-branching, and center-embedding, each shown in Figure \ref{fig:structures}. The notable property of the left-corner strategy is that it generates disconnected tree fragments only on a center-embedded structure as shown in Figure \ref{fig:structures}. Specifically, in Figure \ref{subfig:embedding}, which is center-embedding, after reading $b$ it reaches 6 but $a$ and $b$ cannot be connected at this point. It does not generate such fragments for other structures; e.g., for the right-branching structure (Figure \ref{subfig:right-branching}), it reaches 7 after reading $b$ so $a$ and $b$ are connected by a subtree at this point. The number of tree fragments grows as the degree of center-embedding increases. As we describe later, the property of the left-corner strategy is appealing from a psycholinguistic viewpoint. Before discussing this relevance, which we summarize in Section \ref{sec:bg:psycho}, in the following we will see how this strategy can be actually realized as the parsing algorithm first. \subsection{Push-down automata} \label{sec:bg:left-corner-pda} We now discuss how the left-corner parsing strategy described above is implemented as a parsing algorithm, in particular as push-down automata (PDAs), the common device to define a parsing algorithm following a specific strategy. \REVISE{ As we mentioned, this algorithm is not exactly the same as the one previously proposed as the left-corner PDA \cite{conf/coling/Resnik92,conf/acl/Johnson98}, which we summarize in Section \ref{sec:bg:anothervariant}. PDAs assume a CFG, and specify how to build parses with that grammar given an input sentence. Note that for simplicity we only present algorithms specific for CFGs in CNF, although both presented algorithms can be extended for general CFGs. } \paragraph{Notations} We define a PDA as a tuple $(\Sigma, Q, q_{init}, q_{final}, \Delta)$ where $\Sigma$ is an alphabet of input symbols (words) in a CFG, $Q$ is a finite set of stack symbols (items), including the initial stack symbol $q_{init}$ and the final stack symbol $q_{final}$, and $\Delta$ is a finite set of transitions. A transition has the form $\sigma_{1} \xmapsto{a} \sigma_{2}$ where $\sigma_1,\sigma_2 \in Q^*$ and $a \in \Sigma \cup \{ \varepsilon \}$; $\varepsilon$ is an empty string. This can be applied if the stack symbols $\sigma_{1}$ are found to be the top few symbols of the stack and $a$ is the first symbol of the unread part of the input. After such a transition, $\sigma_1$ is replaced with $\sigma_2$ and the next input symbol $a$ is treated as having been read. If $a=\varepsilon$, the input does not proceed. 
Note that our PDA does not explicitly have a set of states; instead, we encode each state into stack symbols for simplicity, as in \newcite{DBLP:journals/corr/cs-CL-0404009}. Given a PDA and an input sentence of length $n$, a {\it configuration} of the PDA is a pair ($\sigma$, $i$) where $\sigma \in Q^*$ is a stack and $i$ is an input position $0 \leq i \leq n$, indicating how many symbols have been read from the input. The initial configuration is $(q_{init}, 0)$ and the PDA {\it recognizes} a sentence if it reaches $(q_{final}, n)$ after a finite number of transitions.
\begin{figure*}[t] \centering \begin{tabular}[t]{|lll|} \hline Name & Transition & Condition \\ \hline {\sc Shift} & $\varepsilon \xmapsto{a} A$ & $A \rightarrow a \in P$ \\ {\sc Scan} & $A/B \xmapsto{a} A$ & $B \rightarrow a \in P$ \\ {\sc Prediction} & $A \xmapsto{\varepsilon} B/C$ & $B \rightarrow A~C \in P$ \\ {\sc Composition} & $A/B~C \xmapsto{\varepsilon} A/D$ & $B \rightarrow C~D \in P$ \\ \hline \end{tabular} \caption{ \REVISE{ The set of transitions in a push-down automaton that parses a CFG $(N,\Sigma,P,S)$ with the left-corner strategy. $a \in \Sigma$; $A,B,C,D\in N$. The initial stack symbol $q_{init}$ is an empty stack symbol $\varepsilon$, while the final stack symbol $q_{final}$ is the start symbol $S$ of the CFG. } } \label{fig:bg:ourpda} \end{figure*}
\begin{figure}[t] \begin{tikzpicture}[level distance=0.8cm] \begin{scope}[xshift=2cm,yshift=0cm] \node at (0, 0) [anchor=west] {\sc Prediction:}; \begin{scope}[xshift=2cm, yshift=-0.7cm] \Tree [.$A$ \edge[roof]; {~~~~~~~~} ]; \draw (-1.2, -0.9) -- +(2.4, 0) node[right] {$B \rightarrow A ~ C \in P$}; \begin{scope}[xshift=0cm, yshift=-1.35cm] \Tree [.$B$ [.$A$ \edge[roof]; {~~~~~~~~} ] \underline{$C$} ]; \end{scope} \end{scope} \end{scope} \begin{scope}[xshift=8cm,yshift=0cm] \node at (0, 0) [anchor=west] {\sc Composition:}; \begin{scope}[xshift=3cm, yshift=-0.7cm] \begin{scope}[xshift=0cm, yshift=0cm] \Tree [.$A$ [.$X$ \edge[roof]; {~~~~~~~~} ] $\underline{B}$ ]; \end{scope} \begin{scope}[xshift=2cm, yshift=-0.8cm] \Tree [.$C$ \edge[roof]; {~~~~~~~~} ]; \end{scope} \draw (-1.2, -1.7) -- +(4.2, 0) node[right] {$B \rightarrow C ~ D \in P$}; \begin{scope}[xshift=1cm, yshift=-2.25cm] \Tree [.$A$ [.$X$ \edge[roof]; {~~~~~~~~} ] [.$B$ [.$C$ \edge[roof]; {~~~~~~~~} ] \underline{$D$} ] ]; \end{scope} \end{scope} \end{scope} \end{tikzpicture} \caption{Graphical representations of the inference rules {\sc Prediction} and {\sc Composition} defined in Figure \ref{fig:bg:ourpda}. An underlined symbol indicates that the symbol is predicted top-down.} \label{fig:2:pda-binary} \end{figure}
\paragraph{PDA}
\REVISE{ We develop a left-corner PDA to achieve the recognition order of nodes and arcs of the left-corner strategy that we formulated in Section \ref{chap:2:left-corner-strategy}. In our left-corner PDA, each stack symbol is either a nonterminal $A\in N$, or a pair of nonterminals $A/B$, where $A,B \in N$. $A/B$ represents an {\it incomplete} constituent, waiting for a subtree rooted at $B$ to be substituted. In this algorithm, $q_{init}$ is an empty stack symbol $\varepsilon$ while $q_{final}$ is the stack symbol $S$, the start symbol of a given CFG. Figure \ref{fig:bg:ourpda} lists the set of transitions in this PDA. {\sc Prediction} and {\sc Composition} are the key operations for achieving the left-corner strategy.
Specifically, the {\sc Prediction} operation first recognizes the parent node of a subtree (rooted at $A$) bottom-up, and then predicts its sibling node top-down. This is graphically explained in Figure \ref{fig:2:pda-binary}. We note that this operation corresponds exactly to policy \ref{enum:bg:strategy-node} of the strategy, about the order of recognizing nodes.\footnote{ \label{fn:bg:prediction} Though the strategy postpones the recognition of the sibling node, we can interpret the sibling predicted by {\sc Prediction} (i.e., $C$) as still not recognized. It is recognized by {\sc Scan} or {\sc Composition}, which introduces the same node bottom-up and matches the two nodes, i.e., the top-down predicted node and the bottom-up recognized node. } Policy \ref{enum:bg:strategy-arc}, about the order of connecting nodes, is also essential, and it is realized by the other key operation, {\sc Composition}. This operation involves two steps. First, it performs the same prediction operation as {\sc Prediction} for the top stack symbol, which is $C$ in Figure \ref{fig:bg:ourpda}; the result is $B/D$, i.e., a subtree rooted at $B$ predicting the sibling node $D$. It then connects this subtree and the second-top subtree, i.e., $A/B$. This is done by matching two views of the same node, i.e., the top-down predicted node $B$ in $A/B$ and the bottom-up recognized node $B$ in $B/D$. This matching operation is the key to achieving policy \ref{enum:bg:strategy-arc}, which demands that two recognized nodes be connected immediately. In {\sc Composition}, these two nodes are $A$, which is already recognized, and $B$, which has just been recognized bottom-up by the first prediction step.\footnote{ As mentioned in footnote \ref{fn:bg:prediction}, we regard the predicted node $B$ in $A/B$ as not yet being recognized. } In the following, we distinguish two kinds of transitions in Figure \ref{fig:bg:ourpda}: the {\sc Shift} and {\sc Scan} operations belong to {\it shift} transitions\footnote{We use small caps to refer to a specific action, e.g., {\sc Shift}, while ``shift'' refers to an action type.}, as they advance the input position of the configuration. This is not the case for {\sc Prediction} and {\sc Composition}, which we call {\it reduce} transitions. The left-corner strategy of \newcite{abney91memory} has the property that the maximum number of unconnected subtrees during enumeration equals the degree of center-embedding. The presented left-corner PDA is an implementation of this strategy and essentially has the same property; that is, its maximum stack depth during parsing grows with the degree of center-embedding of the resulting parse. An example of this is shown next, while the formal discussion is provided in Section \ref{sec:bg:prop-left-corner}. }
\begin{figure*}[t] \centering \begin{tabular}[t]{rlll} \hline Step & Action & Stack & Read symbol \\ \hline & & $\varepsilon$ & \\ 1 & {\sc Shift} & $A'$ & $a$ \\ 2 & {\sc Predict} & $S/B$ & \\ 3 & {\sc Shift} & $S/B~B'$ & $b$ \\ 4 & {\sc Predict} & ${\color{red}{S/B~C/C'}}$ & \\ 5 & {\sc Scan} & $S/B~C$ & $c$ \\ 6 & {\sc Composition} & $S/D$ & \\ 7 & {\sc Scan} & $S$ & $d$ \\ \hline \end{tabular} \caption{An example of the parsing process of the left-corner PDA recovering the parse in Figure \ref{fig:2:depth-1} given the input sentence $a~b~c~d$. Step 4 is the point at which the stack depth reaches two after a reduce transition.} \label{fig:bg:pda-example} \end{figure*}
\begin{figure}[t] \centering \begin{minipage}[t]{0.3\textwidth} \centering \begin{tabular}{ccc} $S$ &$\rightarrow$ & $A'~B$ \\ $B$ &$\rightarrow$ & $C~D$ \\ $C$ &$\rightarrow$ & $B'~C'$ \\ \end{tabular} \end{minipage} \begin{minipage}[t]{0.3\textwidth} \centering \begin{tabular}{ccc} $A'$ &$\rightarrow$ & $a$ \\ $B'$ &$\rightarrow$ & $b$ \\ $C'$ &$\rightarrow$ & $c$ \\ $D$ &$\rightarrow$ & $d$ \\ \end{tabular} \end{minipage} \caption{A CFG that is parsed with the process in Figure \ref{fig:bg:pda-example}.} \label{fig:bg:cfg-embed} \end{figure}
\paragraph{Example}
Figure \ref{fig:bg:pda-example} shows an example of the parsing process given the CFG in Figure \ref{fig:bg:cfg-embed} and the input sentence $a~b~c~d$. The parse tree contains one degree of center-embedding, as in Figure \ref{fig:2:depth-1}, and this shows up in Figure \ref{fig:bg:pda-example} as a stack of depth two, in particular before reading symbol $c$, which corresponds exactly to step 4 in Figure \ref{subfig:embedding}.
\subsection{Properties of the left-corner PDA}
\label{sec:bg:prop-left-corner}
\REVISE{ In this section, we formally establish the connection between the left-corner PDA and the center-embeddedness of a parse. The result is also essential when discussing the properties of our extended algorithm for dependency grammars presented in Chapter \ref{chap:transition}; see Section \ref{sec:memorycost} for details. The following lemmas describe the basic properties of the left-corner PDA, which will be the basis of the further analysis.
\begin{newlemma} \label{lemma:bg:shift-reduce} In a sequence of transitions that arrives at the final configuration $(q_{final}, n)$ of the left-corner PDA (Figure \ref{fig:bg:ourpda}), shift (i.e., {\sc Shift} or {\sc Scan}) and reduce (i.e., {\sc Prediction} or {\sc Composition}) transitions occur alternately. \end{newlemma}
\begin{proof} Reduce transitions are only performed when the top symbol of the stack is {\it complete}, i.e., of the form $A$. Then, since each reduce transition makes the top symbol of the stack incomplete, two consecutive reduce transitions are not applicable. Conversely, shift transitions make the top stack symbol complete. We cannot perform {\sc Scan} after a shift transition, since it requires an incomplete top stack symbol. If we perform {\sc Shift} after a shift transition, the top two stack symbols become complete, but we cannot combine these two symbols since the only way to combine two symbols on the stack is {\sc Composition}, which requires the second-top symbol to be incomplete. \end{proof}
\begin{newlemma} \label{lemma:bg:after-reduce} In the left-corner PDA, after each reduce transition, every item remaining on the stack is an incomplete stack symbol of the form $A/B$. \end{newlemma}
\begin{proof} From Lemma \ref{lemma:bg:shift-reduce}, a shift action is always followed by a reduce action, and vice versa. We call such a pair of shift and reduce operations a {\it push} operation. In each push operation, the shift adds at most one complete symbol to the stack, which is always replaced with an incomplete symbol by the following reduce transition. Thus, after a reduce transition no complete symbol remains on the stack. \end{proof}
We can see that the transitions in Figure \ref{fig:bg:pda-example} satisfy these conditions.
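To make these properties concrete, the following Python sketch implements the four transitions of Figure \ref{fig:bg:ourpda} for the grammar of Figure \ref{fig:bg:cfg-embed} and replays the transition sequence of Figure \ref{fig:bg:pda-example}, printing the stack after each step. The sketch only replays a known derivation, so each rule lookup happens to be deterministic; an actual parser would have to search over the nondeterministic choices.
\begin{verbatim}
# Grammar of Figure fig:bg:cfg-embed: binary rules (left, right) -> parent
# and lexical rules word -> preterminal.
BINARY = {("A'", 'B'): 'S', ('C', 'D'): 'B', ("B'", "C'"): 'C'}
LEXICAL = {'a': "A'", 'b': "B'", 'c': "C'", 'd': 'D'}

def shift(stack, word):                  # eps --a--> A        (A -> a)
    stack.append(LEXICAL[word])

def scan(stack, word):                   # A/B --a--> A        (B -> a)
    a, b = stack.pop().split('/')
    assert LEXICAL[word] == b            # the predicted sibling is matched
    stack.append(a)

def predict(stack):                      # A --eps--> B/C      (B -> A C)
    a = stack.pop()
    (_, c), b = next((lr, p) for lr, p in BINARY.items() if lr[0] == a)
    stack.append(b + '/' + c)

def compose(stack):                      # A/B C --eps--> A/D  (B -> C D)
    c = stack.pop()
    a, b = stack.pop().split('/')
    (_, d), _ = next((lr, p) for lr, p in BINARY.items()
                     if p == b and lr[0] == c)
    stack.append(a + '/' + d)

stack, words = [], iter('a b c d'.split())
sequence = [shift, predict, shift, predict, scan, compose, scan]
for step, op in enumerate(sequence, 1):   # order of Figure fig:bg:pda-example
    op(stack, next(words)) if op in (shift, scan) else op(stack)
    print(step, op.__name__, stack)
# The stack holds two symbols only at step 4 (after Prediction), matching the
# single degree of center-embedding of the parse in Figure fig:2:depth-1.
\end{verbatim}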
Intuitively, the existence of center-embedding is indicated by the incomplete symbols accumulated on the stack, each of which corresponds to one line of the derivation in Eq. \ref{eq:bg:embed-depth}. This is formally stated as the following theorem, which establishes the connection between the stack depth of the left-corner PDA and the degree of center-embedding.
\begin{mytheorem} \label{thoerem:bg:stack-depth} Given a CFG parse, its degree of center-embedding is equal to the maximum stack depth after a reduce transition, minus one, when recognizing that parse with the left-corner PDA. \end{mytheorem}
For example, for a CFG parse with one degree of center-embedding, the maximum stack depth after a reduce transition is two, as indicated at step 4 in Figure \ref{fig:bg:pda-example}. We defer the proof of this theorem to Appendix \ref{chap:app:analyze-pda}. Note that Theorem \ref{thoerem:bg:stack-depth} says nothing about the stack depth after a shift transition, which in general is not equal to the degree of center-embedding. We discuss this issue further when presenting the algorithm for dependency grammars; see Section \ref{sec:memorycost}. }
\subsection{Another variant}
\label{sec:bg:anothervariant}
\begin{figure*}[t] \centering \begin{tabular}[t]{|lll|} \hline Name & Transition & Condition \\ \hline {\sc Shift} & $A \xmapsto{a} A{-}B$ & $B \rightarrow a \in P$ \\ {\sc Scan} & $A \xmapsto{a} \varepsilon$ & $A \rightarrow a \in P$ \\ {\sc Prediction} & $A{-}B \xmapsto{\varepsilon} A{-}C~D$ & $C \rightarrow B~D \in P$ \\ {\sc Composition} & $A{-}B \xmapsto{\varepsilon} C$ & $A \rightarrow B~C \in P$ \\ \hline \end{tabular} \caption{The set of transitions in another variant of the left-corner PDA, which appeared in Resnik (1992). $a \in \Sigma$; $A,B,C,D\in N$. Differently from the PDA in Figure \ref{fig:bg:ourpda}, the initial stack symbol $q_{init}$ is $S$ while $q_{final}$ is an empty stack symbol $\varepsilon$. } \label{fig:bg:pda1} \end{figure*}
\REVISE{ We now present another variant of the left-corner PDA that appeared in the literature \cite{conf/coling/Resnik92,conf/acl/Johnson98}. We will see that this algorithm has a property with respect to the stack depth and the degree of center-embedding that differs from the one in Theorem \ref{thoerem:bg:stack-depth}. In particular, the difference lies in which structures are recognized as center-embedding by the algorithm, which has not been discussed precisely so far; \newcite{journals/coling/SchulerAMS10} give a comparison of the two algorithms, but from a different perspective. Figure \ref{fig:bg:pda1} lists the possible transitions in this variant. The crucial difference between the two PDAs is in the form of the initial and final stack symbols. That is, in this PDA the initial stack symbol $q_{init}$ is $S$, while $q_{final}$ is an empty symbol $\varepsilon$, the opposite of the PDA that we discussed so far (Section \ref{sec:bg:left-corner-pda}).
\begin{figure}[t] \centering \begin{minipage}[t]{0.3\textwidth} \centering \begin{tikzpicture}[level distance=0.8cm] \Tree [.$A$ \edge[densely dashed]; [.$B$ \edge[roof]; {~~~~~~~~~} ] \edge[densely dashed]; {\color{white}{A}} ] \end{tikzpicture} \subcaption{}\label{fig:bg:a-b:a} \end{minipage} \begin{minipage}[t]{0.3\textwidth} \centering \begin{tikzpicture}[level distance=0.8cm] \Tree [.$X$ \edge[densely dashed]; {\color{white}{A}} \edge[densely dashed]; [.$A$ \edge[densely dashed]; [.$B$ \edge[roof]; {~~~~~~~~~} ] \edge[densely dashed]; {\color{white}{A}} ] ] \end{tikzpicture} \subcaption{}\label{fig:bg:a-b:b} \end{minipage} \caption{Stack symbols of the left-corner PDA of Figure \ref{fig:bg:pda1}. Both trees correspond to symbol $A{-}B$ where $A$ is the current goal while $B$ is the recognized nonterminal. Note that $A$ may be a right descendant of another nonterminal (e.g., $X$), which dominates a larger subtree.} \label{fig:bg:a-b} \end{figure}
The form of the stack symbols is also different in this variant. Instead of $A/B$, which represents a subtree waiting for $B$, it has $A{-}B$, which means that $B$ is the {\it left-corner} in a subtree rooted at $A$, and has already been recognized. In other words, $A$ is the current goal, which the PDA tries to build, while $B$ represents a finished subtree. This is schematically shown in Figure \ref{fig:bg:a-b:a}. Parsing starts with $q_{init}=S$, which immediately changes to $S{-}A$ where $A\rightarrow a$, and $a \in \Sigma$ is the initial token of the sentence. {\sc Prediction} is similar to the one in our variant: it expands the currently recognized structure, and also predicts the sibling symbol (i.e., $D$), which becomes a new goal symbol. {\sc Composition} looks very different, but has a similar sense. In the symbol $A{-}B$, $A$ is not limited to $S$; $A$ may instead be some right descendant of another nonterminal, as depicted in Figure \ref{fig:bg:a-b:b}. The sense of {\sc Composition} in Figure \ref{fig:bg:pda1} is that we finish recognition of the left subtree of $A$ (i.e., the tree rooted at $B$) and change the goal symbol to $C$, the sibling of $B$. If we consider this transition in the form of Figure \ref{fig:bg:a-b:b}, it looks similar to the one in Figure \ref{fig:bg:ourpda}; that is, the corresponding transition in our variant would be $X/A~B \xmapsto{\varepsilon} C$. Instead, in the current variant, the root nonterminal $X$ of a subtree is not kept on the stack, and the goal symbol moves from top to bottom. This is the reason why the final stack symbol $q_{final}$ is empty: the final goal for the PDA is always the preterminal of the last token of the sentence, which is then removed by {\sc Scan}.
\paragraph{Example}
\begin{figure}[t] \centering \begin{tabular}[t]{rlll} \hline Step & Action & Stack & Read symbol \\ \hline & & $S$ & \\ 1 & {\sc Shift} & $S{-}A'$ & $a$ \\ 2 & {\sc Composition} & $B$ & \\ 3 & {\sc Shift} & $B{-}B'$ & $b$ \\ 4 & {\sc Predict} & $B{-}C~C'$ & \\ 5 & {\sc Scan} & $B{-}C$ & $c$ \\ 6 & {\sc Composition} & $D$ & \\ 7 & {\sc Scan} & $\varepsilon$ & $d$ \\ \hline \end{tabular} \caption{The parsing process of the PDA in Figure \ref{fig:bg:pda1} to recover the parse in Figure \ref{fig:2:depth-1}, given the CFG in Figure \ref{fig:bg:cfg-embed} and the input sentence $a~b~c~d$.
The stack depth keeps one in every step after a shift transition.} \label{fig:bg:pda-example:2} \end{figure} This PDA has slightly different characteristics in terms of stack depth and the degree of center-embedding, which we point out here with some examples. In particular, it regards the parse in Figure \ref{fig:2:not-center-embedding} as singly (degree one) center-embedded, while the one in Figure \ref{fig:2:depth-1} as not center-embedded. That is, it has just the opposite properties to the PDA that we discussed in Section \ref{sec:bg:left-corner-pda}. We first see how the situation changes for the CFG that we gave an example in Figure \ref{fig:bg:pda-example}, which analyzed the parse in Figure \ref{fig:2:depth-1}. See Figure \ref{fig:bg:pda-example:2}. Contrary to our variant, this PDA has the property that its stack depth after some {\it shift} transitions increases as the degree of center-embedding increases.\footnote{ This again contrasts with our variant (Theorem \ref{thoerem:bg:stack-depth}). This is because in the PDA in \ref{fig:bg:pda1}, new stack element is introduced with a reduce transition (i.e., {\sc Prediction}), and center-embedding is detected with the followed {\sc Shift}, which does not decrease the stack depth. In our variant, on the other hand, new stack element is introduced by {\sc Shift}. Center-embedding is detected if this new element remains on the stack after a reduce transition (by {\sc Prediction}). } In this case, these are steps 3, 5, and 7, all of which has a stack with only one element. The main reason why it does not increase the stack depth is in the first {\sc Composition} operation, which changes the stack symbol to $B$. After that, since the outside structure of $B$ is already processed, the remaining tree looks just like left-branching, which the left-corner PDA including this variant processes without increasing the stack depth. \begin{figure}[t] \centering \begin{tabular}[t]{rlll} \hline Step & Action & Stack & Read symbol \\ \hline & & $S$ & \\ 1 & {\sc Shift} & $S{-}A'$ & $a$ \\ 2 & {\sc Prediction} & $S{-}B~C$ & \\ 3 & {\sc Shift} & {\color{red}{$S{-}B~C{-}B'$}} & $b$ \\ 4 & {\sc Composition} & $S{-}B~C'$ & \\ 5 & {\sc Scan} & $S{-}B$ & $c$ \\ 6 & {\sc Composition} & $D$ & \\ 7 & {\sc Scan} & $\varepsilon$ & $d$ \\ \hline \end{tabular} \caption{Parsing process of the PDA in Figure \ref{fig:bg:pda1} to recover the parse in Figure \ref{fig:2:not-center-embedding}. The stack depth after a shift transition increases at step 3.} \label{fig:bg:pda-example:3} \end{figure} On the other hand, for the parse in Figure \ref{fig:2:not-center-embedding}, this PDA increases the stack depth as simulated in Figure \ref{fig:bg:pda-example:3}. At step 2, the PDA introduces new goal symbol $C$, which remains on the stack after the followed {\sc Shift}. This is the pattern of transitions with which this PDA increase its stack depth, and it occurs when processing the zig-zag patterns starting from left edges, not right edges as in our variant. \paragraph{Discussion} We have pointed out that there are two variants of (arc-eager) left-corner PDAs, which suffer from slightly different conditions under which their stack depth increases. From an empirical point of view, the only common property is its asymptotic behavior. That is, both linearly increase the stack depth as the degree of center-embedding increases. The difference is rather subtle, i.e., the condition of beginning center-embedding (left edges or right edges). 
Historically, the variant introduced in this section (Figure \ref{fig:bg:pda1}) has been thought of as the realization of the left-corner PDA \cite{conf/coling/Resnik92,conf/acl/Johnson98}. However, as we have seen, if we base the development of the algorithm on the parsing strategy (Section \ref{chap:2:left-corner-strategy}), our variant can be seen as the correct implementation of it, as only our variant preserves the transparent relationship between the stack depth and the disconnected trees generated by the strategy during enumeration. \newcite{conf/coling/Resnik92} did not design the algorithm based on the parsing strategy, but rather from an existing arc-standard left-corner PDA \cite{4569645,Cognitive:MentalModels}, which also accepts with an empty stack symbol as the final configuration. His main argument is that the arc-eager left-corner PDA can be obtained by introducing a {\sc Composition} operation, which does not exist in the arc-standard PDA. Interestingly, there is another variant of the arc-standard PDA \cite{Nederhof:1993:GLP:976744.976780}, which instead accepts the $S$ symbol\footnote{To be precise, the stack item of \newcite{Nederhof:1993:GLP:976744.976780} is a dotted rule like [S$\rightarrow$ NP$\bullet$VP] and parsing finishes with an item of the form [S$\rightarrow\alpha\bullet$] for some $\alpha$.}. If we extend this algorithm by introducing {\sc Composition}, we get an algorithm very similar to the one we presented in Section \ref{sec:bg:left-corner-pda}, with the same stack depth property. Thus, we can conclude that Resnik's argument is correct in that a left-corner PDA can be made {\it arc-eager} by adding composition operations, but depending on which arc-standard PDA we employ as the basis, the resulting arc-eager PDA may have different characteristics in terms of stack depth. In particular, the initial and final stack configurations are important. If the base arc-standard PDA accepts with the empty stack symbol as in \newcite{4569645}, the corresponding arc-eager PDA regards the pattern beginning with left edges as center-embedding. The direction is reversed if we start from the PDA that accepts with a non-empty stack symbol as in \newcite{Nederhof:1993:GLP:976744.976780}. Our discussion in the following chapters is based on the variant we presented in Section \ref{sec:bg:left-corner-pda}, which is related to \newcite{Nederhof:1993:GLP:976744.976780}. However, we do not claim that this algorithm is superior to the variant introduced in this section. Both are correct arc-eager left-corner PDAs, and we argue that the choice is rather arbitrary. This arbitrariness is further discussed next, along with the limitations of both approaches as psycholinguistic models. Finally, our variant of the PDA in Section \ref{sec:bg:left-corner-pda} has been presented previously in \newcite{journals/coling/SchulerAMS10} and \newcite{vanschijndel-schuler:2013:NAACL-HLT}, though they do not mention the relevance of the algorithm to the parsing strategy. Their main concern is the psychological plausibility of the parsing model, and they argue that this variant is more plausible due to its inherent bottom-up nature (not starting from the predicted $S$ symbol). They do not point out the difference between the two algorithms in terms of which center-embedded structures are recognized, as we discussed here. }
\subsection{Psycholinguistic motivation and limitation}
\label{sec:bg:psycho}
We finally summarize left-corner parsing in relation to relevant theories in the psycholinguistics literature.
One well-known observation about human language processing is that sentences with multiply center-embedded constructions are quite difficult to understand, while left- and right-branching constructions seem to cause no particular difficulty \cite{Miller1963-MILFMO,gibson1998dlt}. \eenumsentence{\item[a.]\# The reporter [who the senator [who Mary met] attacked] ignored the president.\label{sent:2:embedding} \item[b.]Mary met the senator [who attacked the reporter [who ignored the president]]. } Sentence (\ref{sent:2:embedding}a) is an example of a center-embedded sentence, while (\ref{sent:2:embedding}b) is a right-branching sentence. This observation matches the behavior of left-corner parsers, which increase their stack depth only when processing center-embedded sentences, as we discussed above. It has been well established that center-embedded structures are a generally difficult construction \cite{gibson1998dlt,Chen2005144}, and this connection between left-corner parsers and human behavior has motivated researchers to investigate left-corner parsers as an approximation of the human parser \cite{Roark:2001:RPP:933637,journals/coling/SchulerAMS10,vanschijndel-schuler:2013:NAACL-HLT}. The most relevant theory in psycholinguistics that accounts for the difficulty of center-embedding is the one based on the {\it storage cost} \cite{Chen2005144,COGS:COGS1067,nakatani2008}, i.e., the cost associated with keeping incomplete material in memory.\footnote{ Another explanation for this difficulty is given by retrieval-based accounts such as the {\it integration cost} \cite{gibson1998dlt,Gibson2000The-dependency-} in the dependency locality theory. We do not discuss this theory since the connection between the integration cost and the stack depth of left-corner parsers is less obvious, and it has been shown that the integration cost itself is not sufficient to account for the difficulty of center-embedding \cite{Chen2005144,COGS:COGS1067}. } For example, in reading-time experiments on English and Japanese, respectively, \newcite{Chen2005144} and \newcite{COGS:COGS1067} find that people read more deeply center-embedded sentences more slowly than less center-embedded sentences, in particular when entering new embedded clauses. This observation suggests that there exists some sort of {\it storage} component in human parsers, which is consumed when processing more nested structures, as in the stack of left-corner parsers. However, as we claimed in Section \ref{sec:intro:notabout}, our main goal in this thesis is not to deepen the understanding of the mechanism of human sentence processing. One reason for this is that there are some discrepancies between the results in the articles cited above and the behavior of our left-corner parser, which we summarize below. Another, perhaps more important, limitation of left-corner parsers as an approximation of the human parser is that they cannot account for sentence difficulties not related to center-embedding, such as garden path phenomena: \enumsentence{\# The horse raced past the barn fell,} \label{sent:2:gardenpath} in which people experience difficulty at the last verb {\it fell}. There also exist cases in which nested structures {\it do} facilitate comprehension, known as {\it anti-locality} effects \cite{konieczny2000,10.2307/4490268}. These can be accounted for by another, non-memory-based theory, the expectation-based account \cite{Hale2001,Levy20081126}, which is orthogonal in many respects to the memory-based account \cite{jaeger2011language-gsc}.
We do not delve into those problems further; in the following we focus on the former issue mentioned above, which is related to our definition of center-embedding as well as to the choice of the left-corner PDA variant (Section \ref{sec:bg:anothervariant}).
\begin{figure}[t] \centering {\small \begin{tikzpicture}[sibling distance=12pt] \tikzset{level distance=25pt} \Tree [.S [.NP [.NP \edge[roof]; {The reporter} ] [.$\bar{\textrm{S}}$ [.WP who ] [.S [.NP [.NP [.DT the ] [.NP senator ] ] [.$\bar{\textrm{S}}$ [.WP who ] [.S [.NP Mary ] [.VP met ] ] ] ] [.VP attacked ] ] ] ] [.VP \edge[roof]; {ignored the president} ] ] \end{tikzpicture} } \caption{The parse of the sentence (\ref{sent:2:embedding}a).} \label{fig:bg:reporter-parse} \end{figure}
\paragraph{Discrepancies in definitions of center-embedding}
We argue here that the stack depth of our left-corner parser sometimes {\it underestimates} the storage cost of center-embedded sentences for which linguists predict greater comprehension difficulty. More specifically, though \newcite{Chen2005144} claims that sentence (\ref{sent:2:embedding}a) is {\it doubly} center-embedded, our left-corner parser recognizes it as {\it singly} center-embedded, as its parse contains the zig-zag pattern of Figure \ref{fig:2:depth-1} but not that of Figure \ref{fig:2:depth-2}. Figure \ref{fig:bg:reporter-parse} shows the parse. This discrepancy is due to our choice of the definition of center-embedding discussed in Section \ref{sec:bg:embedding}. In our definition (Definition \ref{def:bg:embedding}), center-embedding always starts with a right edge. In a case like Figure \ref{fig:bg:reporter-parse}, the two main constituents ``The reporter ... attacked'' and ``ignored the president'' are connected with a left edge, and this is the reason why our definition of center-embedding, as well as our left-corner parser, predicts that this parse is singly nested. Here we note that although our left-corner parser underestimates the center-embeddedness in some cases, it correctly estimates the relative difficulty of sentence (\ref{sent:2:embedding}a) compared to the less nested sentences below. \eenumsentence{ \item[a.]The senator [who Mary met] ignored the president.\label{sent:bg:sigle} \item[b.]The reporter ignored the president.} The problem is that both sentences above are recognized as not center-embedded, although some literature in psycholinguistics (e.g., \newcite{Chen2005144}) assumes the former is singly center-embedded.
\REVISE{ \begin{figure}[t] \centering {\small \begin{tikzpicture}[sibling distance=12pt] \tikzset{level distance=25pt} \Tree [.S [.NP {\Ja{書記が}} ] [.VP [.$\bar{\textrm{S}}$ [.S [.NP {\Ja{代議士が}} ] [.VP [.$\bar{\textrm{S}}$ [.S [.NP {\Ja{首相が}} ] [.VP {\Ja{うたた寝した}} ] ] [.ADP {\Ja{と}} ] ] [.VP {\Ja{抗議した}} ] ] ] [.ADP {\Ja{と}} ] ] [.VP {\Ja{報告した}} ] ] ] \end{tikzpicture} } \caption{The parse of the sentence (\ref{sent:bg:japanese-nested}).} \label{fig:bg:shoki-parse} \end{figure} }
However, this mismatch does not mean that our left-corner parser always underestimates the center-embeddedness predicted by linguists. \REVISE{ We give further examples below to make the point explicit. \begin{itemize} \item As the example below \cite{nakatani2008} indicates, in the parse of a Japanese sentence the degree of center-embedding often matches the prediction of linguists.
\eenumsentence{\item[\#]\Ja{書記が [代議士が [首相が うたた寝した と] 抗議した と] 報告した}\\ secretary-nom [congressman-nom [prime minister-nom dozed comp] protested comp] reported\\ The secretary reported that the congressman protested that the prime minister had dozed.\label{sent:bg:japanese-nested} } The parse is shown in Figure \ref{fig:bg:shoki-parse}, which contains the pattern in Figure \ref{fig:2:depth-2}. This is because two constituents ``{\Ja{書記が}}'' and ``{\Ja{代議士が ... 報告した}}'' are connected with a right edge in this case. \item This observation may suggest that our left-corner parser always underestimates the degree of center-embedding for specific languages, e.g., English. However, this is not generally true since we can make an English example in which two predictions are consistent, as in Japanese sentence, e.g., by making the sentence (\ref{sent:2:embedding}) as a large complement as follows: \enumsentence{\# He said [the reporter [who the senator [who Mary met] attacked] ignored the president].\label{sent:2:embedding:match}} In the example, ``He said'' does not cause additional embedding, as the constituent ``the reporter ... president'' is not embedded internally, and thus linguists predict that this is still doubly center-embedded. On the other hand, the parse now involves the pattern in Figure \ref{fig:2:depth-2}, suggesting that the predictions are consistent in this case. \end{itemize} The point is that since our left-corner parser (PDA) only regards the pattern starting from right edges as center-embedding, it underestimates the prediction by linguists when the direction of outermost edge in the parse is left, as in Figure \ref{fig:bg:reporter-parse}. Though there might be some language specific tendency (e.g., English sentences might be often underestimated) we do not make such claims here, since the degree of center-embedding in our definition is determined purely in terms of the tree structure, as indicated by sentence (\ref{sent:2:embedding:match}). We perform the relevant empirical analysis on treebanks in Chapter \ref{chap:transition}. From the psycholinguistics viewpoint, this discrepancy might make our empirical studies in the following chapters less attractive. However, as we noted in Section \ref{sec:intro:notabout}, our central motivation is rather to capture the universal constraint that every language may suffer from, though is computationally tractable, which we argue does not necessarily reflect correctly the difficulties reported by psycholinguistic experiments. As might be predicted, the results so far become opposite if we employ another variant of PDA that we formulated in Section \ref{sec:bg:anothervariant}, in which the stack depth increases on the pattern starting from left edges, as in Figure \ref{fig:2:not-center-embedding}. This variant of PDA estimates that the degree of center-embedding on the parse in Figure \ref{fig:bg:reporter-parse} will be two, while that of Figure \ref{fig:bg:shoki-parse} will be one. This highlights that the reason of the observed discrepancies is mainly due to the computational tractability: We can develop a left-corner parser so that its stack depth increases on center-embedded structures indicated by some zig-zag patterns, which are always starting from left (the variant of \newcite{conf/coling/Resnik92}), or right (our variant). However, from an algorithm perspective, it is hard to allow both left and right directions, and this is the assumption of psycholinguists. 
Again, we emphasize that our choice of the left-corner PDA variant is rather arbitrary. This choice may affect the empirical results in the following chapters, where we examine the relationships between the parses in treebanks and the incurred stack depth. In the current study, we do not empirically compare the behaviors of the two PDAs, which we leave for future investigation. }
\section{Learning Dependency Grammars}
\label{sec:2:learning}
In this section we summarize several basic ideas about the learning and parsing of dependency grammars. The dependency model with valence \cite{klein-manning:2004:ACL} is the most popular model for unsupervised dependency parsing, and it will be the basis of our experiments in Chapter \ref{chap:induction}. We formalize this model in Section \ref{sec:2:dmv} as a special instance of split bilexical grammars (Section \ref{sec:2:sbg}). Before that, this section first reviews some preliminaries on the learning machinery, namely probabilistic context-free grammars (Section \ref{sec:2:pcfg}), chart parsing (Section \ref{sec:2:cky}), and parameter estimation with the EM algorithm (Section \ref{sec:2:em}).
\subsection{Probabilistic context-free grammars}
\label{sec:2:pcfg}
We start the discussion with probabilistic context-free grammars (PCFGs) because they allow the use of the generic parsing and parameter estimation methods that we describe later. However, we note that a grammar need not be a PCFG for these algorithms to be applicable to it. We will see that, in fact, the split bilexical grammars introduced later cannot always be formulated as a proper PCFG. Nevertheless, we begin this section with the discussion of PCFGs mainly because: \begin{itemize} \item the ideas behind chart parsing algorithms (Sections \ref{sec:2:cky} and \ref{sec:2:em}) can be best understood with a simple PCFG; and \item we can obtain a natural generalization of these algorithms to handle a special class of grammars (not PCFGs) including split bilexical grammars. We will describe later the precise condition under which these algorithms are applicable to a grammar. \end{itemize} Formally, a PCFG is a tuple $G=(N,\Sigma,P,S,\theta)$ where $(N,\Sigma,P,S)$ is a CFG (Section \ref{sec:2:cfg}) and $\theta$ is a vector of non-negative real values indexed by the production rules $P$ such that \begin{equation} \sum_{A\rightarrow \beta \in P_A} \theta_{A \rightarrow \beta} = 1, \label{eqn:2:normalize} \end{equation} where $P_A \subset P$ is the collection of rules of the form $A \rightarrow \beta$. We can interpret $\theta_{A \rightarrow \beta}$ as the conditional probability of choosing a rule $A \rightarrow \beta$ given that the nonterminal being expanded is $A$. With this model, we can calculate the score (probability) of a parse as the product of the probabilities of the rules appearing in it. Let $z$ be a parse that contains rules $r_1,r_2,\cdots$; then the probability of $z$ under the given PCFG is \begin{align} P(z|\theta) &= \prod_{r_i \in z} \theta_{r_i} \\ &= \prod_{r \in P} \theta_r^{f(r,z)}, \label{eqn:2:pz} \end{align} where $f(r,z)$ is the number of occurrences of a rule $r$ in $z$. We can also interpret a PCFG as a directed graphical model that defines a distribution over CFG parses. The generative process is described as follows: starting at the start symbol $S$, it chooses to apply a rule $S\rightarrow \beta$ with probability $\theta_{S\rightarrow \beta}$; $\beta$ defines the symbols of the children, which are then expanded recursively to generate their subtrees.
This process stops when all the leaves of the tree are terminals. Note that this process also generates a sentence $x$, which is obtained by concatenating every terminal symbol in $z$, meaning that \begin{equation} P(z|\theta) = P(x,z|\theta). \label{eqn:2:pz-equality} \end{equation} Some questions arise when applying this model to real parsing applications like grammar induction: \begin{description} \item[Parsing] Given a PCFG $G$, how do we find the best (highest-probability) parse among all possible parses? \item[Learning] Where do the probabilities, or rule weights $\theta$, come from? \end{description} A nice property of PCFGs is that there are very general solutions to these questions. We discuss the first question in Section \ref{sec:2:cky}, and then deal with the second in Section \ref{sec:2:em}.
\subsection{CKY Algorithm}
\label{sec:2:cky}
Let us first define some notation. We assume the input is a sentence of length $n$, $x = x_1 x_2 \cdots x_n$, where $x_i \in \Sigma$. For $i \leq j$, $x_{i,j} = x_i x_{i+1} \cdots x_{j}$ denotes an input substring. We assume the grammar is in CNF (Section \ref{sec:2:cfg}), which makes the discussion much simpler. Given an input sentence $x$ and a PCFG with parameters $\theta$, the goal of parsing is to solve the following argmax problem: \begin{equation} z' = \arg\max_{z\in \mathcal{Z}(x)} P(z|\theta), \label{eq:2:cky-argmax} \end{equation} where $\mathcal{Z}(x)$ is the set of all possible parses of $x$. We now describe a general algorithm, the CKY algorithm, that solves this problem in polynomial time and also plays an essential role in the parameter estimation of $\theta$ discussed in Section \ref{sec:2:em}. For the moment we simplify the problem to calculating the {\it probability} of the best parse instead of the best parse itself (Eq. \ref{eq:2:cky-argmax}). We describe later how the argmax problem can be solved with a small modification to this algorithm. The CKY algorithm is a kind of chart parsing. For an input string, there are exponentially many parses, which prohibits enumerating them one by one. To enable search in this large space, a chart parser divides the problem into subproblems, each of which analyzes a small span $x_{i,j}$; these analyses are then combined into analyses of larger spans. Let $C$ be a chart, which gives a mapping from the signature of a subspan, or an item $(i,j,N)$, to a real value. That is, each cell of $C$ keeps the probability of the best (highest-scoring) analysis of a span $x_{i,j}$ with symbol $N$ as its root. Algorithm \ref{alg:cky} describes CKY parsing, in which each chart cell is filled recursively. The procedure {\sc Fill}($i,j,N$) returns a value with memoization. This procedure is slightly different from the ones found in some textbooks \cite{manning99foundations}, which instead fill chart cells in a specific order. We do not take this approach since our memoization-based technique need not care about the correct order of filling chart cells, which is somewhat involved for more complex grammars such as the one we present in Chapter \ref{chap:induction}.
\begin{algorithm}[t] \caption{CKY Parsing}\label{alg:cky} \hspace{18pt} \begin{minipage}{0.95\textwidth} \begin{algorithmic}[1] \Procedure{Parse}{$x$} \State $C[1,n,S] \leftarrow {\textrm{\sc Fill}}(1,n,S)$ \Comment{Recursively fills chart cells.} \State \Return $C[1,n,S]$ \EndProcedure \Procedure{Fill}{$i,j,N$} \If{$(i,j,N) \not\in C$} \If{$i = j$} \label{alg:cky:fill-start} \State $C[i,i,N] \leftarrow \theta_{N \rightarrow x_{i}}$ \Comment{Terminal expansion.} \label{alg:cky:terminal} \Else \State $C[i,j,N] \leftarrow \max_{N \rightarrow A~B \in R; k \in [i,j]} \theta_{N \rightarrow A~B} \times {\textrm{\sc Fill}}(i,k,A) \times {\textrm{\sc Fill}}(k,j,B)$ \label{alg:cky:recursive} \EndIf \label{alg:cky:fill-end} \EndIf \State \Return $C[i,j,N]$ \EndProcedure \end{algorithmic} \end{minipage} \end{algorithm} The crucial point in this algorithm is the recursive equation in line \ref{alg:cky:recursive}. The assumption here is that since the grammar is context-free, the parses of subspans (e.g., spans with signatures $(i,k,A)$ and $(k,j,B)$) in the best parse of $x_{i,j}$ should also be the best parses at subspan levels. The goal of the algorithm is to fill the chart cell for an item $(1,n,S)$ where $S$ is the start symbol, which corresponds the analysis of the full span (whole sentence). To get the best parse, what we need to modify in the algorithm \ref{alg:cky} is to keep backpointers to the cells of the best children when filling each chart cell during lines \ref{alg:cky:fill-start}--\ref{alg:cky:fill-end}. This is commonly done by preparing another chart, in which each cell keeps not a numerical value but backpointers into other cells in that chart. The resulting algorithm is called the {\it Viterbi} algorithm, the details of which are found in the standard textbooks \cite{manning99foundations}. \begin{figure}[t] \centering \begin{minipage}[t]{.5\linewidth} \begin{tikzpicture}[thick, level distance=0.8cm] \node {\sc Terminal:}; \draw (1, -0.5) -- +(2, 0) node[right] {$N\rightarrow x_i \in R$}; \begin{scope}[xshift=2cm, yshift=-0.9cm] \Tree [.$N$ \edge[roof,thick]; {~~~~~~~~} ]; \node at (0, -0.85cm) {$i~~~~~~~~~~i$}; \end{scope} \end{tikzpicture} \end{minipage} \begin{minipage}[t]{.5\linewidth} \begin{tikzpicture}[thick, level distance=0.8cm] \node {\sc Binary:}; \begin{scope}[xshift=2cm, yshift=0cm] \Tree [.$A$ \edge[roof,thick]; {~~~~~~~~} ]; \end{scope} \begin{scope}[xshift=4cm, yshift=0cm] \Tree [.$B$ \edge[roof,thick]; {~~~~~~~~} ]; \end{scope} \node at (3, -0.85) {$i~~~~~~~~~~k~~~~~~k~~~~~~~~~~j$}; \draw (1.2, -1.1) -- +(3.5, 0) node[right] {$N\rightarrow A~B \in R$}; \begin{scope}[xshift=3cm, yshift=-1.45cm] \Tree [.$N$ \edge[roof,thick]; {~~~~~~~~} ]; \node at (0, -0.85cm) {$i~~~~~~~~~~j$}; \end{scope} \end{tikzpicture} \end{minipage} \caption{Inference rules of the CKY algorithm. {\sc Terminal} rules correspond to the terminal expansion in line \ref{alg:cky:terminal} of the Algorithm \ref{alg:cky}; {\sc Binary} rules correspond to the one in line \ref{alg:cky:recursive}. Each rule specifies how an analysis of a larger span (below ---) is derived from the analyses of smaller spans (above ---) provided that the input and grammar satisfy the side conditions in the right of ---.} \label{fig:2:cky} \end{figure} The behavior of the CKY parsing is characterized by the procedure of filling chart cells in lines \ref{alg:cky:fill-start}--\ref{alg:cky:fill-end} of Algorithm \ref{alg:cky}. We often write this procedure visually as in Figure \ref{fig:2:cky}. 
With this specification we can analyze the time complexity of the CKY algorithm, which is $O(n^3|P|)$, where $|P|$ is the number of rules in the grammar, because each rule instantiation in Figure \ref{fig:2:cky} is performed only once\footnote{Once the chart cell of some item is calculated, we can access the value of that cell in $O(1)$.} and there are at most $O(n^3|P|)$ possible instantiations of {\sc Binary} rules.\footnote{{\sc Terminal} rules are instantiated in at most $O(n|N|)$ ways, which is smaller than $O(n^3|P|)$.}
\subsection{Learning parameters with the EM algorithm}
\label{sec:2:em}
Next we briefly describe how the rule weights $\theta$ can be estimated given only a collection of input sentences. This is the setting of {\it unsupervised} learning. In supervised learning, we can often learn the parameters more easily by counting rule occurrences in the training treebank \cite{P97-1003,J98-4004,klein-manning:2003:ACL}.
\paragraph{EM algorithm} Assume we have a set of training examples $\mathbf x = x^{(1)}, x^{(2)},\cdots,x^{(m)}$. Each example is a sentence $x^{(i)} = x^{(i)}_1 x^{(i)}_2 \cdots x^{(i)}_{n_i}$ where $n_i$ is the length of the $i$-th sentence. Our goal is to estimate from $\mathbf x$ parameters $\theta$ that are good under some criterion. The EM algorithm is closely related to maximum likelihood estimation in that it tries to estimate the $\theta$ that maximizes the following log-likelihood of the observed data $\mathbf x$: \begin{equation} L(\theta,\mathbf x) = \sum_{1 \leq i \leq m} \log p(x^{(i)} | \theta) = \sum_{1 \leq i \leq m} \log \sum_{z \in \mathcal{Z}(x^{(i)})} p(x^{(i)}, z | \theta), \label{eqn:2:emobj} \end{equation} where $p(x^{(i)}, z|\theta)$ is given by Eq. \ref{eqn:2:pz} due to Eq. \ref{eqn:2:pz-equality}. However, calculating the $\theta$ that maximizes this objective is generally intractable \cite{DEMP1977}. The idea of the EM algorithm is, instead of finding the globally optimal $\theta$, to iteratively increase Eq. \ref{eqn:2:emobj} and thereby find locally optimal values of $\theta$, starting from some initial values $\theta_0$. It is an iterative procedure that updates the parameters as $\theta^{(0)} \rightarrow \theta^{(1)} \rightarrow \cdots$ for a fixed number of iterations (or until $L(\theta,\mathbf x)$ no longer increases). Each iteration of the EM algorithm proceeds as follows. \begin{description} \item[E-step] Given the current parameters $\theta^{(t)}$, calculate the expected count of each rule $e(r|\theta^{(t)})$ as \begin{equation} e(r|\theta^{(t)}) = \sum_{1 \leq i \leq m} e_{x^{(i)}}(r|\theta^{(t)}), \end{equation} where $e_{x}(r|\theta^{(t)})$ is the expected count of $r$ in a sentence $x$, given by \begin{equation} e_{x}(r|\theta^{(t)}) = \sum_{z \in \mathcal{Z}(x)} p(z|x,\theta^{(t)}) f(r,z), \label{eqn:2:ex} \end{equation} where $f(r,z)$ is the number of times that $r$ appears in $z$. As with the parsing problem in Eq. \ref{eq:2:cky-argmax}, it is infeasible to calculate Eq. \ref{eqn:2:ex} directly by enumerating every parse. Below, we describe how this calculation becomes possible with a dynamic programming algorithm called the inside-outside algorithm, which is similar to CKY. \item[M-step] Update the parameters as follows: \begin{equation} \theta^{(t+1)}_{A \rightarrow \beta} = \frac{e(A \rightarrow \beta | \theta^{(t)})}{\sum_{\alpha: A\rightarrow \alpha \in R} e(A \rightarrow \alpha | \theta^{(t)}) } \end{equation} This update is similar to the standard maximum likelihood estimation in the supervised learning setting, in which we observe the explicit counts of each rule.
In the EM algorithm we do not explicitly observe rule counts, so we use the expected counts calculated with the previously estimated parameters. We can show that this procedure always increases the log likelihood (Eq. \ref{eqn:2:emobj}) until convergence though the final parameters are not globally optimum. \end{description} \paragraph{Inside-outside algorithm} We now explain how the expected rule counts $e_{x}(r|\theta)$ are obtained for each $r$ given sentence $x$. Let $r$ be a binary rule $r = A \rightarrow B~C$. First, it is useful to decompose $e_{x}(r|\theta)$ as the expected counts on each subspan as follows: \begin{equation} e_{x}(r|\theta) = \sum_{1\leq i \leq k \leq j \leq n_x} e_x(z_{i,k,j,r}|\theta). \end{equation} $e_x(z_{i,k,j,r}|\theta)$ is the expected counts of an event that the following fragment occurs in a parse $z$. \begin{center} \tikz[level distance=0.8cm]{ \Tree [.$A$ [.$B$ \edge[roof]; {~~~~~~~~} ] [.$C$ \edge[roof]; {~~~~~~~~} ] ] \node at (0,-1.7) {$i~~~~~~~~~k~~~~~~~~~j$}; } \end{center} \noindent $z_{i,k,j,r}$ is an indicator variable\footnote{We omit dependence for $x$ for simplicity.} that is 1 if the parse contains the fragment above. Because the expected counts for an indicator variable are the same as the conditional probability of that variable \cite{Bishop:2006:PRM:1162264}, we can rewrite $e_x(z_{i,k,j,r}|\theta)$ as follows: \begin{align} e_x(z_{i,k,j,r}|\theta) &= p(z_{i,k,j,r} = 1 | x, \theta) \\ &= \frac{p(z_{i,k,j,r} = 1, x | \theta)}{p(x|\theta)}. \label{eqn:2:ew} \end{align} Intuitively the numerator in Eq. \ref{eqn:2:ew} is the total probability for generating parse trees that yield $x$ and contain the fragment $z_{i,k,j,r}$. The denominator $p(x|\theta)$ is the marginal probability of the sentence $x$. We first consider how to calculate $p(x|\theta)$, which can be done with a kind of CKY parsing; what we have to modify is just to replace the $\max$ operation in the line \ref{alg:cky:recursive} in Algorithm \ref{alg:cky} by the summation operation. Then each chart cell $C[i,j,N]$ keeps the marginal probability for a subspan $x_{i,j}$ rooted at $N$. After filling the chart, $C[1,n_x,S]$ is the sentence marginal probability $p(x|\theta)$. The marginal probability for the signature $(i,j,N)$ is called the {\it inside} probability, and this chart algorithm is called the inside algorithm, which calculates inside probabilities by filling chart cells recursively. \REVISE{ Calculation of $p(z_{i,k,j,r} = 1, x | \theta)$ is more elaborate so we only sketch the idea here. Analogous to the inside probability introduced above, we can also define the outside probability $O(i,j,N)$, which is the marginal probability for the outside of the span with signature $(i,j,N)$; that is, \begin{equation} O(i,j,N) = p(x_{1,i-1},N,x_{j+1,n}|\theta), \end{equation} in which $N$ roots the subspan $x_{i,j}$. Since $I(i,j,N) = p(x_{x_{i,j}}| N,\theta)$, given $r = N \rightarrow A~B$ we obtain: \begin{equation} p(z_{i,k,j,r} = 1, x_{i,j}| N, \theta) = \theta_{N \rightarrow A~B} \times I(i,k,A) \times I(k,j,B), \end{equation} which is the total probability of parse trees for the subspan $x_{i,j}$ that contains the fragment indicated by $z_{i,k,j,r}$. Combining these two terms, we obtain: \begin{align} p(z_{i,k,j,r} = 1, x | \theta) &= p(z_{i,k,j,r} = 1, x_{i,j}| N, \theta) \times p(x_{1,i-1},N,x_{j+1,n}|\theta) \\ &= \theta_{N \rightarrow A~B} \times I(i,k,A) \times I(k,j,B) \times O(i,j,N). 
\label{eqn:2:io} \end{align} }
\subsection{When do the algorithms work?}
\label{sec:2:when}
So far we have assumed that the underlying grammar is a PCFG, for which we have introduced two algorithms: the CKY algorithm and the EM algorithm with the inside-outside calculation. However, as we noted at the beginning of Section \ref{sec:2:pcfg}, the scope of these algorithms is not limited to PCFGs. What is the precise condition under which they can be applied? PCFGs are a special instance of weighted CFGs, in which each rule has a weight but the sum of the rule weights for a parent nonterminal (Eq. \ref{eqn:2:normalize}) is not necessarily one. As we see next, the split bilexical grammars of Section \ref{sec:2:sbg} can always be converted to a weighted CFG but not always to a PCFG. The CKY algorithm can be applied to {\it any} weighted CFG. This is easily verified because the only assumption in Algorithm \ref{alg:cky} is that the grammar is context-free, which allows a larger problem to be divided into smaller subproblems. The condition for the inside-outside algorithm is more involved. Assume we have a generative model of parses that is not originally parameterized as a PCFG, along with a weighted CFG designed so that the score it gives to a CFG parse equals the probability that the original generative model assigns to the corresponding parse in its original (non-CFG) form; the model in Section \ref{sec:2:sbg} is an example of such a case. Then the necessary condition on this weighted CFG is that there be no spurious ambiguity between the two representations (Section \ref{sec:bilexical}); that is, a CFG parse can be uniquely converted to a parse in the original form, and vice versa. The main reason why spurious ambiguity causes a problem is that the quantities used to calculate the expected counts (Eq. \ref{eqn:2:ew}) are not correctly defined in its presence. For example, the sentence marginal probability $p(x|\theta)$ would not equal the inside probability of the whole sentence $I(1,n_x,S)$: if there are multiple CFG derivations for a single parse in the original form, the inside probability calculated with the weighted CFG overestimates the true sentence marginal probability. The same issue arises for the other quantity, in Eq. \ref{eqn:2:io}. On the other hand, if there is a one-to-one correspondence between the two representations, it also holds at the level of smaller subspans, and it is this transparency that guarantees the correctness of the inside-outside algorithm even if the grammar is not strictly a PCFG.
\subsection{Split bilexical grammars}
\label{sec:2:sbg}
\begin{figure}[t] \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.8cm] Mary \& met \& the \& senator \& in \& the \& office \& $\$$ \\ \end{deptext} \depedge{2}{1}{} \depedge{2}{4}{} \depedge{4}{3}{} \depedge{2}{5}{} \depedge{5}{7}{} \depedge{7}{6}{} \depedge{8}{2}{} \end{dependency} \caption{Example of a projective dependency tree generated by a SBG. $\$$ is always placed at the end of the sentence and takes exactly one dependent, in the left direction.} \label{fig:2:deptree-sbg} \end{figure}
Split bilexical grammars, or SBGs \cite{eisner-2000-iwptbook}, are a notationally simpler variant of split head-automaton grammars \cite{Eisner:1999:EPB:1034678.1034748}.
Here we describe this framework as a generalization of the specific model presented in Section \ref{sec:2:dmv}, the dependency model with valence (DMV) \cite{klein-manning:2004:ACL}. We give a somewhat in-depth explanation of this grammar below because it is the basis of our proposal in Chapter \ref{chap:induction}. The explanation basically follows \newcite{eisner2010}. A SBG defines a distribution over projective dependency trees. This model can easily be converted to an equivalent weighted CFG, although some effort is needed to remove the {\it spurious ambiguity}. We will show that by removing it the time complexity can also be improved from $O(n^5)$ to $O(n^3)$. In Chapter \ref{chap:induction} we develop a similar technique for our model, which follows the left-corner parsing strategy.
\paragraph{Model} A (probabilistic) SBG is a tuple $G_{SBG}=(\Sigma, \$, L, R)$. $\Sigma$ is an alphabet of words that may appear in a sentence. $\$ \not\in \Sigma$ is a distinguished root symbol, which we describe later; let $\bar \Sigma = \Sigma \cup \{\$\}$. $L$ and $R$ are functions from $\bar \Sigma$ to probabilistic $\epsilon$-free finite-state automata over $\Sigma$; that is, for each $a \in \bar \Sigma$ the SBG specifies ``left'' and ``right'' probabilistic FSAs, $L_a$ and $R_a$. We write $q \xmapsto{~a'~} r \in R_{a}$ to denote a state transition from $q$ to $r$ that adds $a'$ to $a$'s right dependents when the current right state of $a$ is $q$. Each model also defines $\textit{init}(L_a)$ and $\textit{init}(R_a)$, which return the set of initial states of $a$ in each direction (usually the initial state is unique given the head $a$ and the direction). $\textit{final}(L_a)$ is the set of final states; $q \in \textit{final}(L_a)$ means that $a$ can stop generating its left dependents. We will show that by changing the definitions of these automata, several generative models over dependency trees can be represented in a unified framework. The model generates a sentence $x_1 x_2 \cdots x_{n} \$$ along with a parse, given the root symbol $\$$, which is always placed at the end. An example of a parse is shown in Figure \ref{fig:2:deptree-sbg}. SBGs define the following generative process over dependency trees: \begin{enumerate} \item The root symbol $\$$ generates a left dependent $a$ from $q\in \textit{init}(L_\$)$. $a$ is regarded as the conventional root word of the sentence (e.g., {\it met} in Figure \ref{fig:2:deptree-sbg}). \item The parse tree is then generated recursively. Given the current head $a$, the model generates its left dependents and its right dependents. This process is head-outward, meaning that the closest dependent is generated first. For example, the initial left state of {\it met}, $q \in \textit{init}(L_{\textit met})$, generates {\it Mary}. The right dependents are generated as follows: first, {\it senator} is generated from $q_0 \in \textit{init}(R_{\textit met})$. Then the state may change via a transition $q_0 \xmapsto{\textit{senator}} q_1$ to $q_1 \in R_{\textit met}$, which generates {\it in}. The process stops when every token stops generating its left and right dependents. \end{enumerate} This framework generalizes several generative models over dependency trees. Given $a$, $L_a$ and $R_a$ define distributions over $a$'s left and right children, respectively. Since the automata $L_a$ and $R_a$ maintain a current state, we can define various distributions by customizing the topology of their state transitions.
For example, if we define automata of the form: \begin{center} \tikz[thick, level distance=0.8cm, >=stealth']{ \def\state#1{ \draw (0,0) circle (0.25); \draw (0,0) circle (0.5); \node at (45:0.75) {#1}; } \state{$q_0$}; \draw[->] (0.55, 0) -- +(1.9, 0); \begin{scope}[xshift=3.0cm,yshift=0cm] \state{$q_1$}; \end{scope} \draw[rounded corners=8pt,->] (3.55, 0.25) -- ++(0.9, 0) -- ++(0, -0.5) -- ++(-0.9, 0); } \end{center} \noindent they would allow the first (closest) dependent to be chosen differently from the rest ($q_0$ defines the distribution of the first dependent). If we remove $q_0$, the resulting automata are \tikz[baseline=-4pt]{ \draw (0,0) circle (2pt); \draw (0,0) circle (5pt); \draw[rounded corners=3pt,->,>=stealth'] (6pt,3pt) -- ++(12pt,0) -- ++(0,-6pt) -- ++(-12pt,0); } with a single state $q_1$, so token $a$'s left (or right) dependents are conditionally independent of one another given $a$.\footnote{SBGs can also be used to encode second-order {\it adjacent} dependencies, i.e., to make $a$'s left or right dependent depend on the sibling word generated just before it, although in this case there exists a more efficient factorization that leads to a better asymptotic runtime \cite{McDonald2006,johnson:2007:ACLMain}.}
\paragraph{Spurious ambiguity in naive $\boldsymbol{O(n^5)}$ parsing algorithm} We now describe how a SBG can be converted to a weighted CFG, though the distributions associated with $L_a$ and $R_a$ cannot always be encoded in the form of a PCFG. Also, as we see below, the resulting grammar suffers from the spurious ambiguity, which prevents us from applying the inside-outside algorithm (see Section \ref{sec:2:when}). The key observation for this conversion is that we can represent a subtree of a SBG as the following triangle, which can be seen as a special case of a subtree used in ordinary CKY parsing. \begin{center} \vspace{-5pt} \tikz[thick, level distance=0.8cm]{ \Tree [.$q_1~q_2$ \edge[roof,thick]; {~~~~~~~~} ]; \node at (0, -0.85) {$i~~~~h~~~~j$}; }\vspace{-10pt} \end{center} \noindent The main difference from the ordinary subtree representation is that it is decorated with an additional index $h$, which is the position of the head word. For example, in the analysis of Figure \ref{fig:2:deptree-sbg}, a subtree over {\it met the senator} is represented by setting $i=2, j=4, h=2$. $q_1$ and $q_2$ are $h$'s current left and right states, respectively.
\begin{figure}[t] \centering \begin{minipage}[t]{.49\linewidth} \centering \begin{tikzpicture}[thick, level distance=0.8cm] \begin{scope}[xshift=2cm, yshift=0cm] \Tree [.$q_1~q_2$ \edge[roof,thick]; {~~~~~~~~} ]; \end{scope} \begin{scope}[xshift=4cm, yshift=0cm] \Tree [.$F~F$ \edge[roof,thick]; {~~~~~~~~} ]; \end{scope} \node at (3, -0.86) {$i~~~~~h~~~~k~~~~~~k~~~~h'~~~j$}; \draw (1.2, -1.1) -- +(3.5, 0) node[right] {$q_2 \xmapsto{~h'~} q_2' \in R_{h}$}; \begin{scope}[xshift=3cm, yshift=-1.45cm] \Tree [.$q_1~q_2'$ \edge[roof,thick]; {~~~~~~~~} ]; \node at (0, -0.86cm) {$i~~~~h~~~~~j$}; \end{scope} \end{tikzpicture} \end{minipage} \begin{minipage}[t]{.49\linewidth} \centering \begin{tikzpicture}[thick, level distance=0.8cm] \begin{scope}[xshift=2cm, yshift=0cm] \Tree [.$F~F$ \edge[roof,thick]; {~~~~~~~~} ]; \end{scope} \begin{scope}[xshift=4cm, yshift=0cm] \Tree [.$q_1'~q_2'$ \edge[roof,thick]; {~~~~~~~~} ]; \end{scope} \node at (3, -0.86) {$i~~~~h'~~~~k~~~~~~k~~~~h~~~~j$}; \draw (1.2, -1.1) -- +(3.5, 0) node[right] {$q_1' \xmapsto{~h'~} q_1 \in L_{h}$}; \begin{scope}[xshift=3cm, yshift=-1.45cm] \Tree [.$q_1~q_2'$ \edge[roof,thick]; {~~~~~~~~} ]; \node at (0, -0.86cm) {$i~~~~h~~~~~j$}; \end{scope} \end{tikzpicture} \end{minipage} \caption{Binary inference rules in the naive CFG conversion. $F$ means the state is a final state in that direction. Both left and right consequent items (below ---) have the same item but from different derivations, suggesting 1) the weighted grammar is not a PCFG; and 2) there is the spurious ambiguity.} \label{fig:2:binary-naive-sbg} \end{figure} We can assume a tuple ($h$, $q_1$, $q_2$) to comprise a nonterminal symbol of a CFG. Then the grammar is a PCFG if the normalization condition (Eq. \ref{eqn:2:normalize}) is satisfied for every such tuple. Note now each rule looks like $(h,q_1,q_2) \rightarrow \beta$. Figure \ref{fig:2:binary-naive-sbg} explains why the grammar cannot be a PCFG. We can naturally associate PCFG rule weights for these rules with transition probabilities given by the automata $L_h$ and $R_h$.\footnote{Here and the following, we occasionally abuse the notation and use $L_h$ or $R_h$ to mean the automaton associated with word $x_h$ at index position $h$.} However, then the sum of rule weights of the converted CFG starting from symbol $(a,q_1,q_2')$ is not equal to 1. The left rule of Figure \ref{fig:2:binary-naive-sbg} means the converted CFG would have rules of the form $(a,q_1,q_2') \rightarrow (a,q_1,q_2)~(a',F,F)$. The weights associated with these rules are normalized across $a' \in \Sigma$, as state transitions are deterministic given $q_2'$ and $a'$. The problem is that the same signature, i.e., $(a,q_1,q_2')$ on the same span can also be derived from another rule in the right side of Figure \ref{fig:2:binary-naive-sbg}. This distribution is also normalized, meaning that the sum of rule weights is not 1.0 but 2.0, and thus the grammar is a weighted CFG, not a PCFG. The above result also suggests that the grammar suffers from the spurious ambiguity, Below we describe how this ambiguity can be removed with modifications. As a final note, the time complexity of the algorithm in Figure \ref{fig:2:binary-naive-sbg} is very inefficient, $O(n^5)$ because there are five free indexes in each rule. This is in contrast to the complexity of original CKY parsing, which is $O(n^3)$. The refinement described next also fix this problem, and we obtain the $O(n^3)$ algorithm for parsing general SBGs. 
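Before turning to that refinement, it may help to fix a concrete representation of the per-head automata that these algorithms consume. The following Python sketch encodes the two-state automaton drawn earlier in this section; the class and method names are our own illustrative assumptions, and the way each transition weight combines a continue probability with an attachment probability merely anticipates the DMV parameterization of Section \ref{sec:2:dmv}.
\begin{verbatim}
# Illustrative encoding of one per-head probabilistic FSA (an L_a or R_a).
# Names and representation are assumptions made for this sketch only.
class TwoStateAutomaton:
    """q0: no dependent generated yet; q1: at least one dependent generated."""

    def __init__(self, stop_first, stop_rest, attach):
        self.stop_first = stop_first  # weight of stopping in q0 (adjacent case)
        self.stop_rest = stop_rest    # weight of stopping in q1 (non-adjacent case)
        self.attach = attach          # attach[d]: probability of dependent word d

    def init_states(self):
        return {"q0"}

    def final_weight(self, q):
        # Both states may stop, with different weights.
        return self.stop_first if q == "q0" else self.stop_rest

    def transitions(self, q):
        # Yields (dependent, next state, weight); every transition ends in q1,
        # so only the first dependent is treated specially.
        not_stop = 1.0 - self.final_weight(q)
        for d, p in self.attach.items():
            yield d, "q1", not_stop * p

# A right automaton for a head that usually takes one dependent and then stops.
R_met = TwoStateAutomaton(stop_first=0.5, stop_rest=0.75,
                          attach={"senator": 0.75, "in": 0.25})
print(list(R_met.transitions("q0")))  # [('senator', 'q1', 0.375), ('in', 'q1', 0.125)]
\end{verbatim}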
\begin{figure}[p] \begin{tikzpicture}[thick, level distance=0.8cm, >=stealth'] \begin{scope}[xshift=0cm,yshift=0cm] \node at (0, 0) [anchor=west] {\sc Start-Left:}; \node at (1.25, -0.5) {$q \in \textit{init}(L_h)$}; \draw (0.1, -0.75) -- +(2.5, 0) node[right] {$1 \leq h \leq n+1$}; \begin{scope}[xshift=1.5cm, yshift=-1.3cm] \lefttriangle{$q$}{$h$}{$h$}; \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-3cm] \node at (0, 0) [anchor=west] {\sc Finish-Left:}; \begin{scope}[xshift=1.0cm, yshift=-0.75cm] \lefttriangle{$q$}{$i$}{$h$}; \end{scope} \node at (1.5, -1.25) [anchor=west] {$q \in \textit{final}(L_h)$}; \draw (0.1, -1.75) -- +(4.1, 0); \begin{scope}[xshift=2.0cm, yshift=-2.3cm] \lefttriangle{$F$}{$i$}{$h$}; \end{scope} \end{scope} \begin{scope}[xshift=7.5cm,yshift=0cm] \node at (0, 0) [anchor=west] {\sc Start-Right:}; \node at (1.25, -0.5) {$q \in \textit{init}(R_h)$}; \draw (0.1, -0.75) -- +(2.5, 0) node[right] {$1 \leq h \leq n$}; \begin{scope}[xshift=0.75cm, yshift=-1.3cm] \righttriangle{$q$}{$h$}{$h$}; \end{scope} \end{scope} \begin{scope}[xshift=7.5cm,yshift=-3cm] \node at (0, 0) [anchor=west] {\sc Finish-Right:}; \begin{scope}[xshift=0.5cm, yshift=-0.75cm] \righttriangle{$q$}{$i$}{$h$} \end{scope} \node at (1.5, -1.25) [anchor=west] {$q \in \textit{final}(R_h)$}; \draw (0.1, -1.75) -- +(4.1, 0); \begin{scope}[xshift=1.25cm, yshift=-2.3cm] \righttriangle{$F$}{$h$}{$i$} \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-7cm] \node at (0, 0) [anchor=west] {\sc Attach-Left:}; \begin{scope}[xshift=0.5cm, yshift=-0.75cm] \righttriangle{$F$}{$h'$}{$i-1$} \end{scope} \begin{scope}[xshift=3.0cm, yshift=-0.75cm] \lefttriangle{$q$}{$i$}{$h$} \end{scope} \node at (4.0, -1.25) [anchor=west] {$q \xmapsto{~h'~} r \in L_h$}; \draw (0.1, -1.75) -- +(6.6, 0); \begin{scope}[xshift=3.0cm, yshift=-2.2cm] \lefttrape{$r$}{$h'$}{$h$} \end{scope} \end{scope} \begin{scope}[xshift=7.5cm,yshift=-7cm] \node at (0, 0) [anchor=west] {\sc Attach-Right:}; \begin{scope}[xshift=0.5cm, yshift=-0.75cm] \righttriangle{$q$}{$h$}{$i-1$} \end{scope} \begin{scope}[xshift=3.0cm, yshift=-0.75cm] \lefttriangle{$F$}{$i$}{$h'$} \end{scope} \node at (4.0, -1.25) [anchor=west] {$q \xmapsto{~h'~} r \in R_h$}; \draw (0.1, -1.75) -- +(6.6, 0); \begin{scope}[xshift=2.6cm, yshift=-2.2cm] \righttrape{$r$}{$h$}{$h'$} \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-11cm] \node at (0, 0) [anchor=west] {\sc Complete-Left:}; \begin{scope}[xshift=1.0cm, yshift=-0.75cm] \lefttriangle{$F$}{$i$}{$h'$} \end{scope} \begin{scope}[xshift=3.0cm, yshift=-0.75cm] \lefttrape{$q$}{$h'$}{$h$} \end{scope} \draw (0.1, -1.75) -- +(3.6, 0); \begin{scope}[xshift=2.0cm, yshift=-2.3cm] \lefttriangle{$q$}{$i$}{$h$} \end{scope} \end{scope} \begin{scope}[xshift=7.5cm,yshift=-11cm] \node at (0, 0) [anchor=west] {\sc Complete-Right:}; \begin{scope}[xshift=0.5cm, yshift=-0.75cm] \righttrape{$q$}{$h$}{$h'$} \end{scope} \begin{scope}[xshift=3.0cm, yshift=-0.75cm] \lefttriangle{$F$}{$h'$}{$i$} \end{scope} \draw (0.1, -1.75) -- +(3.6, 0); \begin{scope}[xshift=1.7cm, yshift=-2.3cm] \righttriangle{$q$}{$h$}{$i$} \end{scope} \end{scope} \begin{scope}[xshift=12.5cm,yshift=-11cm] \node at (0, 0) [anchor=west] {\sc Accept:}; \begin{scope}[xshift=1.0cm, yshift=-0.75cm] \lefttriangle{$F$}{$1$}{$n+1$}; \end{scope} \draw (0.1, -1.75) -- +(1.9, 0); \node at (0.4, -2.1) [anchor=west] {{\it accept}}; \end{scope} \end{tikzpicture} \caption{An algorithm for parsing SBGs in $O(n^3)$ given a length $n$ sentence. 
The $(n+1)$-th token is the dummy root token $\$$, which has only one left dependent (the sentence root). $i,j,h$, and $h'$ are indexes of tokens in the given sentence while $q,r$, and $F$ are states. $L_h$ and $R_h$ are the left and right FSAs of the $h$-th token in the sentence. Each item, as well as each statement about a state (e.g., $r\in \textit{final}(L_h)$), has a weight, and the weight of a consequent item (below ---) is obtained as the product of the weights of its antecedent items (above ---). } \label{fig:bg:ded-sbg} \end{figure}
\paragraph{$\boldsymbol{O(n^3)}$ algorithm with head-splitting} The main reason why the algorithm in Figure \ref{fig:2:binary-naive-sbg} causes the spurious ambiguity is that, given a head, there is no restriction on the order in which its left and right dependents are collected. This problem can be handled by introducing new items called {\it half constituents}, denoted by \tikz[thick]{ \draw (0, -0.2) -- ++(-0.5, -0.45) -- ++(0.5, 0) -- cycle; } ({\it left constituents}) and \tikz[thick]{ \draw (0, -0.2) -- ++(0.5, -0.45) -- ++(-0.5, 0) -- cycle; } ({\it right constituents}), which represent the left and right spans of a head separately. For example, in the dependency tree in Figure \ref{fig:2:deptree-sbg}, the phrase ``Mary met'' comprises a left constituent while ``met the senator'' comprises a right constituent. In the new algorithm these two constituents are expanded separately from each other, and there are no {\it mixed} states $(q_1, q_2)$ as in the items of Figure \ref{fig:2:binary-naive-sbg}. Eliminating these mixed states is the key to eliminating the spurious ambiguity. Figure \ref{fig:bg:ded-sbg} shows the new algorithm, which can be understood as follows: \begin{itemize} \item {\sc Attach-Left} and {\sc Complete-Left} (or their {\sc Right} counterparts) are the essential components of the algorithm. The idea is that when combining two constituents headed by $h'$ and $h$ ($h' < h$) into a larger constituent headed by $h$, we decompose the original constituent \tikz[baseline=-20pt]{\headtriangleinlinebottom{$h'$}} into its left and right half constituents, and combine those fragments in order. {\sc Attach-Left} does the first part, i.e., collects the right constituent \tikz[baseline=-20pt]{\righttriangleinlinebottom{$h'$}}. The resulting trapezoid \tikz[baseline=-20pt]{\lefttrapeinline{$h'$}{$h$}} represents an intermediate parsing state, which means that the recognition of the right half of $h'$ has finished while the remaining left part is still unfinished. {\sc Complete-Left} does the second part and collects the remaining left constituent \tikz[baseline=-20pt]{\lefttriangleinlinebottom{$h'$}}. {\sc Attach-Right} and {\sc Complete-Right} do the opposite operations and collect the right dependents of a head. \item In this process, the state $F$ on both left and right constituents ensures that they can become a dependent of another head. \item {\sc Start-Left} and {\sc Start-Right} correspond to the terminal rules of the ordinary CKY algorithm (Figure \ref{fig:2:cky}), though we segment each token into its left and right parts. Note that the root symbol $\$$ at position $n+1$ only applies {\sc Start-Left} because it must not have any right dependents. Commonly the left automaton $L_{\$}$ is designed to take only one dependent; otherwise, the algorithm may allow fragmented parses with more than one root token.
\item Unlike in the inference rules of Figure \ref{fig:2:binary-naive-sbg}, we put the state transitions, e.g., $q \xmapsto{~h'~} r \in R_h$, as antecedent items (above ---) of each rule instead of as side conditions. This modification makes the weight calculation of each rule more explicit. Specifically, when we develop a model in this framework, each statement about a state, i.e., $q\in \textit{init}(L_h)$, $q \in \textit{final}(L_h)$, and $q \xmapsto{~h'~} r \in L_h$ (and likewise for $R_h$), has an associated weight. Also, when we run the CKY or a related algorithm, each chart cell that corresponds to some constituent (triangle or trapezoid) has a weight. Thus, this formulation makes explicit that the weight of a consequent item (below ---) is obtained as the product of all the weights of its antecedent items (above ---). We describe the particular parameterization of these transitions that yields DMV in Section \ref{sec:2:dmv}. \item The grammar is not in CNF since it contains unary rules at internal positions. The inside-outside algorithm can still be applied by assuming a null element (with weight 1) in either child position in Algorithm \ref{alg:cky}. \end{itemize} There is no spurious ambiguity. However, again, this grammar is not always a PCFG. In particular, the grammar for the dependency model with valence (DMV), which we describe next, is not a PCFG. Consider the {\sc Finish-Left} rule in the algorithm. A particular model such as DMV may associate a score with this rule to explicitly model the event that a head $h$ stops generating its left dependents. In such cases, the weights of the CFG rules $(F,h) \rightarrow (q,h)$ do not define a correct (normalized) distribution given the parent symbol $(F,h)$. This type of inconsistency arises from a discrepancy between the underlying generation strategies of the two representations: PCFGs assume that tree generation is a top-down process while SBGs assume it is bottom-up. Nevertheless, we can use the inside-outside algorithm as with PCFGs because there is no spurious ambiguity and each CFG derivation is correctly assigned the probability that the original SBG would give to the corresponding dependency tree. The time complexity is also improved to $O(n^3)$. This is easily verified since at most three indexes appear in each rule. The reason for this reduction is that we no longer use full constituents with a head index \tikz[baseline=-20pt]{\headtriangleinlinebottom{$h$}}, each of which by itself consumes three indexes, leading to an asymptotically inefficient algorithm.
\subsection{Dependency model with valence}
\label{sec:2:dmv}
\begin{figure}[t] \centering \begin{tabular}[t]{ll}\hline Transition & Weight (DMV parameters) \\ \hline $q_0 \in \textit{init}(L_h)$ & 1.0 \\ $q_0 \in \textit{final}(L_h)$ & $\theta_{\textsc{s}}(\textsc{stop}|h,\leftarrow,\textsc{true})$ \\ $q_1 \in \textit{final}(L_h)$ & $\theta_{\textsc{s}}(\textsc{stop}|h,\leftarrow,\textsc{false})$ \\ $q_0 \xmapsto{~d~} q_1 \in L_h$ & $\theta_{\textsc{a}}(d|h, \leftarrow) \cdot \theta_{\textsc{s}}(\neg \textsc{stop}|h,\leftarrow,\textsc{true})$ \\ $q_1 \xmapsto{~d~} q_1 \in L_h$ & $\theta_{\textsc{a}}(d|h, \leftarrow) \cdot \theta_{\textsc{s}}(\neg \textsc{stop}|h,\leftarrow,\textsc{false})$ \\\hline \end{tabular} \caption{Mappings between the FSA transitions of a SBG and the weights that yield DMV. $\theta_\textsc{s}$ and $\theta_\textsc{a}$ are the parameters of DMV described in the body text. The right cases (e.g., $q_0 \in \textit{init}(R_h)$) are omitted but are defined similarly.
$h$ and $d$ are both word types, not indexes in a sentence (unlike in Figure \ref{fig:bg:ded-sbg}). } \label{fig:bg:dmv-param-as-sbg} \end{figure}
It is now not hard to formulate the well-known dependency model with valence (DMV), on which our unsupervised model is based, as a special instance of SBGs. This can be done by defining the transitions of each automaton as well as the associated weights. In DMV, each $L_h$ or $R_h$ for a head $h$ has only two states, $q_0$ and $q_1$, both of which are final states, i.e., {\sc Finish-Left} and {\sc Finish-Right} in Figure \ref{fig:bg:ded-sbg} can always be applied. $q_0$ is the initial state, and every transition goes to $q_1$ (i.e., $q_0 \xmapsto{~d~} q_1$ and $q_1 \xmapsto{~d~} q_1$ for any dependent $d$), meaning that we only distinguish the generation of the first dependent from that of the others. The weights associated with the transitions in Figure \ref{fig:bg:ded-sbg} are summarized in Figure \ref{fig:bg:dmv-param-as-sbg}. Each weight is a product of DMV parameters, which are classified into two types of multinomial distributions, $\theta_\textsc{s}$ and $\theta_\textsc{a}$. In general we write $\theta_\textsc{type}(d|c)$ for a multinomial parameter, in which {\sc type} identifies the type of multinomial, $c$ is a conditioning context, and $d$ is a decision given that context. DMV has the following two types of parameters: \begin{itemize} \item $\theta_\textsc{s}(\textit{stop}|h,\textit{dir},\textit{adj})$: A Bernoulli distribution that decides whether or not to attach further dependents in the current direction $\textit{dir} \in \{ \leftarrow, \rightarrow \}$. The decision is $\textit{stop} \in \{\textsc{stop},\neg \textsc{stop}\}$. The adjacency $\textit{adj} \in \{\textsc{true, false}\}$ is the key factor that distinguishes the distributions of the first and subsequent dependents; it is \textsc{true} if $h$ has no dependent yet in the \textit{dir} direction. \item $\theta_\textsc{a}(d|h,\textit{dir})$: The probability that $d$ is attached as a new dependent of $h$ in the \textit{dir} direction. \end{itemize} The key to the success of DMV was the introduction of the valence factor in the stop probabilities \cite{klein-manning:2004:ACL}. Intuitively, this factor can capture differences in the expected number of dependents of each head. For example, in English, a verb typically takes one dependent (the subject) in the left direction but several dependents in the right direction. DMV may capture this difference with a higher value of $\theta_\textsc{s}(\neg \textsc{stop}|h, \leftarrow, \textsc{true})$ and a lower value of $\theta_\textsc{s}(\neg \textsc{stop}|h, \leftarrow, \textsc{false})$. In the right direction, on the other hand, $\theta_\textsc{s}(\neg \textsc{stop}|h, \rightarrow, \textsc{false})$ might be higher, making it easier to attach several dependents.
\paragraph{Inference} With the EM algorithm, we update the parameters $\theta_\textsc{s}$ and $\theta_\textsc{a}$. This is basically done with the inside-outside algorithm, though one complication is that some transitions in Figure \ref{fig:bg:dmv-param-as-sbg} are associated with products of parameters rather than a single parameter. This situation contrasts with the original inside-outside algorithm for PCFGs, where each rule is associated with only a single parameter (e.g., $A\rightarrow \beta$ and $\theta_{A\rightarrow \beta}$). In this case the update can be done by first collecting the expected counts of each transition in the SBG, and then converting them to the expected counts of the DMV parameters.
For example, let $e_x(\textsc{Attach-Left},q,h,d|\theta)$ be the expected count of the {\sc Attach-Left} rule between a head $h$ with state $q$ and a dependent $d$ in a sentence $x$. We can obtain $e_x(h,d,\leftarrow|\theta)$, the expected count of an attachment parameter of DMV, as follows: \begin{equation} e_x(h,d,\leftarrow|\theta) = e_x(\textsc{Attach-Left},q_0,h,d|\theta) + e_x(\textsc{Attach-Left},q_1,h,d|\theta). \end{equation} These are then normalized to update the parameters (as in Section \ref{sec:2:em}). Similarly, the count of the non-stop decision $e_x(h,\neg \textsc{stop}, \leftarrow,\textsc{true}|\theta)$, associated with $\theta_{\textsc{s}}(\neg \textsc{stop}|h,\leftarrow,\textsc{true})$, is obtained by: \begin{equation} e_x(h,\neg \textsc{stop},\leftarrow, \textsc{true}|\theta) = \sum_{h'} e_x(\textsc{Attach-Left},q_0,h,h'|\theta), \end{equation} where $h'$ ranges over the possible left dependents (word types) of $h$.
\subsection{Log-linear parameterization}
\label{sec:bg:loglinear}
In Chapter \ref{chap:induction}, we build our model on an extension of DMV with {\it features}, which we describe in this section. We call this model the {\it featurized DMV}; it first appeared in \newcite{bergkirkpatrick-EtAl:2010:NAACLHLT}. We use this model since, among possible extensions, it is a relatively simple one that is known to boost performance well. The basic idea is that we replace each parameter of DMV with the following log-linear model: \begin{equation} \theta_\textsc{a}(d|h,\textit{dir}) = \frac{ \exp( \mathbf w^\intercal \mathbf f (d,h,\textit{dir},\textsc{a}) ) }{ \sum_{d'} \exp( \mathbf w^\intercal \mathbf f (d',h,\textit{dir},\textsc{a}) ) }, \end{equation} where $\mathbf w$ is a weight vector and $\mathbf f (d,h,\textit{dir},\textsc{a})$ is a feature vector for the event that $h$ takes $d$ as a dependent in the \textit{dir} direction. Note that contrary to the more familiar log-linear models in NLP, such as conditional random fields \cite{Lafferty:2001:CRF:645530.655813,finkel-kleeman-manning:2008:ACLMain}, this approach does not try to model the whole structure with a single log-linear model. Such approaches make it possible to exploit richer global structural features, though inference becomes more complex and challenging, in particular in an unsupervised setting \cite{smith-eisner:2005:ACL,NIPS2014_5344}. In this model, features can only be extracted from the conditioning context and the decision of each original DMV parameter. The typical information captured with this method is the back-off structure between parameters. For example, one feature in $\mathbf f$ ignores the direction, which facilitates sharing the statistical strength of attachments between $h$ and $d$ across the two directions. \newcite{bergkirkpatrick-EtAl:2010:NAACLHLT} also report that adding back-off features that use coarse POS tags is effective when the original dataset provides finer POS tags (e.g., pronoun or proper noun), e.g., features replacing the actual $h$ or $d$ with a coarse indicator such as whether $h$ belongs to a (coarse) noun category. An EM-like procedure can be applied to this model with a small modification: instead of optimizing the parameters $\theta$ directly, it optimizes the weight vector $\mathbf w$. The E-step is exactly the same as in the original algorithm. In the M-step, we optimize $\mathbf w$ to increase the marginal log-likelihood (Eq. \ref{eqn:2:emobj}) using a gradient-based optimization method such as L-BFGS \cite{Liu89onthe} with the expected counts obtained from the E-step. In practice, we optimize the objective with a regularization term to prevent overfitting.
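As an illustration of this parameterization, the following sketch computes $\theta_\textsc{a}(d|h,\textit{dir})$ from a weight vector by normalizing over candidate dependents; the particular feature templates, names, and weights below are toy assumptions made only for this example.
\begin{verbatim}
import math

# Toy feature templates for the event "h takes d as a dependent in direction dir".
# Real systems use richer back-off features (e.g., coarse POS indicators).
def features(d, h, direction):
    return {
        "attach:%s>%s:%s" % (h, d, direction): 1.0,  # fully specified event
        "attach-nodir:%s>%s" % (h, d): 1.0,          # back-off ignoring direction
    }

def score(w, feats):
    return sum(w.get(name, 0.0) * value for name, value in feats.items())

def theta_attach(w, d, h, direction, candidates):
    # theta_A(d | h, dir) = exp(w . f(d,h,dir,A)) / sum_d' exp(w . f(d',h,dir,A))
    numerator = math.exp(score(w, features(d, h, direction)))
    denominator = sum(math.exp(score(w, features(dp, h, direction)))
                      for dp in candidates)
    return numerator / denominator

w = {"attach-nodir:VERB>NOUN": 1.5}  # toy weight vector
print(theta_attach(w, "NOUN", "VERB", "left", candidates=["NOUN", "ADJ", "ADV"]))
\end{verbatim}
In the M-step, only the weight vector \verb|w| is updated (e.g., with L-BFGS); the multinomial parameters are then recomputed through this normalization.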
\section{Previous Approaches in Unsupervised Grammar Induction}
\label{sec:2:unsupervised}
This section summarizes what has been done in the study of unsupervised grammar induction, in particular in the decade since \newcite{klein-manning:2004:ACL}, which was the first study to beat a simple baseline method in English experiments. Here we focus on the setting of {\it monolingual} unsupervised parsing, which we first define in Section \ref{sec:bg:task}. There also exist related approaches that utilize some kind of supervision, such as semi-supervised learning \cite{haghighi-klein:2006:COLACL} or transfer learning, in which existing high-quality parsers (or treebanks) for some languages (typically English) are transferred into parsing models for other languages \cite{mcdonald-petrov-hall:2011:EMNLP,naseem-barzilay-globerson:2012:ACL2012,mcdonald-EtAl:2013:Short,tackstrom-mcdonald-nivre:2013:NAACL-HLT}. These approaches typically achieve higher accuracies, though we do not discuss them here.
\subsection{Task setting}
\label{sec:bg:task}
The typical setting of unsupervised grammar induction is summarized as follows: \begin{itemize} \item During training, the model learns its parameters using (unannotated) sentences only. Sometimes the model uses external resources such as Wikipedia articles \cite{marevcek-straka:2013:ACL2013} to exploit statistics (e.g., n-grams) from large corpora, but it does not rely on any syntactic annotations. \item To remedy data sparseness (and the learning difficulty), the model often assumes part-of-speech (POS) tags as the input instead of surface forms.\footnote{ Some work, e.g., \newcite{seginer:2007:ACLMain}, does not assume this convention, as we describe in Section \ref{sec:bg:unsup-other}. } This assumption greatly simplifies the problem, though it discards much information that is crucial for disambiguation. For example, the model may not be able to disambiguate prepositional phrase (PP) attachments based on semantic cues as {\it supervised} parsers would do. Consider the two phrases {\it eat sushi with tuna} and {\it eat sushi with chopsticks}. Their syntactic structures are different, but POS-based models may not distinguish between them, as both look the same under the model, e.g., {\sc verb noun adp noun}, where {\sc adp} is an adposition. Therefore, the main challenge of unsupervised grammar induction is often to acquire more basic regularities of word order, such as the tendency of an adjective to modify a noun. \item The POS-based models are further divided into two categories according to whether or not the model can access the {\it semantics} of each POS tag. An example of the former is \newcite{naseem-EtAl:2010:EMNLP}, which utilizes information such as the fact that a verb tends to be the root of a sentence. This approach is sometimes called {\it lightly} supervised learning. The latter approach, which we call {\it purely} unsupervised learning, does not have access to such knowledge. In this case, the only necessary input to the model is a clustering of words, not the {\it label} of each cluster. This is advantageous in practice since it can be based on the output of an unsupervised POS tagger, which cannot identify the semantics (label) of each induced cluster. Though the problem settings of the two approaches differ slightly, we discuss both here, as it is still unknown what kind of prior linguistic knowledge is necessary for learning grammars.
Note that how to achieve lightly supervised learning on top of the output of unsupervised POS taggers with a small amount of manual effort is also an ongoing research topic \cite{Bisk:2015:ACLShort}. \end{itemize}
\paragraph{Evaluation} The evaluation of unsupervised systems is generally difficult and controversial, and this is particularly true of unsupervised grammar induction. The common procedure, which most works described below employ, is to compare the system outputs with the gold annotated trees just as in the supervised case. That is, we evaluate the quality of the system in terms of accuracy measures: precision, recall, and F1-score for constituent structures, and an attachment score for dependency structures. This is inherently flawed in some sense, mainly because it cannot take into account the variation in the notion of linguistically {\it correct} structures. For example, some dependency structures, such as coordination structures, can be analyzed in several ways (see also Section \ref{sec:corpora:heads}), each of which is {\it correct} under a particular syntactic theory \cite{popel-EtAl:2013:ACL2013}, but the current evaluation metric penalizes any prediction that does not match the gold data currently used. We do not discuss solutions to this problem here; however, we try to minimize the effect of such variations in our experiments in Chapter \ref{chap:induction}. See Section \ref{sec:ind:eval} for details.
\subsection{Constituent structure induction}
As we saw in Section \ref{sec:2:em}, the EM algorithm provides an easy way to learn the parameters of any PCFG. This motivated researchers to use the EM algorithm to obtain syntactic trees without human annotation effort. In such early attempts, the main focus of the induced structures was phrase-structure trees. However, it is well known that such EM-based approaches perform poorly at recovering the syntactic trees that linguists consider correct \cite{books/daglib/0080794,manning99foundations,carl1999}. The reasons are mainly two-fold. One is that the EM algorithm is just a hill-climbing method, so it cannot reach the globally optimal solution. Since the search space of the grammar is highly complex, this local maxima problem is particularly severe; \newcite{Carroll92twoexperiments} observed that randomly initialized EM runs always converge to different grammars, all of which are far from the target grammar. Another crucial problem is the inherent difficulty of inducing PCFGs. In the general setting, the only fixed structure for the model is the start symbol and the observed terminal symbols. The problem is that although the terminal symbols are the most informative source for learning, this information does not propagate well to higher levels of the tree, since each nonterminal label is just an abstract symbol (a hidden categorical variable) with little intrinsic meaning. For example, when the model has a rule $y_1 \rightarrow y_2~y_3$ and $y_2$ and $y_3$ dominate some subtrees, $y_1$ dominates a larger constituent but its relevance to the yield (i.e., the dominated terminal symbols) sharply diminishes. For these reasons, so far the only successful PCFG-based constituent structure induction methods rely on some amount of supervision, e.g., constraints on possible bracketings \cite{pereira-schabes:1992:ACL} or on possible rewrite rules \cite{Carroll92twoexperiments}.
\newcite{johnson-griffiths-goldwater:2007:main} reported that the situation does not change with sampling-based Bayesian inference methods. Non-PCFG-based constituent structure induction has been explored since the early 2000s with some success. The common idea behind these approaches is to avoid collapsing each span into a (less meaningful) nonterminal symbol. \newcite{W01-0713} and \newcite{klein-manning:2002:ACL} are such attempts, in which the model tries to learn whether a given yield (n-gram) comprises a constituent or not. All parameters are connected to terminal symbols, and thus the problem of propagating information from the terminal symbols is alleviated. \newcite{ponvert-baldridge-erk:2011:ACL-HLT2011} present a chunking-based heuristic method that improves the performance of this line of models. \newcite{seginer:2007:ACLMain} is another successful constituent induction system; we discuss his method in Section \ref{sec:bg:unsup-other} as it has some relevance to our approach.
\subsection{Dependency grammar induction}
Due to the difficulty of PCFG-based constituent structure induction, most recent work on grammar induction with PCFG-like models has focused on dependency as the underlying structure. The dependency model with valence (DMV) \cite{klein-manning:2004:ACL}, which we introduced in Section \ref{sec:2:dmv}, is the most successful of these dependency-based models. As we saw, this model can be represented as an instance of a weighted CFG, and thus parameter estimation is possible with the EM algorithm. This makes extensions of both the model and the inference easier, and has led to many extensions of DMV over the past decade, as we summarize below. We divide the previous approaches into two broad categories according to whether or not the model relies on light supervision in the form of dependency rules (see also Section \ref{sec:bg:task}). The goal of every study introduced below can be seen as identifying the bias or supervision necessary for an unsupervised parser to learn accurate grammars without explicitly annotated corpora.
\paragraph{Purely unsupervised approaches} Generally, purely unsupervised methods perform worse than the lightly supervised approaches \cite{DBLP:conf/aaai/BiskH12}. We first mention that the success of most works discussed below, including the original DMV of \newcite{klein-manning:2004:ACL}, relies on a heuristic initialization technique often called the harmonic initializer, which we describe in detail in Section \ref{sec:ind:setting}. Since the EM algorithm is a local search method, it suffers from the local optima problem, meaning that it is sensitive to initialization. Intuitively, the harmonic initializer initializes the parameters to favor shorter dependencies. \newcite{gimpel-smith:2012:NAACL-HLT2} report that DMV {\it without} this initialization performs very badly; the accuracy in English experiments (the Wall Street Journal portion of the Penn Treebank) drops significantly, from 44.1 to 21.3. Most works cited below rely on this technique, but some do not; in such cases, we mention it explicitly (e.g., \newcite{marevcek-vzabokrtsky:2012:EMNLP-CoNLL}). Bayesian modeling and inference are a popular approach for enhancing probabilistic models. In dependency grammar induction, \newcite{cohen-smith:2009:NAACLHLT09}, \newcite{headdeniii-johnson-mcclosky:2009:NAACLHLT09}, and \newcite{blunsom-cohn:2010:EMNLP} are examples of such approaches.
\newcite{cohen-smith:2009:NAACLHLT09} extend the baseline DMV model with somewhat complex priors called shared logistic normal priors, which make it possible to tie the parameters of related POS tags (e.g., subcategories of nouns) so that they behave similarly. This is conceptually similar to the feature-based log-linear model \cite{bergkirkpatrick-EtAl:2010:NAACLHLT} that we introduced in Section \ref{sec:bg:loglinear}. They employ variational EM as the inference technique. \newcite{headdeniii-johnson-mcclosky:2009:NAACLHLT09} develop carefully designed Bayesian generative models, which are also estimated with variational EM. This model is one of the few examples of a {\it lexicalized} model, i.e., one utilizing words (surface forms) in addition to POS tags. It is a {\it partially} lexicalized model, meaning that words that appear fewer than 100 times in the training data are unlexicalized. Another technique introduced in this paper is random initialization with model selection; they report that performance improves by running a few iterations of EM on thousands of randomly initialized models and then picking the one with the highest likelihood. However, this procedure is expensive, and later works do not follow it. \newcite{blunsom-cohn:2010:EMNLP} is one of the current state-of-the-art methods among purely unsupervised approaches. In the shared task at the Workshop on Inducing Linguistic Structure (WILS) \cite{gelling-EtAl:2012:WILS}, it performs competitively with the lightly supervised CCG-based approach \cite{DBLP:conf/aaai/BiskH12} on average across 10 languages. The model is partially lexicalized, as in \newcite{headdeniii-johnson-mcclosky:2009:NAACLHLT09}. Though the basic model is an extension of DMV, they encode the model with Bayesian tree substitution grammars \cite{Cohn:2010:ITG:1756006.1953031}, which make it possible to model larger tree fragments than the original DMV does. \newcite{marevcek-vzabokrtsky:2012:EMNLP-CoNLL} and \newcite{marevcek-straka:2013:ACL2013} present methods that learn grammars using a principle about dependencies, which they call the {\it reducibility} principle. They argue that phrases that become the dependent of another token (the head) are often {\it reducible}, meaning that the sentence without such a phrase is probably still grammatical. They calculate the reducibility of each POS n-gram using a large raw text corpus of Wikipedia articles and develop a model that biases highly reducible sequences to become dependents. \newcite{marevcek-straka:2013:ACL2013} encode reducibility in DMV and find that their method is not sensitive to initialization. This approach nicely exploits the properties of heads and dependents, and they report state-of-the-art scores on the datasets of the CoNLL shared tasks \cite{buchholz-marsi:2006:CoNLL-X,nivre-EtAl:2007:EMNLP-CoNLL2007}. Another line of study, on various extensions and heuristics for improving DMV, has been explored by Spitkovsky and colleagues. For example, \newcite{spitkovsky-EtAl:2010:CONLL} report that the Viterbi objective sometimes leads to a better model than the EM objective, and \newcite{spitkovsky-alshawi-jurafsky:2010:NAACLHLT} present a heuristic that starts learning from shorter sentences only and gradually adds longer training sentences. \newcite{spitkovsky-alshawi-jurafsky:2013:EMNLP} is their final method.
While the reported results are impressive (very competitive with \newcite{marevcek-straka:2013:ACL2013}), the method, which tries to avoid local optima with a combination of several heuristics, e.g., changing the objective, forgetting some previous counts, and changing the training examples, is quite complex, which makes it difficult to analyze which component contributes most to the attained improvements. An important study for us is \newcite{smith-eisner-2006-acl-sa}, which explores a kind of {\it structural} bias that favors shorter dependencies. This becomes one of our baseline models in Chapter \ref{chap:induction}; see Section \ref{sec:ind:structural-const} for more details. We argue, however, that the motivation of their experiments is slightly different from that of the other studies cited above, as well as from ours. Along with a bias favoring shorter dependencies, they also investigate a technique called {\it structural annealing}, in which the strength of the imposed bias is gradually relaxed. Note that introducing such techniques increases the number of adjustable parameters (i.e., hyperparameters). \newcite{smith-eisner-2006-acl-sa} choose the best setting of those parameters in a {\it supervised} model selection setting, in which annotated development data is used to choose the best model. This, however, is not an unsupervised learning setting. In our experiments in Chapter \ref{chap:induction}, we thus do not explore the annealing technique and simply compare the effects of different structural biases, namely the shorter-dependency-length bias and our proposed bias of limiting center-embedding.
\paragraph{Lightly supervised approaches} \newcite{naseem-EtAl:2010:EMNLP} is the first model utilizing light supervision in the form of dependency rules. The rules are specified declaratively as dependencies between POS tags, such as $\textsc{verb} \rightarrow \textsc{noun}$ or $\textsc{noun} \rightarrow \textsc{adj}$. The model parameters are then learned with the {\it posterior regularization} technique \cite{journals/jmlr/GanchevGGT10}, which biases the posterior distribution at each EM iteration to put more weight on those specified dependency rules. \newcite{naseem-EtAl:2010:EMNLP} design 13 universal rules in total and show state-of-the-art scores across a number of languages. This is not a purely unsupervised approach, but it gives an important upper bound on the knowledge required to achieve reasonable accuracies on this task. For example, \newcite{DBLP:journals/tacl/BiskH13} demonstrate that scores competitive with theirs are obtainable with a smaller amount of supervision by casting the model in CCG. \newcite{sogaard:2012:WILS} presents a heuristic method that does {\it not} learn anything but just builds a parse tree deterministically; for example, it always takes the left-most verb to be the root word of the sentence. Although this is extremely simple, S{\o}gaard reports that it beats many systems submitted to the WILS shared task \cite{gelling-EtAl:2012:WILS}, suggesting that such declarative dependency rules alone can often capture the basic word order of a language. \newcite{grave-elhadad:2015:ACL-IJCNLP} is a recently proposed strong system that also relies on declarative rules. Instead of the generative models used in most works cited above, they formulate their model in a discriminative clustering framework \cite{Xu05maximummargin}, in which the objective is convex and global optimality is guaranteed.
This system becomes our strong baseline in the experiments in Chapter \ref{chap:induction}. Note that their system relies on 12 rules between POS tags in total. We explore how a model competitive with this system can be achieved with our structural constraints as well as a smaller amount of supervision.

\subsection{Other approaches}
\label{sec:bg:unsup-other}

There also exist some approaches that induce neither dependency nor constituent structures directly. Typically, for evaluation purposes, the induced structure is converted into one of those forms.

\paragraph{Common cover link} \newcite{seginer:2007:ACLMain} presents his own grammar formalism called the common cover link (CCL), which looks similar to dependency structure but differs in many respects. For example, in CCL, the words in every prefix of the sentence are fully connected by links. His parser and learning algorithm are fully incremental; he argues that the CCL structure as well as the incremental processing constraint effectively reduces the search space of the model. This approach may be conceptually similar to ours in that both try to reduce the search space of the model with a constraint motivated by human sentence processing (incremental left-to-right processing). However, his model, the grammar formalism (CCL), and the learning method are tightly coupled with each other, which makes it difficult to separate out a single component or idea of his framework for other applications. We instead investigate the effect of our structural constraint as a single component, which is much simpler, and the idea is easily portable to other applications. Though CCL is similar to dependency, he evaluates the quality of the output with constituent-based bracketing scores by converting the CCL output into the equivalent constituent representation. We thus do not compare our approach to his method directly in this thesis.

\paragraph{CCG induction} In lexicalized grammar formalisms such as CCG \cite{opac-b1095877}, each nonterminal symbol in a parse tree encodes syntactic information and is not an arbitrary symbol, unlike in previous CFG-based grammar induction approaches. This observation motivates recent attempts at inducing CCGs with a small amount of supervision. \newcite{DBLP:conf/aaai/BiskH12} and \newcite{DBLP:journals/tacl/BiskH13} present generative models over CCG trees and demonstrate that they achieve state-of-the-art scores on a number of languages in the WILS shared task dataset. For evaluation, after obtaining a CCG derivation, they extract dependencies by reading off the predicate and argument (or modifier) structures encoded in CCG categories. \newcite{bisk-hockenmaier:2015:ACL-IJCNLP} present a model extension and a thorough error analysis, while \newcite{Bisk:2015:ACLShort} show how the idea can be applied, with a small amount of manual effort, when no identity of POS tags (e.g., whether a word cluster is {\sc verb} or not) is given. The key to the success of their approach is the seed knowledge about category assignments for each input token. In CCG and related formalisms, it is known that a parse tree is built almost deterministically if every category of the input tokens is assigned correctly \cite{matuzaki:2007,lewis-steedman:2014:EMNLP2014}. In other words, the most difficult part in those parsers is the assignment of lexical categories, which highly restricts the ambiguity in the remaining parts. Bisk and Hockenmaier efficiently exploit this property of CCG by restricting the possible CCG categories for each POS tag.
Their seed knowledge is that a sentence root should be a verb or a noun, and that a noun should be an argument of a verb. They encode this knowledge in the model by stipulating that only a verb can be assigned category ${\sf S}$ and only a noun can be assigned category ${\sf N}$. Then, they expand the possible candidate categories for each POS tag in a bootstrapping manner. For example, a POS tag next to a verb is allowed to be assigned category ${\sf S \backslash S}$ or ${\sf S/S}$, and so on.\footnote{The rewrite rules of CCG are defined by a small set of combinatory rules. For example, the rule ${\sf (S \backslash N)/N~~N \rightarrow S \backslash N}$ is an instance of the forward application rule, which can be generally written as ${\sf X/Y~~Y \rightarrow X}$. The backward application rule does the opposite: ${\sf Y~~X \backslash Y \rightarrow X}$. } Figure \ref{fig:bg:ccg-seed} shows an example of this bootstrapping process, which we borrow from \newcite{DBLP:journals/tacl/BiskH13}.

\begin{figure}[t] \centering \begin{tabular}{c|c|c|c} {\it The} & {\it man} & {\it ate} & {\it quickly} \\ {\sc dt} & {\sc nns} & {\sc vbd} & {\sc rb} \\ \hline ${\sf N/N}$ & $\bm{\mathsf{N}}{\sf ,S/S}$& $\bm{\mathsf{S}}{\sf ,N/N}$&${\sf S \backslash S}$ \\ ${\sf (S/S)/(S/S)}$ & ${\sf (N \backslash N)/(N \backslash N)}$ & ${\sf S\backslash N}$ & ${\sf (N \backslash N) \backslash (N \backslash N)}$\\ &${\sf (N/N) \backslash (N/N)}$ & ${\sf (S/S) \backslash(S/S)}$ & \\ &&${\sf (S \backslash S)/(S \backslash S)}$&\\ \end{tabular} \caption{An example of the bootstrapping process for assigning category candidates in CCG induction, borrowed from Bisk and Hockenmaier (2013). {\sc dt, nns, vbd,} and {\sc rb} are POS tags. Bold categories are the initial seed knowledge, which is expanded by allowing the neighboring token to be a modifier. } \label{fig:bg:ccg-seed} \end{figure}

After this process, the parameters of the generative model are learned using variational EM. During this phase, the category spanning the whole sentence is restricted to be ${\sf S}$, or ${\sf N}$ if no verb exists in the sentence. This mechanism highly restricts the search space and allows efficient learning. Finally, \newcite{AAAI159835} explore another direction for learning CCG with little supervision. Unlike Bisk and Hockenmaier's models, which are based on gold POS tags, they try to learn the model from surface forms but with an incomplete tag dictionary mapping some words to possible categories. The essential difference between these two approaches is how the seed knowledge is provided to the model, and it is an ongoing research topic (and probably one of the main goals of unsupervised grammar induction) to specify what kind of information should be given to the model and what can be learned from such seed knowledge.

\subsection{Summary}

This section surveyed previous studies in unsupervised and lightly supervised grammar induction. As we have seen, dependency is the only structure that can be learned effectively with well-studied techniques, e.g., PCFGs and the EM algorithm, with the exception of CCG, which has the potential to replace it although the models tend to be inevitably more complex. For simplicity, our focus in this thesis is dependency, but we argue that success in dependency induction indicates that the idea could be extended to the learning of other grammars, e.g., CCG as well as more basic CFG-based constituent structures.
The key factors behind the success of previous dependency-based approaches can be divided into the following categories:
\begin{description}
\item[Initialization] The harmonic initializer is known to boost performance and is used in many previous models including \newcite{cohen-smith:2009:NAACLHLT09}, \newcite{bergkirkpatrick-EtAl:2010:NAACLHLT}, and \newcite{blunsom-cohn:2010:EMNLP}.
\item[Principles of dependency] The reducibility of \newcite{marevcek-vzabokrtsky:2012:EMNLP-CoNLL} and \newcite{marevcek-straka:2013:ACL2013} efficiently exploits a fundamental property of dependency, which makes learning more stable.
\item[Structural bias] \newcite{smith-eisner:2005:ACL} explores the effect of a shorter dependency length bias, which is similar to the harmonic initialization but more explicit.
\item[Rules on POS tags] \newcite{naseem-EtAl:2010:EMNLP} and \newcite{grave-elhadad:2015:ACL-IJCNLP} show that parameter-based constraints on POS tags can boost performance. \newcite{sogaard:2012:WILS} provides evidence that such POS tag rules are by themselves already powerful enough to achieve reasonable scores.
\end{description}
The approach most relevant to the one we present in Chapter \ref{chap:induction} is the structural bias of \newcite{smith-eisner:2005:ACL}; however, as we have mentioned, they combine the technique with annealing and with the selection of the initialization method, which are tuned with supervised model selection. Thus they do not explore the effect of a single structural bias, which is the main interest of our experiments. As another baseline, we also compare the performance with harmonically initialized models. The reducibility principle and rules on POS tags possibly have effects orthogonal to the structural bias. We will explore a small number of rules and examine their combined effect with our structural constraints, to gain insight into the effect of our constraint when some amount of external supervision is provided.

\chapter{Multilingual Dependency Corpora}
\label{chap:corpora}

Cross-linguality is an important concept in this thesis. In the following chapters, we explore syntactic regularities or universals that exist across languages in several ways, including a corpus analysis, a supervised parsing study (Chapter \ref{chap:transition}), and an unsupervised parsing study (Chapter \ref{chap:induction}). All these studies were made possible by recent efforts toward the development of multilingual corpora. This chapter summarizes the properties and statistics of the datasets we use in our experiments. First, we survey the problem of the ambiguity in the definition of {\it head} that we noted when introducing dependency grammars in Section \ref{sec:2:dependency}. This problem is critical for our purpose; for example, if our unsupervised induction system performs badly for a particular language, we do not know whether the reason lies in the (possibly idiosyncratic) annotation style or in the inherent difficulty of that language (see also Section \ref{sec:ind:eval}). In particular, we describe the duality of the head notion, i.e., {\it function} head and {\it content} head, which is the main reason why there can be several dependency representations for a particular syntactic construction. We then summarize the characteristics of the treebanks that we use.
The first dataset, CoNLL shared tasks dataset \cite{buchholz-marsi:2006:CoNLL-X,nivre-EtAl:2007:EMNLP-CoNLL2007} is the first large collection of multilingual dependency treebanks (19 languages in total) in the literature, although is just a collection of existing treebanks and lacking annotation consistency across languages. This dataset thus may not fully adequate for our cross-linguistic studies. We will introduce this dataset and use it in our experiments mainly because it was our primary dataset in the preliminary version of the current study \cite{noji-miyao:2014:Coling}, which was done when more adequate dataset such as Universal Dependencies \cite{DEMARNEFFE14.1062} were not available. We use this dataset only for the experiments in Chapter \ref{chap:transition}. Universal Dependencies (UD) is a recent initiative to develop cross-linguistically consistent treebank annotation for many languages \cite{nivre2015ud}. We choose this dataset as our primary resource for cross-linguistic experiments since currently it seems the best dataset that keeps the balance between the typological diversity in terms of the number of languages or language families and the annotation consistency. We finally introduce another recent annotation project called Google universal treebanks \cite{mcdonald-EtAl:2013:Short}. We use this dataset only for our unsupervised parsing experiments in Chapter \ref{chap:induction} mainly for comparing the performance of our model with the current state-of-the-art systems. This dataset is a preliminary version of UD, so its data size and consistency is inferior. We summarize the major differences of approaches in two corpora in Section \ref{sec:corpora:google}. \section{Heads in Dependency Grammars} \label{sec:corpora:heads} \begin{figure}[t] \begin{minipage}[t]{.99\linewidth} \centering \begin{dependency} [theme=simple] \begin{deptext}[column sep=0.3cm] The \& complicated \& language \& in \& the \& huge \& new \& law \& has \& muddied \& the \& fight \\ \end{deptext} \depedge{3}{1}{} \depedge{3}{2}{} \depedge{3}{4}{} \depedge{4}{8}{} \depedge{8}{7}{} \depedge{8}{6}{} \depedge{8}{5}{} \depedge{9}{3}{} \depedge{9}{10}{} \depedge{10}{12}{} \depedge{12}{11}{} \deproot[edge unit distance=3ex]{9}{} \end{dependency} \subcaption{Analysis on CoNLL dataset.} \label{subfig:corpora:conll-en} \end{minipage} \begin{minipage}[t]{.99\linewidth} \centering \begin{dependency} [theme=simple] \begin{deptext}[column sep=0.3cm] The \& complicated \& language \& in \& the \& huge \& new \& law \& has \& muddied \& the \& fight \\ \end{deptext} \depedge{3}{1}{} \depedge{3}{2}{} \depedge[red,thick]{3}{8}{} \depedge[red,thick]{8}{4}{} \depedge{8}{7}{} \depedge{8}{6}{} \depedge{8}{5}{} \depedge[red,thick]{10}{3}{} \depedge[red,thick]{10}{9}{} \depedge{10}{12}{} \depedge{12}{11}{} \deproot[edge unit distance=3ex,red,thick]{10}{} \end{dependency} \subcaption{Analysis on Stanford universal dependencies (UD).} \label{subfig:corpora:ud-en} \end{minipage} \begin{minipage}[t]{.99\linewidth} \centering \begin{dependency} [theme=simple] \begin{deptext}[column sep=0.3cm] The \& complicated \& language \& in \& the \& huge \& new \& law \& has \& muddied \& the \& fight \\ \end{deptext} \depedge{3}{1}{} \depedge{3}{2}{} \depedge{3}{4}{} \depedge{4}{8}{} \depedge{8}{7}{} \depedge{8}{6}{} \depedge{8}{5}{} \depedge[red,thick]{10}{3}{} \depedge[red,thick]{10}{9}{} \depedge{10}{12}{} \depedge{12}{11}{} \deproot[edge unit distance=3ex,red,thick]{10}{} \end{dependency} \subcaption{Analysis on Google universal 
treebanks.} \label{subfig:corpora:google-en} \end{minipage} \caption{Each dataset that we use employs the different kind of annotation style. Bold arcs are ones that do not exist in the CoNLL style tree (a). } \label{fig:corpora:compare-en} \end{figure} Let us first see the examples. Figure \ref{fig:corpora:compare-en} shows how the analysis of an English sentence would be changed across the datasets we use. Every analysis is {\it correct} under some linguistic theory. We can see that two analyses between the CoNLL style (Figure \ref{subfig:corpora:conll-en}) and the UD style (Figure \ref{subfig:corpora:ud-en}) are largely different, in particular around function words (e.g., {\it in} and {\it has}). \paragraph{Function and content heads} \newcite{zwicky199313} argues that there is a duality in the notion of heads, namely, function heads and content heads. In the view of function heads, the head of each constituent is the word that determines the syntactic role of it. The CoNLL style tree is largely function head-based; For example, the head in constituent ``in the huge new law'' in Figure \ref{subfig:corpora:conll-en} is ``in'', since this preposition determines the syntactic role of the phrase (i.e., prepositional phrase modifying another noun or verb phrase). The construction of ``has muddied'' is similar; In the syntactic view, the auxiliary ``has'' becomes the head since it is this word that determines the aspect of this sentence (present perfect). \begin{figure}[t] \centering {\small \begin{dependency} [theme=simple] \begin{deptext}[column sep=0.3cm] \Ja{直接} \& \Ja{の} \& \Ja{取りやめ} \& \Ja{は} \& \Ja{今回} \& \Ja{が} \& \Ja{事実} \& \Ja{上} \& \Ja{初めて} \& \Ja{と} \& \Ja{いう} \\ \textsc{noun} \& \textsc{adp} \& \textsc{noun} \& \textsc{adp} \& \textsc{noun} \& \textsc{adp} \& \textsc{noun} \& \textsc{noun} \& \textsc{adv} \& \textsc{adp} \& \textsc{verb} \\ \Ja{direct} \& \Ja{POS} \& \Ja{cancellation} \& \Ja{NOM} \& \Ja{this} \& \Ja{NOM} \& \Ja{fact} \& \Ja{in} \& \Ja{first} \& \Ja{was} \& \Ja{heard} \\ \end{deptext} \depedge{3}{1}{} \depedge{1}{2}{} \depedge{9}{3}{} \depedge{3}{4}{} \depedge{9}{5}{} \depedge{5}{6}{} \depedge{8}{7}{} \depedge{9}{8}{} \depedge{11}{9}{} \depedge{9}{10}{} \deproot[edge unit distance=3ex]{11}{} \end{dependency} } I heard that this was in fact the first time of the direct cancellation. \caption{A dependency tree in the Japanese UD. \textsc{noun, adv, verb,} and \textsc{adp} are assigned POS tags.} \label{fig:corpora:ud-jp} \end{figure} In another view of content heads, the head of each constituent is selected to be the word that most contributes to the semantics of it. This is the design followed in the UD scheme \cite{nivre2015ud}. For example, in Figure \ref{subfig:corpora:ud-en} the head of constituent ``in the huge new law'' is the noun ``law'' instead of the preposition. Thus, in UD, every dependency arc is basically from a content word (head) to another content or function word (dependent). Figure \ref{fig:corpora:ud-jp} shows an example of sentence in Japanese treebank of UD. We can see that every function word (e.g., {\sc adp}) is attached to some content word, such as {\sc noun} and {\sc adv} (adverb). 
\paragraph{Other variations} \begin{figure}[t] \begin{minipage}[t]{.49\linewidth} \centering \begin{dependency} [theme=simple] \begin{deptext}[column sep=0.3cm] apples \& oranges \& and \& lemons \\ \end{deptext} \depedge{3}{1}{} \depedge{3}{2}{} \depedge{3}{4}{} \deproot[edge unit distance=2ex]{3}{} \end{dependency} \subcaption{Prague style} \label{subfig:corpora:prague-coord} \end{minipage} \begin{minipage}[t]{.49\linewidth} \centering \begin{dependency} [theme=simple] \begin{deptext}[column sep=0.3cm] apples \& oranges \& and \& lemons \\ \end{deptext} \depedge{1}{2}{} \depedge{1}{3}{} \depedge{3}{4}{} \deproot[edge unit distance=2ex]{1}{} \end{dependency} \subcaption{Mel'\v{c}ukian style} \label{subfig:corpora:melcuk-coord} \end{minipage} \begin{minipage}[t]{.49\linewidth} \centering \begin{dependency} [theme=simple] \begin{deptext}[column sep=0.3cm] apples \& oranges \& and \& lemons \\ \end{deptext} \depedge{1}{2}{} \depedge{1}{3}{} \depedge{1}{4}{} \deproot[edge unit distance=2ex]{1}{} \end{dependency} \subcaption{Stanford style} \label{subfig:corpora:stanford-coord} \end{minipage} \begin{minipage}[t]{.49\linewidth} \centering \begin{dependency} [theme=simple] \begin{deptext}[column sep=0.3cm] apples \& oranges \& and \& lemons \\ \end{deptext} \deproot[edge unit distance=2ex]{1}{} \deproot[edge unit distance=2ex]{2}{} \deproot[edge unit distance=2ex]{3}{} \deproot[edge unit distance=2ex]{4}{} \end{dependency} \subcaption{Teni\'{e}rian style} \label{subfig:corpora:tenierian-coord} \end{minipage} \caption{Four styles of annotation for coordination.} \label{fig:corpora:coord} \end{figure} Another famous construction that has several variations in analysis is coordination, which is inherently {\it multiple-head} construction and is difficult to deal with in dependency. \newcite{popel-EtAl:2013:ACL2013} give detailed analysis of coordination structures employed in several existing treebanks. There are roughly four families of approaches \cite{halmedt} in existing treebanks as shown in Figure \ref{fig:corpora:coord}. Each annotation style has the following properties: \begin{description} \item[Prague] All conjuncts are headed by the conjunction \cite{cs}. \item[Mel'\v{c}ukian] The first/last conjunct is the head, and others are organized in a chain \cite{Melcuk:1988}. \item[Stanford] The first conjunct is the head, others are attached directly to it \cite{mm2008stdm}. \item[Teni\'{e}rian] There is no common head, and all conjuncts are attached directly to the node modified by the coordination structure \cite{Tesniere1959}. \end{description} Note that through our experiments we do not make any claims on which annotation style is the most appropriate for dependency analysis. In other words, we do not want to commit to a particular linguistic theory. The main reason why we focus on UD is that it is the dataset with the highest annotation consistency across languages now available, as we describe in the following. \section{CoNLL Shared Tasks Dataset} This dataset consists of 19 language treebanks used in the CoNLL shared tasks 2006 \cite{buchholz-marsi:2006:CoNLL-X} and 2007 \cite{nivre-EtAl:2007:EMNLP-CoNLL2007}, in which the task was the multilingual supervised dependency parsing. See the list of languages and statistics in Table \ref{tab:corpora:conll}. There is generally no annotation consistency across languages in various constructions. 
For example, the four types of coordination annotation styles all appear in this dataset; Prague style is used in, e.g., Arabic, Czech, and English, while Mel'\v{c}ukian style is found in, e.g., German, Japanese, and Swedish, etc. Function and content head choices are also mixed across languages as well as the constructions in each language. For example, in English, the basic style is function head-based while some exceptions are found in e.g., infinitive marker in a verb phrase, such as ``... allow executives to report ...'' in which the head of ``to'' is ``report'' instead of ``allow''. The idea of regarding a determiner as a head is the extreme of function head-based view \cite{abney1987english,Hudson:2004-01-01T00:00:00:0929-998X:7}, and most treebanks treat a noun as a head while the determiner head is also employed in some treebank, such as Danish. \newcite{halmedt} gives more detailed survey on the differences of annotation styles in this dataset. \begin{table}[t] \centering \scalebox{1.0}{ \begin{tabular}[t]{l r r r r r r r} \hline Language & \#Sents. & \multicolumn{4}{c}{\#Tokens} & Punc. & Av. len.\\ & & $\leq$10& $\leq$15 & $\leq$20 & $\leq\infty$ & (\%) & \\ \hline Arabic & 3,043 & 2,833 & 5,193 & 8,656 & 116,793 & 8.3 & 38.3\\ Basque & 3,523 & 7,865 & 19,351 & 31,384 & 55,874 & 18.5 & 15.8\\ Bulgarian & 13,221 & 34,840 & 75,530 & 114,687 & 196,151 & 14.3 & 14.8\\ Catalan & 15,125 & 9,943 & 31,020 & 66,487 & 435,860 & 11.6 & 28.8\\ Chinese & 57,647 & 269,772 & 326,275 & 337,908 & 342,336 & 0.0 & 5.9\\ Czech & 25,650 & 48,452 & 110,516 & 191,635 & 437,020 & 14.7 & 17.0\\ Danish & 5,512 & 10,089 & 24,432 & 40,221 & 100,238 & 13.9 & 18.1\\ Dutch & 13,735 & 40,816 & 75,665 & 110,118 & 200,654 & 11.2 & 14.6\\ English & 18,791 & 13,969 & 47,711 & 106,085 & 451,576 & 12.2 & 24.0\\ German & 39,573 & 66,741 & 164,738 & 292,769 & 705,304 & 13.5 & 17.8\\ Greek & 2,902 & 2,851 & 8,160 & 16,076 & 70,223 & 10.1 & 24.1\\ Hungarian & 6,424 & 8,896 & 23,676 & 42,796 & 139,143 & 15.5 & 21.6\\ Italian & 3,359 & 5,035 & 12,350 & 21,599 & 76,295 & 14.7 & 22.7\\ Japanese & 17,753 & 52,399 & 81,561 & 105,250 & 157,172 & 11.6 & 8.8\\ Portuguese & 9,359 & 13,031 & 30,060 & 54,804 & 212,545 & 14.0 & 22.7\\ Slovene & 1,936 & 4,322 & 9,647 & 15,555 & 35,140 & 18.0 & 18.1\\ Spanish & 3,512 & 3,968 & 9,716 & 18,007 & 95,028 & 12.5 & 27.0\\ Swedish & 11,431 & 20,946 & 55,670 & 96,907 & 197,123 & 10.9 & 17.2\\ Turkish & 5,935 & 21,438 & 34,449 & 44,110 & 69,695 & 16.0 & 11.7\\\hline \end{tabular} } \caption{Overview of CoNLL dataset (mix of training and test sets). Punc. is the ratio of punctuation tokens in a whole corpus. Av. len. is the average length of a sentence.} \label{tab:corpora:conll} \end{table} The dataset consists of the following treebanks. Note that some languages (Arabic, Chinese, Czech, and Turkish) are used in both 2006 and 2007 shared tasks in different versions; in which case we use only 2007 data. Also a number of treebanks, such as Basque, Chinese, English, etc, are annotated originally in phrase-structure trees, which are converted to dependency trees with heuristics rules extracting a head token from each constituent. \begin{description} \item[Arabic:] Prague Arabic Dependency Treebank 1.0 \cite{ar}. \item[Basque:] 3LB Basque treebank \cite{eu}. \item[Bulgarian:] BulTreeBank \cite{bg}. \item[Catalan:] The Catalan section of the CESS-ECE Syntactically and Semantically Annotated Corpora \cite{catalan}. \item[Chinese:] Sinica treebank \cite{sinica}. 
\item[Czech:] Prague Dependency Treebank 2.0 \cite{cs}. \item[Danish:] Danish Dependency Treebank \cite{da}. \item[Dutch:] Alpino treebank \cite{nl}. \item[English:] The Wall Street Journal portion of the Penn Treebank \cite{Marcus93buildinga}. \item[German:] TIGER treebank \cite{de}. \item[Greek:] Greek Dependency Treebank \cite{el}. \item[Hungarian:] Szeged treebank \cite{hu}. \item[Italian:] A subset of the balanced section of the Italian SyntacticSemantic Treebank \cite{it}. \item[Japanese:] Japanese Verbmobil treebank \cite{ja}. This is mainly the collection of speech conversations and thus the average length is relatively short. \item[Portuguese:] The Bosque part of the Floresta sint\'{a}(c)tica \cite{pt} covering both Brazilian and European Portuguese. \item[Slovene:] Slovene Dependency Treebank \cite{sl}. \item[Spanish:] Cast3LB \cite{spanish}. \item[Swedish:] Talbanken05 \cite{sv}. \item[Turkish:] METU-Sabancı Turkish Treebank used in CoNLL 2007 \cite{tr}. \end{description} \section{Universal Dependencies} UD is a collection of treebanks each of which is designed to follow the annotation guideline based on the Stanford typed dependencies \cite{mm2008stdm}, which is in most cases content head-based as we mentioned in Section \ref{sec:corpora:heads}. We basically use the version 1.1 of this dataset, from which we exclude Finnish-FTB since UD also contains another Finnish treebank, which is larger, and add Japanese, which is included in version 1.2 dataset first. Typically a treebank is created by first transforming trees in an existing treebank with some script into the trees to follow the annotation guideline, and then manually correcting the errors. Another characteristic of this dataset is the set of POS tags and dependency labels are consistent across languages. Appendix \ref{chap:app:ud-pos} summarizes the POS tagset of UD. We do not discuss dependency labels since we omit them. Below is the list of sources of treebanks. We omit the languages if the source is the same as the CoNLL dataset described above. Note the source of some languages, such as English and Japanese, are changed from the previous dataset. See Table \ref{tab:corpora:ud} for the list of all 19 languages as well as the statistics. \begin{description} \item[Croatian:] SETimes.HR \cite{hr}. \item[English:] English Web \cite{silveira14gold}. \item[Finnish:] Turku Dependency Treebank \cite{fi}. \item[German:] Google universal treebanks (see Section \ref{sec:corpora:google}). \item[Hebrew:] Hebrew Dependency Treebank \cite{he}. \item[Indonesian:] Google universal treebanks (see Section \ref{sec:corpora:google}). \item[Irish:] Irish Dependency Treebank \cite{ga}. \item[Japanese]: Kyoto University Text Corpus 4.0 \cite{Kawahara02constructionof,ja-ud}. \item[Persian]: Uppsala Persian Dependency Treebank \cite{faupdt}. \item[Spanish]: Google universal treebanks (see Section \ref{sec:corpora:google}). \end{description} \begin{table}[t] \centering \scalebox{1.0}{ \begin{tabular}[t]{l r r r r r r r} \hline Language & \#Sents. & \multicolumn{4}{c}{\#Tokens} & Punc. & Av. 
len.\\ & & $\leq$10& $\leq$15 & $\leq$20 & $\leq\infty$ & & \\ \hline Basque & 5,273 & 19,597 & 38,612 & 51,305 & 60,563 & 17.3 & 11.4\\ Bulgarian & 9,405 & 27,903 & 58,386 & 84,318 & 125,592 & 14.3 & 13.3\\ Croatian & 3,957 & 3,850 & 12,718 & 26,614 & 87,765 & 12.9 & 22.1\\ Czech & 87,913 & 160,930 & 377,994 & 654,559 & 1,506,490 & 14.6 & 17.1\\ Danish & 5,512 & 10,089 & 24,432 & 40,221 & 100,238 & 13.8 & 18.1\\ English & 16,622 & 36,189 & 74,361 & 115,511 & 254,830 & 11.7 & 15.3\\ Finnish & 13,581 & 39,797 & 85,601 & 123,036 & 181,022 & 14.6 & 13.3\\ French & 16,468 & 13,988 & 51,525 & 106,303 & 400,627 & 11.1 & 24.3\\ German & 15,918 & 24,418 & 74,400 & 135,117 & 298,614 & 13.0 & 18.7\\ Greek & 2,411 & 2,229 & 6,707 & 13,493 & 59,156 & 10.6 & 24.5\\ Hebrew & 6,216 & 5,527 & 17,575 & 35,128 & 158,855 & 11.5 & 25.5\\ Hungarian & 1,299 & 1,652 & 5,196 & 9,913 & 26,538 & 14.6 & 20.4\\ Indonesian & 5,593 & 6,890 & 23,009 & 42,749 & 121,923 & 14.9 & 21.7\\ Irish & 1,020 & 1,901 & 3,695 & 6,202 & 23,686 & 10.6 & 23.2\\ Italian & 12,330 & 24,230 & 51,033 & 79,901 & 277,209 & 11.2 & 22.4\\ Japanese & 9,995 & 6,832 & 24,657 & 54,395 & 267,631 & 10.8 & 26.7\\ Persian & 6,000 & 6,808 & 18,011 & 34,191 & 152,918 & 8.7 & 25.4\\ Spanish & 16,006 & 10,489 & 40,087 & 88,665 & 432,651 & 11.0 & 27.0\\ Swedish & 6,026 & 13,045 & 31,343 & 51,333 & 96,819 & 10.7 & 16.0\\\hline \end{tabular} } \caption{Overview of UD dataset (mix of train/dev/test sets). Punc. is the ratio of punctuation tokens in a whole corpus. Av. len. is the average length of a sentence.} \label{tab:corpora:ud} \end{table} \section{Google Universal Treebanks} \label{sec:corpora:google} This dataset is a collection of 12 languages treebanks, i.e., Brazilian-Portuguese, English, Finnish, French, German, Italian, Indonesian, Japanese, Korean, Spanish and Swedish. Most treebanks are created by hand in this project except the following two languages: \begin{description} \item[English:] Automatically convert from the Wall Street Journal portion of the Penn Treebank \cite{Marcus93buildinga} (with a different conversion method than the CoNLL dataset). \item[Swedish:] Talbanken05 \cite{sv} as in CoNLL dataset. \end{description} Basically every treebank follow the annotation guideline based on the Stanford typed dependencies as in UD, but contrary to UD, the annotation of Google treebanks is not fully content head-based. As we show in Figure \ref{subfig:corpora:google-en}, it annotates specific constructions in function head-based, in particular {\sc adp} phrases. We do not summarize the statistics of this dataset here as we use it only in our experiments in Chapter \ref{chap:induction} where we will see the statistics of the subset of the data that we use (see Section \ref{sec:ind:dataset}). \chapter{Left-corner Transition-based Dependency Parsing} \label{chap:transition} Based on several recipes introduced in Chapter \ref{chap:bg}, we now build a left-corner parsing algorithm operating on dependency grammars. In this chapter, we formalize the algorithm as a transition system for dependency parsing \cite{Nivre:2008} that roughly corresponds to the dependency version of a push-down automaton (PDA). We have introduced PDAs with the left-corner parsing strategy for CFGs (Section \ref{sec:bg:left-corner-pda}) as well as a conversion method of any projective dependency trees into an equivalent CFG parse (Section \ref{sec:bilexical}). 
Thus one may suspect that it is straightforward to obtain a left-corner parsing algorithm for dependency grammars by, e.g., developing a CFG parser that builds a CFG parse encoding dependency information at each nonterminal symbol. \REVISE{ In this chapter, however, we take a different approach and build the algorithm in a non-trivial way. One reason for this is that such a CFG-based approach cannot be an {\it incremental} algorithm. In contrast, our algorithm in this chapter is incremental; that is, it can construct a partial parse on the stack without seeing the future input tokens. Incrementality is important for assessing parser performance in comparison to other existing parsing methods, which basically assume incremental processing. We perform such empirical comparisons in Sections \ref{sec:analysis} and \ref{sec:parse}. For example, let us assume that we want to build the parse in Figure \ref{subfig:abigdog:cfg}, which corresponds to the CFG parse for the dependency tree of ``a big dog''. To recognize this parse with the left-corner PDA in Section \ref{sec:bg:left-corner-pda}, after shifting token ``a'' (which becomes X[a]), the PDA may convert it to the symbol ``X[dog]/X[dog]''. However, to create such a symbol, we would have to know at this point that ``dog'' will appear in the remaining input, which is impossible in incremental parsing. This contrasts with the left-corner parser for phrase-structure grammars that we considered in Section \ref{sec:bg:left-corner-pda}, in which there is only a finite inventory of nonterminals that might be predicted. To enable incremental parsing, the algorithm we formalize in this chapter does not introduce such symbols. We do so by introducing a new concept, a {\it dummy} node, which abstracts the predicted structure of a subtree in a compact way. Another important point is that, since this algorithm directly operates on a dependency tree (not via a CFG form), we can gain intuition into how the left-corner parser builds a dependency parse tree. This becomes important when developing an efficient tabulation algorithm with head-splitting (Section \ref{sec:2:sbg}) in Chapter \ref{chap:induction}. } \REVISE{ We formally define our algorithm as a {\it transition system}, a stack-based formalization similar to push-down automata, which is the most popular way of obtaining algorithms for dependency grammars \cite{Nivre2003,Yamada03,Nivre:2008,RodriguezNivre2013divisible}. As we discussed in Section \ref{sec:bg:left-corner-pda}, a left-corner parser can capture the degree of center-embedding of a construction by its stack depth. Our algorithm preserves this property, and its stack depth increases only when processing dependency structures involving center-embedding. } The empirical part of this chapter comprises two kinds of experiments. First, we perform a corpus analysis to show that our left-corner algorithm consistently requires less stack depth than other algorithms to recognize annotated trees across languages. The result also suggests the existence of a syntactic universal by which deeper center-embedding is a rare construction across languages, a point that has not yet been quantitatively examined cross-linguistically. The second experiment is a supervised parsing experiment, which can be seen as an alternative way to assess the parser's ability to capture important syntactic regularities.
In particular, we will find that the parser using our left-corner algorithm is consistently less sensitive, across languages, to decoding constraints that bound the stack depth. Conversely, the performance of other dependency parsers, such as the arc-eager parser, is largely affected by the same constraints. \REVISE{ The motivation behind these comparisons is to examine whether the stack depth of a left-corner parser is in fact a meaningful measure, among the alternatives, for explaining the syntactic universal, which would be valuable for other applications such as the unsupervised grammar induction that we explore in Chapter \ref{chap:induction}. } The first experiment is a {\it static} analysis, which strictly analyzes the observed tree forms in the treebanks, while the second experiment takes {\it parsing errors} into account. Though the result of the first experiment makes a clearer case for a universal property of language, the result of the second experiment might also be important for real applications. Specifically, we will find that the rate of performance drop under a decoding constraint is smaller than would be expected from the coverage results of the first experiment. This suggests that a good approximation of the syntactic structures observed in treebanks is available within a highly restricted space if we allow a small portion of parse errors. Since real applications always suffer from parse errors, this result is more appealing for finding a good constraint to restrict the possible tree structures.

This chapter proceeds as follows: since our empirical concern is the relative performance of our left-corner algorithm compared to existing transition-based algorithms, we begin the discussion in this chapter with a survey of stack depth behavior in existing algorithms in Section \ref{sec:others}. This discussion is an extension of a preliminary survey about the incrementality of transition systems by \newcite{nivre:2004:IncrementalParsing}, which is (to our knowledge) the only study discussing how the number of stack elements increases for particular dependency structures in a given algorithm. Then, in Section \ref{sec:left-corner}, we develop our new transition system that follows a left-corner parsing strategy for dependency grammars and discuss the formal properties of the system, such as its spurious ambiguity and the implications thereof, which are closely related to the spurious ambiguity problem we discussed in Section \ref{sec:bilexical}. The empirical part is devoted to Sections \ref{sec:analysis} and \ref{sec:parse}, focusing on the static corpus analysis and supervised parsing experiments, respectively. Finally, we give a discussion along with the relevant previous studies in Section \ref{sec:relatedwork} to conclude this chapter. The preliminary version of this chapter appeared as \newcite{noji-miyao:2015:jnlp}, which was itself an extension of \newcite{noji-miyao:2014:Coling}. Although these previous versions limited the dataset to the one used in the CoNLL shared tasks \cite{buchholz-marsi:2006:CoNLL-X,nivre-EtAl:2007:EMNLP-CoNLL2007}, we add a new analysis on Universal Dependencies \cite{DEMARNEFFE14.1062} (see also Chapter \ref{chap:corpora}). In total, we analyze 38 treebanks across 26 languages.

\section{Notations}
\label{sec:trans:notation}

We first introduce several important concepts and notations used in this chapter.

\paragraph{Transition system} Every parsing algorithm presented in this chapter can be formally defined as a transition system.
The description below is rather informal; see \newcite{Nivre:2008} for more details. A transition system is an abstract state machine that processes sentences and produces parse trees. It has a set of {\it configurations} and a set of {\it transition actions} applied to a configuration. Each system defines an {\it initial configuration} given an input sentence. The parsing process proceeds by repeatedly applying an action to the current configuration. After a finite number of transitions the system arrives at a {\it terminal configuration}, and the dependency tree is read off that configuration. Formally, each configuration is a tuple $(\sigma,\beta,A)$; here, $\sigma$ is a stack, and we use a vertical bar to signify the append operation, e.g., $\sigma=\sigma'|\sigma_1$ denotes that $\sigma_1$ is the topmost element of stack $\sigma$. Further, $\beta$ is an input buffer consisting of the token indexes that have yet to be processed; here, $\beta=j|\beta'$ indicates that $j$ is the first element of $\beta$. Finally, $A \subseteq V_w \times V_w$ is a set of arcs, where $V_w$ is the set of token indexes for sentence $w$.

\paragraph{Transition-based parser} In this chapter we distinguish between two similar terms, a transition system and a transition-based parser. A transition system formally characterizes how a tree is constructed via transitions between configurations. A parser, on the other hand, is built on a transition system; it selects the best {\it action sequence} (i.e., the best parse tree) for an input sentence, typically with some scoring model. Since a transition system abstracts the way a parse tree is constructed, when we mention a {\it parsing algorithm}, it usually refers to a transition system, not a parser. Most of the remainder of this chapter is about transition systems, except for Section \ref{sec:parse}, in which we compare the performance of several parsers via supervised parsing experiments.
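To make the notation concrete, the following is a minimal Python sketch (ours, for illustration only; the names are not taken from any existing implementation) of the configuration tuple $(\sigma,\beta,A)$ with token indexes as integers. The stack top $\sigma_1$ in the $\sigma=\sigma'|\sigma_1$ notation corresponds to the last element of the stack list.

\begin{verbatim}
# A minimal sketch of the configuration tuple (sigma, beta, A).
# Tokens are integer indexes into the sentence; the topmost stack
# element sigma_1 is the last element of the `stack` list.
from collections import namedtuple

Configuration = namedtuple("Configuration", ["stack", "buffer", "arcs"])

def initial(sentence):
    """Initial configuration: empty stack, all token indexes in the buffer."""
    return Configuration(stack=[], buffer=list(range(len(sentence))), arcs=set())

def is_terminal(config):
    """A typical terminal condition: the buffer is exhausted and a single
    element (the root's subtree or spine) remains on the stack."""
    return not config.buffer and len(config.stack) == 1

c = initial("dogs run fast".split())
print(c)   # Configuration(stack=[], buffer=[0, 1, 2], arcs=set())
\end{verbatim}

The transition systems below differ in the actions they define over such configurations (and, for our left-corner system, in what a single stack element is).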
\paragraph{Center-embedded dependency structure} \begin{figure}[t] \centering \begin{minipage}[t]{.3\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.5cm] a \& big \& dog \\ \end{deptext} \depedge{3}{1}{} \depedge{3}{2}{} \end{dependency} \subcaption{}\label{subfig:abigdog} \end{minipage} \begin{minipage}[t]{.3\linewidth} \centering \begin{tikzpicture}[sibling distance=10pt] \Tree [.X[dog] [.X[a] a ] [.X[dog] [.X[big] big ] [.X[dog] dog ] ] ] \end{tikzpicture} \subcaption{}\label{subfig:abigdog:cfg} \end{minipage} \vspace{10pt} \begin{minipage}[t]{.25\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.3cm] dogs \& run \& fast \\ \end{deptext} \depedge{2}{1}{} \depedge{2}{3}{} \end{dependency} \subcaption{}\label{subfig:dogsrunfast} \end{minipage} \begin{minipage}[t]{.35\linewidth} \centering \begin{tikzpicture}[sibling distance=10pt] \Tree [.X[run] [.X[run] [.X[dogs] dogs ] [.X[run] run ] ] [.X[fast] fast ] ] \end{tikzpicture} \subcaption{}\label{subfig:dogsrunfast:cfg1} \end{minipage} \begin{minipage}[t]{.35\linewidth} \centering \begin{tikzpicture}[sibling distance=10pt] \Tree [.X[run] [.X[dogs] dogs ] [.X[run] [.X[run] run ] [.X[fast] fast ] ] ] \end{tikzpicture} \subcaption{}\label{subfig:dogsrunfast:cfg2} \end{minipage} \caption{Conversions from dependency trees into CFG parses; (a) can be uniquely converted to (b), while (c) can be converted to both (d) and (e).}\label{fig:reduction} \end{figure} The concept of center-embedding introduced in Section \ref{sec:bg:embedding} is originally defined on a constituent structure, or a CFG parse. Remember that a dependency tree also encodes constituent structures implicitly (see Figure \ref{fig:reduction}) but the conversion from a dependency tree into a CFG parse (in CNF) is not unique, i.e., there is a spurious ambiguity (see Section \ref{sec:bilexical}). This ambiguity implies there is a subtlety for defining the degree of center-embedding for a dependency structure. We argue that the tree structure of a given dependency tree (i.e., whether it belongs to center-embedding) cannot be determined by a given tree itself; We can determine the tree structure of a dependency tree {\it only if} we have some one-to-one conversion method from a dependency tree to a CFG parse. For example some conversion method may always convert a tree of Figure \ref{subfig:dogsrunfast} into the one of Figure \ref{subfig:dogsrunfast:cfg1}. In other words, the tree structure of a dependency tree should be discussed along with such a conversion method. We discuss this subtlety more in Section \ref{sec:oracle}. We avoid this ambiguity for a while by restricting our attention to the tree structures like Figure \ref{subfig:abigdog} in which we can obtain the corresponding CFG parse uniquely. For example the dependency tree in Figure \ref{subfig:abigdog} is an example of a {\it right-branching} dependency tree. Similarly we call a given dependency tree is center-embedding, or left- (right-)branching, depending on the implicit CFG parse when there is no conversion ambiguity. \section{Stack Depth of Existing Transition Systems} \label{sec:others} This section surveys how the stack depth of existing transition systems grows given a variety of dependency structures. These are used as baseline systems in our experiments in Sections \ref{sec:analysis} and \ref{sec:parse}. 
\subsection{Arc-standard} \label{sec:standard} \begin{figure}[t] \centering \begin{minipage}[b]{.24\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.4cm] $a$ \& $b$ \& $c$ \\ \end{deptext} \depedge{1}{2}{} \depedge{2}{3}{} \end{dependency} \subcaption{}\label{subfig:right-dep1} \end{minipage} \begin{minipage}[b]{.24\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.4cm] $a$ \& $b$ \& $c$ \\ \end{deptext} \depedge{3}{1}{} \depedge{3}{2}{} \end{dependency} \subcaption{}\label{subfig:right-dep2} \end{minipage} \begin{minipage}[b]{.24\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.4cm] $a$ \& $b$ \& $c$ \\ \end{deptext} \depedge{1}{3}{} \depedge{3}{2}{} \end{dependency} \subcaption{}\label{subfig:right-dep3} \end{minipage} \begin{minipage}[b]{.24\linewidth} \centering \begin{tikzpicture}[sibling distance=10pt] \tikzset{level distance=25pt} \Tree [.X [.X[$a$] $a$ ] [.X [.X[$b$] $b$ ] [.X[$c$] $c$ ] ] ] \end{tikzpicture} \subcaption{}\label{subfig:right-dep-constituency} \end{minipage} \caption{(a)-(c) Right-branching dependency trees for three words and (d) the corresponding CFG parse.} \label{fig:right-deps} \end{figure} The arc-standard system \cite{nivre:2004:IncrementalParsing} consists of the following three transition actions, with $(h, d)$ representing a dependency arc from $h$ (head) to $d$ (dependent). \begin{itemize} \item {\sc Shift}: $(\sigma,j|\beta,A) \mapsto (\sigma | j, \beta, A)$; \item {\sc LeftArc}: $(\sigma|\sigma'_2|\sigma'_1, \beta, A) \mapsto (\sigma|\sigma'_1, \beta, A \cup \{ (\sigma'_1, \sigma'_2) \})$; \item {\sc RightArc}: $(\sigma|\sigma'_2|\sigma'_1, \beta, A) \mapsto (\sigma|\sigma'_2, \beta, A \cup \{ (\sigma'_2, \sigma'_1) \})$. \end{itemize} We first observe here that the stack depth of the arc-standard system increases linearly for a right-branching structure, such as $a^\curvearrowright b^\curvearrowright c^\curvearrowright \cdots$, in which the system first shifts all words on the stack before connecting each pair of words. \newcite{nivre:2004:IncrementalParsing} analyzed this system and observed that stack depth grows when processing a dependency tree that becomes right-branching with a CFG conversion. Figure \ref{fig:right-deps} shows these dependency trees for three words; the system must construct a subtree of $b$ and $c$ before connecting $a$ to either, thus increasing stack depth. This occurs because the system builds a tree in a bottom-up manner, i.e., each token collects all dependents before being attached to its head. The arc-standard system is essentially equivalent to the push-down automaton of a CFG in CNF with a bottom-up strategy \cite{nivre:2004:IncrementalParsing}, so it has the same property as the bottom-up parser for a CFG. This equivalence also indicates that its stack depth increases for center-embedded structures. \subsection{Arc-eager} The arc-eager system \cite{Nivre2003} uses the following four transition actions: \begin{itemize} \item {\sc Shift}: $(\sigma,j|\beta,A) \mapsto (\sigma | j, \beta, A)$; \item {\sc LeftArc}: $(\sigma|\sigma'_1, j|\beta, A) \mapsto (\sigma, j|\beta, A \cup \{ (j, \sigma'_1) \})$ ~~~ (if $\neg \exists k, (k,\sigma'_1) \in A$); \item {\sc RightArc}: $(\sigma|\sigma'_1, j|\beta, A) \mapsto (\sigma|\sigma'_1|j, \beta, A \cup \{ (\sigma'_1, j) \})$; \item {\sc Reduce}: $(\sigma|\sigma'_1,\beta,A) \mapsto (\sigma,\beta,A)$ ~~~ (if $\exists k, (k,\sigma'_1) \in A$). 
\end{itemize} Note that {\sc LeftArc} and {\sc Reduce} are not always applicable. {\sc LeftArc} requires that $\sigma'_1$ not be a dependent of any other token, while {\sc Reduce} requires that $\sigma'_1$ be a dependent of some token (i.e., already attached to its head). These conditions originate from the property of the arc-eager system that the elements on the stack are not necessarily disjoint. In this system, two successive tokens on the stack may be combined with a left-to-right arc, i.e., $a^\curvearrowright b$, thus constituting a {\it connected component}.

\begin{table}[t] \centering \begin{tabular}[t]{lccc} \hline &Left-branching &Right-branching &Center-embedding \\ \hline Arc-standard &$O(1)$ &$O(n)$ &$O(n)$ \\ Arc-eager &$O(1)$ &$O(1\sim n)$ &$O(1\sim n)$ \\ Left-corner &$O(1)$ &$O(1)$ &$O(n)$ \\ \hline \end{tabular} \caption{Order of required stack depth for each structure under each transition system. $O(1\sim n)$ means that the system recognizes a subset of the structures within a constant stack depth but demands linear stack depth for the others. } \label{tab:order} \end{table}

For this system, we slightly abuse the notation and define stack depth as the number of connected components, not as the number of tokens on the stack, since our concern is the syntactic bias that may be captured with measures on the stack. With the definition based on the number of tokens on the stack, the arc-eager system would have the same stack depth properties as the arc-standard system. As we see below, the arc-eager approach has several interesting properties under this modified definition.\footnote{ The stack of the arc-eager system can be seen as a stack of stacks; i.e., each stack element is itself a stack preserving a connected subtree (a right spine). Our definition of stack depth corresponds to the depth of this stack of stacks. } Under this definition, unlike the arc-standard system, the arc-eager system recognizes the structure shown in Figure \ref{subfig:right-dep1}, and more generally $a^\curvearrowright b^\curvearrowright c^\curvearrowright \cdots$, within constant depth (just one) since it can connect all tokens on the stack with consecutive {\sc RightArc} actions. More generally, the stack depth of the arc-eager system never increases as long as all dependency arcs point from left to right. This result indicates that the construction of the arc-eager system is no longer purely bottom-up, which makes it difficult to formally characterize its stack depth properties in terms of the tree structure. We make two points regarding the stack depth of the arc-eager system. First, it recognizes a subset of the right-branching structures within a constant depth, as we analyzed above, while its stack depth increases linearly for other right-branching structures, including the trees shown in Figures \ref{subfig:right-dep2} and \ref{subfig:right-dep3}. Second, it recognizes a subset of the center-embedded structures within a constant depth, such as \tikz[baseline=-.5ex]{ \node at (0.0, 0.0) {$a^\curvearrowright b ^\curvearrowright c ~~ d$,}; \draw[line width=0.35pt,->] (-0.16,0.08) .. controls(-0.09,0.3) and (0.50,0.3) .. (0.57,0.08); }, whose arcs are all left-to-right but which becomes center-embedded when converted to a constituent tree. For other center-embedded structures, the stack depth grows linearly as with the arc-standard system. We summarize the above results in Table \ref{tab:order}.
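To make the first two rows of Table \ref{tab:order} concrete, the following is a minimal Python sketch (ours, for illustration; it is not the implementation used in our experiments) that replays the oracle transitions of both systems on the left-to-right chain $a^\curvearrowright b^\curvearrowright c^\curvearrowright \cdots$, which is right-branching under CFG conversion. The arc-standard oracle must shift every token before reducing, so its stack depth grows linearly, whereas the arc-eager oracle keeps all stack tokens in a single connected component.

\begin{verbatim}
# Stack growth of the arc-standard vs. arc-eager oracles on the
# left-to-right chain 0 -> 1 -> ... -> n-1 (right-branching under
# CFG conversion).  Illustrative sketch only.

def arc_standard_chain(n):
    """Maximum number of stack tokens for the arc-standard system."""
    stack, buffer, arcs = [], list(range(n)), set()
    max_depth = 0
    # Each token must collect its dependent before being attached to its
    # own head, so the oracle has to shift every token first.
    while buffer:
        stack.append(buffer.pop(0))          # Shift
        max_depth = max(max_depth, len(stack))
    while len(stack) > 1:
        dep = stack.pop()                    # RightArc: (new top, dep)
        arcs.add((stack[-1], dep))
    return max_depth

def arc_eager_chain(n):
    """Maximum number of connected components on the stack (the stack
    depth measure used for the arc-eager system in the text)."""
    stack, buffer, arcs = [], list(range(n)), set()
    stack.append(buffer.pop(0))              # Shift: one new component
    components = max_depth = 1
    while buffer:
        j = buffer.pop(0)
        arcs.add((stack[-1], j))             # RightArc: attach j to the top,
        stack.append(j)                      # so j joins the same component
        max_depth = max(max_depth, components)
    return max_depth

for n in (4, 8, 16):
    print(n, arc_standard_chain(n), arc_eager_chain(n))
# Prints "4 4 1", "8 8 1", "16 16 1": linear vs. constant stack depth.
\end{verbatim}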
The left-corner transition system that we propose next has the properties shown in the third row of the table: its stack depth grows only on center-embedded dependency structures.

\subsection{Other systems}

All systems in which stack elements cannot be connected, including the hybrid system of \newcite{kuhlmann-gomezrodriguez-satta:2011:ACL-HLT2011}, have the same properties as the arc-standard system because of their bottom-up constructions. \newcite{kitagawa-tanakaishii:2010:Short} and \newcite{sartorio-satta-nivre:2013:ACL2013} present an interesting variant that attaches one node to another node that may not be the head of a subtree on the stack. We do not explore these systems in our experiments because their stack depth essentially has the same properties as that of the arc-eager system; e.g., their stack depth does not always grow on center-embedded structures, although it grows on some kinds of right-branching structures.

\section{Left-corner Dependency Parsing}
\label{sec:left-corner}

In this section, we develop our dependency transition system with the left-corner strategy. Our starting point is the push-down automaton for a CFG that we developed in Section \ref{sec:bg:left-corner-pda}. We will describe how the idea in this automaton can be extended to dependency trees by introducing the concept of {\it dummy nodes}, which abstract the prediction mechanism required to achieve the left-corner parsing strategy.

\subsection{Dummy node}

The key characteristic of our transition system is the introduction of a dummy node in a subtree, which is needed to represent a subtree containing predicted structure, such as the symbol $A/B$ in Figure \ref{fig:bg:ourpda}, which predicts the existence of $B$ top-down. To understand the parser actions intuitively, we present a simulation of the transitions for the sentence shown in Figure \ref{subfig:right-dep2}, for which all existing systems demand a linear stack depth. Our system first shifts $a$ and then conducts a {\it prediction} operation that yields the subtree \hspace{-3pt}\tikz[baseline=-1.ex]{ \draw[->] (-0.12,-0.08) -- (-0.5,-0.25); \node at (0,0) {$x$}; \node at (-0.6,-0.3) {$a$}; }, where $x$ is a dummy node. Here, we predict that $a$ will become a left dependent of an incoming word. Next, the system shifts $b$ to the stack and then conducts a {\it composition} operation to obtain the tree \hspace{-3pt}\tikz[baseline=-1.ex]{ \draw[->] (-0.12,-0.02) -- (-0.6,-0.25); \draw[->] (-0.03,-0.1) -- (-0.2,-0.25); \node at (0,0) {$x$}; \node at (-0.7,-0.3) {$a$}; \node at (-0.25,-0.3) {$b$}; }. Finally, $c$ is inserted into the position of $x$, thus recovering the tree.

\subsection{Transition system}
\label{subsec:transitionsystem}

Our system uses the same notation for a configuration as the other systems presented in Section \ref{sec:others}. Figure \ref{fig:config} shows an example of a configuration in which the $i$-th word in a sentence is written as $w_i$ on the stack. Each element on the stack is a list representing a right spine of a subtree, similarly to \newcite{kitagawa-tanakaishii:2010:Short} and \newcite{sartorio-satta-nivre:2013:ACL2013}. Here, a right spine $\sigma_i = \langle \sigma_{i1},\sigma_{i2},\cdots,\sigma_{ik} \rangle$ consists of all nodes on the descending path from the head of $\sigma_i$, i.e., from $\sigma_{i1}$, taking the rightmost child at each step. We also write $\sigma_i = \sigma'_i | \sigma_{ik}$, meaning that $\sigma_{ik}$ is the rightmost node of spine $\sigma_i$.
Each element of $\sigma_i$ is either the index of a token in the sentence or a subtree rooted at a dummy node, $x(\lambda)$, where $\lambda$ is the set of left dependents of $x$. We say that a right spine $\sigma_i$ is {\it complete} if it does not contain any dummy node, while $\sigma_i$ is {\it incomplete} if it contains a dummy node.\footnote{\REVISE{$\sigma_i$ with a dummy node corresponds to a stack symbol of the form $A/B$ in the left-corner PDA, which we called incomplete in Section \ref{sec:bg:left-corner-pda}. Thus, the meaning of these notions (i.e., complete and incomplete) is the same in the two algorithms. The main reason for us to use the spine-based notation stems from our use of a dummy node, which postpones the realization of the dependency arcs connected to it. To add arcs appropriately to $A$ when a dummy is filled with a token, it is necessary to keep the information surrounding the dummy node (this occurs in {\sc Insert} and {\sc RightComp}), which can be naturally traced by remembering each right spine. }} \REVISE{ All transition actions in our system are defined in Figure \ref{fig:actions}. {\sc Insert} is essentially the same as the {\sc Scan} operation in the original left-corner PDA for CFGs (Figure \ref{fig:bg:ourpda}). The other change is that we divide {\sc Prediction} and {\sc Composition} each into two actions, left and right. As in the left-corner PDA, by a shift action we mean {\sc Shift} or {\sc Insert}, while a reduce action means one of the prediction and composition actions. }

\begin{figure}[t] \hspace{4.0cm}Stack: \hspace{1.5cm}Buffer: \vspace*{0.2cm}\hspace{4.0cm} \begin{tikzpicture}[edge from parent/.style={draw,->},sibling distance=15pt,level distance=20pt] \path[draw] (0, 0) -- ++(2, 0) -- ++(0, -0.6) -- ++(-2, 0); \node at (0.5, -0.3) {$w_2$} [sibling distance=0.68cm] child[missing] child { node {$w_1$} } child[missing] child { node {$x$} [sibling distance=0.47cm] child { node {$w_3$} } child { node {$w_5$} [sibling distance=3pt] child { node {$w_4$} } child[missing] } child[missing] child[missing] }; \node at (1.7, -0.3) {$w_6$} child[missing] child{ node {$w_7$} }; \path[draw] (4.5, 0) -- ++(-2.0, 0) -- ++(0, -0.6) -- ++(2.0, 0); \node at (3.45, -0.3) {$w_8,w_9,\cdots$}; \node at (7,-1.0) {$ \begin{aligned} \sigma &= [\langle 2,x(\{3,5\}) \rangle, \langle 6,7 \rangle ] \\ \beta &= [8,9,\cdots,n] \\ A &= \{(2,1),(5,4),(6,7)\} \end{aligned} $}; \end{tikzpicture} \caption{Example configuration of a left-corner transition system.} \label{fig:config} \end{figure}

\paragraph{Shift Action} \REVISE{ As in the left-corner PDA, {\sc Shift} moves a token from the top of the buffer to the stack. {\sc Insert} corresponds to the {\sc Scan} operation of the PDA, and replaces a dummy node on the top of the stack with the token at the top of the buffer. Note that until a shift action is performed, a dummy $x$ may still be filled by any word, meaning that the arcs from and to $x$ are left unspecified. This is the key to achieving incremental parsing (see the beginning of this chapter). It is at {\sc Insert} that these arcs are first specified, by filling the dummy node with an actual token. As in the left-corner PDA, the top element of the stack must be complete after a shift action. }

\paragraph{Reduce Action} \REVISE{ As in the left-corner PDA, a reduce action is applied when the top element of the stack is complete, and it changes that element into an incomplete one. {\sc LeftPred} and {\sc RightPred} correspond to {\sc Prediction} in the left-corner PDA.
Figure \ref{fig:correspondence} describes the transparent relationships between them. {\sc LeftPred} makes the head of the top stack element (i.e., $\sigma_{11}$) a left dependent of a new dummy $x$, while {\sc RightPred} predicts a dummy $x$ as a right dependent of $a$. In these actions, if we consider the original and resulting dependency forms in their CFG representations, the correspondence to {\sc Prediction} in the PDA is apparent. Specifically, the CFG forms of the resulting trees in both actions are the same. The only difference is the head label of the parent symbol, which is $x$ in {\sc LeftPred} and $a$ in {\sc RightPred}. }
\begin{figure}[t!] \centering \begin{tabular}[t]{|lll|} \hline {\sc Shift} &$(\sigma, j|\beta, A) \mapsto (\sigma | \langle j \rangle, \beta, A)$ & \\ {\sc Insert} &$(\sigma | \langle \sigma'_1 | i | x(\lambda) \rangle, j|\beta, A) \mapsto (\sigma | \langle \sigma'_1 | i | j \rangle, \beta, A \cup \{ (i,j) \} \cup \{ \cup_{k\in\lambda} (j,k) \})$ & \\ \hline {\sc LeftPred}&$(\sigma | \langle \sigma_{11}, \cdots \rangle, \beta, A) \mapsto (\sigma | \langle x(\sigma_{11}) \rangle, \beta, A)$ & \\ {\sc RightPred}&$(\sigma | \langle \sigma_{11}, \cdots \rangle, \beta, A) \mapsto (\sigma | \langle \sigma_{11}, x(\emptyset) \rangle, \beta, A)$ & \\ {\sc LeftComp}&$(\sigma | \langle \sigma_2'| x(\lambda) \rangle | \langle \sigma_{11},\cdots \rangle, \beta, A) \mapsto (\sigma | \langle \sigma_2'| x(\lambda \cup \{\sigma_{11}\})\rangle, \beta, A)$ & \\ {\sc RightComp}& $(\sigma | \langle \sigma_2'| i | x(\lambda) \rangle | \langle \sigma_{11},\cdots \rangle, \beta, A) \mapsto (\sigma | \langle \sigma_2'| i | \sigma_{11} | x(\emptyset) \rangle, \beta, A \cup \{ (i,\sigma_{11}) \} \cup \{\cup_{k\in\lambda} (\sigma_{11},k)\})$ & \\ \hline \end{tabular} \caption{Actions of the left-corner transition system, including two shift operations (top) and four reduce operations (bottom). In {\sc Insert} and {\sc RightComp}, $i$ (the parent of the dummy $x$ on the right spine) may not exist, in which case the arcs involving $i$ are not added.} \label{fig:actions} \end{figure}
\begin{figure}[t] \centering \begin{tabular}[t]{ll} {\sc LeftPred:}&{\sc RightPred:} \\ \parbox{0.4\linewidth}{% \hspace{10pt} \begin{tikzpicture}[level distance=22pt, sibling distance=20pt,edge from parent/.style={draw,->}] \node (a) at (1.0, -0.6) {$a$}; \path[draw] (0, -1.0) -- +(2.0, 0.0); \node at (1.0, -1.4) {$x$} child { node {$a$} } child[missing]; \begin{scope}[xshift=4.0cm,yshift=-0.1cm,edge from parent/.style={draw}] \Tree[.X[$a$] $a$ ] \end{scope} \path[draw] (2.5, -1.0) -- +(3.0, 0.0); \begin{scope}[xshift=4.0cm,yshift=-1.33cm,edge from parent/.style={draw}] \Tree[.X[$x$] [.X[$a$] $a$ ] [.X[$x$] ] ] \end{scope} \end{tikzpicture} } &{% \parbox{0.4\linewidth}{% \hspace{10pt} \begin{tikzpicture}[level distance=22pt, sibling distance=20pt,edge from parent/.style={draw,->}] \node (a) at (1.0, -0.6) {$a$}; \path[draw] (0, -1.0) -- +(2.0, 0.0); \node at (1.0, -1.4) {$a$} child[missing] child { node {$x$} }; \begin{scope}[xshift=4.0cm,yshift=-0.1cm,edge from parent/.style={draw}] \Tree[.X[$a$] $a$ ] \end{scope} \path[draw] (2.5, -1.0) -- +(3.0, 0.0); \begin{scope}[xshift=4.0cm,yshift=-1.33cm,edge from parent/.style={draw}] \Tree[.X[$a$] [.X[$a$] $a$ ] [.X[$x$] ] ] \end{scope} \end{tikzpicture} }} \\ {\sc LeftComp:}&{\sc RightComp:} \\ \parbox{0.4\linewidth}{% \hspace{10pt} \begin{tikzpicture}[level distance=20pt, sibling distance=20pt,edge from parent/.style={draw,->}] \node (a) at (1.5, -1.0) {$a$}; \node (b) at (0.5, -0.6) {$b$} child[missing] child { node {$x$} }; \path[draw] (0, -1.6) -- +(2.0, 0.0); \node at (0.7, -2.0) {$b$} [sibling distance=40pt] child[missing] child { node {$x$} [sibling distance=20pt] child { node {$a$} } child[missing] }; \begin{scope}[xshift=5.5cm,yshift=-0.8cm,edge from parent/.style={draw}] \Tree[.X[$a$] $a$ ] \end{scope} \path[draw] (2.5, -1.6) -- +(3.3, 0.0); \begin{scope}[xshift=3.7cm,yshift=-0.1cm,edge from parent/.style={draw},level distance=20pt] \Tree[.X[$b$] [.X[$b$] $b$ ] X[$x$] ] \end{scope} \begin{scope}[xshift=3.8cm,yshift=-1.93cm,edge from parent/.style={draw},level distance=22pt,sibling distance=3pt] \Tree[.X[$b$] [.X[$b$] $b$ ] [.X[$x$] [.X[$a$] $a$ ] X[$x$] ] ] \end{scope} \end{tikzpicture} } &{% \parbox{0.4\linewidth}{% \hspace{10pt} \begin{tikzpicture}[level distance=20pt, sibling distance=20pt,edge from parent/.style={draw,->}] \node (a) at (1.5, -1.0) {$a$}; \node (b) at (0.5, -0.6) {$b$} child[missing] child { node {$x$} }; \path[draw] (0, -1.6) -- +(2.0, 0.0); \node at (0.7, -2.0) {$b$} [sibling distance=40pt] child[missing] child { node {$a$} [sibling distance=20pt] child[missing] child { node {$x$} } }; \begin{scope}[xshift=5.5cm,yshift=-0.8cm,edge from parent/.style={draw}] \Tree[.X[$a$] $a$ ] \end{scope} \path[draw] (2.5, -1.6) -- +(3.3, 0.0); \begin{scope}[xshift=3.7cm,yshift=-0.1cm,edge from parent/.style={draw},level distance=20pt] \Tree[.X[$b$] [.X[$b$] $b$ ] X[$x$] ] \end{scope} \begin{scope}[xshift=3.8cm,yshift=-1.93cm,edge from parent/.style={draw},level distance=22pt,sibling distance=3pt] \Tree[.X[$b$] [.X[$b$] $b$ ] [.X[$a$] [.X[$a$] $a$ ] X[$x$] ] ] \end{scope} \end{tikzpicture} }} \\ \end{tabular} \caption{Correspondences of reduce actions between dependency and CFG. We only show minimal example subtrees for simplicity. However, $a$ can have an arbitrary number of children, as can $b$ or $x$, provided $x$ is on a right spine and has no right children.} \label{fig:correspondence} \end{figure}
\REVISE{ A similar correspondence holds between {\sc RightComp}, {\sc LeftComp}, and {\sc Composition} in the PDA. We can interpret {\sc LeftComp} as a two-step operation, as in {\sc Composition} in the PDA (see Section \ref{sec:bg:left-corner-pda}): it first applies {\sc LeftPred} to the top stack element, resulting in \hspace{-3pt}\tikz[baseline=-1.ex]{ \draw[->] (-0.12,-0.08) -- (-0.5,-0.25); \node at (0,0) {$x$}; \node at (-0.6,-0.3) {$a$}; }, and then unifies the two $x$s to form a single subtree. The connection to {\sc Composition} is apparent from the figure. {\sc RightComp}, on the other hand, first applies {\sc RightPred} to $a$, and then combines the resulting tree and the second top element on the stack. This step is a bit involved and might be easier to understand in the CFG form. In the CFG form, the two unified nodes are X[$x$], which is predicted top-down, and X[$a$], which is recognized bottom-up (with the first {\sc RightPred} step).\footnote{If the parent of X[$x$] is also X[$x$], all of them are filled with $a$ recursively. This situation corresponds to the case in which dummy node $x$ in the dependency tree has multiple left dependents, as in the tree resulting from {\sc LeftComp} in Figure \ref{fig:correspondence}. } Since $x$ can be unified with any token, at this point $x$ in X[$x$] is filled with $a$. Returning to the dependency form, this means that we insert the subtree rooted at $a$ (after {\sc RightPred} has been applied to it) into the position of $x$ in the second top element. Note that from Figure \ref{fig:correspondence}, we can see that the dummy node $x$ can only appear in a right spine of a CFG subtree.
Now we can reinterpret the {\sc Insert} action on the CFG subtree: it attaches a token to a (predicted) preterminal X[$x$], as {\sc Scan} of the PDA does, and then fills every $x$ on the right spine with the shifted token. This can be seen as a kind of unification operation. \paragraph{Relationship to the left-corner PDA} As we have seen so far, although our transition system directly operates on dependency trees, we can associate every step with a process that expands (partial) CFG parses in the manner that the left-corner PDA would.\footnote{ To be precise, we note that this CFG-based expansion process cannot be written in the form used in the left-corner PDA. For example, if we write items in {\sc LeftComp} and {\sc RightComp} in Figure \ref{fig:correspondence} in the form of $A/B$, both result in the same transition: X[$b$]/X[$x$] X[$a$] $\mapsto$ X[$b$]/X[$x$]. This is due to our use of a dummy node $x$, which plays different roles in the two actions (e.g., {\sc RightComp} assumes the first $x$ is $a$), but the difference is lost with this notation. We thus claim that a CFG-based expansion step corresponds to a step in the left-corner PDA in that every action in the former expands the tree in the same way as the corresponding action of the left-corner PDA, as explained by Figure \ref{fig:correspondence} (for reduce actions) and the body text (for {\sc Insert}); the equivalence of {\sc Shift} is apparent. } At every step, the transparency between the two tree forms, i.e., dependency and CFG, is preserved. This means that at the final configuration the CFG parse is the one corresponding to the resulting dependency tree, and that at each step the stack depth is identical to the one incurred when parsing that final CFG parse with the original left-corner PDA. We will see this transparency with an example next. The connection between the stack depth and the degree of center-embedding, which we established in Theorem \ref{thoerem:bg:stack-depth} for the PDA, also holds directly in this transition system. We restate this for our transition system in Section \ref{sec:memorycost}. \paragraph{Example}
\begin{figure*}[t] \centering \begin{tabular}[t]{rllll} \hline Step & Action & Stack ($\sigma$) & Buffer ($\beta$) & Set of arcs ($A$) \\ \hline & & $\varepsilon$ & $a~b~c~d$ & $\emptyset$ \\ 1 & {\sc Shift} & $\langle a \rangle$ & $b~c~d$ & $\emptyset$ \\ 2 & {\sc RightPred}& $\langle a, x(\emptyset) \rangle$ & $b~c~d$ & $\emptyset$ \\ 3 & {\sc Shift} & $\langle a, x(\emptyset) \rangle \langle b \rangle$ & $c~d$ & $\emptyset$ \\ 4 & {\sc RightPred}& {\color{red}{$\langle a, x(\emptyset) \rangle \langle b, x(\emptyset) \rangle$}} & $c~d$ & $\emptyset$ \\ 5 & {\sc Insert} & $\langle a, x(\emptyset) \rangle \langle b, c \rangle$ & $d$ & $\{(b,c)\}$ \\ 6 & {\sc RightComp}& $\langle a, b, x(\emptyset) \rangle$ & $d$ & $\{(b, c), (a, b)\}$ \\ 7 & {\sc Insert} & $\langle a, b, d \rangle$ & & $\{(b, c), (a, b), (b, d)\}$ \\ \hline \end{tabular} \caption{An example parsing process of the left-corner transition system.} \label{fig:trans:example} \end{figure*}
As an example, Figure \ref{fig:trans:example} shows the sequence of configurations while parsing the tree \tikz[baseline=-.5ex]{ \node at (0.0, 0.0) {$a^\curvearrowright b ^\curvearrowright c ~~ d$,}; \draw[line width=0.35pt,->] (-0.16,0.08) .. controls(-0.09,0.3) and (0.50,0.3) .. (0.57,0.08); } which corresponds to the parse in Figure \ref{fig:2:depth-1} and thus involves one degree of center-embedding.
Comparing with Figure \ref{fig:bg:pda-example}, we can see that the two transition sequences, for the PDA (on the CFG) and for the transition system (on the dependency tree), are essentially the same: the differences are that {\sc Prediction} and {\sc Composition} are changed to the corresponding actions (in this case, {\sc RightPred} and {\sc RightComp}) and {\sc Scan} is replaced with {\sc Insert}. This is essentially due to the transparent relationships between them that we discussed above. As in Figure \ref{fig:bg:pda-example}, a stack depth of two after a reduce action indicates center-embedding, which here occurs at step 4. } \paragraph{Other properties} As in the left-corner PDA, this transition system also performs shift and reduce actions alternately (the proof is almost identical to that for the PDA). Also, given a sentence of length $n$, the number of actions required to arrive at the final configuration is $2n-1$, because every token except the last word must be shifted once and reduced once, and the last word is always inserted as the final step. Every projective dependency tree is derived from at least one transition sequence with this system, i.e., our system is {\it complete} for the class of projective dependency trees \cite{Nivre:2008}. \REVISE{ Though we omit the proof, this can be shown by appealing to the transparency between the transition system and the left-corner PDA, which is complete for a given CFG. } However, our system is {\it unsound} for the class of projective dependency trees, meaning that a transition sequence on a sentence does not always generate a valid projective dependency tree. We can easily verify this claim with an example. Let $a~b~c$ be a sentence and consider the action sequence ``{\sc Shift LeftPred Shift LeftPred Insert}'', with which we obtain the terminal configuration of $\sigma= [ x(a), c ]; \beta=[]; A=\{ (c,b) \}$\footnote{For clarity, we use words instead of indices for stack elements.}, but this is not a valid tree. The arc-eager system also suffers from a similar restriction \cite{NivreAEP14}, which may lead to lower parse accuracy. Instead of fixing this problem, in our parsing experiment, which is described in Section \ref{sec:parse}, we implement simple post-processing heuristics to combine those fragmented trees that remain on the stack. \subsection{Oracle and spurious ambiguity} \label{sec:oracle} This section presents and analyzes an oracle function for the transition system defined above. An oracle for a transition system is a function that returns a correct action given the current configuration and a set of gold arcs. The reasons why we develop and analyze the oracle are mainly twofold: First, we use it in our empirical corpus study in Section \ref{sec:analysis}; that is, we analyze how the stack depth increases while simulating the recovery of dependency trees in the treebanks. Such a simulation requires a method for extracting the correct sequence of actions that recovers a given tree. Second, we use it to obtain training examples for our supervised parsing experiments in Section \ref{sec:parse}. This is the more typical reason for designing oracles for transition-based parsers \cite{Nivre:2008,GoldbergTDP13}. Below we also point out the deep connection between the design of an oracle and a conversion process from a dependency tree into a (binary) CFG parse, which becomes the basis of the discussion in Section \ref{sec:memorycost} on the degree of center-embedding of a given dependency tree, the problem we left open in Section \ref{sec:trans:notation}.
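Before specifying the oracle, it may help to make configurations concrete. The following is a minimal executable sketch of the transition system, written in Python purely for illustration; the representation (a stack element as a list of nodes on its right spine, a dummy as a pair holding its collected left dependents) and all names are ours, not those of any released implementation. The sketch replays the derivation of Figure \ref{fig:trans:example} and prints the final stack and arc set.

\begin{verbatim}
# Illustrative sketch of the left-corner transition system (not our actual code).
# A configuration is (stack, buffer, arcs); a stack element is a list of nodes
# on a right spine; a dummy node is ("x", lam) where lam holds the left
# dependents already collected for it; arcs are (head, dependent) pairs.

def shift(stack, buf, arcs):
    return stack + [[buf[0]]], buf[1:], arcs

def insert(stack, buf, arcs):
    j, top = buf[0], stack[-1]              # top must end with a dummy x(lam)
    lam = top[-1][1]
    new = set(arcs) | {(j, k) for k in lam}
    if len(top) > 1:                        # x has a parent i on the spine
        new.add((top[-2], j))
    return stack[:-1] + [top[:-1] + [j]], buf[1:], new

def left_pred(stack, buf, arcs):            # head of top becomes left dep of x
    return stack[:-1] + [[("x", [stack[-1][0]])]], buf, arcs

def right_pred(stack, buf, arcs):           # x becomes right dep of top's head
    return stack[:-1] + [[stack[-1][0], ("x", [])]], buf, arcs

def left_comp(stack, buf, arcs):            # top's head joins x's left deps
    second, top = stack[-2], stack[-1]
    lam = second[-1][1] + [top[0]]
    return stack[:-2] + [second[:-1] + [("x", lam)]], buf, arcs

def right_comp(stack, buf, arcs):           # top's head fills x; new x predicted
    second, top = stack[-2], stack[-1]
    lam = second[-1][1]
    new = set(arcs) | {(top[0], k) for k in lam}
    if len(second) > 1:                     # x had a parent i on the spine
        new.add((second[-2], top[0]))
    return stack[:-2] + [second[:-1] + [top[0], ("x", [])]], buf, new

# Replay the running example: "a b c d" with gold arcs (a,b), (b,c), (b,d).
config = ([], ["a", "b", "c", "d"], set())
for act in [shift, right_pred, shift, right_pred, insert, right_comp, insert]:
    config = act(*config)
print(config[0], sorted(config[2]))
# -> [['a', 'b', 'd']] [('a', 'b'), ('b', 'c'), ('b', 'd')]
\end{verbatim}

Note that, as in the formal definitions, only {\sc Insert} and {\sc RightComp} add arcs, and both add the arc from the dummy's spine parent when that parent exists.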
Since our system performs shift and reduce actions alternately, we need two functions to define the oracle. Let $A_g$ be a set of arcs in the gold tree and $c$ be the current configuration. If the stack is empty (i.e., in the initial configuration) or the top element of the stack is incomplete, we select the next shift action as follows: \begin{itemize} \item {\sc Insert}: Let $c=(\sigma| \langle \sigma'_1 | i | x(\lambda) \rangle, j|\beta, A)$. $i$ may not exist. The condition is: \begin{itemize} \item if $i$ exists, $(i,j)\in A_g$ and $j$ has no dependents in $\beta$; \item otherwise, $\exists k\in \lambda;(j,k)\in A_g$. \end{itemize} \item {\sc Shift}: otherwise. \end{itemize} If the top element on the stack is complete, we select the next reduce action as follows: \begin{itemize} \item {\sc LeftComp}: Let $c=(\sigma| \langle \sigma'_2 | i | x(\lambda) \rangle | \langle \sigma_{11},\cdots \rangle, \beta, A)$. $i$ may not exist. Then \begin{itemize} \item if $i$ exists, $\sigma_{11}$ has no dependents in $\beta$ and $i$'s next dependent is the head of $\sigma_{11}$; \item otherwise, $\sigma_{11}$ has no dependents in $\beta$ and the nodes $k\in\lambda$ and $\sigma_{11}$ share the same head. \end{itemize} \item {\sc RightComp}: Let $c=(\sigma| \langle \sigma'_2 | i | x(\lambda) \rangle | \langle \sigma_{11},\cdots \rangle, \beta, A)$. $i$ may not exist. Then \REVISE{ \begin{itemize} \item if $i$ exists, the rightmost dependent of $\sigma_{11}$ is in $\beta$ and $(i,\sigma_{11}) \in A_g$; \item otherwise, the rightmost dependent of $\sigma_{11}$ is in $\beta$ and $\exists k\in \lambda,(\sigma_{11},k)\in A_g$. \end{itemize} } \item {\sc RightPred}: if $c=(\sigma| \langle \sigma_{11},\cdots \rangle, \beta, A)$ and $\sigma_{11}$ has a dependent in $\beta$. \item {\sc LeftPred}: otherwise. \end{itemize} Essentially, each condition ensures that we do not miss any gold arcs by performing the transition. This is ensured at each step, so we can recover the gold tree in the terminal configuration. We use this oracle in our experiments in Sections \ref{sec:analysis} and \ref{sec:parse}. \paragraph{Spurious ambiguity} Next, we observe that the oracle developed above is not the unique function that returns a correct action. Consider the sentence $a ^\curvearrowleft b ^\curvearrowright c$, which is a simplification of the sentence shown in Figure \ref{subfig:dogsrunfast}. If we apply the oracle presented above to this sentence, we obtain the following sequence: \begin{equation} \textsc{Shift} \rightarrow \textsc{LeftPred} \rightarrow \textsc{Insert} \rightarrow \textsc{RightPred} \rightarrow \textsc{Insert} \label{eqn:oracleseq1} \end{equation} Note, however, that the following transitions also recover the parse tree: \begin{equation} \textsc{Shift} \rightarrow \textsc{LeftPred} \rightarrow \textsc{Shift} \rightarrow \textsc{RightComp} \rightarrow \textsc{Insert} \label{eqn:oracleseq2} \end{equation} This is a kind of spurious ambiguity, which we have mentioned several times in this thesis (Sections \ref{sec:bilexical}, \ref{sec:2:sbg}, and \ref{sec:trans:notation}). Although some work in the transition-based parsing literature improves parser performance by utilizing this ambiguity \cite{GoldbergTDP13} or by eliminating it \cite{TACL68}, here we do not discuss such practical problems and instead {\it analyze} the differences between transitions that lead to the same tree.
Here we show that the spurious ambiguity of the transition system introduced above is essentially due to the spurious ambiguity of transforming a dependency tree into a CFG parse (Section \ref{sec:bilexical}). We can see this by comparing the CFG parses implicitly recognized by the two action sequences above. In sequence (\ref{eqn:oracleseq1}), {\sc RightPred} is performed at step four, meaning that the recognized CFG parse has the form $((a~b)~c)$, while that of sequence (\ref{eqn:oracleseq2}) is $(a~(b~c))$ due to its {\sc RightComp} operation. This result indicates that an oracle for our left-corner transition system implicitly binarizes a given gold dependency tree. The particular binarization mechanism associated with the oracle presented above is discussed next. \begin{figure}[t] \centering \begin{minipage}[b]{.3\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.5cm] $a$ \& $b$ \& $c$ \& $d$ \& $e$ \\ \end{deptext} \depedge{2}{1}{} \depedge{2}{4}{} \depedge{4}{3}{} \depedge{4}{5}{} \end{dependency} \subcaption{}\label{subfig:binarize} \end{minipage} \begin{minipage}[b]{.3\linewidth} \centering \begin{tikzpicture}[sibling distance=7pt] \tikzset{level distance=22pt} \Tree [.X[${b}$] [.X[${b}$] [.X[$a$] $a$ ] [.X[$b$] $b$ ] ] [.X[${d}$] [.X[$c$] $c$ ] [.X[${d}$] [.X[$d$] $d$ ] [.X[$e$] $e$ ] ] ] ] \end{tikzpicture} \subcaption{}\label{subfig:binarize:constituent} \end{minipage} \caption{Implicit binarization process of the oracle described in the body.}\label{fig:binarize} \end{figure} \paragraph{Implicit binarization} We first note that the presented oracle follows the strategy of performing composition or insert operations whenever possible. As we saw in the example above, sometimes {\sc Insert} and {\sc Shift} can both be valid for recovering the gold arcs, though here we always select {\sc Insert}. Sometimes the same ambiguity exists between {\sc LeftComp} and {\sc LeftPred} or {\sc RightComp} and {\sc RightPred}; we always prefer composition. \REVISE{ Then, we can show the following theorem about the binarization mechanism of this oracle. \begin{mytheorem} \label{theorem:trans:binarize} The oracle presented above implicitly binarizes a dependency tree in the following manner: \begin{itemize} \item Given a subtree rooted at $h$, if its parent is on its {\it right} side, or $h$ is the sentence root, $h$'s left children are constructed first. \item If the parent of $h$ is on its {\it left} side, $h$'s right children are constructed first. \end{itemize} \end{mytheorem} Figure \ref{fig:binarize} shows an example. Since the parent of $d$ is $b$, which is on its left, the constituent spanning $d$ and $e$ is constructed first. } An important observation for showing this is the following lemma about the condition for applying {\sc RightComp}. \begin{newlemma} \label{lemma:trans:rightcomp} Let $c=(\sigma| \sigma_2 | \sigma_1, \beta, A)$ and $\sigma_2$ be incomplete (the next action is a reduce action). Then, in the above oracle, {\sc RightComp} never occurs in a configuration in which $\sigma_2$ is rooted at a dummy, i.e., $\sigma_2 = \langle x(\lambda) \rangle$, or in which $\sigma_1$ has some left children, i.e., $\exists k < \sigma_{11}, (\sigma_{11},k) \in A$. \end{newlemma} \begin{proof} The first constraint, $\sigma_2 \neq \langle x(\lambda) \rangle$, is shown by simulating how a tree resulting from {\sc RightComp} is built by the presented oracle.
Let us assume $\lambda = \{i\}$ (i.e., $\sigma_2$ looks like $i^\curvearrowleft x$), let $j = \sigma_{11}$, and let $\sigma_1$ be a subtree spanning from $i+1$ to $k$ (i.e., $i+1 \leq j \leq k$). After {\sc RightComp}, we get a subtree rooted at $j$, which looks like $i^\curvearrowleft j^\curvearrowright x'$, where $x'$ is a new dummy node. The oracle instead builds the same structure by the following procedure: after building $i^\curvearrowleft x$ by applying {\sc LeftPred} to $i$, it first collects all left children of $x$ by consecutive {\sc LeftComp} actions, followed by {\sc Insert}, to obtain a tree $i^\curvearrowleft j$ (omitting $j$'s left dependents between $i$ and $j$). Then it collects the right children of $j$ (corresponding to $x'$) with {\sc RightPred}s. This is because we prefer {\sc LeftComp} and {\sc Insert} over {\sc LeftPred} and {\sc Shift}, and it implies that $\sigma_2 \neq \langle x(\lambda) \rangle$ before {\sc RightComp}. This simulation also implies the second constraint, $\nexists k < \sigma_{11}, (\sigma_{11},k) \in A$, since such a configuration never occurs unless {\sc LeftPred} were preferred over {\sc LeftComp}. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem:trans:binarize}] Let us examine the first case, in which the parent of $h$ is on its right side. Let this parent index be $h'$, i.e., $h < h'$. Note that this right-to-left arc ($h^\curvearrowleft h'$) is only created with {\sc LeftComp} or {\sc LeftPred}, and in both cases $h$ must finish collecting every child before being reduced, meaning that $h'$ does not affect the construction of the subtree rooted at $h$. This is also the case when $h$ is the sentence root. Now, if $h$ collected its right children first, that would mean $h$ collects its left children via {\sc RightComp} applied when the second top element is rooted at a dummy node (which is then filled by $h$), but this never occurs by Lemma \ref{lemma:trans:rightcomp}. In the second case, $h' < h$. The arc $h'^\curvearrowright h$ is established with {\sc RightPred} or {\sc RightComp} when the head of the top stack symbol is $h'$ (the instantiated dummy node is later filled with $h$). In both cases, if $h$ collected its left children first, then a subtree rooted at $h$ (the top stack symbol) that already has left children would be substituted into the dummy node by {\sc RightComp} (see Figure \ref{fig:correspondence}). However, this situation is prohibited by Lemma \ref{lemma:trans:rightcomp}. The oracle instead collects the left children of $h$ with successive {\sc LeftComp}s. This occurs due to the oracle's preference for {\sc LeftComp} over {\sc LeftPred}. \end{proof} \paragraph{Other notes} \begin{itemize} \item The binarization property above also indicates that the designed oracle is optimal in terms of stack depth, i.e., it always minimizes the maximum stack depth for a dependency tree, since it minimizes the number of turning points of the zig-zag path. \item If we constructed another oracle algorithm, it would have different properties regarding implicit binarization, in which case Lemma \ref{lemma:trans:rightcomp} would not be satisfied. \item \REVISE{ Combining this with the result in Section \ref{subsec:transitionsystem} about the transparency between the stack depth of the transition system and that of the left-corner PDA, it is obvious that at each step of this oracle, the stack depth incurred to recognize a dependency tree equals the stack depth incurred by the left-corner PDA while recognizing the CFG parse given by the presented binarization.
} \item In Section \ref{sec:analysis}, we use this oracle to evaluate the ability of the left-corner parser to recognize natural language sentences within small stack depth bound. Note if our interest is just to examine rarity of center-embedded constructions, that is possible without running the oracle in entire sentences, by just counting the degree of center-embedding of the binarized CFG parses. The main reason why we do not employ such method is because our interests are not limited to rarity of center-embedded constructions but also lie in the relative performance of the left-corner parser to capture syntactic regularities among other transition-based parsers, such as the arc-eager parser. This comparison seems more meaningful when we run every transition system on the same sentences. To make this comparison clearer, we next give more detailed analysis on the stack depth behavior of our left-corner transition system. \end{itemize} \subsection{Stack depth of the transition system} \label{sec:memorycost} \begin{figure}[t] \centering \begin{minipage}[b]{.2\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.3cm] $a$ \& $b$ \& $c$ \& $d$ \\ \end{deptext} \depedge{1}{4}{} \depedge{4}{2}{} \depedge{2}{3}{} \end{dependency} \subcaption{} \end{minipage} \begin{minipage}[b]{.2\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.3cm] $a$ \& $b$ \& $c$ \& $d$ \\ \end{deptext} \depedge{1}{2}{} \depedge{2}{3}{} \depedge{2}{4}{} \end{dependency} \subcaption{} \end{minipage} \begin{minipage}[b]{.28\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.3cm] $l$ \& $a$ \& $b$ \& $c$ \& $d$ \& $m$ \\ \end{deptext} \depedge{1}{6}{} \depedge{6}{2}{} \depedge{2}{5}{} \depedge{5}{3}{} \depedge{3}{4}{} \end{dependency} \subcaption{} \end{minipage} \begin{minipage}[b]{.28\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.3cm] $l$ \& $a$ \& $b$ \& $c$ \& $d$ \& $m$ \\ \end{deptext} \depedge{1}{2}{} \depedge{2}{3}{} \depedge{3}{4}{} \depedge{3}{5}{} \depedge{2}{6}{} \end{dependency} \subcaption{} \end{minipage} \begin{minipage}[b]{.4\linewidth} \centering \begin{tikzpicture}[sibling distance=7pt] \tikzset{level distance=20pt} \Tree [.{\bf X} [.X[$a$] $a$ ] \edge[line width=1pt]; [.{\bf X} \edge[line width=1pt]; [.{\bf X} [.X[$b$] $b$ ] \edge[line width=1pt]; [.X[${\boldsymbol c}$] $c$ ] ] [.X[$d$] $d$ ] ] ] \end{tikzpicture} \subcaption{} \end{minipage} \begin{minipage}[b]{.5\linewidth} \centering \begin{tikzpicture}[sibling distance=7pt] \tikzset{level distance=20pt} \Tree [.{\bf X} [.X[$l$] $l$ ] \edge[line width=1pt]; [.{\bf X} \edge[line width=1pt]; [.{\bf X} [.X[$a$] $a$ ] \edge[line width=1pt]; [.{\bf X} \edge[line width=1pt]; [.{\bf X} [.X[$b$] $b$ ] \edge[line width=1pt]; [.X[${\boldsymbol c}$] $c$ ] ] [.X[$d$] $d$ ] ] ] [.X[$m$] $m$ ] ] ] \end{tikzpicture} \subcaption{} \end{minipage} \caption{Center-embedded dependency trees and zig-zag patterns observed in the implicit CFG parses: (a)--(b) depth one, (c)--(d) depth two, (e) CFG parse for (a) and (b), and (f) CFG parse for (c) and (d).} \label{fig:level} \end{figure} We finally summarize the property of the left-corner transition system in terms of the stack depth. To do so, let us first introduce two measure, depth$_{re}$ and depth$_{sh}$, with the former representing the stack depth after a reduce step and the latter representing the stack depth after a shift step. 
Then, we have: \begin{itemize} \item Depth$_{re} \leq 1$ if the implicit CFG parse does not contain center-embedding (i.e., it is just left-linear or right-linear). It increases linearly as the degree of center-embedding increases. \item Depth$_{sh} \leq 2$ if the implicit CFG parse does not contain center-embedding. The extra element on the stack occurs with a {\sc Shift} action, but it does not imply the existence of center-embedding. It increases linearly as the degree of center-embedding increases. \end{itemize} The first statement, about depth$_{re}$, comes directly from Theorem \ref{thoerem:bg:stack-depth} for the left-corner PDA. The second statement concerns depth$_{sh}$, which we did not discuss for the PDA. Figure \ref{fig:level} shows examples of how the depth of center-embedding increases, with the distinguishing zig-zag patterns in center-embedded structures shown in bold. Note that depth$_{re}$ can capture the degree of center-embedding correctly, as $\max \textrm{depth}_{re} -1$ (Theorem \ref{thoerem:bg:stack-depth}), while depth$_{sh}$ may not; for example, when parsing a right-branching structure $a^\curvearrowright b^\curvearrowright c$, $b$ must be {\sc Shift}ed (not inserted) before being reduced, resulting in depth$_{sh} = 2$. We do not discuss precisely the conditions under which this extra factor of depth$_{sh}$ occurs. Of importance here is that both depth$_{re}$ and depth$_{sh}$ increase as the depth of center-embedding in the implicit CFG parse increases, though they may differ by a constant (just one). \section{Empirical Stack Depth Analysis} \label{sec:analysis} In this section, we evaluate the cross-linguistic coverage of our developed transition system. We compare our system with other systems by observing the required stack depth as we run oracle transitions for sentences in a set of typologically diverse languages. We thereby verify the hypothesis that our system consistently demands less stack depth across languages in comparison with other systems. Note that this claim is not obvious from our theoretical analysis (Table \ref{tab:order}), since the stack depth of the arc-eager system is sometimes smaller than that of the left-corner system (e.g., for a subset of center-embedded structures); if the hypothesis nevertheless holds, the stack depth of the left-corner system may provide a more meaningful measure for capturing the syntactic regularities of a language. \subsection{Settings} \label{sec:analysis:settings} \paragraph{Datasets} We use the two kinds of multilingual corpora introduced in Chapter \ref{chap:corpora}, the CoNLL dataset and Universal Dependencies (UD), each of which comprises 19 treebanks. Below, the first part of the analysis, in Sections \ref{sec:analysis:general}, \ref{sec:random}, and \ref{sec:token}, is performed on the CoNLL dataset, while the later analyses in Sections \ref{sec:trans:ud-result} and \ref{sec:trans:relax} are based on UD. Since none of the systems presented in this chapter can handle nonprojective structures \cite{Nivre:2008}, we projectivize all nonprojective sentences using the pseudo-projectivization \cite{nivre-nilsson:2005:ACL} implemented in the MaltParser \cite{NivreMAL07} (see also Section \ref{sec:bg:projective}). We expect that this modification does not substantially change the overall corpus statistics, as nonprojective constructions are relatively rare \cite{nivre-EtAl:2007:EMNLP-CoNLL2007}. Some treebanks, such as the Prague dependency treebanks (including Arabic and Czech), assume that a sentence comprises multiple independent clauses that are connected via a dummy root token.
We place this dummy root node at the end of each sentence, because doing so does not change the behavior of any system for sentences with a single root token, and it improves the parsing accuracy of some systems, such as the arc-eager, across languages compared with the conventional approach in which the dummy token is placed at the beginning of each sentence \cite{journals/coling/BallesterosN13}. \paragraph{Method} We compare three transition systems: arc-standard, arc-eager, and left-corner. For each system, we perform oracle transitions for all sentences and languages, measuring the stack depth at each configuration. The arc-eager system sometimes creates a subtree at the beginning of the buffer, in which case we increment the stack depth by one. \paragraph{Oracle} We run an oracle transition sequence for each sentence with each system. For the left-corner system, we implemented the algorithm presented in Section \ref{sec:oracle}. For the arc-standard and arc-eager systems, we implemented oracles that prefer reduce actions over shift actions, which minimizes the maximum stack depth. \subsection{Stack depth for general sentences} \label{sec:analysis:general} For each language in the CoNLL dataset, we count the number of configurations at each stack depth while performing oracle transitions on all sentences. Figure \ref{fig:load-comparison} shows the cumulative frequencies of configurations as the stack depth increases for the arc-standard, arc-eager, and left-corner systems. The data answer the question of what stack depth is required to cover X\% of configurations when recovering all gold trees. Note that comparing absolute values here is less meaningful, since the minimal stack depth needed to construct an arc differs across systems; e.g., the arc-standard system requires at least two items on the stack, while the arc-eager system can create a right arc if the stack contains only one element. Instead, we focus on the consistency of each system's behavior across different languages. As discussed in Section \ref{sec:standard}, the arc-standard system can only process left-branching structures within a constant stack depth; such structures are typical of head-final languages such as Japanese or Turkish, and we observe this tendency in the data. The system performs differently in other languages, so its behavior is not consistent across languages. The arc-eager and left-corner systems behave similarly for many languages, but the left-corner system behaves consistently across nearly all languages, while the arc-eager system tends to incur a larger stack depth for some of them. In particular, except for Arabic, the left-corner system covers over 90\% (specifically, over 98\%) of configurations with a stack depth $\leq 3$. The arc-eager system also has 90\% coverage in many languages with a stack depth $\leq 3$, though some exceptions exist, e.g., German, Hungarian, Japanese, Slovene, and Turkish. We observe that the results for Arabic are notably different from those for the other languages. We suspect that this is because the average length of each sentence is very long (i.e., 39.3 words; see Table \ref{tab:token} for overall corpus statistics). \newcite{buchholz-marsi:2006:CoNLL-X} noted that the parse unit of the Arabic treebank is not a sentence but a paragraph in which every sentence is combined via a dummy root node.
To remedy this inconsistency of annotation units, we prepared the modified treebank, which we denote as Arabic$^*$ in the figure, by treating each child tree of the root node as a new sentence.\footnote{We removed the resulting sentence if the length was one.} The results then are closer to other language treebanks, especially Danish, which indicates that the exceptional behavior of Arabic largely originates with the annotation variety. From this point, we review the results of Arabic$^*$ instead of the original Arabic treebank. \subsection{Comparing with randomized sentences} \label{sec:random} \begin{figure}[p] \centering \resizebox{0.95\textwidth}{!} {\includegraphics[]{./data/memory_cost.pdf}} \caption{Crosslinguistic comparison of the cumulative frequencies of stack depth during oracle transitions.} \label{fig:load-comparison} \end{figure} \begin{figure}[p] \centering \resizebox{0.95\textwidth}{!} {\includegraphics[]{./data/memory_cost_random.pdf}} \caption{Stack depth results in corpora with punctuation removed; the dashed lines show results on randomly reordered sentences.} \label{fig:load-comparison-random} \end{figure} The next question we examine is whether the observation from the last experiment, i.e., that the left-corner parser consistently demands less stack depth, holds only for naturally occurring or grammatically correct sentences. We attempt to answer this question by comparing oracle transitions on original treebank sentences and on (probably) grammatically incorrect sentences. We create these incorrect sentences using the method proposed by \newcite{gildea-temperley:2007:ACLMain}. We reorder words in each sentence by first extracting a directed graph from the dependency tree, and then randomly reorder the children of each node while preserving projectivity. Following \newcite{gildea-temperley:2007:ACLMain}, we remove punctuation from all corpora in this experiment beforehand, since how punctuation is attached to words is not essential. The dotted lines shown in Figure \ref{fig:load-comparison-random} denote the results of randomized sentences for each system. There are notable differences in required stack depth between original and random sentences in many languages. For example, with a stack depth $\leq 3$, the left-corner system cannot reach 90\% of configurations in many randomized treebanks such as Arabic$^*$, Catalan, Danish, English, Greek, Italian, Portuguese, and Spanish. These results suggest that our system demands less stack depth only for naturally occurring sentences. For Chinese and Hungarian, the differences are subtle; however, the differences are also small for the other systems, which implies that these corpora have biases on graphs to reduce the differences. \subsection{Token-level and sentence-level coverage results} \label{sec:token} \begin{table}[p] \centering \scalebox{0.90}{ \begin{tabular}[t]{l l r r r r r r r r r r} \hline & & \multicolumn{1}{r}{{\bf Arabic}}&\multicolumn{1}{r}{{\bf Arabic$^*$}}&\multicolumn{1}{r}{{\bf Basque}}&\multicolumn{1}{r}{{\bf Bulgarian}}&\multicolumn{1}{r}{{\bf Catalan}}&\multicolumn{1}{r}{{\bf Chinese}}&\multicolumn{1}{r}{{\bf Czech}}\\ \multicolumn{2}{l}{\#Sents.} &3,043&4,102&3,523&13,221&15,125&57,647&25,650\\ \multicolumn{2}{l}{Av. 
len.} &39.3&28.0&16.8&15.8&29.8&6.9&18.0\\ \hline Token & $\leq 1$& 22.9/21.8&52.8/55.9&57.3/62.7&79.5/80.0&66.2/69.4&83.6/83.6&74.7/74.0\\ & $\leq 2$& 63.1/65.4&89.6/92.2&92.1/93.3&98.1/98.7&94.8/96.8&98.3/98.3&96.6/97.3\\ & $\leq 3$& 92.0/94.1&98.9/99.4&99.2/99.3&99.8/99.9&99.5/99.8&99.9/99.9&99.7/99.8\\ & $\leq 4$& 99.1/99.5&99.9/99.9&99.9/99.9&99.9/99.9&99.9/99.9&99.9/99.9&99.9/99.9\\ \\ Sent. & $\leq 1$& 7.0/7.4&20.8/21.4&15.5/20.8&37.3/39.4&14.7/16.9&58.3/58.3&32.0/34.2\\ & $\leq 2$& 26.8/27.8&55.4/59.3&69.8/75.8&90.7/93.0&68.3/75.5&95.0/95.0&83.9/86.6\\ & $\leq 3$& 57.6/61.6&91.7/94.5&95.8/97.0&99.3/99.7&95.6/98.3&99.7/99.7&98.2/99.1\\ & $\leq 4$& 90.9/94.4&99.5/99.8&99.7/99.8&99.9/99.9&99.7/99.9&99.9/99.9&99.8/99.9\\ \hline \ & & \multicolumn{1}{r}{{\bf Danish}}&\multicolumn{1}{r}{{\bf Dutch}}&\multicolumn{1}{r}{{\bf English}}&\multicolumn{1}{r}{{\bf German}}&\multicolumn{1}{r}{{\bf Greek}}&\multicolumn{1}{r}{{\bf Hungarian}}&\multicolumn{1}{r}{{\bf Italian}}\\ \multicolumn{2}{l}{\#Sents.} &5,512&13,735&18,791&39,573&2,902&6,424&3,359\\ \multicolumn{2}{l}{Av. len.} &19.1&15.6&25.0&18.8&25.1&22.6&23.7\\ \hline Token & $\leq 1$& 71.3/75.2&70.2/73.4&69.2/71.3&66.9/66.7&66.7/66.8&65.6/64.1&62.8/64.1\\ & $\leq 2$& 95.6/97.4&95.9/96.8&96.3/97.5&94.5/94.5&95.2/96.2&95.1/94.9&94.0/94.2\\ & $\leq 3$& 99.6/99.8&99.7/99.8&99.7/99.9&99.5/99.5&99.6/99.8&99.5/99.5&99.5/99.5\\ & $\leq 4$& 99.9/99.9&99.9/99.9&99.9/99.9&99.9/99.9&99.9/100&99.9/99.9&99.9/99.9\\ \\ Sent. & $\leq 1$& 26.1/29.7&33.0/37.3&13.5/16.7&22.7/23.7&20.7/22.5&14.0/14.7&25.0/27.3\\ & $\leq 2$& 77.9/83.4&83.4/87.3&73.4/80.0&71.3/72.8&76.6/80.8&69.3/70.4&76.0/77.2\\ & $\leq 3$& 96.8/98.9&98.2/98.7&97.8/99.0&96.3/96.6&97.4/98.4&95.8/96.2&97.3/97.5\\ & $\leq 4$& 99.8/99.9&99.8/99.9&99.8/99.9&99.7/99.7&99.8/100&99.7/99.7&99.8/99.8\\ \hline \ & & \multicolumn{1}{r}{{\bf Japanese}}&\multicolumn{1}{r}{{\bf Portuguese}}&\multicolumn{1}{r}{{\bf Slovene}}&\multicolumn{1}{r}{{\bf Spanish}}&\multicolumn{1}{r}{{\bf Swedish}}&\multicolumn{1}{r}{{\bf Turkish}}\\ \multicolumn{2}{l}{\#Sents.} &17,753&9,359&1,936&3,512&11,431&5,935\\ \multicolumn{2}{l}{Av. len.} &9.8&23.7&19.1&28.0&18.2&12.7\\ \hline Token & $\leq 1$& 57.1/55.0&68.7/73.0&76.4/74.9&64.0/67.0&78.5/80.1&65.8/62.7\\ & $\leq 2$& 90.6/89.5&95.5/97.5&97.1/97.3&93.4/96.1&98.1/98.6&93.9/93.7\\ & $\leq 3$& 99.1/99.0&99.6/99.9&99.7/99.8&99.1/99.8&99.9/99.9&99.4/99.5\\ & $\leq 4$& 99.9/99.9&99.9/99.9&99.9/100&99.9/99.9&99.9/99.9&99.9/99.9\\ \\ Sent. & $\leq 1$& 57.3/58.1&27.1/30.8&34.0/40.1&17.8/20.2&32.0/34.4&37.6/38.8\\ & $\leq 2$& 81.8/81.8&78.7/85.1&85.7/88.5&66.1/73.5&87.8/90.3&80.1/81.0\\ & $\leq 3$& 97.0/97.1&97.4/99.1&98.3/99.0&94.5/97.9&99.1/99.6&97.1/97.5\\ & $\leq 4$& 99.8/99.8&99.8/99.9&99.9/100&99.2/99.9&99.9/99.9&99.8/99.8\\ \hline \end{tabular} } \caption{Token-level and sentence-level coverage results of left-corner oracles with depth$_{re}$. Here, the right-hand numbers in each column are calculated from corpora that exclude all punctuation, e.g., 92\% of tokens in Arabic are covered within a stack depth $\leq 3$, while the number increases to 94.1 when punctuation is removed. Further, 57.6\% of sentences (61.6\% without punctuation) can be parsed within a maximum depth$_{re}$ of three, i.e., the maximum degree of center-embedding is at most two in 57.6\% of sentences. Av. len. 
indicates the average number of words in a sentence.}\label{tab:token} \end{table} As noted in Section \ref{sec:memorycost}, the stack depth of the left-corner system in our experiments thus far is not an exact measurement of the degree of center-embedding of a construction, due to the extra factor introduced by the {\sc Shift} action. In this section, we focus on depth$_{re}$, which matches the degree of center-embedding and may be more appropriate for some applications. Table \ref{tab:token} shows token- and sentence-level statistics with and without punctuation. The token-level coverage with depth $\leq 2$ substantially improves on the results shown in Figure \ref{fig:load-comparison} in many languages, consistently exceeding 90\% except for Arabic$^*$, which indicates that many configurations with a stack depth of two in the previous experiments are due to the extra factor caused by the {\sc Shift} action rather than to deeper center-embedded structures. The fact that the token-level coverage reaches 99\% in most languages with depth$_{re} \leq 3$ indicates that constructions with three or more degrees of center-embedding occur rarely in natural language sentences. Overall, the sentence-level coverage results are slightly lower, but they are still very high, notably 95\% -- 99\% with depth$_{re} \leq 3$ for most languages. \subsection{Results on UD} \label{sec:trans:ud-result} \begin{figure}[p] \centering \resizebox{0.95\textwidth}{!} {\includegraphics[]{./data/ud/depth_to_cumulative.pdf}} \caption{Stack depth results in UD.} \label{fig:load-comparison-ud} \end{figure} In the following two sections, we move on to UD, in which annotation styles are more consistent across languages. Figure \ref{fig:load-comparison-ud} shows the result of the same analysis as the one performed on the CoNLL dataset in Section \ref{sec:analysis:general} (i.e., Figure \ref{fig:load-comparison}). We do not observe substantial differences between the CoNLL dataset and UD. Again, the left-corner system is the most consistent across languages. This result is interesting in that it indicates that the stack depth constraint of the left-corner system is little affected by the choice of annotation style, since the annotation of UD is consistently content-head-based while that of the CoNLL dataset is (although less consistently) mainly function-head-based.\footnote{ \REVISE{A theoretical analysis of the effect of the annotation style is interesting, but is beyond the scope of the current study. We only claim that substantial differences are not observed in the present empirical analysis. Generally speaking, the two dependency representations, content-head-based and function-head-based, do not lead to identical CFG representations after binarization, but as the meanings that they encode are basically the same (with different notions of head), we expect that the resulting differences in CFG forms are not substantial. }} We will see this tendency in more detail by analyzing token-level statistics based on depth$_{re}$ below. \subsection{Relaxing the definition of center-embedding} \label{sec:trans:relax} The token-level analysis on the CoNLL dataset in Section \ref{sec:token} (Table \ref{tab:token}) reveals that in most languages depth$_{re} \leq 2$ is a sufficient condition to cover most constructions, but there are often relatively large gaps between depth$_{re} \leq 1$ (i.e., no center-embedding) and depth$_{re} \leq 2$ (i.e., at most one degree of center-embedding).
In this section we explore constraints that lie between these two. We do so by relaxing the definition of center-embedding that we discussed in Section \ref{sec:bg:embedding}. Recall that in our definition of center-embedding (Definition \ref{def:bg:embed-depth}), we check whether the length of the most embedded constituent is larger than one (i.e., $|x|\geq 2$ in Eq. \ref{eq:bg:embed-depth}). In other words, the minimal length of the most embedded constituent for center-embedded structures is {\it two} in this case. Here, we relax this condition; for example, if we assume that the minimal length of the most embedded constituent is {\it three}, we regard some singly center-embedded structures (by Definition \ref{def:bg:embed-depth}), namely those in which the size of the embedded constituent is one or two, as not center-embedded. Due to the transparency between the stack depth and the degree of center-embedding, this can be achieved by not increasing depth$_{re}$ when the size (number of tokens including the dummy node) of the top stack element does not exceed the threshold, which is one by default (thus, by default, no reduction of the count occurs). \begin{figure}[t] \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.5cm] The \& reporter \& who \& the \& senator \& met \& ignored \& the \& president \\ \end{deptext} \depedge{2}{1}{} \depedge{7}{2}{} \depedge{6}{3}{} \depedge[thick,red]{5}{4}{} \depedge[thick,red]{6}{5}{} \depedge[thick,red]{2}{6}{} \deproot[edge unit distance=2ex]{7}{} \depedge{9}{8}{} \depedge{7}{9}{} \end{dependency} \caption{Following Definition \ref{def:bg:embed-depth}, this tree is recognized as singly center-embedded, while it is not center-embedded if ``the senator'' is replaced by a single word. Bold arcs are the cause of the center-embedding (the zig-zag pattern).} \label{fig:trans:relax} \end{figure} \paragraph{Motivating example} In Section \ref{sec:bg:psycho}, we showed that the following sentence is recognized as not center-embedded when we follow Definition \ref{def:bg:embed-depth}: \enumsentence{ The reporter [who {\bf Mary} met] ignored the president. } However, we can see that the following sentence is recognized as singly center-embedded: \enumsentence{ The reporter [who {\bf the senator} met] ignored the president. } Figure \ref{fig:trans:relax} shows the UD-style dependency tree, with the arcs causing the center-embedding highlighted. This observation suggests that many constructions that require depth$_{re} = 2$ might be captured by the relaxed condition on center-embedding discussed above. \paragraph{Result} Figure \ref{fig:load-comparison-relax-ud} shows the results with such relaxed conditions. Here we also show the effect of changing the maximum sentence length. We can see that in some languages, such as Hungarian, Japanese, and Persian, the effect of this relaxation is substantial, while the changes in other languages are rather modest. We can also see that in most languages a depth of two is sufficient to cover most constructions, which is again consistent with our observation on the CoNLL dataset (Section \ref{sec:token}). We will explore this relaxation again in the supervised parsing experiments presented below (Section \ref{sec:trans:ud-parse}), where, interestingly, we will observe that the improvements brought by the relaxation are more substantial.
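For reference, the token- and sentence-level coverage figures reported throughout this section (e.g., Table \ref{tab:token} and Figures \ref{fig:load-comparison-ud} and \ref{fig:load-comparison-relax-ud}) reduce to a simple cumulative aggregation over the per-configuration stack depths recorded while running each oracle. The following is a minimal sketch of this bookkeeping; the helper is hypothetical and is not the actual analysis script used for the reported numbers.

\begin{verbatim}
from collections import Counter

def coverage(depth_traces, max_depth=4):
    """depth_traces holds, per sentence, the list of stack depths observed at
    each (reduce) configuration.  Returns cumulative token-level and
    sentence-level coverage for each depth bound up to max_depth."""
    per_config, per_sent = Counter(), Counter()
    for trace in depth_traces:
        per_config.update(trace)          # one count per configuration
        per_sent[max(trace)] += 1         # a sentence needs its maximum depth
    n_config, n_sent = sum(per_config.values()), len(depth_traces)
    token_cov, sent_cov, tok, sen = {}, {}, 0, 0
    for d in range(1, max_depth + 1):
        tok += per_config[d]
        sen += per_sent[d]
        token_cov[d] = tok / n_config
        sent_cov[d] = sen / n_sent
    return token_cov, sent_cov

# Two toy sentences whose reduce configurations needed depths [1,1,2] and [1,2,3]:
print(coverage([[1, 1, 2], [1, 2, 3]]))
# token coverage: 50% / 83% / 100% at bounds 1 / 2 / 3;
# sentence coverage: 0% / 50% / 100% at bounds 1 / 2 / 3.
\end{verbatim}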
\begin{figure}[p] \centering \resizebox{0.95\textwidth}{!} {\includegraphics[]{./data/ud/token_depth_to_cumulative.pdf}} \vspace{-10pt} \caption{Stack depth results in UD with a left-corner system (depth$_{re}$) when the definition of center-embedding is relaxed. The parenthesized numbers indicate the size of allowed constituents at the bottom of embedding. For example (2) next to 2 indicates we allow depth = 3 if the size of subtree on the top of the stack is 1 or 2. ${\sf Len.}$ is the maximum sentence length.} \label{fig:load-comparison-relax-ud} \end{figure} \section{Parsing Experiment} \label{sec:parse} Our final experiment is the parsing experiment on unseen sentences. A transition-based dependency parsing system is typically modeled with a structured discriminative model, such as with the structured perceptron and beam search \cite{zhang-clark:2008:EMNLP,huang-sagae:2010:ACL}. We implemented and trained the parser model in this framework to investigate the following questions: \begin{itemize} \item How does the stack depth bound at decoding affect parsing performance of each system? The underlying concern here is basically the same as in the previous oracle experiment discussed in Section \ref{sec:analysis}, i.e., to determine whether the stack depth of the left-corner system provides a meaningful measure for capturing the syntactic regularities. More specifically, we wish to observe whether the observation from the last experiment, i.e., that the behavior of the left-corner system is mostly consistent across languages, also holds with parse errors. \item Does our parser perform better than other transition-based parsers? One practical disadvantage of our system is that its attachment decisions are made more eagerly, i.e., that it has to commit to a particular structure at an earlier point; however, this also means the parser may utilize rich syntactic information as features that are not available in other systems. We investigate whether these rich features help disambiguation in practice. \item Finally, we examine parser performance of our system under a restriction on features to prohibit lookahead on the buffer. This restriction is motivated by the previous model of probabilistic left-corner parsing \cite{journals/coling/SchulerAMS10} in which the central motivation is its cognitive plausibility. We report how accuracies drop with the cognitively motivated restriction and discuss a future direction to improve performance. \end{itemize} In the following we will investigate the above questions mainly with CoNLL dataset, as in our analysis in Section \ref{sec:analysis}. In Section \ref{sec:exp:setting}, we explain several experimental setups. We first compare the performances in the standard English experiments in Section \ref{sec:exp:english-devel}, and then present experiments in CoNLL dataset in Section \ref{sec:exp:conll-parse}. Finally, we summarize the results in UD in Section \ref{sec:trans:ud-parse}. 
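Regarding the first question above, imposing a stack depth bound at decoding time is straightforward in a beam-search decoder: as described in Section \ref{sec:exp:english-devel}, we simply discard candidates whose stack depth exceeds the bound when expanding the beam. The following is a schematic sketch of such depth-bounded beam expansion; the scoring model and the enumeration of legal transitions are left abstract, and the function names are ours, purely for illustration.

\begin{verbatim}
def beam_search(init_state, successors, score, is_final, depth_of,
                beam_size=8, max_depth=None):
    """Generic beam search over transition sequences.  successors(s) yields the
    states reachable from s, score(s) is a (higher-is-better) model score, and
    depth_of(s) is the stack depth of s; when max_depth is given, states
    exceeding it are pruned, mirroring the bounded-depth setting."""
    beam = [init_state]
    while not all(is_final(s) for s in beam):
        candidates = []
        for s in beam:
            if is_final(s):
                candidates.append(s)
                continue
            for nxt in successors(s):
                if max_depth is None or depth_of(nxt) <= max_depth:
                    candidates.append(nxt)
        if not candidates:                # every expansion was pruned
            break
        beam = sorted(candidates, key=score, reverse=True)[:beam_size]
    return max(beam, key=score)

# Toy usage: states are action strings, depth = #shifts - #reduces,
# and any length-3 sequence is final.
succ = lambda s: [s + a for a in "SR"]
best = beam_search("", succ, score=len, is_final=lambda s: len(s) == 3,
                   depth_of=lambda s: s.count("S") - s.count("R"),
                   beam_size=2, max_depth=2)
print(best)    # "SSS" was pruned by the depth bound
\end{verbatim}

Pruning by depth$_{re}$ rather than by the raw stack depth simply amounts to skipping this check for states produced by a shift action.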
\begin{figure}[t] \centering \begin{tikzpicture}[level distance=20pt, sibling distance=35pt,edge from parent/.style={draw,->}] \node at (0, 0) {Incomplete element:}; \node at (0, -0.5) {$s.{\sf gg}$} [sibling distance=45pt] child[missing] child { node {$s.{\sf gp}$} [sibling distance=45pt] child[missing] child { node {$s.{\sf p}$} [sibling distance=45pt] child[missing] child { node {$x$} [sibling distance=15pt] child {node {$s.{\sf l}$}} child {node {$s.{\sf l_2}$}} child[missing] child[missing] } } }; \node at (4.0, 0) {Complete element:}; \node at (4.0, -0.5) {$q$} [sibling distance=30pt] child { node {$q.{\sf l}$} } child { node {$q.{\sf r}$} }; \node (R) at (7.0, 0) {Shift mode:}; \node (s) [below right =0.4cm and 0.5cm of R.west,anchor=west] {Stack:}; \node (b) [right=1.7cm of s] {Buffer:}; \draw ($(s)+(0,-0.3)$) -- ($(s)+(2.0,-0.3)$) -- ($(s)+(2.0,-0.8)$) -- ($(s)+(0,-0.8)$); \draw ($(s)+(4.5,-0.3)$) -- ($(s)+(2.7,-0.3)$) -- ($(s)+(2.7,-0.8)$) -- ($(s)+(4.5,-0.8)$); \node (s1) [below right=0.55cm and 1.4cm of s.west,anchor=west] {$s_1 ~~ s_0$}; \node[right=0.9cm of s1] {$q_0 ~~ q_1 ~~ q_2$}; \node (R) [below=1.8cm of R.west,anchor=west] {Reduce mode:}; \node (s) [below right =0.4cm and 0.5cm of R.west,anchor=west] {Stack:}; \node (b) [right=1.7cm of s] {Buffer:}; \draw ($(s)+(0,-0.3)$) -- ($(s)+(2.0,-0.3)$) -- ($(s)+(2.0,-0.8)$) -- ($(s)+(0,-0.8)$); \draw ($(s)+(4.5,-0.3)$) -- ($(s)+(2.7,-0.3)$) -- ($(s)+(2.7,-0.8)$) -- ($(s)+(4.5,-0.8)$); \node (s1) [below right=0.55cm and 0.9cm of s.west,anchor=west] {$s_1 ~~ s_0 ~~ q_0$}; \node[right=0.9cm of s1] {$q_1 ~~ q_2$}; \end{tikzpicture} \caption{(Left) Elementary features extracted from an incomplete and complete node, and (Right) how feature extraction is changed depending on whether the next step is shift or reduce. 
}\label{fig:feature} \end{figure} \begin{table}[t] \centering \begin{tabular}{l l l l} \hline $s_0.{\sf p.w}$ & $s_0.{\sf p.t}$ & $s_0.{\sf l.w}$ & $s_0.{\sf l.t}$ \\ $s_1.{\sf p.w}$ & $s_1.{\sf p.t}$ & $s_1.{\sf l.w}$ & $s_1.{\sf l.t}$ \\ $s_0.{\sf p.w} \circ s_0.{\sf p.t}$ & $s_0.{\sf l.w} \circ s_0.{\sf l.t}$ & $s_1.{\sf p.w} \circ s_1.{\sf p.t}$ & $s_1.{\sf l.w} \circ s_1.{\sf l.t}$ \\ $q_0.{\sf w}$ & $q_0.{\sf t}$ & $q_0.{\sf w} \circ q_0.{\sf t}$\\ $s_0.{\sf p.w} \circ s_0.{\sf l.w}$ & $s_0.{\sf p.t} \circ s_0.{\sf l.t}$ & \\ \hline $s_0.{\sf p.w} \circ s_1.{\sf p.w}$ & $s_0.{\sf l.w} \circ s_1.{\sf l.w}$ & $s_0.{\sf p.t} \circ s_1.{\sf p.t}$ & $s_0.{\sf l.t} \circ s_1.{\sf l.t}$ \\ \hline $s_0.{\sf p.w} \circ q_0.{\sf w}$ & $s_0.{\sf l.w} \circ q_0.{\sf w}$ & $s_0.{\sf p.t} \circ q_0.{\sf t}$ & $s_0.{\sf l.t} \circ q_0.{\sf t}$ \\ $s_0.{\sf p.w} \circ q_0.{\sf w} \circ q_0.{\sf p}$ & $s_0.{\sf p.w} \circ q_0.{\sf w} \circ s_0.{\sf p.t}$ & $s_0.{\sf l.w} \circ q_0.{\sf w} \circ s_0.{\sf l.t}$ & $s_0.{\sf l.w} \circ q_0.{\sf w} \circ s_0.{\sf l.t}$ \\ $s_0.{\sf p.w} \circ s_0.{\sf p.t} \circ q_0.{\sf t}$ & $s_0.{\sf l.w} \circ s_0.{\sf l.t} \circ q_0.{\sf t}$\\ \hline $q_0.{\sf t} \circ q_0.{\sf l.t} \circ q_0.{\sf r.t}$ & $q_0.{\sf w} \circ q_0.{\sf l.t} \circ q_0.{\sf r.t}$ \\ \hline $s_0.{\sf p.t} \circ s_0.{\sf gp.t} \circ s_0.{\sf gg.t}$ & $s_0.{\sf p.t} \circ s_0.{\sf gp.t} \circ s_0.{\sf l.t}$ & $s_0.{\sf p.t} \circ s_0.{\sf l.t} \circ s_0.{\sf l_2.t}$ & $s_0.{\sf p.t} \circ s_0.{\sf gp.t} \circ q_0.{\sf t}$\\ $s_0.{\sf p.t} \circ s_0.{\sf l.t} \circ q_0.{\sf t}$ & $s_0.{\sf p.w} \circ s_0.{\sf l.t} \circ q_0.{\sf t}$ & $s_0.{\sf p.t} \circ s_0.{\sf l.w} \circ q_0.{\sf t}$ & $s_0.{\sf l.t} \circ s_0.{\sf l_2.p} \circ q_0.{\sf t}$ \\ $s_0.{\sf l.t} \circ s_0.{\sf l_2.t} \circ q_0.{\sf t}$ & $s_0.{\sf p.t} \circ q_0.{\sf t} \circ q_0.{\sf l.t}$ & $s_0.{\sf p.t} \circ q_0.{\sf t} \circ q_0.{\sf r.t}$ \\ $s_1.{\sf p.t} \circ s_0.{\sf p.t} \circ s_0.{\sf l.t}$ & $s_1.{\sf p.t} \circ s_0.{\sf l.t} \circ q_0.{\sf t}$ & $s_1.{\sf l.t} \circ s_0.{\sf l.t} \circ q_0.{\sf t}$ & $s_1.{\sf l.t} \circ s_0.{\sf l.t} \circ q0.{\sf t}$ \\ $s_1.{\sf l.t} \circ s_0.{\sf p.t} \circ q_0.{\sf p}$ \\ \hline \end{tabular} \caption{Feature templates used in both full and restricted feature sets, with ${\sf t}$ representing POS tag and ${\sf w}$ indicating the word form, e.g., $s_0.{\sf l.t}$ refers to the POS tag of the leftmost child of $s_0$. $\circ$ means concatenation.}\label{tab:feature} \end{table} \begin{table}[t] \centering \begin{tabular}{l l l l} \hline $q_0.{\sf t} \circ q_1.{\sf t}$ & $q_0.{\sf t} \circ q_1.{\sf t} \circ q_2.{\sf t}$ & $s_0.{\sf p.t} \circ q_0.{\sf p} \circ q_1.{\sf p} \circ q_2.{\sf p}$ & $s_0.{\sf l.t} \circ q_0.{\sf t} \circ q_1.{\sf t} \circ q_2.{\sf t}$ \\ $s_0.{\sf p.w} \circ q_0.{\sf t} \circ q_1.{\sf t}$ & $s_0.{\sf p.t} \circ q_0.{\sf t} \circ q_1.{\sf t}$ & $s_0.{\sf l.w} \circ q_0.{\sf t} \circ q_1.{\sf t}$ & $s_0.{\sf l.t} \circ q_0.{\sf t} \circ q_1.{\sf t}$ \\ \hline \end{tabular} \caption{Additional feature templates only used in the full feature model.}\label{tab:additional} \end{table} \subsection{Feature} The feature set we use is explained in Figure \ref{fig:feature} and Tables \ref{tab:feature} and \ref{tab:additional}. 
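To make the notation in these tables concrete, the sketch below shows how a few of the templates could be instantiated as feature strings from one configuration. The dictionary-based tree representation, the mapping from elementary feature names to node indices, and the string encoding are all ours, purely for illustration; they are not the data structures used in our implementation.

\begin{verbatim}
def attrs(tree, idx):
    """Word and POS-tag attributes of a node; a missing node gets placeholders."""
    if idx is None:
        return {"w": "<none>", "t": "<none>"}
    return {"w": tree[idx]["word"], "t": tree[idx]["tag"]}

def instantiate(tree, templates, elems):
    """elems maps an elementary feature name such as 's0.p' or 'q0' to a node
    index (or None); a template is a tuple of 'name.attr' items whose values
    are concatenated into one feature string."""
    feats = []
    for tpl in templates:
        parts = []
        for item in tpl:
            name, attr = item.rsplit(".", 1)
            parts.append(item + "=" + attrs(tree, elems.get(name))[attr])
        feats.append("_".join(parts))
    return feats

# A hypothetical configuration over a 3-word sentence: s0's parent (s0.p) is
# word 1, its leftmost child (s0.l) is word 0, and the buffer front q0 is word 2.
tree = [{"word": "the", "tag": "DT"},
        {"word": "dog", "tag": "NN"},
        {"word": "runs", "tag": "VBZ"}]
elems = {"s0.p": 1, "s0.l": 0, "q0": 2}
templates = [("s0.p.w",), ("s0.p.t", "q0.t"), ("s0.p.w", "s0.l.w")]
print(instantiate(tree, templates, elems))
# -> ['s0.p.w=dog', 's0.p.t=NN_q0.t=VBZ', 's0.p.w=dog_s0.l.w=the']
\end{verbatim}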
Our transition system is different from other systems in that it has two modes, i.e., a shift mode in which the next action is either {\sc Shift} or {\sc Insert}, and a reduce mode in which we select the next reduce action; we therefore use different features depending on the current mode. Figure \ref{fig:feature} shows how features are extracted from each node for each mode. In reduce mode, we treat the top node of the stack as if it were the top of the buffer ($q_0$), which allows us to use the same feature templates in both modes by modifying only the definitions of the elementary features $s_i$ and $q_i$. A similar technique has been employed in the transition system proposed by \newcite{sartorio-satta-nivre:2013:ACL2013}. To explore the last question, we develop two feature sets. Our full feature set consists of the features shown in Tables \ref{tab:feature} and \ref{tab:additional}. For the limited feature set, we remove all features that depend on $q_1$ and $q_2$ in Figure \ref{fig:feature}, which we list in Table \ref{tab:additional}. Here, we only look at the top node on the buffer in shift mode. This is the minimal amount of lookahead in our parser and is the same as in previous left-corner PCFG parsers \cite{journals/coling/SchulerAMS10}, which are cognitively motivated. Our parser cannot capture a head--dependent relationship directly at each reduce step, because all interactions between nodes are via a dummy node, which may be a severe limitation from a practical viewpoint; however, we can exploit richer context from each subtree on the stack, as illustrated in Figure \ref{fig:feature}. We construct our feature set with many nodes around the dummy node, including the parent (${\sf p}$), grandparent (${\sf gp}$), and great-grandparent (${\sf gg}$). \subsection{Settings} \label{sec:exp:setting} We compare parsers with three transition systems: arc-standard, arc-eager, and left-corner. The feature set of the arc-standard system is borrowed from \newcite{huang-sagae:2010:ACL}. For the arc-eager system, we use the feature set of \newcite{zhang-nivre:2011:ACL-HLT2011}, from which we exclude features that rely on arc label information. We train all models with different beam sizes in the violation-fixing perceptron framework \cite{huang-fayong-guo:2012:NAACL-HLT}. Since our goal is not to produce a state-of-the-art parsing system, we use gold POS tags as input both in training and in testing. As noted in Section \ref{subsec:transitionsystem}, the left-corner parser sometimes fails to generate a single tree, in which case the stack contains a complete subtree at the top position (since the last action is always {\sc Insert}) and one or more incomplete subtrees. If this occurs, we perform the following post-processing steps: \begin{itemize} \item We collapse each dummy node in an incomplete tree. More specifically, if the dummy node is the head of the subtree, we attach all of its children to the sentence (dummy) root node; otherwise, the children are reattached to the parent of the dummy node. \item The resulting complete subtrees are all attached to the sentence (dummy) root node.
\end{itemize} \subsection{Results on the English Development Set} \label{sec:exp:english-devel} \begin{figure}[p] \centering \begin{minipage}[b]{.49\linewidth} \centering \resizebox{0.9\textwidth}{!} {\includegraphics[]{./data/plot_b=1.pdf}} \subcaption{$b=1$} \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering \resizebox{0.9\textwidth}{!} {\includegraphics[]{./data/plot_b=2.pdf}} \subcaption{$b=2$} \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering \resizebox{0.9\textwidth}{!} {\includegraphics[]{./data/plot_b=8.pdf}} \subcaption{$b=8$} \end{minipage} \begin{minipage}[b]{.49\linewidth} \centering \resizebox{0.9\textwidth}{!} {\includegraphics[]{./data/plot_b=16.pdf}} \subcaption{$b=16$} \end{minipage} \caption{Accuracy vs. stack depth bound at decoding for several beam sizes ($b$).} \label{fig:accuracy-depth} \end{figure} We first evaluate our system on the common English development experiment. We train the model in section 2-21 of the WSJ Penn Treebank \cite{Marcus93buildinga}, which are converted into dependency trees using the LTH conversion tool\footnote{http://nlp.cs.lth.se/software/treebank\_converter/}. \paragraph{Impact of Stack Depth Bound} To explore the first question posed at the beginning of this section, we compare parse accuracies under each stack depth bound with several beam sizes, with results shown in Figure \ref{fig:accuracy-depth}. In this experiment, we calculate the stack depth of a configuration in the same way as our oracle experiment (see Section \ref{sec:analysis:settings}), and when expanding a beam, we discard candidates for which stack depth exceeds the maximum value. As discussed in Section \ref{sec:token}, for the left-corner system, depth$_{re}$ might be a more linguistically meaningful measure, so we report scores with both definitions.\footnote{ This can be achieved by allowing any configurations after a shift step. } The general tendency across different beam sizes is that our left-corner parser (in particular with depth$_{re}$) is much less sensitive to the value of the stack depth bound. For example, when the beam size is eight, the accuracies of the left-corner (depth$_{re}$) are 90.6, 91.7, 91.7, and 91.7 with increased stack depth bounds, while the corresponding scores are 82.5, 90.6, 92.6, and 93.3 in the arc-eager system. This result is consistent with the observation in our oracle coverage experiment discussed in Section \ref{sec:analysis}, and suggests that a depth bound of two or three might be a good constraint for restricting tree candidates for natural language with no (or little) loss of recall. Next, we examine whether this observation is consistent across languages. \begin{figure}[t] \centering \resizebox{0.75\textwidth}{!} {\includegraphics[]{./data/plot_d=0.pdf}} \caption{Accuracy vs. beam size for each system on the English Penn Treebank development set. Left-corner (full) is the model with the full feature set, while Left-corner (limited) is the model with the limited feature set.} \label{fig:plot} \end{figure} \paragraph{Performance without Stack Depth Bound} Figure \ref{fig:plot} shows accuracies with no stack depth bound when changing beam sizes. We can see that the accuracy of the left-corner system (full feature) is close to that of the other two systems, but some gap remains. With a beam size of 16, the scores are left-corner: 92.0; arc-standard: 92.9; arc-eager: 93.4. 
Also, the score gaps are relatively large at smaller beam sizes; e.g., with beam size 1, the score of the left-corner is 85.5, while that of the arc-eager is 91.1. This result indicates that prediction with our parser might be structurally harder than with the other parsers, even though ours can utilize richer context from the subtrees on the stack.

\paragraph{Performance of Limited Feature Model}
Next we move on to the results with the cognitively motivated limited features (Figure \ref{fig:plot}). When the beam size is small, this model performs extremely poorly (63.6\% with beam size 1). This is not surprising, since our parser has to commit to each attachment decision much earlier, which seems hard without lookahead features or a larger beam. However, it is interesting that it achieves a reasonably high score of 90.6\% accuracy with beam size 16. In previous constituency left-corner parsing experiments concerned with cognitive plausibility \cite{journals/coling/SchulerAMS10,TOPS:TOPS12034}, the beam size is typically very large, e.g., 2,000. The largest difference between our parser and their systems is the model: ours is discriminative while theirs are generative. Though discriminative models are not popular in studies of human language processing \cite{keller:2010:Short}, the fact that our parser is able to output high-quality parses with such a small beam size is appealing from the cognitive viewpoint (see Section \ref{sec:relatedwork} for further discussion).

\subsection{Result on CoNLL dataset}
\label{sec:exp:conll-parse}

\begin{figure}[p]
\centering
\resizebox{0.95\textwidth}{!}
{\includegraphics[]{./data/conll_depth_to_accuracy_2.pdf}}
\caption{Accuracy vs. stack depth bound on the CoNLL datasets.}
\label{fig:conll_depth_to_accuracy}
\end{figure}

We next examine whether the observations above on the English dataset generalize across languages, using the CoNLL datasets. Note that although we train on the projectivized corpus, evaluation is against the original nonprojective trees. As our systems are unlabeled, we do not try any post-deprojectivization \cite{nivre-nilsson:2005:ACL}. In this experiment, we set the beam size to 8.

\paragraph{Effect of Stack Depth Bound}
The cross-linguistic results with stack depth bounds are summarized in Figure \ref{fig:conll_depth_to_accuracy}, from which we can see that the overall tendency of each system is almost the same as in the English experiment. For the left-corner (depth$_{re}$), only small accuracy drops are observed between the models bounded at depth 2 or 3 and the model without a depth bound, while the gaps are larger for the arc-eager. The arc-standard parser performs extremely poorly with small depth bounds except for Japanese and Turkish, which is consistent with our observation that the arc-standard system demands less stack depth only for head-final languages (Section \ref{sec:analysis:general}). Notably, in some cases the scores of the left-corner parser (depth$_{re}$) drop when the depth bound is loosened (see Basque, Danish, and Turkish), meaning that the stack depth bound of the left-corner sometimes helps disambiguation by ignoring linguistically implausible structures (deep center-embedding) during search. This result indicates that parser performance could be improved by exploiting the stack depth information of the left-corner parser, though we leave further investigation to future work.

\paragraph{Performance without Stack Depth Bound}
Table \ref{tab:parse} summarizes the results without stack depth bounds.
Again, the overall tendency is the same as the English experiment. The arc-eager performs the best except Arabic. In some languages (e.g., Bulgarian, English, Spanish, and Swedish), the left-corner (full) performs better than the arc-standard, while the average score is 1.1 point below. This difference and the average difference between the arc-eager and the arc-standard (85.8 vs. 84.6) are both statistically significant (p $<$ 0.01, the McNemar test). We can thus conclude that our left-corner parser is not stronger than the other state-of-the-art parsers even with rich features. \begin{table}[t] \centering \begin{tabular}[t]{lcccc} \hline & {\bf Arc-standard} & {\bf Arc-eager} & {\bf Left-corner} & {\bf Left-corner} \\ & & & {\bf full} & {\bf limited} \\ \hline Arabic & 83.9 & 82.2 & 81.2 & 77.5 \\ Basque & 70.5 & 72.8 & 66.8 & 64.6 \\ Bulgarian & 90.2 & 91.4 & 89.9 & 88.1 \\ Catalan & 92.5 & 93.3 & 91.4 & 89.3 \\ Chinese & 87.3 & 88.4 & 86.8 & 83.6 \\ Czech & 81.5 & 82.3 & 80.1 & 77.2 \\ Danish & 88.0 & 89.1 & 86.8 & 85.5 \\ Dutch & 77.7 & 79.0 & 77.4 & 74.9 \\ English & 89.6 & 90.3 & 89.0 & 85.8 \\ German & 88.1 & 90.0 & 87.2 & 85.7 \\ Greek & 82.2 & 84.0 & 82.0 & 80.7 \\ Hungarian & 79.1 & 80.9 & 79.0 & 75.8 \\ Italian & 82.3 & 84.8 & 81.7 & 79.4 \\ Japanese & 92.5 & 92.9 & 91.3 & 90.7 \\ Portuguese & 89.2 & 90.6 & 88.9 & 87.1 \\ Slovene & 82.3 & 82.3 & 80.8 & 77.1 \\ Spanish & 83.0 & 85.0 & 83.8 & 80.6 \\ Swedish & 87.2 & 90.0 & 88.5 & 87.0 \\ Turkish & 80.8 & 80.8 & 77.5 & 75.4 \\ \\ Average & 84.6 & 85.8 & 83.7 & 81.4 \\ \hline \end{tabular} \caption{Parsing results on CoNLL X and 2007 test sets with no stack depth bound (unlabeled attachment scores).}\label{tab:parse} \end{table} \paragraph{Performance of Limited Feature Model} With limited features, the left-corner parser performs worse in all languages. The average score is about 2 point below the full feature models (83.7\% vs. 81.4\%) and shows the same tendency as in the English development experiment. This difference is also statistically significant (p $<$ 0.01, the McNemar test). The scores of English are relatively low compared with the results in Table \ref{fig:plot}, probably because the training data used in the CoNLL 2007 shared task is small, about half of our development experiment, to reduce the cost of training with large corpora for the shared task participants \cite{nivre-EtAl:2007:EMNLP-CoNLL2007}. \begin{figure}[p] \centering \begin{minipage}[b]{0.45\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.12cm] ... \& to \& appear \& three \& times \& on \& CNN \& ... \\ \end{deptext} \depedge{2}{3}{} \depedge{3}{5}{} \depedge{5}{4}{} \depedge[line width=0.2ex]{5}{6}{} \depedge{6}{7}{} \depedge{3}{8}{} \end{dependency} \subcaption{System output (left-corner; $b=8$).} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \centering \begin{dependency}[theme=simple] \begin{deptext}[column sep=0.12cm] ... \& to \& appear \& three \& times \& on \& CNN \& ... 
\\ \end{deptext} \depedge{2}{3}{} \depedge{3}{5}{} \depedge{5}{4}{} \depedge[line width=0.2ex]{3}{6}{} \depedge{6}{7}{} \depedge{3}{8}{} \end{dependency} \subcaption{Gold tree.} \end{minipage} \begin{minipage}[b]{\linewidth} \centering \vspace{10pt} \begin{tikzpicture}[edge from parent/.style={draw,->},sibling distance=15pt,level distance=16pt] \node (begin) at (-3, 0.0) {}; \draw ($(begin)$) rectangle ($(begin)+(6, -2.5)$); \node (s) [below right=0.6 and 0.7 of begin.west, anchor=west] {}; \draw ($(s)+(0,0)$) -- ($(s)+(3.0,0.0)$) -- ($(s)+(3.0,-0.5)$) -- ($(s)+(0,-0.5)$); \draw ($(s)+(4.5,0)$) -- ($(s)+(3.3,0)$) -- ($(s)+(3.3,-0.5)$) -- ($(s)+(4.5,-0.5)$); \node[anchor=south west] (t) at ($(s.west)+(0.25, -0.49)$) {to{\color{white}{p}}} child[missing] child { node {$x$} }; \node[anchor=south west] (t) at ($(s.west)+(0.9, -0.49)$) {appear} child[missing] child { node {$x$} child { node {three}} child[missing] }; \node[anchor=south west] (t) at ($(s.west)+(3.5, -0.49)$) {times{\color{white}{p}}}; \draw[->] (-1.0, -2.8) -> (-3.5, -3.5); \node at (-3.0, -3.0) {{\sc Shift}}; \draw[->] (1.0, -2.8) -> (3.5, -3.5); \node at (3.0, -3.0) {{\sc Insert}}; \draw[->] (-3.5, -6.6) -> (-3.5, -7.5) node[left,pos=0.5] {{\sc RightComp}}; \draw[->] (3.5, -6.6) -> (3.5, -7.5) node[right,pos=0.5] {{\sc RightPred}}; \node (begin) at (-6.5, -3.8) {}; \draw ($(begin)$) rectangle ($(begin)+(6, -2.5)$); \node (s) [below right=0.6 and 0.7 of begin.west, anchor=west] {}; \draw ($(s)+(0,0)$) -- ($(s)+(3.0,0.0)$) -- ($(s)+(3.0,-0.5)$) -- ($(s)+(0,-0.5)$); \draw ($(s)+(4.5,0)$) -- ($(s)+(3.3,0)$) -- ($(s)+(3.3,-0.5)$) -- ($(s)+(4.5,-0.5)$); \node[anchor=south west] (t) at ($(s.west)+(0.25, -0.49)$) {to{\color{white}{p}}} child[missing] child { node {$x$} }; \node[anchor=south west] (t) at ($(s.west)+(0.9, -0.49)$) {appear} child[missing] child { node {$x$} child { node {three}} child[missing] }; \node[anchor=south west] (t) at ($(s.west)+(1.9, -0.49)$) {{\color{white}{p}}times}; \node[anchor=south west] (t) at ($(s.west)+(3.5, -0.49)$) {on{\color{white}{p}}}; \node (begin) at (0.5, -3.8) {}; \draw ($(begin)$) rectangle ($(begin)+(6, -2.5)$); \node (s) [below right=0.6 and 0.7 of begin.west, anchor=west] {}; \draw ($(s)+(0,0)$) -- ($(s)+(3.0,0.0)$) -- ($(s)+(3.0,-0.5)$) -- ($(s)+(0,-0.5)$); \draw ($(s)+(4.5,0)$) -- ($(s)+(3.3,0)$) -- ($(s)+(3.3,-0.5)$) -- ($(s)+(4.5,-0.5)$); \node[anchor=south west] (t) at ($(s.west)+(0.25, -0.49)$) {to{\color{white}{p}}} child[missing] child { node {$x$} }; \node[anchor=south west] (t) at ($(s.west)+(0.9, -0.49)$) {appear} child[missing] child { node {times} child { node {three}} child[missing] }; \node[anchor=south west] (t) at ($(s.west)+(3.5, -0.49)$) {on{\color{white}{p}}}; \node (begin) at (-6.5, -7.8) {}; \draw ($(begin)$) rectangle ($(begin)+(6, -2.5)$); \node (s) [below right=0.6 and 0.7 of begin.west, anchor=west] {}; \draw ($(s)+(0,0)$) -- ($(s)+(3.0,0.0)$) -- ($(s)+(3.0,-0.5)$) -- ($(s)+(0,-0.5)$); \draw ($(s)+(4.5,0)$) -- ($(s)+(3.3,0)$) -- ($(s)+(3.3,-0.5)$) -- ($(s)+(4.5,-0.5)$); \node[anchor=south west] (t) at ($(s.west)+(0.25, -0.49)$) {to{\color{white}{p}}} child[missing] child { node {$x$} }; \node[anchor=south west] (t) at ($(s.west)+(0.9, -0.49)$) {appear} child[missing] child { node {times} child { node {three}} child { node[right=0.1] {$x$}} }; \node[anchor=south west] (t) at ($(s.west)+(3.5, -0.49)$) {on{\color{white}{p}}}; \node (begin) at (0.5, -7.8) {}; \draw ($(begin)$) rectangle ($(begin)+(6, -2.5)$); \node (s) [below right=0.6 and 0.7 of begin.west, 
anchor=west] {}; \draw ($(s)+(0,0)$) -- ($(s)+(3.0,0.0)$) -- ($(s)+(3.0,-0.5)$) -- ($(s)+(0,-0.5)$); \draw ($(s)+(4.5,0)$) -- ($(s)+(3.3,0)$) -- ($(s)+(3.3,-0.5)$) -- ($(s)+(4.5,-0.5)$); \node[anchor=south west] (t) at ($(s.west)+(0.25, -0.49)$) {to{\color{white}{p}}} child[missing] child { node {$x$} }; \node[anchor=south west] (t) at ($(s.west)+(0.9, -0.49)$) {appear} child[missing] child[missing] child { node {times} child { node {three}} child[missing] } child { node[right=0.2] {$x$} }; \node[anchor=south west] (t) at ($(s.west)+(3.5, -0.49)$) {on{\color{white}{p}}}; \end{tikzpicture} \end{minipage} \caption{(a)-(b) Example of a parse error by the left-corner parser that may be saved with external syntactic knowledge (limited features and beam size 8). (c) Two corresponding configuration paths: the left path leads to (a) and the right path leads to (b).} \label{fig:error:example} \end{figure} Finally, though the overall score of the left-corner parser is lower, we suspect that it could be improved by inventing new features, in particular those with external syntactic knowledge. The analysis below is based on the result with limited features, but we expect a similar technique would also be helpful to the full feature model. As we have discussed (see the beginning of Section \ref{sec:parse}), an attachment decision of the left-corner parser is more eager, which is the main reason for the lower scores. One particular difficulty with the left-corner parser is that the parser has to decide whether each token has further (right) arguments with no (or a little) access to the actual right context. Figure \ref{fig:error:example} shows an example of a parse error in English caused by the left-corner parser with limited features (without stack depth bound). This is a kind of PP attachment error on {\it on CNN}, though the parser has to deal with this attachment decision implicitly before observing the attached phrase ({\it on CNN}). When the next token in the buffer is {\it times} (Figure \ref{fig:error:example}(c)), performing {\sc Shift} means {\it times} would take more than one argument in future, while performing {\sc Insert} means the opposite: {\it times} does not take any arguments. Resolving this problem would require knowledge on {\it times} that it often takes no right arguments (while {\it appear} generally takes several arguments), but it also suggests that the parser performance could be improved by augmenting such syntactic knowledge on each token as new features, such as with distributional clustering \cite{koo-carreras-collins:2008:ACLMain,BohnetJMA13}, supertagging \cite{ouchi-duh-matsumoto:2014:EACL2014-SP}, or refined POS tags \cite{mueller-EtAl:2014:EMNLP2014}. All those features are shown to be effective in transition-based dependency parsing; we expect those are particularly useful for our parser though the further analysis is beyond the scope of this chapter. In PCFG left-corner parsing, \newcite{TOPS:TOPS12034} reported accuracy improvement with symbol refinements obtained by the Berkeley parser \cite{petrov-EtAl:2006:COLACL} in English. \subsection{Result on UD} \label{sec:trans:ud-parse} Figure \ref{fig:ud_depth_to_accuracy} shows the results in UD. Again the performance tendency is not changed from the CoNLL dataset; on average, the left-corner with depth$_{re}$ can parse sentences without dropping accuracies but other systems are largely affected by the constraints. 
\begin{figure}[p]
\centering
\resizebox{0.95\textwidth}{!}
{\includegraphics[]{./data/ud/transition/b8_systems.pdf}}
\caption{Accuracy vs. stack depth bound in UD.}
\label{fig:ud_depth_to_accuracy}
\end{figure}

We further examine the behavior of the left-corner parser by relaxing the definition of center-embedding, as discussed in Section \ref{sec:trans:relax}. Figure \ref{fig:ud_token_depth_to_accuracy} shows the results when we change the definition of depth$_{re}$. It is interesting to see that, compared to Figure \ref{fig:load-comparison-relax-ud}, the number of languages in which this relaxation has a large impact increases; e.g., in Croatian, Czech, Danish, Finnish, Hungarian, Indonesian, Persian, and Swedish, there are improvements of about 10\% from the original depth$_{re} \leq 1$ to the relaxed depth$_{re}$ 1 (3) (i.e., when three-word constituents are allowed to be embedded). The reason for this might lie in the characteristics of the supervised parsers, which explore the search space more freely (compared to the analysis in Figure \ref{fig:load-comparison-relax-ud}).

\begin{figure}[p]
\centering
\resizebox{0.95\textwidth}{!}
{\includegraphics[]{./data/ud/transition/b8.pdf}}
\caption{Accuracy vs. stack depth bound with left-corner parsers on UD with different maximum lengths of test sentences.}
\label{fig:ud_token_depth_to_accuracy}
\end{figure}

\section{Discussion and Related Work}
\label{sec:relatedwork}

We have presented a left-corner parsing algorithm for dependency structures and shown that our parser demands less stack depth for recognizing most natural language sentences. The results also indicate the existence of a universal syntactic bias, namely that center-embedded constructions are rare across languages. We finally discuss the relevance of the current study to previous work.

We reviewed previous work on left-corner parsing (for CFGs) in Section \ref{sec:bg:left-corner}, but said little about work studying the empirical properties of left-corner parsers. \newcite{Roark:2001:RPP:933637} made the first such empirical attempt. His idea is, instead of modeling left-corner transitions directly as in our parser, to incorporate the left-corner strategy into a CFG parser via a left-corner grammar transform \cite{conf/acl/Johnson98}. This design makes the overall parsing system top-down and makes it possible to compare the pure top-down and the left-corner parsing systems in a unified way. Note also that, as his method is based on \newcite{conf/acl/Johnson98}, the parsing mechanism is basically the same as the left-corner PDA that we introduced as another variant in Section \ref{sec:bg:anothervariant}. \newcite{journals/coling/SchulerAMS10} examine the empirical coverage of the left-corner PDA that we formalized in Section \ref{sec:bg:left-corner-pda}, though their experiment is limited to English.

Most previous left-corner parsing models have been motivated by the study of cognitively plausible parsing models, an interdisciplinary area spanning psycholinguistics and computational linguistics \cite{keller:2010:Short}. Though we also evaluated our parser with cognitively motivated limited feature models and obtained an encouraging result, this experiment is preliminary and we do not claim from it that our parser is cross-linguistically cognitively plausible. Our parser is able to parse most sentences within a limited stack depth bound.
However, it is questionable whether there is any connection between the stack of our parser and the memory units preserved in human memory. \newcite{vanschijndel-schuler:2013:NAACL-HLT} calculated several kinds of {\it memory cost} from the configurations of their left-corner parser, such as the current stack depth and the integration cost of the dependency locality theory \cite{Gibson2000The-dependency-}, which is obtained by calculating the distance between two subtrees at composition, and discussed which cost is the more significant indicator for predicting human reading time data. Discussing the cognitive plausibility of a parser requires this kind of careful experimental setup, which is beyond the scope of the current work.

Our main focus in this chapter is rather the syntactic bias that exists universally across languages. In this view, our work is more closely related to previous dependency parsing models with constraints on possible tree structures \cite{eisner2010}. They studied parsing with a hard constraint on dependency length, based on the observation that grammars may favor constructions with shorter dependency lengths \cite{gildea-temperley:2007:ACLMain,DBLP:journals/cogsci/GildeaT10}. Instead of prohibiting long dependencies, our method prohibits deeply center-embedded structures, and we have shown that this bias is effective for restricting natural language grammar. The two constraints, length and center-embedding, are often correlated, since center-embedded constructions typically lead to longer dependencies. It is therefore an interesting future topic to explore which bias is more essential for restricting grammar. This question can perhaps be explored through unsupervised dependency parsing tasks \cite{klein-manning:2004:ACL}, where this kind of light supervision has a significant impact on performance \cite{smith-eisner-2006-acl-sa,marevcek-vzabokrtsky:2012:EMNLP-CoNLL,DBLP:journals/tacl/BiskH13}.

We introduced a dummy node for representing a subtree with an unknown head or dependent. Recently, Menzel and colleagues \cite{beuck2013,kohn-menzel:2014:ACL} have also studied dependency parsing with a dummy node. While conceptually similar, the aim of introducing a dummy node differs between our approach and theirs: we need a dummy node to represent a subtree corresponding to that in Resnik's algorithm, while they introduce it to ensure that every dependency tree on a sentence prefix is fully connected. This leads to a technical difference: a subtree in their parser can contain more than one dummy node, while we restrict each subtree to contain only one dummy node, on its right spine.

\chapter{Grammar Induction with Structural Constraints}
\label{chap:induction}

In the previous chapter, we formulated a left-corner dependency parsing algorithm as a transition system whose stack size grows only on center-embedded constructions. We also investigated how well the developed parser captures the syntactic biases found in manually developed treebanks, and found that a very restricted stack depth, such as two, or even one if small constituents are allowed to be embedded, suffices to describe most syntactic constructions across languages.

In this chapter, we investigate whether the syntactic bias found in the previous chapter is helpful for the task of {\it unsupervised grammar induction}, where the goal is to learn a model that finds hidden syntactic structures given the surface strings (or part-of-speech tags) alone; see Section \ref{sec:2:unsupervised} for an overview.
There are a number of motivations for considering unsupervised grammar induction, in particular with the {\it universal} syntactic biases discussed in Chapter \ref{chap:1}. Among them, our primary motivation is to investigate a good {\it prior} for restricting the possible tree structures of natural language sentences, regardless of language. The structure that we aim to induce is dependency structure. This choice mainly stems from computational rather than philosophical reasons, i.e., dependency structure is currently the most feasible structure to learn, but we argue that the lessons from the current study carry over to the problem of inducing other structures, including constituency-based representations such as HPSG or CCG.

Another interesting reason to tackle this problem is to understand the mechanism of child language acquisition. In particular, since the structural constraint that we impose is originally motivated by psycholinguistic observations (Section \ref{sec:bg:psycho}), we can regard the current task as a controlled experiment to see whether the (memory) limitations that children are subject to may in turn facilitate the acquisition of language. This is, however, not our primary motivation, since there are large gaps between the actual environment in which children acquire language and the current task; see Section \ref{sec:intro:unsupervised} for a detailed discussion. We therefore regard the current study as a starting point for modeling a more realistic acquisition scenario, such as the joint inference of word categories and syntax.

As in the previous chapter, this chapter starts with a conceptual part, whose main focus is the learning algorithm with structural constraints, followed by an empirical part that focuses on experiments. Our model is basically the dependency model with valence \cite{klein-manning:2004:ACL} that we formalized as a special instance of split bilexical grammar (SBG) in Section \ref{sec:2:dmv}. We describe how this model can be encoded in a chart parser that simulates the left-corner dependency parsing presented in the previous chapter and thereby captures the {\it center-embeddedness} of the subderivation at each chart entry. Intuitively, with this technique we can bias the model to prefer certain syntactic patterns, e.g., ones that do not contain much center-embedding. We discuss the high-level idea and mathematical formalization of this approach in Section \ref{sec:ind:overview} and then present a new chart parsing algorithm that simulates split bilexical grammars with a left-corner parsing strategy (Section \ref{sec:ind:simulate}). We then empirically evaluate whether such structural constraints help to learn good parameters for the model (Sections \ref{sec:ind:setup} and \ref{sec:ind:exp}). As in the previous chapter, we study this effect across diverse languages; in total we use 30 treebanks covering 24 languages.

Our main empirical finding is that the constraint on center-embeddedness brings effects at least as strong as those of the closely related structural bias on dependency {\it length} \cite{smith-eisner-2006-acl-sa}, i.e., the preference for {\it shorter} dependencies. In particular, we find that our bias often outperforms the length-based one when additional syntactic cues are given to the model, such as the constraint that the sentence root should be a verb or a noun.
For example, when such a constraint on the root POS tag is given, our method penalizing center-embeddedness achieves an attachment score of 62.0 on the Google universal treebanks (averaged across 10 languages, evaluated on sentences of length $\leq 10$), which is superior to the model with the bias on dependency length (58.6) and to a model utilizing a larger number of hand-crafted rules between POS tags (56.0) \cite{naseem-EtAl:2010:EMNLP}.

\section{Approach Overview}
\label{sec:ind:overview}

\subsection{Structure-Constrained Models}

Every model presented in this section can be formalized as the following joint model over a sentence $x$ and a parse tree $z$:
\begin{equation}
p(x,z|\theta) = \frac{p_\textsc{orig}(x,z|\theta) \cdot f(z,\theta)}{Z(\theta)}
\label{eqn:ind:joint}
\end{equation}
where $p_\textsc{orig}(x,z|\theta)$ is a (baseline) model, such as DMV, and $f(z,\theta)$ assigns a value in $[0,1]$ to each $z$, i.e., it works as a penalty or cost, {\it reducing} the original probability depending on $z$. One such penalty that we consider prohibits any tree that contains center-embedding, and is represented as follows:
\begin{equation}
f(z,\theta) = \left\{ \begin{array}{cl} 1& \textrm{if } z \textrm{ contains no center-embedding;} \\ 0& \textrm{otherwise.} \\ \end{array} \right.
\label{eqn:ind:cost}
\end{equation}
In Section \ref{sec:ind:simulate}, we present a way to encode such a penalty term in the CKY-style algorithm. Though $f(z,\theta)$ adds a penalty to each original probability, the distribution $p(x,z|\theta)$ is still normalized; here $Z(\theta) = \sum_{x,z} p_\textsc{orig}(x,z|\theta) \cdot f(z,\theta)$. Intuitively, $f(z,\theta)$ models the preferences that the original model $p_\textsc{orig}(x,z|\theta)$ does not explicitly encode. Note that we do not try to learn $f(z,\theta)$; every constraint is given as an {\it external} constraint.

Note that this simple approach of combining two models is not entirely new and has been explored several times. For example, \newcite{pereira-schabes:1992:ACL} present an EM algorithm that relies on partially bracketed information, and \newcite{smith-eisner-2006-acl-sa} model $f(z,\theta)$ as a dependency-length-based penalty term. We explore several kinds of $f(z,\theta)$ in our experiments, including existing ones, e.g., dependency length, and our new one, center-embeddedness, examining which kind of structural constraint is most helpful for learning grammars in a cross-linguistic setting. Below, we discuss the issues in learning this model. The main result was previously shown by \newcite{smith-2006}, though for completeness we summarize it here in the terms defined in Chapter \ref{chap:bg}.

\subsection{Learning Structure-Constrained Models}

At first glance, the normalization constant $Z(\theta)$ in Eq. \ref{eqn:ind:joint} seems to prevent the use of the EM algorithm for parameter estimation for this model. We show here that in practice we need not compute this constant, and that the resulting EM algorithm still increases the likelihood of the model in Eq. \ref{eqn:ind:joint}. Recall that the EM algorithm collects expected counts $e(r|\theta)$ for each rule $r$ at each E-step and then normalizes the counts to update the parameters. We decompose $e(r|\theta)$ into counts for each span of each sentence as follows:
\begin{equation}
e(r|\theta) = \sum_{x \in \mathbf x} e_x(r|\theta) = \sum_{x \in \mathbf x} \sum_{0 \leq i \leq k \leq j \leq n_x} e_x(z_{i,k,j,r}|\theta).
\end{equation}
We now show that the correct $e_x(z_{i,k,j,r}|\theta)$ under the model (Eq. \ref{eqn:ind:joint}) can be obtained without computing $Z(\theta)$. Let $q(x,z|\theta) = p_{\textsc{orig}}(x,z|\theta) \cdot f(z,\theta)$ be an {\it unnormalized} (i.e., deficient) distribution over $x$ and $z$. Then $p(x,z|\theta) = q(x,z|\theta)/Z(\theta)$. Note that we can use the inside-outside algorithm to collect counts under the deficient distribution $q(x,z|\theta)$. For example, we can obtain the (deficient) sentence marginal probability $q(x|\theta) = \sum_{z\in \mathcal{Z}(x)} q(x,z|\theta)$ by modifying rule probabilities appropriately. More specifically, in the case of eliminating center-embedding, our chart may record the current stack depth at each chart entry, which corresponds to some subderivation, and then assign zero probability to a chart entry if the stack depth exceeds some threshold.

We can thus represent $e_x(z_{i,k,j,r}|\theta)$ using $q(x,z|\theta)$ instead of the more complex $p(x,z|\theta)$. The key observation is that each $e_x(z_{i,k,j,r}|\theta)$ is represented as the ratio of two quantities:
\begin{equation}
e_x(z_{i,k,j,r}|\theta) = \frac{p(z_{i,k,j,r} = 1, x | \theta)}{p(x|\theta)}.
\label{eqn:ind:expected}
\end{equation}
Calculating each of these quantities is hard due to the normalization constant. However, as we show below, the normalization constant cancels in the course of computing the ratio, meaning that the expected counts (under the correct distribution) can be obtained with the inside-outside algorithm under the unnormalized distribution $q(x,z|\theta)$. Let us first consider the denominator in Eq. \ref{eqn:ind:expected}, which can be rewritten as follows:
\begin{equation}
p(x|\theta) = \sum_{z\in \mathcal Z(x)} p(x,z|\theta) = \sum_{z\in \mathcal Z(x)} \frac{q(x,z|\theta)}{Z(\theta)} = \frac{q(x|\theta)}{Z(\theta)}.
\end{equation}
For the numerator, we first observe that
\begin{align}
p(z_{i,k,j,r} = 1, x |\theta) &= \sum_{z\in\mathcal{Z}(x)} p(z_{i,k,j,r} = 1, z, x |\theta) \\
&= \sum_{z\in\mathcal{Z}(x)} p(z, x |\theta) p(z_{i,k,j,r} = 1|z) \\
&= \sum_{z\in\mathcal{Z}(x)} p(z, x |\theta) \mathbb{I} (z_{i,k,j,r} \in z) \\
&= \frac{\sum_{z\in\mathcal{Z}(x)} q(x,z|\theta) \mathbb{I} (z_{i,k,j,r} \in z)}{Z(\theta)},
\label{eq:ind:suff}
\end{align}
where $\mathbb{I}(c)$ is an indicator function that returns $1$ if $c$ is satisfied and $0$ otherwise. The numerator in Eq. \ref{eq:ind:suff} is the value that the inside-outside algorithm calculates for each $z_{i,k,j,r}$ (with Eq. \ref{eqn:2:io}), which we write as $q(z_{i,k,j,r} = 1, x | \theta)$. Thus, we can skip computing the normalization constant in Eq. \ref{eqn:ind:expected} by replacing the quantities with the ones under $q(x,z|\theta)$ as follows:
\begin{equation}
e_x(z_{i,k,j,r}|\theta) = \frac{p(z_{i,k,j,r} = 1, x | \theta)}{p(x|\theta)} = \frac{q(z_{i,k,j,r} = 1, x | \theta)}{q(x|\theta)}.
\end{equation}
This result indicates that by running the inside-outside algorithm {\it as if} our model were deficient, using $q(x,z|\theta)$ in place of $p(x,z|\theta)$, we obtain a model with higher likelihood under $p(x,z|\theta)$ (Eq. \ref{eqn:ind:joint}). Note that the Viterbi parse can also be obtained using $q(x,z|\theta)$, since $\arg\max_{z}q(x,z|\theta) = \arg\max_{z} p(x,z|\theta)$ holds.

\section{Simulating split-bilexical grammars with a left-corner strategy}
\label{sec:ind:simulate}

Here we present the main theoretical result of this chapter.
In Section \ref{sec:2:learning} we showed that the parameters of the very general model for dependency trees called split-bilexical grammars (SBGs) can be learned using the EM algorithm with CKY-style inside-outside calculation. Also, we formalized the left-corner dependency parsing algorithm as a transition system in Chapter \ref{chap:transition}, which enables capturing the {\it center-embeddedness} of the current derivation via {\it stack depth}. We combine these two parsing techniques in a non-trivial way, and obtain a new chart parsing method for split-bilexical grammars that enables us to calculate center-embeddedness of each subderivation at each chart entry. \REVISE{ We describe the algorithm based on the inference rules with items (as the triangles in Section \ref{sec:2:sbg}). The basic idea is that we {\it memoize} the subderivations of left-corner parsing, which share the same information and look the same under the model. Basically, each chart item is a stack element; for example, an item \tikz[baseline=-20pt]{\headtriangleinlinebottomfull{$i$}{$h$}{$j$}} abstracts (complete) subtrees on the stack headed by $h$ spanning $i$ to $j$. Each inference rule then roughly corresponds to an action of the transition system.\footnote{We also introduce extra rules, which are needed for encoding parameterization of SBGs, or achieving head-splitting as we describe later.} Thus, if we extract one derivation from the chart, which is a set of inference rules, it can be mapped to a particular sequence of transition actions. On this chart, each item is further decorated with the current stack depth, which is the key to capture the center-embeddedness efficiently during dynamic programming. A particular challenge for efficient tabulation is similar to the one that we discussed in Section \ref{sec:2:sbg}; that is, we need to eliminate the spurious ambiguity for correct parameter estimation and for reducing time complexity. In Section \ref{sec:ind:algorithm}, we describe how this can be achieved by applying the idea of head-splitting into the tabulation of left-corner parsing. In the following two sections we discuss some preliminaries for developing the algorithm, i.e., how to handle dummy nodes on the chart (Section \ref{sec:ind:handling}) and a note on parameterization of SBGs with the left-corner strategy (Section \ref{sec:bg:head-outward}). } \subsection{Handling of dummy nodes} \label{sec:ind:handling} An obstacle when designing chart items abstracting many derivations is the existence of predicted nodes, which were previously abstracted with {\it dummy} nodes in the transition system. Unfortunately, we cannot use the same device in our dynamic programming algorithm because it leads to very inefficient asymptotic runtime. Figure \ref{fig:ind:dummy-is-bad} explains the reason for this inefficiency. In the transition system, we postponed scoring of attachment preferences between a dummy token and its left dependents (e.g., Figure \ref{subfig:ind:dummy1}) until {\it filling} the dummy node with an actual token by an {\sc Insert} action; this mechanism makes the algorithm fully incremental, though it requires remembering every left dependent token (see {\sc Insert} in Figure \ref{fig:actions}) at each step. This tracking of child information is too expensive for our dynamic programming algorithm. To solve this problem, we instead fill a dummy node with an actual token when the dummy is first introduced (not when {\sc Insert} is performed). 
This is impossible in the setting of a transition system, since we do not observe the unread tokens remaining in the buffer. Figure \ref{subfig:ind:pred-rect} shows an example of an item used in our dynamic programming, which does not abstract the predicted token as a dummy node, but abstracts the construction of the child subtrees spanning $i$ to $j$ below the predicted node $p$. An arc from $p$ indicates that at least one token between $i$ and $j$ is a dependent of $p$, although the number of dependents as well as their positions are unspecified.

\begin{figure}[t]
\centering
\begin{minipage}[b]{.3\linewidth}
\centering
\begin{dependency}[theme=simple]
\begin{deptext}[column sep=0.5cm]
$1$ \& $2$ \& $3$ \& $x$ \\
\end{deptext}
\depedge{4}{1}{}
\depedge{4}{2}{}
\depedge{4}{3}{}
\end{dependency}
\subcaption{}\label{subfig:ind:dummy1}
\end{minipage}
\begin{minipage}[b]{.3\linewidth}
\centering
\begin{dependency}[theme=simple]
\begin{deptext}[column sep=0.5cm]
$1$ \& $2$ \& $3$ \& $x$ \\
\end{deptext}
\depedge{4}{1}{}
\depedge{4}{2}{}
\depedge{2}{3}{}
\end{dependency}
\subcaption{}
\end{minipage}
\begin{minipage}[b]{.3\linewidth}
\centering
\begin{tikzpicture}[thick, level distance=0.8cm, >=stealth']
\draw (0,0) rectangle (2.0, -1.0);
\draw[->] (3, -1.0) arc (0:180:0.9cm);
\draw (0, -1.25) node {$i$} ++(2.0, 0) node {$j$} +(1.0, 0) node {$p$};
\end{tikzpicture}
\subcaption{}\label{subfig:ind:pred-rect}
\end{minipage}
\caption{Dummy nodes ($x$ in (a) and (b)) in the transition system cannot be used in our dynamic programming algorithm because, with this method, we would have to remember every child token of the dummy node to calculate attachment scores at the point when the dummy is filled with an actual token, which leads to exponential complexity. We instead abstract trees in a different way, as depicted in (c), by not abstracting the predicted node $p$ but filling it with the actual word ($p$ points to some index in the sentence such that $j < p \leq n$). If $i=1$ and $j=3$, this representation abstracts both tree forms of (a) and (b) with some fixed $x$ (corresponding to $p$). }
\label{fig:ind:dummy-is-bad}
\end{figure}

\subsection{Head-outward and head-inward}
\label{sec:bg:head-outward}

The generative process of SBGs described in Section \ref{sec:2:sbg} is {\it head-outward}, in that its state transition $q_1 \xmapsto{~a~} q_2$ is defined as a process of {\it expanding} the tree by generating a new symbol $a$ that is the most distant from its head when the current state is $q_1$. The process is called {\it head-inward} (e.g., \newcite{alshawi:1996:ACL}) if it is reversed, i.e., when the closest dependent of a head on each side is generated last. Note that the generation process of left-corner parsing cannot be described as fully head-outward. In particular, its generation of the left dependents of a head is inherently head-inward, since the parser builds a tree from left to right. For example, the tree of Figure \ref{subfig:ind:dummy1} is constructed by first performing {\sc LeftPred} when $1$ is recognized, and then attaching $2$ and $3$ in order. Fortunately, these two processes, head-inward and head-outward, can generally be interchanged by reversing transitions \cite{Eisner:1999:EPB:1034678.1034748}. In the algorithm described below, we model the left automaton $L_a$ as a head-inward process and the right automaton $R_a$ as a head-outward process. Specifically, this means that if we write $q_1 \xmapsto{~a'~} q_2 \in L_a$, the associated weight for this transition is $p(a'|q_2)$ instead of $p(a'|q_1)$.
We also do not modify the meaning of sets $\textit{final}(L_a)$ and $\textit{init}(L_a)$; i.e., the left state is initialized with $q \in \textit{final}(L_a)$ and finishes with $q \in \textit{init}(L_a)$. \subsection{Algorithm} \label{sec:ind:algorithm} \begin{figure}[p] \begin{tikzpicture}[thick, level distance=0.8cm, >=stealth'] \begin{scope}[xshift=0cm,yshift=0cm] \node at (0, 0) [anchor=west] {\sc Shift-Left:}; \node at (1.25, -0.5) {$q \in \textit{final}(L_h)$}; \draw (0.1, -0.75) -- +(2.5, 0) node[right] {$1 \leq h \leq n$}; \begin{scope}[xshift=1.5cm, yshift=-1.3cm] \lefttriangle{$q:d$}{$h$}{$h$}; \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-4cm] \node at (0, 0) [anchor=west] {\sc Finish-Left:}; \begin{scope}[xshift=1.0cm, yshift=-0.75cm] \lefttriangle{$q:d$}{$i$}{$h$}; \end{scope} \node at (1.5, -1.25) [anchor=west] {$q \in \textit{init}(L_h)$}; \draw (0.1, -1.75) -- +(4.1, 0); \begin{scope}[xshift=2.0cm, yshift=-2.3cm] \lefttriangle{$I:d$}{$i$}{$h$}; \end{scope} \end{scope} \begin{scope}[xshift=5cm,yshift=0cm] \node at (0, 0) [anchor=west] {\sc Shift-Right:}; \node at (1.25, -0.5) {$q \in \textit{init}(R_h)$}; \draw (0.1, -0.75) -- +(2.5, 0) node[right] {$1 \leq h \leq n$}; \begin{scope}[xshift=0.75cm, yshift=-1.3cm] \righttriangle{$q:d$}{$h$}{$h$}; \end{scope} \end{scope} \begin{scope}[xshift=5cm,yshift=-4.cm] \node at (0, 0) [anchor=west] {\sc Finish-Right:}; \begin{scope}[xshift=0.5cm, yshift=-0.75cm] \righttriangle{$q:d$}{$h$}{$i$} \end{scope} \node at (1.5, -1.25) [anchor=west] {$q \in \textit{final}(R_h)$}; \draw (0.1, -1.75) -- +(4.1, 0); \begin{scope}[xshift=1.25cm, yshift=-2.3cm] \righttriangle{$F:d$}{$h$}{$i$} \end{scope} \end{scope} \begin{scope}[xshift=10cm,yshift=0cm] \node at (0, 0) [anchor=west] {\sc Insert-Left:}; \begin{scope}[xshift=0.25cm, yshift=-0.5cm] \predrectangle{$i$}{$p-1$}{$p$}{$q:d$} \end{scope} \node at (2.75, -1.0) [anchor=west] {$q \in \textit{init}(L_p)$}; \draw (0.1, -1.5) -- +(5.1, 0); \begin{scope}[xshift=2.5cm, yshift=-2.05cm] \lefttriangle{$I:d$}{$i$}{$p$} \end{scope} \end{scope} \begin{scope}[xshift=10cm,yshift=-4cm] \node at (0, 0) [anchor=west] {\sc Insert-Right:}; \begin{scope}[xshift=0.5cm, yshift=-0.75cm] \predright{$r:d$}{$h$}{$p-1$}{$p$}{$q$} \end{scope} \node at (2.75, -1.25) [anchor=west] {$q \in \textit{init}(L_p)$}; \draw (0.1, -1.75) -- +(5.1, 0); \begin{scope}[xshift=2.0cm, yshift=-2.3cm] \righttriangle{$r:d$}{$h$}{$p$} \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-8cm] \node at (0, 0) [anchor=west] {\sc LeftPred:}; \begin{scope}[xshift=1.0cm, yshift=-0.75cm] \headtriangle{$:d$}{$i$}{$h$}{$j$} \end{scope} \node at (2.5, -0.75) [anchor=west] {$r \in \textit{final}(L_p)$}; \node at (2.5, -1.25) [anchor=west] {$r \xmapsto{~h~} q \in L_p$}; \draw (0.1, -1.75) -- +(5.1, 0); \begin{scope}[xshift=1.5cm, yshift=-2cm] \predrectangle{$i$}{$j$}{$p$}{$q:d$} \end{scope} \end{scope} \begin{scope}[xshift=6cm,yshift=-8cm] \node at (0, 0) [anchor=west] {\sc RightPred:}; \begin{scope}[xshift=1.0cm, yshift=-0.75cm] \righttriangle{$r:d$}{$h$}{$i$} \end{scope} \node at (2.5, -0.75) [anchor=west] {$q \in \textit{final}(L_p)$}; \node at (2.5, -1.25) [anchor=west] {$r \xmapsto{~p~} r' \in R_h$}; \draw (0.1, -1.75) -- +(5.1, 0); \begin{scope}[xshift=1.5cm, yshift=-2.25cm] \predright{$r':d$}{$h$}{$i$}{$p$}{$q$} \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-12cm] \node at (0, 0) [anchor=west] {\sc Combine:}; \begin{scope}[xshift=1.0cm, yshift=-0.75cm] \lefttriangle{$I:d$}{$i$}{$h$} \end{scope} \begin{scope}[xshift=2.5cm, 
yshift=-0.75cm] \righttriangle{$F:d$}{$h$}{$j$} \end{scope} \draw (0.2, -1.75) -- +(3.1, 0); \begin{scope}[xshift=1.5cm, yshift=-2.25cm] \headtriangle{$:d$}{$i$}{$h$}{$j$} \end{scope} \end{scope} \begin{scope}[xshift=5cm,yshift=-12cm] \node at (0, 0) [anchor=west] {\sc Accept:}; \begin{scope}[xshift=1.25cm, yshift=-0.75cm] \lefttriangle{$I:1$}{}{}; \draw[anchor=mid] (-0.65, -0.75) node {$1$} +(1.15, 0) node {$n+1$}; \end{scope} \draw (0.2, -1.75) -- +(2.1, 0); \node at (1.25, -2.1) {\textit{accept}}; \end{scope} \end{tikzpicture} \caption{An algorithm for parsing SBGs with a left-corner strategy in $O(n^4)$ given a sentence of length $n$, except the composition rules which are summarized in Figure \ref{fig:ind:ded-comp}. The $n+1$-th token is a dummy root token $\$$, which only has one left dependent (sentence root). $i,j,h,p$ are indices of tokens. The index of a head which is still collecting its dependents is decorated with a state (e.g., $q$). $L_h$ and $R_h$ are left and right FSAs of SBGs given a head index $h$, respectively; we reverse the proces of $L_h$ to start with $q\in\textit{final}(L_h)$ and finish with $q\in\textit{init}(L_h)$ (see the body). Each item is also decorated with depth $d$ that corresponds to the stack depth incurred when building the corresponding tree with left-corner parsing. Since an item with larger depth is only required for composition rules, the depth is unchanged with the rules above, except {\sc Shift-*}, which corresponds to {\sc Shift} transition and can be instantiated with arbitrary depth. Note that {\sc Accept} is only applicable for an item with depth 1, which guarantees that the successful parsing process remains a single tree on the stack. Each item as well as a statement about a state (e.g., $r\in \textit{final}(L_p)$) has a weight and the weight of a consequence item is obtained by the product of the weights of its antecedent items. } \label{fig:ind:ded1} \end{figure} \begin{figure}[p] \begin{tikzpicture}[thick, level distance=0.8cm, >=stealth'] \begin{scope}[xshift=0cm,yshift=0cm] \node at (0, 0) [anchor=west] {\sc LeftComp-L-1:}; \begin{scope}[xshift=0.25cm, yshift=-0.5cm] \predrectangle{$i$}{$j$}{$p$}{$q:d$} \end{scope} \begin{scope}[xshift=4cm, yshift=-0.5cm] \lefttriangle{$I:d$}{$j+1$}{$h$} \end{scope} \draw (0.1, -1.5) -- +(4.4, 0); \node at (4.8, -1.25)[anchor=west,text width=3cm] { \begin{equation*} b= \left\{ \begin{array}{cl} 1& \textrm{if } h-(j+1) \geq 1 \\ 0& \textrm{otherwise.} \\ \end{array} \right. \end{equation*} }; \begin{scope}[xshift=1.75cm, yshift=-2.05cm] \halfrectangle{$i$}{$h$}{$p$}{$q:d$} \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-4cm] \node at (0, 0) [anchor=west] {\sc LeftComp-L-2:}; \begin{scope}[xshift=0.25cm, yshift=-0.75cm] \halfrectangle{$i$}{$h$}{$p$}{$q:d$} \end{scope} \begin{scope}[xshift=3.5cm, yshift=-0.75cm] \righttriangle{$F:d'$}{$h$}{$j$} \end{scope} \node [anchor=west] at (4.5, -1.25) {$q \xmapsto{~h~} q' \in L_p$}; \draw (0.1, -1.75) -- +(6.6, 0); \node at (7.0, -1.5)[anchor=west,text width=3cm] { \begin{equation*} d'= \left\{ \begin{array}{ll} d+1& \textrm{if } b=1 \textrm{ or } (j-h) \geq 1 \\ d& \textrm{otherwise.} \\ \end{array} \right. \end{equation*} }; \begin{scope}[xshift=2.5cm, yshift=-2.0cm] \predrectangle{$i$}{$j$}{$p$}{$q':d$} \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-8cm] \node at (0, 0.) 
[anchor=west] {\sc LeftComp-R-1:}; \begin{scope}[xshift=0.25cm, yshift=-0.75cm] \predright{$r:d$}{$i$}{$j$}{$p$}{$q$} \end{scope} \begin{scope}[xshift=4cm, yshift=-0.75cm] \lefttriangle{$I:d$}{$j+1$}{$h$} \end{scope} \draw (0.1, -1.75) -- +(4.4, 0); \node at (4.8, -1.5)[anchor=west,text width=3cm] { \begin{equation*} b= \left\{ \begin{array}{ll} 1& \textrm{if } h-(j+1) \geq 1 \\ 0& \textrm{otherwise.} \\ \end{array} \right. \end{equation*} }; \begin{scope}[xshift=1.75cm, yshift=-2.3cm] \halfright{$r:d$}{$i$}{$h$}{$p$}{$q$} \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-12cm] \node at (0, 0) [anchor=west] {\sc LeftComp-R-2:}; \begin{scope}[xshift=0.25cm, yshift=-0.75cm] \halfright{$r:d$}{$i$}{$h$}{$p$}{$q$} \end{scope} \begin{scope}[xshift=3.5cm, yshift=-0.75cm] \righttriangle{$F:d'$}{$h$}{$j$} \end{scope} \node [anchor=west] at (4.5, -1.25) {$q \xmapsto{~h~} q' \in L_p$}; \draw (0.1, -1.75) -- +(6.6, 0); \node at (7.0, -1.5)[anchor=west,text width=3cm] { \begin{equation*} d'= \left\{ \begin{array}{ll} d+1& \textrm{if } b=1 \textrm{ or } (j-h) \geq 1 \\ d& \textrm{otherwise.} \\ \end{array} \right. \end{equation*} }; \begin{scope}[xshift=2.5cm, yshift=-2.3cm] \predright{$r:d$}{$i$}{$j$}{$p$}{$q'$} \end{scope} \end{scope} \begin{scope}[xshift=0cm,yshift=-16cm] \node at (0, 0) [anchor=west] {\sc RightComp:}; \begin{scope}[xshift=0.25cm, yshift=-0.75cm] \predright{$r:d$}{$i$}{$h-1$}{$h$}{$q$} \end{scope} \begin{scope}[xshift=3.5cm, yshift=-0.75cm] \righttriangle{$q':d'$}{$h$}{$j$} \end{scope} \node [anchor=west] at (4.5, -0.15) {$q \in \textit{init}(L_h)$}; \node [anchor=west] at (4.5, -0.75) {$q' \xmapsto{~p~} q'' \in \textit{final}(R_h)$}; \node [anchor=west] at (4.5, -1.35) {$s \in \textit{final}(L_p)$}; \draw (0.1, -1.75) -- +(6.6, 0); \node at (7.0, -1.5)[anchor=west,text width=3cm] { \begin{equation*} d'= \left\{ \begin{array}{ll} d+1& \textrm{if } (j-h) \geq 1 \\ d& \textrm{otherwise.} \\ \end{array} \right. \end{equation*} }; \begin{scope}[xshift=2.5cm, yshift=-2.3cm] \predright{$r:d$}{$i$}{$j$}{$p$}{$s$} \end{scope} \end{scope} \end{tikzpicture} \caption{The composition rules that are not listed in Figure \ref{fig:ind:ded1}. {\sc LeftComp} is devided into two cases, {\sc LeftComp-L-*} and {\sc LeftComp-R-*} depending on the position of the dummy (predicted) node on the left antecedent item (corresponding to the second top element on the stack). They are further divided into two processes, {\sc 1} and {\sc 2} for achieving head-splitting. $b$ is an additional annotation on an intermediate item for correct depth computation in {\sc LeftComp}. } \label{fig:ind:ded-comp} \end{figure} Figures \ref{fig:ind:ded1} and \ref{fig:ind:ded-comp} describe the algorithm that parses SBGs with the left-corner parsing strategy. Each inference rule can be basically mapped to a particular action of the transition, though the mapping is sometimes not one-to-one. For example {\sc Shift} action is divided into two cases, {\sc Left} and {\sc Right}, for achieving head-splitting. Some actions, e.g., {\sc Insert} ({\sc Insert-Left} and {\sc Insert-Right}) and {\sc LeftComp} ({\sc LeftComp-L-*} and {\sc LeftComp-R-*}) are further divided into two cases depending on tree structure of a stack element, i.e., whether a predicted node (a dummy) is a head of the stack element or some right dependent. Each chart item preserves the current stack depth $d$. 
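As a purely illustrative sketch of this bookkeeping (the class and field names below are our own and do not correspond to any actual implementation), a depth-annotated chart item can be thought of as an immutable record:

\begin{verbatim}
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Item:
    """A chart item of the left-corner SBG parser (illustrative only)."""
    kind: str             # e.g., "left-tri", "right-tri", "rect", "half-rect"
    i: int                # left boundary of the span
    j: int                # right boundary of the span
    head: int             # head index h (or the predicted head p)
    state: Optional[str]  # FSA state of a head still collecting dependents
    depth: int            # stack depth d incurred by left-corner parsing
\end{verbatim}

Treating items as immutable records of this kind is what allows subderivations that share the same span, head, state, and depth to be merged in the chart.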
The algorithm only accepts an item spanning the whole sentence (including the dummy root symbol $\$$ at the end of the sentence) with stack depth one, and this condition guarantees that the derivation can be converted into a valid sequence of transitions. When we wish to eliminate derivations whose degree of center-embedding exceeds some threshold, this can be achieved by assigning zero weight to every chart cell whose depth exceeds that threshold.

See {\sc LeftComp-L-1} and {\sc LeftComp-L-2} ({\sc LeftComp-R-*} is described later); these are the points where a deeper stack depth may be detected. These two rules are the result of decomposing a single {\sc LeftComp} action of the transition system into a {\it left phase}, which collects only the left half-constituent of a head $h$, and a {\it right phase}, which collects the remainder. Figure \ref{fig:ind:split-leftcomp} shows what this decomposition looks like in the transition system. As shown in Figure \ref{subfig:ind:left-comp-after-decomp}, we imagine that the top subtree on the stack is divided into left and right constituents.\footnote{ This assumption does not change the incurred stack depth at each step, since in the left-corner transition system the right dependents of a head are collected only after its left half is finished. Splitting a head as in Figure \ref{subfig:ind:left-comp-after-decomp} means that we treat these left and right parsing processes independently. } In Figure \ref{fig:ind:split-leftcomp} we number each subtree on the stack from left to right. The number of the rightmost (top) element on the stack then corresponds to the stack depth of the configuration. This value corresponds to the depth annotated on each item in the algorithm, such as $d'$ in \hspace{-8pt}\tikz[anchor=mid]{\righttriangleinline{$F:d'$}}, which appears as the right antecedent item of {\sc LeftComp-L-2} in Figure \ref{fig:ind:ded-comp}. Then, since the left antecedent item of the same rule, i.e., \tikz[anchor=mid]{\halfrectangleinline}, corresponds to the second-to-top element on the stack, its depth $d$ is generally smaller by one, i.e., $d=d'-1$.
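To make the depth bound mentioned above concrete, the following minimal sketch (again hypothetical code with our own names, not the actual implementation) shows that assigning zero weight to over-deep cells amounts to simply never creating them during chart construction:

\begin{verbatim}
MAX_DEPTH = 2  # stack depth bound; a bound of 1 would forbid center-embedding

def add_item(chart, item, weight):
    """Accumulate the weight of a derivation into its chart cell,
    discarding any cell whose stack depth exceeds the bound."""
    if item.depth > MAX_DEPTH:
        return  # zero weight: all derivations through this cell are pruned
    chart[item] = chart.get(item, 0.0) + weight
\end{verbatim}

Because the bound is enforced cell by cell, the bounded and unbounded models can share exactly the same inference rules.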
\begin{figure}[t] \centering \begin{minipage}{0.45\textwidth} \centering \begin{tikzpicture}[thick,edge from parent/.style={draw,->},sibling distance=15pt,level distance=20pt] \node at (-0.25, 0) {Stack:}; \draw (0, -0.5) -- ++(3, 0) -- ++(0, -0.5) -- ++(-3, 0); \node at (0.75, -0.75) {$x$} child { node[anchor=mid] {$a$} } child[missing]; \node at (1.5, -0.75) {$x$} child { node[anchor=mid] {$b$} } child[missing]; \node[anchor=mid] at (2.25, -0.75) {$d$} child { node {$c$} } child { node {$e$} }; \draw (0.75, -0.25) node {\color{red}{$1$}} ++(0.75, 0) node {\color{red}{$2$}} ++(0.75, 0) node {\color{red}{$3$}}; \node at (3.8, 0) {Buffer:}; \draw (5.0, -0.5) -- ++(-1.5, 0) -- ++(0, -0.5) -- ++(1.5, 0); \node[anchor=west] at (3.75, -0.75) {$f ~ \cdots$}; \end{tikzpicture} \subcaption{A configuration of the transition system.} \label{subfig:ind:left-comp-transition} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \begin{tikzpicture}[thick,edge from parent/.style={draw,->},sibling distance=15pt,level distance=20pt] \node at (-0.25, 0) {Stack:}; \draw (0, -0.5) -- ++(3, 0) -- ++(0, -0.5) -- ++(-3, 0); \node at (0.75, -0.75) {$x$} child { node[anchor=mid] {$a$} } child[missing]; \node at (1.5, -0.75) {$x$} child { node[anchor=mid] {$b$} } child[missing]; \node[anchor=mid] at (2.25, -0.75) {$d$} child { node {$c$} } child[missing]; \node[anchor=mid] at (2.5, -0.75) {$d$} child[missing] child { node {$e$} }; \draw (0.75, -0.25) node {\color{red}{$1$}} ++(0.75, 0) node {\color{red}{$2$}} ++(0.9, 0) node {\color{red}{$3$}}; \node at (3.8, 0) {Buffer:}; \draw (5.0, -0.5) -- ++(-1.5, 0) -- ++(0, -0.5) -- ++(1.5, 0); \node[anchor=west] at (3.75, -0.75) {$f ~ \cdots$}; \end{tikzpicture} \subcaption{After head splitting.} \label{subfig:ind:left-comp-after-decomp} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \begin{tikzpicture}[thick,edge from parent/.style={draw,->},sibling distance=15pt,level distance=20pt] \node at (-0.25, 0) {Stack:}; \draw (0, -0.5) -- ++(3, 0) -- ++(0, -0.5) -- ++(-3, 0); \node at (0.5, -0.75) {$x$} child { node[anchor=mid] {$a$} } child[missing]; \node at (1.5, -0.75) {$x$} child { node[anchor=mid] {$b$} } child { node[anchor=mid] {$d$} [sibling distance=7pt] child { node[anchor=mid] {$c$} } child[missing] } child[missing] child[missing]; \node[anchor=mid] at (2.5, -0.75) {$d$} child[missing] child { node {$e$} }; \draw (0.5, -0.25) node {\color{red}{$1$}} ++(1.0, 0) node {\color{red}{$2$}} ++(0.9, 0) node {\color{red}{$3$}}; \node at (3.8, 0) {Buffer:}; \draw (5.0, -0.5) -- ++(-1.5, 0) -- ++(0, -0.5) -- ++(1.5, 0); \node[anchor=west] at (3.75, -0.75) {$f ~ \cdots$}; \end{tikzpicture} \subcaption{After {\sc LeftComp-L-1} on (b).} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \begin{tikzpicture}[thick,edge from parent/.style={draw,->},sibling distance=15pt,level distance=20pt] \node at (-0.25, 0) {Stack:}; \draw (0, -0.5) -- ++(3, 0) -- ++(0, -0.5) -- ++(-3, 0); \node at (0.5, -0.75) {$x$} child { node[anchor=mid] {$a$} } child[missing]; \node at (1.5, -0.75) {$x$} child { node[anchor=mid] {$b$} } child { node[anchor=mid] {$d$} [sibling distance=9pt] child { node[anchor=mid] {$c$} } child { node[anchor=mid] {$e$} } } child[missing] child[missing]; \draw (0.75, -0.25) node {\color{red}{$1$}} ++(0.75, 0) node {\color{red}{$2$}}; \node at (3.8, 0) {Buffer:}; \draw (5.0, -0.5) -- ++(-1.5, 0) -- ++(0, -0.5) -- ++(1.5, 0); \node[anchor=west] at (3.75, -0.75) {$f ~ \cdots$}; \end{tikzpicture} \subcaption{After {\sc LeftComp-L-2} on (c).} \end{minipage} 
\caption{We decompose the {\sc LeftComp} action defined for the transition system into two phases, {\sc LeftComp-L-1} and {\sc LeftComp-L-2}, each of which collects only the left or right half-constituent of the subtree on the top of the stack. The number above each stack element is the stack depth annotated on the corresponding chart item.}
\label{fig:ind:split-leftcomp}
\end{figure}

One complication in this depth calculation is that a larger stack depth should not always be counted. Recall from the discussion in Section \ref{sec:memorycost} that there are two kinds of stack depth, which we called depth$_{re}$ and depth$_{sh}$, of which only depth$_{re}$ correctly captures the center-embeddedness of a construction. Depth$_{sh}$ is the depth of a configuration after a shift action, in which the top element is {\it complete}, i.e., contains no dummy (predicted) node. Note that the depth annotated on each subtree in Figure \ref{fig:ind:ded-comp} is in fact depth$_{sh}$, as the right antecedent item of each rule (corresponding to the top element on the stack) does not contain a predicted node. Since our goal is to capture the center-embeddedness during parsing, we fix this discrepancy with a small trick, which is described as the side condition of {\sc LeftComp-L-1} and {\sc LeftComp-L-2} in Figure \ref{fig:ind:ded-comp}.

The case in which depth$_{sh}$, but not depth$_{re}$, increases by 1 is when a shifted token is immediately reduced by the following composition rule. We treat this as a special case and do not increase the stack depth when the span length of the subtree that is reduced by a composition rule (the right antecedent) is 1. The remaining problem is that, because we split each constituent into left and right constituents, we cannot calculate the size of the reduced constituent immediately. The additional variable $b$ in Figure \ref{fig:ind:ded-comp} is introduced to check this condition. $b$ is set to 1 if the left part, i.e., {\sc LeftComp-L-1}, collects a constituent with span length greater than one ($h=j+1$ indicates a one-word constituent). If $b=1$, the second phase, i.e., the right part {\sc LeftComp-L-2}, always increases the stack depth ($d'=d+1$), regardless of the size of the remaining right constituent. If $b=0$, the left constituent is a single word (the head alone); in this case, the stack depth is increased only when the size of the right constituent is greater than one. In summary, the stack depth of the right antecedent item is increased {\it unless} the three indices, $j+1$ and $h$ in {\sc LeftComp-L-1} and $j$ in {\sc LeftComp-L-2}, are identical, which occurs exactly when the reduced constituent is a single shifted token.

Finally, we note that we do not modify the depth of the right antecedent item in {\sc LeftComp-L-1}. This is correct since the consequent item of this rule, \tikz[anchor=mid]{\halfrectangleinline}, which waits for the right half-constituent of $h$, can only be used as an antecedent of the following {\sc LeftComp-L-2} rule. The role of {\sc LeftComp-L-1} is just to record the head of the reduced constituent and its span length. Then {\sc LeftComp-L-2} checks the depth condition and calculates the weight associated with the {\sc LeftComp} action (i.e., $q\xmapsto{~h~}q' \in L_p$). The other parts of the algorithm can be understood as follows:
\begin{itemize}
\item {\sc LeftComp-R-*} is almost the same as {\sc LeftComp-L-*}.
The difference is in the tree shape of the left antecedent item; in the {\sc R} case, it is a right half constituent with a predicted node \tikz[anchor=mid]{\predrightinline}, which corresponds to a subtree in which the predicted node is not the head but the tail of the right spine. We distinguish these two trees in the algorithm since they often behave very differently, as shown in Figure \ref{fig:ind:ded1}. Note that \tikz[anchor=mid]{\predrightinline} has two FSA states since it contains two head tokens collecting their dependents (i.e., the head of the tree and the predicted token). \item Unlike {\sc LeftComp}, {\sc RightComp} is summarized as a single rule, although it looks a bit complicated. In the algorithm, {\sc RightComp} can only be performed when the predicted token of \tikz[anchor=mid]{\predrightinline} has finished collecting its left dependents (indicated by the consecutive indices $h-1$ and $h$). See Section \ref{sec:ind:spurious} for the reason for this restriction. Another condition for applying this rule is that the right state $q'$ of the head $h$ must be a final state after applying the transition $q' \xmapsto{~p~} q''$, which collects the new rightmost dependent of $h$, i.e., $p$. Under these conditions, the rule performs the following parser actions: 1) attach $p$ as a right dependent of $h$; 2) finish the right FSA of $h$; and 3) start collecting the left dependents of $p$ by setting its left state to the final state. \item Some rules such as {\sc Finish-*} and {\sc Combine} do not exist in the original transition system. We introduce these to represent the generative process of SBGs in the left-corner algorithm. \item We do not annotate a state on a triangle \tikz[baseline=-10pt]{\headtriangleinline{$:d$}}. This is because it can only be deduced by {\sc Combine}, which combines two finished constituents with the same head. \item A parse always finishes with a consecutive application of {\sc LeftPred}, {\sc Insert-Left}, and {\sc Accept} after a parse spanning the sentence \tikz[baseline=-3ex,thick]{ \draw (0, 0) -- ++(-0.4, -0.4) -- ++(0.8, 0) -- cycle; \draw[anchor=mid] (-0.55, -0.5) node {$1$} +(1.1, 0) node {$n$}; } is recognized. {\sc LeftPred} predicts the arc between the dummy root token $\$$ at the $(n+1)$-th position and this parse tree, and then {\sc Insert-Left} removes this predicted arc. Note that to obtain a correct parse, the left FSA for $\$$ should be modified appropriately so that it does not collect more than one dependent (the common parameterization of DMV automatically achieves this). \end{itemize} \subsection{Spurious ambiguity and stack depth} \label{sec:ind:spurious} We have split each head token into left and right parts, which means that each derivation of this algorithm is in one-to-one correspondence with a dependency tree. That is, there is no spurious ambiguity and the EM algorithm works correctly (Section \ref{sec:2:when}). In Section \ref{sec:oracle}, we developed an oracle for a transition system that returns a sequence of gold actions given a sentence and a dependency tree, and found that the presented oracle is {\it optimal} in terms of incurred stack depth. This optimality essentially comes from the implicit binarization mechanism of the oracle given a dependency tree (Theorem \ref{theorem:trans:binarize}). The chart algorithm presented above has the same binarization mechanism, and thus is also optimal in terms of incurred stack depth. Essentially this is due to our design of {\sc LeftComp} and {\sc RightComp} in Figure \ref{fig:ind:ded-comp}.
Unlike {\sc LeftComp-L-*}, we do not allow {\sc RightComp} to apply to a rectangle \tikz[anchor=mid]{\predrectangleinline}. Also, there is no second phase in {\sc RightComp}, meaning that the reduced constituent (i.e., the top stack element) does not have left children. We notice that these two conditions are exactly the same as the statement of Lemma \ref{lemma:trans:rightcomp}, which was the key to proving the binarization mechanism in Theorem \ref{theorem:trans:binarize}. If we were interested in another, {\it nonoptimal} parsing strategy, we would only need to modify the conditions under which {\sc LeftComp} and {\sc RightComp} are allowed; e.g., {\sc RightComp} might be divided into several parts while the conditions for applying {\sc LeftComp} would be highly restricted. \subsection{Relaxing the definition of center-embedding} \label{sec:ind:modif} In the previous chapter, we examined a simple relaxation of the definition of center-embedding by allowing constituents up to some length at the top of the stack. Here we demonstrate how this relaxation can be implemented with a simple modification to the presented algorithm. Let us assume that we allow constructions with one degree of center-embedding {\it if} the length of the embedded constituent is at most three; that is, we allow a limited amount of center-embedding. We write this as $(D,C)=(1,3)$, meaning that the maximum stack depth is generally one ($D=1$), though we partially allow depth two when the length of the embedded constituent is less than or equal to three ($C=3$). This can be achieved by modifying the role of the variable $b$ and the equations in Figure \ref{fig:ind:ded-comp}. As we have seen, the current algorithm does not increase the stack depth of the right antecedent item with {\sc LeftComp} or {\sc RightComp} if the length of the reduced constituent is just one (i.e., it has no dependent), which corresponds to the case of $C=1$. Our goal is to generalize this calculation and judge whether the length of the reduced constituent is greater than $C$ or not. We modify the side condition of {\sc LeftComp-*-1} as follows: \begin{equation} b = \min(C, h - (j + 1)). \end{equation} Now $b$ is a variable in the range $[0, C]$, and $b=0$ means the left constituent is one word. Then, the side condition of {\sc LeftComp-*-2} is changed as follows: \begin{equation} d' = \left\{ \begin{array}{ll} d+1& \textrm{if } b + (j - h) \geq C \\ d& \textrm{otherwise.} \\ \end{array} \right. \label{eqn:ind:mod-depth} \end{equation} For example, when $C=3$, and the left constituent is \tikz[baseline=-3ex,thick]{ \draw (0, 0) -- ++(-0.4, -0.4) -- ++(0.4, 0) -- cycle; \draw[anchor=mid] (-0.55, -0.6) node {$3$} +(0.6, 0) node {$4$}; } while the right constituent is \tikz[baseline=-3ex,thick]{ \draw (0, 0) -- ++(0.4, -0.4) -- ++(-0.4, 0) -- cycle; \draw[anchor=mid] (-0.15, -0.6) node {$4$} +(0.6, 0) node {$5$}; }, the depth is unchanged since $b+(j-h) = 2 < 3$ in Eq. \ref{eqn:ind:mod-depth}. Note that the algorithm may be inefficient when $C$ is large, although this is not a practical problem as we only explore very small values such as 2 and 3. \section{Experimental Setup} \label{sec:ind:setup} Now we move on to the empirical part of this chapter. This section summarizes the experimental settings, such as the datasets we use, the evaluation method, and the constraints we impose on the models.
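Before turning to the experimental details, the following fragment spells out the relaxed depth bookkeeping of Section \ref{sec:ind:modif} in executable form. This is a hypothetical Python sketch: the function names are ours and the indices follow the deduction rules of Figure \ref{fig:ind:ded-comp}; it is not part of the actual implementation.
\begin{verbatim}
# Hypothetical sketch of the relaxed side conditions under (D, C).
def left_comp_phase1(C, j, h):
    # b lies in [0, C]; b = 0 iff the left half constituent is a single
    # word, i.e., h = j + 1.
    return min(C, h - (j + 1))

def left_comp_phase2(d, b, h, j, C):
    # Increase the depth of the right antecedent only when the reduced
    # constituent is long enough, i.e., b + (j - h) >= C.
    return d + 1 if b + (j - h) >= C else d

# Worked example from the text (C = 3): the left half covers words 3..4
# and the right half covers 4..5.
b = left_comp_phase1(C=3, j=2, h=4)                # min(3, 1) = 1
d_new = left_comp_phase2(d=1, b=b, h=4, j=5, C=3)  # 1 + 1 = 2 < 3, so d_new = 1
\end{verbatim}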
In particular, we point out a crucial issue in the current evaluation metric in Section \ref{sec:ind:eval} and then propose our solution to alleviate this problem in Section \ref{sec:ind:param-const}. \subsection{Datasets} \label{sec:ind:dataset} We use two different multilingual corpora for our experiments: Universal Dependencies (UD) and the Google universal dependency treebanks; the characteristics of these two corpora are summarized in Chapter \ref{chap:corpora}. We mainly use UD in this chapter, which comprises 20 different treebanks. One problem with UD is that, because this is (to our knowledge) the first study to use it for unsupervised dependency grammar induction, we cannot compare our models with previous state-of-the-art approaches. The Google treebanks are used for this purpose. They comprise 10 languages, and we discuss the relative performance of our approaches compared with the previously reported results on this dataset. \paragraph{Preprocessing} Some UD treebanks are annotated with multiword expressions, although we strip them off for simplicity. We also remove every punctuation mark in every treebank (both training and testing). This preprocessing has been performed in many previous studies. It is easy when the punctuation is at a leaf position; otherwise, we reattach every child token of that punctuation to its closest ancestor that is not a punctuation mark. \paragraph{Input token} Every model in this chapter receives only the annotated part-of-speech (POS) tags given in the treebank. This is a crucial limitation of the current study, both from the practical and cognitive points of view, as we discussed in Section \ref{sec:2:unsupervised}. We learn the model on the {\it unified}, universal tags given in the respective corpora. In the Google treebanks, the total number of tags is 11 (excluding punctuation) while it is 16 in UD. See Chapter \ref{chap:corpora} for more details. \paragraph{Sentence Length} Unsupervised parsing systems are often trained and tested on a subset of the original training/testing set obtained by setting a maximum sentence length and ignoring every sentence longer than the threshold. The main reason for this filtering during training is efficiency: running the dynamic program ($O(n^4)$ in our case) on longer sentences over many iterations is expensive. We therefore set the maximum sentence length during training to 15 in the UD experiments, which is cheap enough to explore many parameter settings and languages. We also evaluate our models against test sentences up to length 15. We choose this value because we are more interested in whether our structural constraints help to learn the basic word order of each language, which may be obscured if we use full sentences as in the supervised parsing experiments, since longer sentences typically conjoin several clauses. This setting has been previously used in, e.g., \newcite{DBLP:journals/tacl/BiskH13}. We call this filtered dataset UD15 in the following. We use a different filter for the Google treebanks and set the maximum length for training and testing to 10. This is the setting of \newcite{grave-elhadad:2015:ACL-IJCNLP}, which compares several models including the previous state-of-the-art method of \newcite{naseem-EtAl:2010:EMNLP}. See Tables \ref{tab:ind:stat-ud15} and \ref{tab:ind:stat-google10} for the statistics of the datasets. \begin{table}[t] \centering \scalebox{1.0}{ \begin{tabular}[t]{l r r r r} \hline Language & \#Sents. & \#Tokens & Av. len.
& Test ratio\\\hline Basque & 3,743 & 31,061 & 8.2 & 24.9\\ Bulgarian & 6,442 & 53,737 & 8.3 & 11.0\\ Croatian & 1,439 & 15,285 & 10.6 & 6.6\\ Czech & 46,384 & 388,309 & 8.3 & 12.9\\ Danish & 2,952 & 25,455 & 8.6 & 6.5\\ English & 9,279 & 67,249 & 7.2 & 14.9\\ Finnish & 10,146 & 85,057 & 8.3 & 5.2\\ French & 5,174 & 55,413 & 10.7 & 2.1\\ German & 8,073 & 82,789 & 10.2 & 6.7\\ Greek & 746 & 6,987 & 9.3 & 11.2\\ Hebrew & 1,883 & 19,057 & 10.1 & 9.7\\ Hungarian & 580 & 5,785 & 9.9 & 11.0\\ Indonesian & 2,492 & 25,731 & 10.3 & 11.5\\ Irish & 408 & 3,430 & 8.4 & 18.6\\ Italian & 5,701 & 51,272 & 8.9 & 4.3\\ Japanese & 2,705 & 28,877 & 10.6 & 25.9\\ Persian & 1,972 & 18,443 & 9.3 & 9.9\\ Spanish & 4,249 & 45,608 & 10.7 & 2.2\\ Swedish & 3,545 & 31,682 & 8.9 & 20.7\\\hline \end{tabular} } \caption{Statistics on UD15 (after stripping off punctuation). Av. len. is the average sentence length. Test ratio is the token ratio of the test set.} \label{tab:ind:stat-ud15} \end{table} \begin{table}[t] \centering \scalebox{1.0}{ \begin{tabular}[t]{l r r r r} \hline Language & \#Sents. & \#Tokens & Av. len. & Test ratio\\\hline German & 3,036 & 23,833 & 7.8 & 8.9\\ English & 4,341 & 31,287 & 7.2 & 5.7\\ Spanish & 1,258 & 9,731 & 7.7 & 3.2\\ French & 1,629 & 13,221 & 8.1 & 2.7\\ Indonesian & 799 & 6,178 & 7.7 & 10.7\\ Italian & 1,215 & 9,842 & 8.1 & 6.1\\ Japanese & 5,434 & 32,643 & 6.0 & 3.6\\ Korean & 3,416 & 21,020 & 6.1 & 7.7\\ Br-Portuguese & 1,186 & 9,199 & 7.7 & 11.7\\ Swedish & 1,912 & 12,531 & 6.5 & 18.5\\\hline \end{tabular} } \caption{Statistics on the Google treebanks (maximum length = 10).} \label{tab:ind:stat-google10} \end{table} \subsection{Baseline model} \label{sec:ind:baseline} Our baseline model is the featurized DMV model \cite{bergkirkpatrick-EtAl:2010:NAACLHLT}, which we briefly described in Section \ref{sec:bg:loglinear}. We choose this model as our baseline since it is very simple yet performs competitively with more complicated state-of-the-art systems. Other, more sophisticated methods exist, but they typically require much more complex inference techniques \cite{spitkovsky-alshawi-jurafsky:2013:EMNLP} or external information \cite{marevcek-straka:2013:ACL2013}, which would obscure the contribution of the constraints we impose. Studying the effect of the structural constraints on these stronger models is left for future work. This model contains two tunable parameters, the regularization parameter and the feature templates. We fix the regularization parameter to 10, the same value as in \newcite{bergkirkpatrick-EtAl:2010:NAACLHLT}, since we did not find significant performance changes around this value in our preliminary study. We also basically use the same feature templates as Berg-Kirkpatrick et al.; the only difference is that we add backoff features for {\sc stop} probabilities that ignore both direction and adjacency, which we found slightly improves the performance. \subsection{Evaluation} \label{sec:ind:eval} Evaluation is one of the {\it unsolved} problems in the unsupervised grammar induction task. The main source of difficulty is the inherent ambiguity of the notion of {\it heads} in dependency grammar that we have mentioned several times in this thesis (see Section \ref{sec:corpora:heads} for details).
Typically the quality of a model is evaluated in the same way as in supervised parsing experiments: at test time, the model predicts dependency trees on test sentences; then the {\it accuracy} of the prediction is measured by the {\it unlabelled attachment score} (UAS): \begin{equation} \textrm{UAS} = \frac{\textrm{\# tokens whose head matches the gold head}}{\textrm{\# tokens}} \end{equation} The problem with this measure is that it completely ignores the ambiguity of head definitions, since the score is calculated against a single gold dependency treebank. Some attempts to alleviate this problem of UAS exist, such as the direction-free (undirected) measure \cite{klein-manning:2004:ACL} and a more sophisticated measure called neutral edge detection (NED) \cite{schwartz-EtAl:2011:ACL-HLT20112}. NED expands the set of {\it correct} dependency constructions given the predicted tree and the gold tree to forgive errors that seem to be caused by annotation variations. However, NED is too lenient a metric and causes different problems. For example, under NED (and also under the undirected measure) the two trees ${\sf dogs^\curvearrowleft ran}$ and ${\sf dogs^\curvearrowright ran}$ are treated as equal, although it is apparent that the correct analysis is the former. We suspect this is the reason why many researchers have not used NED and instead use UAS while recognizing its inherent problems \cite{cohen-2011,DBLP:journals/tacl/BiskH13}. However, the current situation is unhealthy for our community. For example, if we find some method that can boost UAS from 40 to 60, we cannot tell whether this improvement is due to the acquisition of essential word orders, such as dependencies between nouns and adjectives, or to mere overfitting to the current gold treebank. The latter case occurs, e.g., when the current gold treebank assumes that the heads of prepositional phrases are the content words and the improved model changes its analysis of them from functional heads to content heads. Since our goal is not to obtain a model that overfits to the gold treebank at a surface level, but to understand the mechanism by which the model can acquire better word orders, we want to remove the possibility of such (fake) improvements in our experiments. We try to minimize this problem not by revising the evaluation metric but by customizing the models. We basically use UAS since the other metrics have more serious drawbacks. However, to avoid unexpected improvements/degradations, we constrain the model to explore only trees that may follow the conventions in the current gold data. This is possible in our corpora as they are annotated under a consistent annotation standard (see Chapter \ref{chap:corpora}). This approach is conceptually similar to \newcite{naseem-EtAl:2010:EMNLP}, although we do not incorporate many constraints on word order, such as the dependencies between a verb and a noun. The details of the constraints we impose on the models are described next. \subsection{Parameter-based Constraints} \label{sec:ind:param-const} The goal of the current experiments is to see the effect of {\it structural constraints}, which we hope will guide the model to better parameters. To do so, on the baseline model (Section \ref{sec:ind:baseline}) we impose several additional constraints in the framework of the structural constraint model described in Section \ref{sec:ind:overview}, and examine how the performance changes (we list these constraints in Section \ref{sec:ind:structural-const}).
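To make the role of such constraints concrete, the following fragment sketches how a constraint enters the E-step as a multiplicative cost on candidate trees. This is a hypothetical Python sketch: the names are ours, the explicit enumeration over trees is purely illustrative, and the actual computation is carried out inside the chart algorithm of Figures \ref{fig:ind:ded1} and \ref{fig:ind:ded-comp} via inside-outside.
\begin{verbatim}
# Hypothetical sketch: the E-step weight of a candidate tree z is taken to be
# proportional to score(z) * cost(z), where cost realizes a constraint.
def posterior_weights(trees, score, cost):
    raw = [score(z) * cost(z) for z in trees]
    total = sum(raw)
    return [w / total for w in raw] if total > 0 else raw

# A hard constraint (e.g., "the maximum stack depth is one" or "no function
# word has a dependent") is the special case of a 0/1 cost:
def hard_cost(is_allowed):
    return lambda z: 1.0 if is_allowed(z) else 0.0
\end{verbatim}
Soft biases, such as the dependency length bias introduced in Section \ref{sec:ind:structural-const}, simply replace the 0/1 cost with a real-valued one.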
In addition to the structural constraints, we also consider another kind of constraint that we call a {\it parameter-based constraint} in the same framework, that is, as a cost function $f(z,\theta)$ in Eq. \ref{eqn:ind:cost}. The parameter-based constraints are constraints on the POS tags in a given sentence and are specified declaratively, e.g., {\it X cannot have a dependent in the sentence}. Note that our main focus in this experiment is the effect of structural constraints. As we describe below, the parameter-based constraints are introduced to make the empirical comparison of structural constraints more meaningful. Note that all constraints below are imposed during training only, as we found in our preliminary experiments that imposing the constraints during decoding (at test time) leads to little performance change. This is natural, in particular for parameter-based constraints, since the model learns parameters that follow the given constraints during training. We consider the following constraints in this category: \begin{description} \item[Function words in UD] This constraint is introduced to alleviate the problem of evaluation that we discussed in Section \ref{sec:ind:eval}. One characteristic of UD is that its annotation style is consistently content-head based, that is, every function word is analyzed as a dependent of the most closely related content word.\footnote{We find a small number of exceptions in each treebank, probably due to remaining annotation errors, though they are negligible.} By forcing the model to explore only structures that follow this convention, we expect that we can minimize the problem of {\it arbitrariness} in head choices. This constraint can easily be implemented by setting every {\sc stop} probability of DMV for function words to 1. We regard the following six POS tags as function words: {\sc adp}, {\sc aux}, {\sc conj}, {\sc det}, {\sc part}, and {\sc sconj}. Since most arbitrary constructions involve function words, we hope this makes the performance changes due to other factors, such as the structural constraints, clearer. Note that this technique is still not perfect and cannot neutralize some annotation variations, such as the internal structure of noun phrases; we do not consider further constraints to handle such more complex cases. \item[Function words in Google treebanks] We consider similar constraints on the Google treebanks, which use the following four POS tags for function words: {\sc det}, {\sc conj}, {\sc prt}, and {\sc adp}. {\sc prt} is a particle corresponding to {\sc part} in UD. As with UD, this corpus follows the annotation standard of Stanford typed dependencies \cite{mcdonald-EtAl:2013:Short} and analyzes most function words as dependents, although this is not the case for {\sc adp}, which is in most cases analyzed as the head of the governing phrase. We therefore introduce another kind of constraint for {\sc adp}, which prohibits it from having no dependents, i.e., {\sc adp} must have at least one dependent. Implementing this constraint in our dynamic programming algorithm is a bit involved compared with the previous {\it unheadable} constraints, mainly due to our {\it split-head} representation. We can achieve this constraint in a similar way to the constituent length memoization technique that we introduced in Figure \ref{fig:ind:ded-comp} with the variable $b$.
Specifically, at {\sc LeftComp-L-1}, we remember whether the reduced head $h$ has at least one dependent if $h$ is {\sc adp}; then at {\sc LeftComp-L-2}, we disallow the rule application if that {\sc adp} is recognized as having no dependent. We also disallow {\sc Combine} if the head is {\sc adp} and the sizes of the two constituents are both 1 (i.e., $i=h=j$). The resulting full constituent would then be reduced by {\sc LeftPred} to become some dependent, which is, however, prohibited. Other function words are in most cases analyzed as dependents, so we use the same restriction as for the function words in UD. \item[Candidates for root words] This constraint is also parameter-based, though it should be distinguished from the above two. Note that the constraints discussed so far are only for alleviating the problem of annotation variations, in that they give the model no hint for acquiring basic word orders, such as the preference for an arc from a verb to a noun. This constraint is intended to give such a hint to the model by restricting the possible root positions in the sentence. We consider two types of such constraints. The first one is called the {\it verb-or-noun constraint}, which restricts the possible root word of the sentence to a verb or a noun. The second one is called the {\it verb-otherwise-noun constraint}, which restricts the search space more eagerly by allowing only a verb to become the root if at least one verb exists in the sentence; otherwise, nouns become candidates. If neither exists, then every word becomes a candidate. In both corpora, {\sc verb} is the only POS tag for representing verbs. We regard three POS tags in UD, {\sc noun}, {\sc pron}, and {\sc propn}, as nouns. In the Google treebanks, {\sc noun} and {\sc pron} are considered nouns. This type of knowledge has often been employed in previous unsupervised parsing models in different ways \cite{gimpel-smith:2012:NAACL-HLT2,gormley-eisner:2013:ACL2013,DBLP:conf/aaai/BiskH12,DBLP:journals/tacl/BiskH13} as {\it seed knowledge} given to the model. We will see how this simple hint about the target grammar affects the performance. \end{description} \subsection{Structural Constraints} \label{sec:ind:structural-const} These are the constraints that we focus on in the experiments. \begin{description} \item[Maximum stack depth] This constraint removes parses involving center-embedding beyond a specific degree and can be implemented by setting the maximum stack depth in the algorithm in Figures \ref{fig:ind:ded1} and \ref{fig:ind:ded-comp}. We also investigate the relaxation of this constraint that allows small embedded constituents, described in Section \ref{sec:ind:modif}. Studying the effect of this constraint is the main topic of our experiments. \item[Dependency length bias] We also explore another structural constraint that biases the model to prefer shorter dependency lengths, which has previously been examined by \newcite{smith-eisner-2006-acl-sa}. With this constraint, each attachment probability is changed as follows: \begin{equation} \theta'_{\textsc{a}}(d|h,\textit{dir}) = \theta_{\textsc{a}}(d|h,\textit{dir}) \cdot \exp(-\beta_{len} \cdot (|h - d| - 1)), \end{equation} where $\theta_{\textsc{a}}(d|h,\textit{dir})$ is the original DMV parameter for attaching $d$ as a dependent of $h$. Unlike \newcite{smith-eisner-2006-acl-sa}, we subtract 1 in each length cost calculation so that an arc between adjacent words incurs no penalty. As \newcite{smith-eisner-2006-acl-sa} noted, this constraint leads to the following form of $f(z,\theta)$ in Eq.
\ref{eqn:ind:cost}: \begin{equation} f_{len}(z,\theta) = \exp \left( -\beta_{len} \cdot \left( \sum_{1\leq d \leq n} (|h_{z,d} - d|) - n \right) \right), \label{eqn:ind:betalen} \end{equation} where $h_{z,d}$ is the analyzed head position for the dependent at $d$. Notice that $\sum_{1\leq d \leq n} (|h_{z,d} - d|)$ is the sum of dependency lengths in the sentence, which means that this model tries to {\it minimize} the sum of dependency lengths in the tree and is closely related to the theory of dependency length minimization, a typological hypothesis that grammars may universally favor shorter dependency lengths \cite{gildea-temperley:2007:ACLMain,DBLP:journals/cogsci/GildeaT10,Futrell18082015,gulordava-merlo-crabbe:2015:ACL-IJCNLP}. \end{description} Notice that center-embedded constructions are typically accompanied by longer dependencies. However, the opposite does not generally hold; there are many constructions that involve longer dependencies but do not contain center-embedding. By comparing these two constraints, we discuss whether center-embedding is a phenomenon worth modeling beyond the simpler bias toward shorter dependency lengths. \subsection{Other Settings} \label{sec:ind:setting} \paragraph{Initialization} Much previous work on unsupervised dependency induction, in particular with DMV and related models, has relied on a heuristic initialization called {\it harmonic initialization} \cite{klein-manning:2004:ACL,bergkirkpatrick-EtAl:2010:NAACLHLT,cohen-smith:2009:NAACLHLT09,blunsom-cohn:2010:EMNLP}, which is obtained by running the first E-step of training with every attachment probability between $i$ and $j$ set to $(|i-j|)^{-1}$.\footnote{There is another variant of harmonic initialization \cite{smith-eisner-2006-acl-sa}, though we do not explore it since the method described here is the one employed in \newcite{bergkirkpatrick-EtAl:2010:NAACLHLT} (p.c.), which is our baseline model (Section \ref{sec:ind:baseline}).} Note that this method also biases the model to favor shorter dependencies. We do not use this initialization with our structural constraints, since one of our motivations is to invent a method that does not rely on such heuristics tightly connected to a specific model (like DMV). We therefore initialize the model to be uniform. However, we also compare such uniform, structurally constrained models with the harmonically initialized model {\it without} structural constraints to see the relative strength of our approach. \paragraph{Decoding} As noted above, every constraint introduced so far is imposed during training only. At decoding (test) time, we do not consider the bias term of Eq. \ref{eqn:ind:joint} and just run the Viterbi algorithm to get the best parse under the original DMV model. \section{Empirical Analysis} \label{sec:ind:exp} We first check the performance differences among several settings on UD and then move on to the Google treebanks to compare our approach with the state-of-the-art methods. \subsection{Universal Dependencies} \begin{figure}[p] \centering \resizebox{0.99\textwidth}{!} {\includegraphics[]{./data/ud/induction/feature_dmv_plot.pdf}} \vspace{-20pt} \caption{Attachment accuracies on UD15 with the function word constraint and structural constraints. The numbers in parentheses are the maximum length of a constituent allowed to be embedded.
For example, (3) means that a limited form of center-embedding of depth two, in which the length of the embedded constituent is $\leq 3$, is allowed.} \label{fig:ind:ud15-none} \end{figure} \paragraph{When no help is given to the root word} We first see how the performance changes when using different length biases or stack depth bounds (Section \ref{sec:ind:structural-const}). The only parameter-based constraint is the function word constraint of UD, that is, no function word can be the head of another token. Figure \ref{fig:ind:ud15-none} summarizes the results. Although the variance is large, we can make the following observations: \begin{itemize} \item Small (weak) length biases (e.g., $\beta_{len}=0.1$) often work better than stronger biases. In many languages, e.g., English, Indonesian, and Croatian, the performance improves with a small bias and degrades as the bias is sharpened. Note that the leftmost point is the score with no constraints, i.e., just the uniformly initialized model. \item We try five different stack depth bounds between depth one and depth two. The results show that in many cases the intermediate settings, e.g., 1(2), 1(3), and 1(4), work better. The performance of depth two is almost the same as the no-constraint baseline, meaning that stack depth two is too lenient to restrict the search space during learning. This observation is consistent with the empirical stack depth analysis in the previous chapter (Section \ref{sec:trans:ud-result}). \item On average, we find the best setting is the small length bias $\beta_{len}=0.1$. In Table \ref{tab:ind:ud15-none} we summarize the accuracies of the better-performing configurations in this figure, as well as the harmonically initialized models. \item The performance for some languages, in particular Greek and Finnish, is quite low compared with other languages. We inspected the output trees for these languages and found that the model fails to identify very basic word orders, such as the tendency of a verb to be the root word. Essentially, the models so far receive no such explicit knowledge about the grammar, and learning it from scratch is known to be particularly hard. We thus see next how the performance changes if a small amount of {\it seed knowledge} about the grammar is given to the model. \end{itemize} \begin{table}[t] \centering \begin{tabular}{lccccc}\hline &Unif.&$C=2$&$C=3$&$\beta_{len} = 0.1$&Harmonic\\\hline Basque&44.0&{\bf48.5}&47.6&46.1&44.5\\ Bulgarian&73.6&{\bf74.7}&74.2&70.0&72.5\\ Croatian&40.3&30.5&37.7&{\bf54.5}&47.3\\ Czech&61.8&54.8&{\bf64.5}&58.3&54.2\\ Danish&40.9&{\bf43.0}&41.3&40.9&40.9\\ English&38.7&54.8&41.3&{\bf56.6}&39.1\\ Finnish&26.6&16.7&25.6&{\bf28.3}&26.4\\ French&35.2&45.9&{\bf46.6}&35.7&34.6\\ German&49.8&47.3&{\bf53.6}&50.3&49.5\\ Greek&29.2&19.3&18.3&30.4&{\bf 49.3}\\ Hebrew&60.7&59.3&57.1&{\bf60.9}&57.3\\ Hungarian&66.7&43.2&{\bf72.2}&63.6&65.8\\ Indonesian&36.4&{\bf58.7}&57.4&50.0&40.8\\ Irish&64.1&64.5&{\bf64.8}&63.0&64.4\\ Italian&61.0&65.0&70.7&{\bf72.2}&65.8\\ Japanese&56.1&{\bf76.7}&55.0&{\bf 76.7}&48.2\\ Persian&{\bf49.0}&44.4&44.7&40.7&41.4\\ Spanish&57.3&56.0&57.3&{\bf62.5}&55.4\\ Swedish&43.4&53.6&55.2&{\bf55.9}&49.5\\\\ Avg&49.2&50.4&51.8&{\bf53.5}&49.8\\\hline \end{tabular} \caption{Accuracy comparison on UD15 for selected configurations including harmonic initialization (Harmonic). Unif. is the baseline model without structural constraints. $C$ is the allowed constituent length when the maximum stack depth is one. $\beta_{len}$ is the strength of the length bias.
} \label{tab:ind:ud15-none} \end{table} \begin{table}[t] \centering \begin{tabular}{lcccccccc}\hline & \multicolumn{4}{c}{Verb-or-noun constraint} & \multicolumn{4}{c}{Verb-otherwise-noun constraint} \\ &Unif.&$C=2$&$C=3$&$\beta_{len} = 0.1$&Unif.&$C=2$&$C=3$&$\beta_{len} = 0.1$ \\\hline Basque &44.7&{\bf55.2}&54.3&46.4&{\bf55.8}&55.6&54.8&51.0\\ Bulgarian &73.4&{\bf75.8}&75.1&64.1&72.7&{\bf75.8}&75.2&70.6\\ Croatian &40.1&{\bf52.5}&41.4&47.3&{\bf57.0}&52.5&52.5&55.8\\ Czech &50.7&54.8&{\bf64.7}&59.2&63.2&54.9&{\bf66.3}&58.1\\ Danish &40.9&{\bf43.1}&41.3&40.9&48.7&46.9&{\bf50.1}&47.3\\ English &39.8&{\bf55.8}&41.3&40.2&57.2&55.2&{\bf58.5}&53.9\\ Finnish &26.2&27.7&27.7&{\bf28.3}&40.3&32.5&34.3&{\bf40.4}\\ French &35.7&{\bf50.9}&49.5&47.0&44.2&{\bf55.8}&54.6&42.1\\ German &49.7&47.1&{\bf56.0}&51.2&49.5&55.7&{\bf57.4}&49.9\\ Greek &61.7&{\bf70.0}&62.1&60.2&60.5&{\bf68.8}&62.0&60.2\\ Hebrew &52.9&58.7&{\bf60.9}&57.5&54.8&{\bf62.6}&54.2&57.2\\ Hungarian &68.8&41.6&{\bf71.3}&63.6&69.2&65.5&{\bf72.4}&64.8\\ Indonesian&32.0&{\bf58.3}&58.1&43.6&50.2&58.6&58.5&{\bf59.4}\\ Irish &63.1&64.5&{\bf65.2}&63.0&63.4&64.4&{\bf64.7}&63.9\\ Italian &62.7&{\bf77.1}&73.6&72.5&69.2&65.2&69.8&{\bf72.4}\\ Japanese &56.4&70.5&56.9&{\bf73.9}&56.9&69.0&57.0&{\bf73.5}\\ Persian &46.9&45.1&{\bf51.2}&39.7&48.0&45.1&{\bf51.1}&41.7\\ Spanish &46.8&56.1&57.3&{\bf63.1}&57.7&56.2&58.5&{\bf62.3}\\ Swedish &43.5&{\bf44.8}&43.2&43.5&{\bf57.9}&53.3&53.3&56.9\\\\ Avg.&49.3&55.2&{\bf55.3}&52.9&56.7&57.6&{\bf58.2}&56.9\\\hline \end{tabular} \caption{Accuracy comparison on UD15 for selected configurations with the hard constraints on possible root POS tags.} \label{tab:ind:ud15-vn} \end{table} \paragraph{Constraining POS tags for root words} Table \ref{tab:ind:ud15-vn} shows the results when we add two kinds of seed knowledge as parameter-based constraints (Section \ref{sec:ind:param-const}). We first see the result with the verb-or-noun constraint. This constraint comes from the main assumption of UD that the root token of a sentence is its main predicate, which is basically a verb, or a noun or a adjective if the main verb is copula. We remove adjective from this set as we found it is relative rare across languages. Interestingly in this case, the stack depth constraints ($C=2$ and $C=3$) work the best. In particular, the average score of the length bias ($\beta_{len}=0.1$) drops. We inspect the reason of this below. We next see the effect of another constraint, the verb-otherwise-noun constraint, which excludes nouns from the candidate for the root if both a verb and noun exist. This probably decreases the recall though we expect that it increases the performance as the majority of predicates is verbs. As we expected, with this constraint the average performance of baseline uniform model increases sharply from 49.2 to 56.7 (+7.5), which is larger than any increases with structural constraint to the original baseline model. In this case, though the change is small, again our stack depth constraints perform the best (58.2 with $C=3$); the average score with the length bias does not increase. \subsection{Qualitative analysis} When we inspect the scores of the models without root POS constraints in Table \ref{tab:ind:ud15-none} and the models with the verb-or-noun constraint in Table \ref{tab:ind:ud15-vn}, we notice that the behaviors of our models with stack depth constraints and other models are often quite different. 
Specifically, \begin{itemize} \item It is only Greek on which the baseline uniform model improves score from the setting with no root POS constraint. \item In other languages, the scores of the uniform model are unchanged or dropped when adding the root POS constraint. For example the score for Czech drops from 61.8 to 50.7. \item The same tendency is observed in the models of $\beta_{len}=0.1$. Its score for Greek improves from 30.4 to 60.2 while other scores are often unchanged or dropped; an exception is French, on which the score improves from 35.7 to 47.0. For other languages, such as Croatian (-7.2), English (-16.4), Indonesian (-6.4), and Swedish (-12.4), the scores sharply drop. \item On the other hand, we observe no significant performance drops in our models with stack depth constraints (i.e., $C=2$ and $C=3$) by adding the root POS constraint. \end{itemize} These differences between the length constraints and stack depth constraints are interesting and may shed some light on the characteristics of two approaches. Here we look into the output parses in English, with which the performance changes are typical, i.e., the model of $\beta_{len}=0.1$ drops while the scores of other models are almost unchanged. \begin{figure}[p] \centering \definecolor{g70}{gray}{0.6} \newcommand{\gw}[1]{\color{g70}{#1}} \newcommand{\onthenext}{\gw{On} \& \gw{the} \& \gw{next} \& \gw{two} \& \gw{pictures} \& \gw{he} \& \gw{took} \& \gw{screenshots} \& \gw{of} \& \gw{two} \& \gw{beheading} \& \gw{video's}\\ {\sc adp} \& {\sc det} \& {\sc adj} \& {\sc num} \& {\sc noun} \& {\sc pron} \& {\sc verb} \& {\sc noun} \& {\sc adp} \& {\sc num} \& {\sc noun} \& {\sc noun} } \begin{minipage}[b]{.99\linewidth} \centering {\small \begin{dependency}[theme=simple,thick] \begin{deptext}[column sep=0.2cm] \onthenext \\ \end{deptext} \depedge{5}{1}{} \depedge{5}{2}{} \depedge{5}{3}{} \depedge{5}{4}{} \depedge{7}{5}{} \depedge{7}{6}{} \deproot[edge unit distance=3ex]{7}{} \depedge{7}{8}{} \depedge{12}{9}{} \depedge{12}{10}{} \depedge{12}{11}{} \depedge{8}{12}{} \end{dependency} } \subcaption{Gold parse.} \end{minipage} \begin{minipage}[b]{.99\linewidth} \centering {\small \begin{dependency}[theme=simple,thick] \begin{deptext}[column sep=0.2cm] \onthenext \\ \end{deptext} \depedge[red]{6}{1}{} \depedge[red]{3}{2}{} \depedge{5}{3}{} \depedge{5}{4}{} \depedge[red]{6}{5}{} \depedge[red]{8}{6}{} \depedge[red]{6}{7}{} \depedge[red]{11}{8}{} \depedge[red]{10}{9}{} \depedge[red]{8}{10}{} \depedge{12}{11}{} \deproot[edge unit distance=3ex,red]{12}{} \end{dependency} } \subcaption{Output by uniform, uniform + verb-or-noun, and $\beta=0.1$ + verb-or-noun.} \end{minipage} \begin{minipage}[b]{.99\linewidth} \centering {\small \begin{dependency}[theme=simple,thick] \begin{deptext}[column sep=0.2cm] \onthenext \\ \end{deptext} \depedge{5}{1}{} \depedge[red]{3}{2}{} \depedge{5}{3}{} \depedge{5}{4}{} \deproot[edge unit distance=3ex,red]{5}{} \depedge{7}{6}{} \depedge[red]{5}{7}{} \depedge{7}{8}{} \depedge[red]{10}{9}{} \depedge[red]{11}{10}{} \depedge[red]{8}{11}{} \depedge[red]{11}{12}{} \end{dependency} } \subcaption{Output by $C=2$, $C=2$ + verb-or-noun.} \label{fig:ind:c2-eng-ud} \end{minipage} \begin{minipage}[b]{.99\linewidth} \centering {\small \begin{dependency}[theme=simple,thick] \begin{deptext}[column sep=0.2cm] \onthenext \\ \end{deptext} \depedge{5}{1}{} \depedge[red]{3}{2}{} \depedge{5}{3}{} \depedge{5}{4}{} \deproot[edge unit distance=3ex,red]{5}{} \depedge[red]{5}{6}{} \depedge[red]{6}{7}{} \depedge{7}{8}{} 
\depedge[red]{10}{9}{} \depedge[red]{11}{10}{} \depedge[red]{8}{11}{} \depedge[red]{11}{12}{} \end{dependency} } \subcaption{Output by $C=3$, $C=3$ + verb-or-noun.} \label{fig:ind:c3-eng-ud} \end{minipage} \begin{minipage}[b]{.99\linewidth} \centering {\small \begin{dependency}[theme=simple,thick] \begin{deptext}[column sep=0.2cm] \onthenext \\ \end{deptext} \depedge[red]{4}{1}{} \depedge[red]{3}{2}{} \depedge[red]{4}{3}{} \depedge{5}{4}{} \depedge{7}{5}{} \depedge{7}{6}{} \deproot[edge unit distance=4ex]{7}{} \depedge{7}{8}{} \depedge[red]{10}{9}{} \depedge[red]{11}{10}{} \depedge{12}{11}{} \depedge[red]{7}{12}{} \end{dependency} } \subcaption{Output by $\beta_{len}=0.1$.} \label{fig:ind:beta-eng-ud} \end{minipage} \caption{Comparison of output parses of several models on a sentence in English UD. The outputs of $C=2$ and $C=3$ do not change with the root POS constraint, while the output of $\beta_{len}=0.1$ changes to the same one of the uniform model with the root POS constraint. Colored arcs indicate the wrong predictions. Note surface forms are not observed by the models (only POS tags are). } \label{fig:ind:error-eng-ud} \end{figure} When we compare output parses of different models, we notice that often the same tree is predicted by several different models. Figure \ref{fig:ind:error-eng-ud} shows examples of output parses of different models. The following observations are made with those errors. Note that these are typical, in that the same observation can be often made on other sentences as well. \begin{enumerate} \item One strong observation from Figure \ref{fig:ind:error-eng-ud} is that the output of $\beta_{len}=0.1$ reduces to that of the uniform model when the root POS constraint is added to the model. As can be seen in other parses, every model in fact predicts that the root token is a noun or a verb, which suggests this explicit root POS constraint is completely redundant in the case of English. \item Contrary to $\beta_{len}=0.1$, the stack depth constraints, $C=2$ and $C=3$, are not affected by the root POS constraint. This is consistent with the scores in Tables \ref{tab:ind:ud15-none} and \ref{tab:ind:ud15-vn}; the score of $C=3$ is unchanged and that of $C=2$ increases by just 1.0 point with the root POS constraint. \item Whie the scores of the uniform model and $C=3$ are similar in Table \ref{tab:ind:ud15-none} (38.7 and 41.3, respectively), the properties of output parses seem very different. The typical errors made by $C=3$ are the root tokens, which are in most cases predicted as nouns as in Figure \ref{fig:ind:c3-eng-ud}, and arcs between nouns and verbs, which also are typically predicted as {\sc noun} $\rightarrow$ {\sc verb}. Contrary to these {\it local} mistakes, the uniform model often fail to capture the basic structure of a sentence. For example, while $C=3$ correctly identifies that ``of two beheading video's'' comprises a constituent, which modifies ``screenshots'', which in turn becomes an argument of ``took'', the parse of the uniform model is more corrupt in that we cannot identify any semantically coherent units from it. See also Figure \ref{fig:ind:error-eng-ud-2} where we compare outputs of these models on another sentence. 
\end{enumerate} \begin{figure}[p] \centering \definecolor{g70}{gray}{0.6} \newcommand{\gw}[1]{\color{g70}{#1}} \newcommand{\onthenext}{\gw{But} \& \gw{he} \& \gw{has} \& \gw{insisted} \& \gw{that} \& \gw{he} \& \gw{wants} \& \gw{nuclear} \& \gw{power} \& \gw{for} \& \gw{peaceful} \& \gw{purposes}\\ {\sc conj} \& {\sc pron} \& {\sc aux} \& {\sc verb} \& {\sc sconj} \& {\sc pron} \& {\sc verb} \& {\sc adj} \& {\sc noun} \& {\sc adp} \& {\sc adj} \& {\sc noun} } \begin{minipage}[b]{.99\linewidth} \centering {\small \begin{dependency}[theme=simple,thick] \begin{deptext}[column sep=0.2cm] \onthenext \\ \end{deptext} \depedge{4}{1}{} \depedge{4}{2}{} \depedge{4}{3}{} \deproot[edge unit distance=3ex]{4}{} \depedge{7}{5}{} \depedge{7}{6}{} \depedge{4}{7}{} \depedge{9}{8}{} \depedge{7}{9}{} \depedge{12}{10}{} \depedge{12}{11}{} \depedge{7}{12}{} \end{dependency} } \vspace{-10pt} \subcaption{Gold parse.} \end{minipage} \begin{minipage}[b]{.99\linewidth} \centering {\small \begin{dependency}[theme=simple,thick] \begin{deptext}[column sep=0.2cm] \onthenext \\ \end{deptext} \depedge[red]{2}{1}{} \deproot[edge unit distance=3ex,red]{2}{} \depedge{4}{3}{} \depedge[red]{2}{4}{} \depedge[red]{6}{5}{} \depedge[red]{11}{6}{} \depedge[red]{6}{7}{} \depedge{9}{8}{} \depedge{7}{9}{} \depedge[red]{9}{10}{} \depedge{12}{11}{} \depedge[red]{4}{12}{} \end{dependency} } \vspace{-10pt} \subcaption{Output by the uniform model.} \end{minipage} \begin{minipage}[b]{.99\linewidth} \centering {\small \begin{dependency}[theme=simple,thick] \begin{deptext}[column sep=0.2cm] \onthenext \\ \end{deptext} \depedge[red]{2}{1}{} \deproot[edge unit distance=3ex,red]{2}{} \depedge{4}{3}{} \depedge[red]{2}{4}{} \depedge[red]{6}{5}{} \depedge[red]{4}{6}{} \depedge[red]{6}{7}{} \depedge{9}{8}{} \depedge{7}{9}{} \depedge{12}{10}{} \depedge{12}{11}{} \depedge[red]{9}{12}{} \end{dependency} } \vspace{-10pt} \subcaption{Output by $C=3$.} \label{fig:ind:c3-eng-ud-2} \end{minipage} \begin{minipage}[b]{.99\linewidth} \centering {\small \begin{dependency}[theme=simple,thick] \begin{deptext}[column sep=0.2cm] \onthenext \\ \end{deptext} \depedge[red]{2}{1}{} \depedge{4}{2}{} \depedge{4}{3}{} \deproot[edge unit distance=3ex,red]{4}{} \depedge[red]{6}{5}{} \depedge{7}{6}{} \depedge{4}{7}{} \depedge{9}{8}{} \depedge{7}{9}{} \depedge[red]{9}{10}{} \depedge{12}{11}{} \depedge[red]{4}{12}{} \end{dependency} } \vspace{-10pt} \subcaption{Output by $\beta_{len}=0.1$.} \label{fig:ind:beta-eng-ud-2} \end{minipage} \caption{Another comparison between outputs of the uniform model and $C=3$ in English UD. We also show $\beta_{len}=0.1$ for comparison. Although the score difference is small (see Table \ref{tab:ind:ud15-none}), the types of errors are different. In particular the most of parse errors by $C=3$ are at local attachments (first-order). For example it consistently recognizes a noun is a head of a verb, and a noun is a sentence root. Note an error on ``power $\rightarrow$ purposes'' is an example of PP attachment errors, which may not be solved under the current problem setting receiving only a POS tag sequence. } \label{fig:ind:error-eng-ud-2} \end{figure} \paragraph{Discussion} The first observation, i.e., the output of $\beta_{len}=0.1$ + verb-or-noun reduces to that of the uniform model, is found in most other sentences as well. Along with the results in other languages, we suspect the effect of the length bias gets weak when the root POS constraint is given. 
We do not analyze the cause of this degradation further, but the discussion below on the difference between the two constraints, i.e., the stack depth constraint and the length bias, may be relevant to it. The essential difference between these two approaches is in the structural form that is constrained: the length bias (i.e., $\beta_{len}$) is a bias on each dependency arc in the tree, while the stack depth constraint, which corresponds to center-embeddedness, is inherently a constraint on constituent structures. Interestingly, we can see the effect of this difference in the output parses in Figures \ref{fig:ind:error-eng-ud} and \ref{fig:ind:error-eng-ud-2}. Note that we do not use the constraints at decoding, so all differences are due to the parameters learned with the constraints during training. Nevertheless, we can detect some typical errors of the two approaches. One difference between the trees in Figure \ref{fig:ind:error-eng-ud} is in the construction of the phrase ``On ... pictures''. $\beta_{len}=0.1$ predicts that ``On the next two'' comprises a constituent, which modifies ``pictures'', while $C=2$ and $C=3$ predict that ``the next two pictures'' comprises a constituent, which is correct, although the head of the determiner is incorrectly predicted. On the other hand, $\beta_{len}=0.1$ works well for finding more primitive dependency arcs between POS tags, such as arcs from verbs to nouns, which are often incorrectly recognized under the stack depth constraints. Similar observations can be made for the trees in Figure \ref{fig:ind:error-eng-ud-2}. See the construction of ``for peaceful purposes''. It is only $C=3$ (and $C=2$, which we omit) that predicts that it forms a constituent. In other positions, again, $\beta_{len}=0.1$ works better at finding local dependency relationships. The head of ``purposes'' is predicted differently, but this choice is equally difficult in the current problem setting (see the caption of Figure \ref{fig:ind:error-eng-ud-2}). These observations may explain why the root POS constraints work better with the stack depth constraints than with the dependency length bias. With the stack depth constraints, the main source of improvement is the detection of constituents, but this constraint itself does not help to resolve some dependency relationships, e.g., the dependency direction between verbs and nouns. The root POS constraints are thus orthogonal to this approach. They may help to solve the remaining ambiguities, e.g., the head choice between a noun and a verb. On the other hand, the dependency length bias is most effective at finding basic dependency relationships between POS tags, while the resulting tree may contain implausible constituent units. Thus the effect of the length bias seems to overlap somewhat with that of the root POS constraints, which may be the reason why they do not collaborate well with each other. \paragraph{Other languages} \REVISE{ We further inspect the results of some languages with exceptional behaviors separately below. \begin{description} \item[Japanese] In Figure \ref{fig:ind:ud15-none}, we can see that the performance on Japanese is best with a strong stack depth constraint, such as depth one or $C=2$, and drops when the constraint is relaxed. This may seem counterintuitive given our oracle results in Chapter \ref{chap:transition} (e.g., Figure \ref{fig:load-comparison-relax-ud}), which showed that Japanese is a language in which the ratio of center-embedding is relatively high.
Inspecting the output parses, we found that these results are essentially due to the word order of Japanese, which is mainly head-final. With a strong constraint (e.g., stack depth one), the model tries to build a parse that is purely left- or right-branching. An easy way to create such a parse is to place the root word at the beginning or the end of the sentence and then connect adjacent tokens from left to right, or right to left. This is what happens when a severe constraint, e.g., a maximum stack depth of one, is imposed. Since the position of the root token is in most cases correctly identified, the score gets relatively high. On the other hand, when the constraint is relaxed, the model also tries to explore parses in which the root token is not at the beginning/end of the sentence but at internal positions, and it fails to find the head-final pattern of Japanese. This Japanese result suggests that our stack depth constraint sometimes helps learning even when the imposed stack depth bound does not fit the syntax of the target language well, though the learning behavior differs from our expectation. In this case, the model does not capture the syntax correctly, in the sense that Japanese sentences cannot be parsed within a severe stack depth bound, but it succeeds in finding syntactic patterns that are a very rough approximation of the true syntax, resulting in a higher score. \item[Finnish] Finnish is an inflectional language with rich morphology and few function words. This is essentially the reason for the consistently lower accuracies of Finnish, even when the constraint on root POS tags is given. Recall that all our models are subject to the function word constraint (Section \ref{sec:ind:param-const}). Though our primary motivation for introducing this constraint is to alleviate problems in evaluation, it also greatly reduces the search space if the ratio of function words is high. Also, at test time, a higher ratio of function words indicates a higher chance of correct attachments, since the head candidates for a function word are limited to content words.\footnote{Recall that although we remove the constraints at test time, the model rarely finds a parse with function words at internal positions, since it is trained to avoid such parses.} Below is an example of a dependency tree in the Finnish treebank: \begin{center} {\small \definecolor{g70}{gray}{0.6} \newcommand{\gw}[1]{\color{g70}{#1}} \begin{dependency}[theme=simple,thick] \begin{deptext}[column sep=0.2cm] \gw{Liikett\"a} \& \gw{ei} \& \gw{ole} \& \gw{ei} \& \gw{toimintaa} \\ {\sc noun} \& {\sc verb} \& {\sc verb} \& {\sc verb} \& {\sc noun} \\ \end{deptext} \depedge{3}{1}{} \depedge{3}{2}{} \deproot[edge unit distance=1.5ex]{3}{} \depedge{3}{4}{} \depedge{4}{5}{} \end{dependency} } \end{center} This sentence comprises {\sc noun} and {\sc verb} tokens only, and there are many similar sentences. This example also explains why the performance on Finnish remains low even with the root POS constraints. Table \ref{ind:tab:func} lists the ratio of function words in the training corpora. We can see that Finnish is the only language in which the ratio of function words is less than 10\%. Also, the ratio in Japanese is very high, which probably explains the relatively high overall scores of Japanese. Thus, the variation of the scores across languages in the current experiment is largely explained by the ratio of function words in each language.
\item[Greek] In Table \ref{tab:ind:ud15-none}, the scores on Greek with the stack depth constraints are consistently worse than the uniform baseline. Though the overall scores are low, the situation changes considerably with the root POS constraints, and with them the scores become stable. A possible explanation for these exceptional behaviors might be the relatively small ratio of function words (Table \ref{ind:tab:func}) in the data, along with the small size of the training data (Table \ref{tab:ind:stat-ud15}), both of which could be partially alleviated by the root POS constraints. A more linguistically intuitive explanation might be that Greek is a relatively free word order language and our structural constraints do not work well for guiding the model toward such grammars. However, to draw such a conclusion, we would have to set up the experiments more carefully, e.g., by eliminating the bias caused by the smaller size of the data. We thus leave it for future work to discuss the limitations of the current approach with respect to the typological differences among languages. \end{description} \begin{table}[t] \centering \begin{minipage}{.35\textwidth} \centering \begin{tabular}[t]{l|l} & Ratio (\%) \\\hline basque & 26.57 \\ bulgarian & 25.88 \\ croatian & 24.55 \\ czech & 20.09 \\ danish & 30.66 \\ english & 27.98 \\ finnish & {\bf 9.63} \\ french & 37.84 \\ german & 32.09 \\ greek & 16.94 \\ \end{tabular} \end{minipage} \begin{minipage}{.35\textwidth} \centering \begin{tabular}[t]{l|l} & Ratio (\%) \\\hline hebrew & 32.29 \\ hungarian & 23.76 \\ indonesian & 19.68 \\ irish & 36.09 \\ italian & 37.73 \\ japanese & {\bf 45.14} \\ persian & 23.25 \\ spanish & 36.99 \\ swedish & 29.64 \\ \end{tabular} \end{minipage} \caption{Ratio of function words in the training corpora of UD (sentences of length 15 or less).} \label{ind:tab:func} \end{table} } \subsection{Google Universal Dependency Treebanks} So far our comparison has been limited to variants of our baseline DMV model with some constraints. Next we examine the relative performance of this approach compared with the current state-of-the-art unsupervised systems on another dataset, the Google treebanks. Table \ref{tab:ind:google} shows the results. The scores of the other systems are borrowed from \newcite{grave-elhadad:2015:ACL-IJCNLP}. In this experiment, we focus only on the settings where the root word is restricted with the verb-otherwise-noun constraint. Among our structural constraints, again the stack depth constraints perform the best. In particular, the scores with $C=2$ are stable across languages. \begin{table}[t] \centering \begin{tabular}[t]{lcccccc}\hline &Unif.&$C=2$&$C=3$&$\beta_{len} = 0.1$&Naseem10&Grave15 \\\hline German&64.5&64.3&64.6&62.5&53.4&60.2 \\ English&57.9&59.5&57.9&56.9&66.2&62.3 \\ Spanish&68.2&71.1&70.5&69.6&71.5&68.8 \\ French&69.2&69.6&70.1&66.4&54.1&72.3 \\ Indonesian&66.8&67.4&66.0&66.7&50.3&69.7 \\ Italian&43.9&67.3&65.9&44.0&46.5&64.3 \\ Japanese&47.5&54.5&47.4&47.6&58.2&57.5 \\ Korean&28.6&30.7&28.3&43.2&48.8&59.0 \\ Br-Portuguese&63.0&67.1&62.7&62.6&46.4&68.3 \\ Swedish&67.4&67.9&67.3&66.4&64.3&66.2 \\\\ Avg&57.7&62.0&60.1&58.6&56.0&64.8 \\\hline \end{tabular} \caption{Attachment scores on the Google universal treebanks (up to length 10). All proposed models are trained with the verb-otherwise-noun constraint.
Naseem10 = the model with manually crafted syntactic rules between POS tags (Naseem et al., 2010); Grave15 = a model that also relies on the syntactic rules but is trained discriminatively (Grave and Elhadad, 2015).} \label{tab:ind:google} \end{table} All our methods outperform the strong baseline model of \newcite{naseem-EtAl:2010:EMNLP}, which encodes manually crafted rules (12 rules) such as {\sc verb $\rightarrow$ noun} and {\sc adp $\rightarrow$ noun} via the posterior regularization method \cite{journals/jmlr/GanchevGGT10}. Compared with this, our baseline method uses fewer syntactic rules, encoded via parameter-based constraints, 5 in total (3 for function words and the verb-otherwise-noun constraint), and is much simpler than their posterior regularization method. \newcite{grave-elhadad:2015:ACL-IJCNLP} is a more sophisticated model, which utilizes the same syntactic rules as Naseem et al.'s method. Our models do not outperform this model, though Korean is the only language on which ours do not perform competitively with theirs. \section{Discussion} We found that our stack depth constraints improve the performance of unsupervised grammar induction across languages and datasets, in particular when some seed knowledge about the grammar is given to the model. However, we also found that in many languages the improvement over the baseline without structural constraints becomes small when such knowledge is given. Also, the performance approaches that of the current state-of-the-art method, which utilizes much more complex machine learning techniques as well as manually specified syntactic rules. We thus speculate that our models already reach a limit under the current problem setting, that is, learning a dependency grammar from POS input only. Recall that the annotation of UD is content-head based and every function word is a dependent of the most closely related content word. This means that under the current first-order model on POS tag inputs, much important information that a supervised parser would exploit is discarded. For example, the model would collapse both a noun phrase and a prepositional phrase into their head (probably {\sc noun}), while this distinction is crucial; e.g., an adjective cannot be attached to a prepositional phrase. One way to exploit such clues for disambiguation is to utilize boundary information \cite{spitkovsky-alshawi-jurafsky:2012:EMNLP-CoNLL,spitkovsky-alshawi-jurafsky:2013:EMNLP}. Typically, unsupervised learning of structures gets more challenging when employing more structurally complex models. However, one of our strong observations from the current experiment is that our stack depth constraint rarely hinders learning, i.e., it does not decrease the performance. Here we have focused on the very simple generative model of DMV, though it may be more interesting to see what happens when imposing this constraint on more structurally complex models, for which learning is much harder. There remains much room for further improvement, and this type of study would be important toward one of the goals of unsupervised parsing: identifying which structures can be learned without explicit supervision \cite{bisk-hockenmaier:2015:ACL-IJCNLP}. \REVISE{ In this work, our main focus was the imposed structural constraints (or linguistic priors) themselves, and we did not pay much attention to the method for encoding this prior knowledge. That is, our method for injecting constraints was crude, i.e., hard constraints in the E step (Eq.
\ref{eqn:ind:cost}), and there exist more sophisticated methods with newer machine learning techniques. Posterior regularization (PR) that we compared the performance with \cite{naseem-EtAl:2010:EMNLP} is one of such techniques. The crucial difference between our imposing hard constraints and PR is that PR imposes constraints in expectation, i.e., every constraint becomes a {\it soft} constraint. If we reformulate our model with PR, then that may probably impose a soft constraint, e.g., {\it the expected number of occurrence of center-embedding up to some degree in a parse is less than X}. {\it X} becomes 1 if we try to resemble the behavior of our hard constraints, but could be any values, and such flexibility is one advantage, which our current method cannot appreciate. Thus, from a machine learning perspective, one interesting direction is to compare the performances of two approaches with the (conceptually) same constraints. } \section{Conclusion} In this study, we have shown that our imposed stack depth constraint improves the performance of unsupervised grammar induction in many settings. Specifically, it often does not harm the performance when it already performs well while it reinforces the relatively poorly performed models (Table \ref{tab:ind:google}). One limitation of the current approach is that the information that the parser can utilize is very superficial (i.e., the first order model on content-head based POS tags). However, our positive results in the current experiment are an important first step for the current line of research and encourage further study on more structurally complex model beyond the simple DMV model. \chapter{Conclusions} \label{chap:conclusion} Identifying universal syntactic constraints of language is an attractive goal both from the theoretical and empirical viewpoints. To shed light on this fundamental problem, in this thesis, we pursued the {\it universalness} of the language phenomena of center-embedding avoidance, and its practical utility in natural language processing tasks, in particular unsupervised grammar induction. Along with these investigations, we develop several computational tools capturing the syntactic regularities stemming from center-embedding. The tools we presented in this thesis are left-corner parsing methods for dependency grammars. We formalized two related parsing algorithms. The transition-based algorithm presented in Chapter \ref{chap:transition} is an incremental algorithm, which operates on the stack, and its stack depth only grows when processing center-embedded constructions. We then considered tabulation of this incremental algorithm in Chapter \ref{chap:induction}, and obtained an efficient polynomial time algorithm with the left-corner strategy. In doing so, we applied {\it head-splitting} techniques \cite{Eisner:1999:EPB:1034678.1034748,eisner-2000-iwptbook}, with which we removed the spurious ambiguity and reduced the time complexity from $O(n^6)$ to $O(n^4)$, both of which were essential for our application of inside-outside calculation with filtering. Dependency grammars were the most suitable choice for our cross-linguistic analysis on language universality, and we obtained the following fruitful empirical findings using the developed tools for them. \begin{itemize} \item Using multilingual dependency treebanks, we quantitatively demonstrate the universalness of center-embedding avoidance. 
We found that most syntactic constructions across languages can be covered within highly restricted bounds on the degree of center-embedding, such as one or zero, when relaxing the condition on the size of the embedded constituent. \item From the perspective of parsing {\it algorithms}, the above findings mean that a left-corner parser can be utilized as a tool for exploiting universal constraints during parsing. We verified this ability of the parser empirically by comparing its growth of stack depth when analyzing sentences on treebanks with those of existing algorithms, and showed that only the behavior of the left-corner parser is consistent across languages. \item Based on these observations, we examined whether the found syntactic constraints help in finding the syntactic patterns (grammars) in given sentences through experiments on unsupervised grammar induction, and found that our method often boosts the performance over the baseline and competes with the current state-of-the-art method in a number of languages. \end{itemize} We believe the presented study will be the starting point of many future inquiries. As we have mentioned several times, our choice of dependency grammars as the representation was motivated by their cross-linguistic suitability as well as their computational tractability. Now that we have evidence for the language universality of center-embedding avoidance, we consider that one exciting direction would be to explore unsupervised learning of constituent structures exploiting our found constraint, a problem that has not yet been solved with traditional PCFG-based methods. Note that unlike the dependency length bias, which is only applicable to dependency-based models, our constraint is conceptually independent of the grammar formalism. As we have mentioned in Chapter \ref{chap:1}, recently there has been a growing interest in the tasks of grounding, or semantic parsing. Also, another direction of grammar induction, in particular with more sophisticated grammar formalisms such as CCG, has been initiated with some success. There remain many open questions in these settings, e.g., on the necessary amount of seed knowledge to make learning tractable \cite{bisk-hockenmaier:2015:ACL-IJCNLP,AAAI159835}. Arguably, a system with less initial seed knowledge or fewer assumptions about the specific task is preferable (provided accuracy is maintained). We hope our introduced constraint helps in reducing the required assumptions, or in improving the performance on those more general grammar induction tasks. Finally, we suspect that the study of child language acquisition would also have to be discussed within the setting of grounding, i.e., with some kind of distant supervision or perception. Although we have not explored the cognitive plausibility of the presented learning and parsing methods, our empirical finding that a severe stack depth constraint (relaxed depth one) often improves the performance when learning from relatively short sentences may become an appealing starting point for exploring computational models of child language acquisition with human-like memory constraints.
\section{Introduction} Randomized bin-picking refers to the problem of automatically picking an object that is randomly stored in a box. If randomized bin-picking is introduced to a production process, we do not need any parts-feeding machines or human workers to arrange the objects before they are picked by a robot. Although a number of studies have been done on randomized bin-picking \cite{Domae2014,SII,Dupuis_08,icra13}, randomized bin-picking is still difficult and is not widely introduced to production processes. Since one of the main reasons is the low success rate of the pick, we have proposed a learning based approach which can automatically increase the success rate \cite{RA-L}. Fig. \ref{intro:RA-L2015} illustrates the randomized bin-picking setup, where we use a dual-arm manipulator with a vision sensor (3D depth sensor) and two-fingered grippers, both attached at the wrist. We first detect the poses of randomly stacked objects by using the visual information obtained from the 3D vision sensor attached at the wrist. Once the objects' poses are obtained, we predict whether the robot can successfully pick one of the objects from the pile. If it is predicted that the robot can successfully pick an object, the robot tries to pick it. In our approach, the success rate is expected to increase as the number of detected objects increases. In the conventional research on randomized bin-picking, we have tried to detect the poses of all objects at every picking trial, in spite of the fact that the configuration of objects during the current picking trial is almost the same as that during the previous picking trial. The configuration of objects during the current picking trial is usually only partially different from that during the previous picking trial, since a finger usually contacts just a few objects during the previous picking trial. To cope with this problem, we propose a new method of object pose detection for randomized bin-picking. In our proposed method, we detect the objects' poses only at the portion of the pile where the visual information differs from the visual information obtained during the previous picking trial. \begin{figure}[t] \centering \includegraphics[width=6cm]{visualPlan_intro.eps} \caption{Overview of our bin-picking system \label{intro:RA-L2015}} \end{figure} In our proposed method, we first obtain the pose of the 3D vision sensor attached at the wrist so as to capture a point cloud of the randomly stacked pile with maximum visibility. Here, based on an occupancy grid map, the maximum visibility of the pile is realized by merging the point cloud captured during the current picking trial with the point cloud captured during the previous picking trial. Then, we present a method for detecting the poses of objects. We compare each segment of the point cloud captured during the current picking trial with that captured during the previous picking trial. If the difference is small, we do not re-estimate the poses of the corresponding objects and can save the time needed for the estimation. \section{Learning Based Bin-Picking Overview} We first briefly explain the learning based bin-picking proposed previously \cite{RA-L}. As shown in Fig. \ref{intro:RA-L2015}, let us consider the case in which the same objects are randomly stored in a box. To pick an object from the pile, a 3D vision sensor (e.g., Xtion PRO) first captures a point cloud of randomly stacked objects.
Then, we try to estimate the poses of randomly stacked objects. Then, we try to pick one of the objects which poses were detected. First among multiple candidates of grasping postures, we solve IK to check the reachability of the robot. Then, for each reachable grasping posture, a discriminator trained through a number of picking trials estimates whether or not the robot can successfully pick an object. Here, the estimation is performed based on the distribution of point cloud included in the swept volume of finger motion as shown in Fig. \ref{fig:sweepvol}. If there are multiple grasping postures which are estimated to successfully pick an object, we consider selecting a grasping posture from multiple candidates according to the value of an index function. Then, the robot actually picks an object according to the selected grasping posture. \begin{figure} \centering \includegraphics[width=9cm]{sweepvol.eps} \caption{Finger swept volume \label{fig:sweepvol}} \end{figure} \section{Sensor Pose Calculation} We assume that the manipulator has at least 6 DOF such that the wrist can make an arbitrary pose within its movable range. The pose of the 3D vision sensor is determined to maximize the visibility of randomly stacked objects so as to precisely estimate the poses of randomly stacked objects. As shown in Fig. \ref{fig:polygon}, let us assume a $n$-faced regular polyhedron sharing its geometrical center with the geometrical center of box's bottom surface. Let us also assume a line orthogonally intersecting a face of the polyhedron and passing through the geometrical center. Let us consider a point along the line where the distance measured from the geometrical center is $l$. We make a 3D vision sensor locating at this point and facing the geometrical center. By discretizing a position of a 3D vision sensor along the line as $l=l_1, l_2, \cdots, l_m$, we can totally assume $m \cdot n$ candidates of a 3D sensor's pose. We consider imposing the following conditions for each candidate: \begin{itemize} \item[] The 3D vision sensor is located above the box's bottom surface. \item[] IK (inverse kinematics) of the arm where the 3D vision sensor is attached at its wrist is solvable. \item[] For a pose of the 3D vision sensor where IK is solvable, no collision occurs among the links and between a link and the environment. \end{itemize} Among a set of 3D sensor's pose satisfying the above conditions, we consider selecting one maximizing the visibility of randomly stacked objects. Here, robotic bin-picking is usually iterated until there is no object remained in a box. For the first picking trial, we consider selecting a 3D sensor's pose minimizing the occluded area of the box's bottom surface as shown in Fig. \ref{fig:occupancy} (a). After the second picking trial, we consider using previous result of measurement to determine the pose of a 3D sensor as shown in Fig. \ref{fig:occupancy} (b) and (c). We consider partitioning the storage area into multiple grid cells \cite{Thrun,Nagata10}. By using the point cloud captured in the previous picking experiment, we mark {\it occupied} to the grid cells including the point cloud. We also mark {\it occluded} to the grid cells occluded by the grid cells marked as {\it occupied}. Pose of a 3D sensor is determined to maximize the number of grid cells marked as {\it occluded} to be visible. Here, through the previous picking experiment, configuration of stacked objects may change since the manipulator contacts the objects. 
However, the method explained in this subsection does not consider the change of configuration. Our method approximates the optimum pose of the 3D sensor by assuming the change of configuration is small. \begin{figure}[thb] \centering \includegraphics[width=6cm]{polygon.eps} \caption{Regular polygon assumed at the geometrical center of bottom surface \label{fig:polygon}} \end{figure} \begin{figure}[thb] \centering \includegraphics[width=12cm]{Occupancy_rev.eps} \caption{Determination of camera pose maximizing the visibility of stacked objects \label{fig:occupancy}} \end{figure} \section{Object Pose Detection} This section explains a method for detecting the pose of randomly stacked objects. For the first picking trial, we consider detecting the poses of as many objects as possible. After the second picking trial, we consider detecting the poses of objects which poses are changed. \subsection{Object Pose Detection for the First Picking Trial} To pick an object from the pile, the 3D vision sensor first captures a point cloud of randomly stored objects. Then, we segment the captured point cloud as shown in Fig. \ref{fig:merge}(a). In this research, we used a segmentation method based on the Euclidian cluster prepared in the PCL (Point Cloud Library) \cite{pcl}. For each segment of point cloud which bounding-box size is similar to the bounding-box size of an object, we try to estimate the pose of an object using a two-step algorithm: first roughly detecting the pose by using the CVFH (Clustered Viewpoint Feature Histogram) \cite{CVFH} and the CRH (Camera Roll Histogram) estimation, and then detecting the precise pose by using the ICP (Iterative Closest Point) estimation method. In a preprocessing process before starting the detection, we prepared 42 partial view of the object model, and precompute the CVFH and CRH features of each view. During the detection, we extract the plenary surface from the point cloud, segment the remaining points cloud, and compute the CVFH and CRH features of each segmentation. Then, we match the precomputed features with the features of each segment and estimate the orientation of the segmentations. The matched segments are further refined using ICP estimation method to ensure good matching. The segmentation that has highest ICP matches and smallest outlier points will be used as the output. For the first picking trial, we usually detect the poses of a number of objects. In such cases, since we have to solve ICP estimation for a number of times, we consider using multiple threads and solving multiple ICP estimation in parallel. \begin{figure} \centering \includegraphics[width=12.cm]{MergeMethod_rev.eps} \caption{Segmentation of point cloud after the second picking trial \label{fig:merge}} \end{figure} \subsection{Object Pose Detection after the Second Picking Trial} After the second picking trial, we consider using the current point cloud together with the previously captured one. If a part of the previously captured point cloud is similar to the current one, we do not need to calculate the object's pose belonging to the part of point cloud and can save the time needed to calculate the objects' poses. In a picking task, after a 3D sensor capture a point cloud of randomly stacked objects, a robot manipulator tries to pick an object from the pile. The configuration of objects after a robot manipulator tries to pick an object is usually partially different from the configuration before the picking trial. 
If the previously captured point cloud is partially similar to the current point cloud, we consider merging that part of the previously captured point cloud into the current one. By merging a part of the previous point cloud into the current one, the occluded area of the point cloud is expected to become smaller. The algorithm for merging the point clouds is outlined in Fig. \ref{fig:merge} and Algorithm 1. Let $\bar{P} = (\bar{p}_1, \bar{p}_2, \cdots, \bar{p}_m)$ and $P = (p_1, p_2, \cdots, p_n)$ be the previously captured point cloud and the current one, respectively. Let also $\bar{P}_1, \bar{P}_2, \cdots$, and $\bar{P}_s$ be the segments of the previous point cloud. An overview of the merging algorithm is given in the following. Fig. \ref{fig:merge} (a) shows the segmented point cloud obtained during the previous picking trial. On the other hand, Fig. \ref{fig:merge} (b) shows the current point cloud, where the configuration of objects is partially different from the previous one. As shown in Fig. \ref{fig:merge} (c), for each point included in the current point cloud, we search for the point in the previous point cloud with the minimum distance to it (lines 6 and 7). We further find the segment of the previous point cloud to which this nearest point belongs (line 8). For each segment of the previous point cloud, we introduce two integer counters ${\rm near}(i)$ and ${\rm far}(i)$ expressing the number of points assigned to the segment $\bar{P}_i$ whose minimum distance is smaller and larger, respectively, than the threshold ${\rm MinDistance}$ (lines 9 and 10). We determine whether or not we merge the segment $\bar{P}_i$ into the point cloud $P$ depending on the ratio between ${\rm far}(i)$ and ${\rm near}(i)$. \bigskip \noindent {\bf Algorithm 1}\ \ Merging method between two point clouds\\ \\ 1. for $i \leftarrow 1:s$ \\ 2. \hspace*{0.5cm} near$(i)=0$ \\ 3. \hspace*{0.5cm} far$(i)=0$ \\ 4. end for\\ 5. for $j \leftarrow 1:n$ \\ 6. \hspace*{0.5cm} $d \leftarrow {\rm min}(|\bar{p}_1 - p_j| , \cdots, |\bar{p}_m-p_j|)$ \\ 7. \hspace*{0.5cm} $k \leftarrow {\rm argmin}(|\bar{p}_1 - p_j| , \cdots, |\bar{p}_m-p_j|)$ \\ 8. \hspace*{0.5cm} $t \leftarrow {\rm SegmentNumber}(\bar{p}_k)$ \\ 9. \hspace*{0.5cm} if $ d < {\rm MinDistance}$ then : ${\rm near}(t) \leftarrow {\rm near}(t) + 1$ \\ 10. \hspace*{0.43cm} else : ${\rm far}(t) \leftarrow {\rm far}(t) + 1$ \\ 11. end for\\ 12. for $i \leftarrow 1:s$\\ 13. \hspace*{0.43cm} if $\frac{{\rm far}(i)}{{\rm near}(i)} < {\rm Threshold}$, then $P \leftarrow {\rm Merge}(P, \bar{P}_i)$\\ 14. end for \bigskip We further segment the merged point cloud. For each segment of the point cloud, we calculate the distance between a point in the segment and the object whose pose was estimated during the previous picking trial. If the distance is less than the threshold, we reuse the pose estimated during the previous pick. On the other hand, if the distance is larger than the threshold, we newly estimate the pose of the object by using the two-step algorithm based on the CVFH and CRH estimation and the ICP algorithm. \section{Experiment} We performed experiments on bin-picking. As shown in Fig. \ref{fig:segmentation}(a), we randomly placed nine objects in a box. We put the nine objects close to each other such that the finger contacts a neighboring object when picking the target one. In the experiment, we performed the picking trial three times. After the three picking trials, we additionally captured the visual information. Fig.
\ref{fig:capture} shows the grid cells of captured point cloud during a series of picking tasks where the red cells include the newly captured point cloud while the green cells include the previously captured point cloud. We can see that object recognition is performed only for the object where red cells are included. Fig. \ref{fig:camera} shows the pose of 3D vision sensor during a series of picking task by using the dual-arm industrial manipulator HiroNX. \begin{figure} \centering \includegraphics[width=9cm]{segmentation.eps} \caption{Estimation of objects' pose \label{fig:segmentation}} \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{Capture_rev.eps} \caption{Grid cells of captured point cloud \label{fig:capture}} \end{figure} \begin{figure} \centering \includegraphics[width=8cm]{CameraPose_rev.eps} \caption{Pose of 3D sensor during a series of picking task \label{fig:camera}} \end{figure} \section{Conclusions} In this paper, we discussed the visual recognition system for learning based randomized bin-picking. We first explained the view planning method to maximize the visibility of randomly stacked objects. Then, since randomized bin-picking usually estimates the pose of a number of objects, we relaxed the computational cost of the object pose detection by using the visual information on randomly stacked objects captured during the current picking task together with the visual information captured during the previous picking tasks. Through experimental results, we confirmed that the computational cost of the object recognition is reduced. Here, in our visual recognition of randomly stacked objects, we used the conventional Euclidian cluster based method to segment the stacked objects. Using more advanced method on segmentation is considered to be our future topic. Also, performing experiment for different shaped objects is also considered to be our future research topic. \input{bib} \end{document}
\section{Introduction} As the crowd density increases in public transportation hubs, security inspection has become more and more important in protecting public safety. X-ray scanners, which are usually adopted to scan luggage and generate complex X-ray images, play an important role in the security inspection scenario. However, security inspectors struggle to accurately detect prohibited items after long periods of highly-concentrated work, which may cause severe danger to the public. Therefore, it is imperative to develop a rapid, accurate and automatic detection method. Fortunately, the innovation of deep learning \cite{qin2020bipointnet,Qin_2020_CVPR,dualCVPR2021,DBLP:journals/corr/abs-2005-09257,zhang2018hcic,li2021multi}, especially the convolutional neural network, makes it possible to accomplish this goal by casting it as an object detection task in computer vision \cite{lecun2015deep, nah2017deep, young2018recent, hua2018pointwise}. However, different from traditional detection tasks, in this scenario the items within a piece of luggage are randomly overlapped, so most areas of objects are occluded, resulting in heavy noise in X-ray images. This characteristic leads to a strong requirement for high-quality datasets and for models with satisfactory performance on this task. Regarding datasets, to the best of our knowledge, there are only three released X-ray benchmarks, namely GDXray \cite{mery2015gdxray}, SIXray \cite{miao2019sixray} and OPIXray \cite{WeiOccluded2020}. Both GDXray and SIXray are constructed for the classification task, and the images of OPIXray are synthetic. Besides, the categories and quantities of labeled instances in the three datasets are far from meeting the requirements of real-world applications. We make a detailed comparison in Table \ref{dataset-comparison}. Regarding models, traditional CNN-based models \cite{xiao2018deep,fu2018refinet,qin2020forward} trained on common detection datasets fail to achieve satisfactory performance in this scenario because, different from natural images \cite{cheng2017focusing, shi2017detecting} with simple visual information, X-ray images \cite{uroukov2015preLIMinary, mery2016modern, mery2017logarithmic} are characterized by the lack of strong identification properties and by heavy noise. This urgently requires researchers to make breakthroughs in both datasets and models. To address the above drawbacks, in this work we contribute the largest high-quality dataset for prohibited items detection in X-ray images, named the \textbf{Hi}gh-quality \textbf{X-ray} (HiXray) dataset, which contains 102,928 labeled instances of 8 common categories, such as lithium battery, liquid, \etc. All of these images are gathered from real-world daily security inspections in an international airport. Thus, the categories, quantities and locations of prohibited items are in line with the data distribution in real-world scenarios. Besides, each instance is manually annotated by professional inspectors from the international airport, guaranteeing accurate annotations. In addition, our HiXray dataset can serve the evaluation of various detection tasks including small and occluded object detection, \etc. For accurate prohibited items detection, we present the \textbf{L}ateral \textbf{I}nhibition \textbf{M}odule (LIM), which is inspired by the fact that humans recognize these items by ignoring irrelevant information and focusing on identifiable characteristics, especially when objects are overlapped with each other.
LIM consists of two core sub-modules, namely Bidirectional Propagation (BP) and Boundary Activation (BA). BP filters the noise information to suppress the influence from the neighbor regions to the object regions and BA activates the boundary information as the identification property, respectively. Specifically, BP eliminates noises adaptively through the bidirectional information flowing across layers and BA captures the boundary from four directions inside each layer and aggregates them into a whole outline. HiXray dataset and LIM model provide a new and reasonable evaluation benchmark for the community, and helps make a wider breadth of real-world applications. The main contributions of this work are as follows: \begin{itemize} \item We present the largest high-quality dataset named HiXray for X-ray prohibited items detection, providing a new and reasonable evaluation benchmark for the community. We hope that contributing this dataset can promote the development of this issue. \item We propose the LIM model, which exploits the lateral inhibition mechanism to improve the detecting ability for accurate prohibited items detection, inspired by the intimate relationship between deep neural networks and biological neural networks. \item We evaluate LIM on the HiXray and OPIXray datasets and the results show that LIM can not only be versatile to SOTA detection methods but also improve the performance of them. \end{itemize} \section{Related Work} \textbf{Prohibited Items Detection in X-ray Images.} X-ray imaging offers powerful ability in many tasks such as medical image analysis \cite{guo2019improved, chaudhary2019diagnosis, lu2019towards} and security inspection \cite{miao2019sixray, huang2019modeling}. As a matter of fact, obtaining X-ray images is difficult, so rare studies touch security inspection in computer vision due to the lack of specialized high-quality datasets. \begin{table*}[tp!] \begin{center} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \setlength{\tabcolsep}{1.5mm} { \small \begin{tabular}{lccccccccc} \toprule \multirow{2}{*}{Dataset} & \multirow{2}{*}{Year} & \multirow{2}{*}{Category} & \multirow{2}{*}{$N_{p}$} & \multicolumn{3}{c}{Annotation} & \multirow{2}{*}{Color} & \multirow{2}{*}{Task} & \multirow{2}{*}{Data Source}\\ \cmidrule{5-7} & && & Bounding Box & Number & Professional & & &\\ \midrule GDXray \cite{mery2015gdxray} & 2015 & 3 & 8,150 & \footnotesize\Checkmark & 8,150 & \footnotesize\XSolidBrush & Gray-scale & Detection & Unknown\\ SIXray \cite{miao2019sixray} & 2019 & 6 & 8,929 & \footnotesize\XSolidBrush& \footnotesize\XSolidBrush& \footnotesize\XSolidBrush&RGB & Classification & Subway Station\\ OPIXray \cite{WeiOccluded2020} & 2020 & 5 & 8,885 & \footnotesize\Checkmark & 8,885 & \footnotesize\Checkmark &RGB & Detection & Artificial Synthesis\\ \midrule \textbf{HiXray} &\textbf{2021}&\textbf{8}&\textbf{45,364} & \footnotesize\Checkmark & \textbf{102,928} & \footnotesize\Checkmark & RGB& Detection& Airport\\ \bottomrule \end{tabular} } \end{center} \caption{Comparison of existing open-source X-ray datasets. $N_{p}$ refers to the number of images containing prohibited items. 
In our HiXray dataset, some images contain more than one prohibited item and every prohibited item is located with a bounding-box annotation, causing the number of annotation is greater than $N_{p}$.} \label{dataset-comparison} \end{table*} Several recent efforts \cite{akcay2017evaluation, mery2015gdxray, akcay2018using, miao2019sixray, liu2019deep, WeiOccluded2020} have been devoted to constructing such datasets. A released benchmark, GDXray \cite{mery2015gdxray} contains 19,407 gray-scale images, part of which contain three categories of prohibited items including gun, shuriken and razor blade. SIXray \cite{miao2019sixray} is a large-scale X-ray dataset which is about 100 times larger than the GDXray dataset but the positive samples are less than 1\% to mimic a similar testing environment and the labels are annotated for classification. Recently, \cite{WeiOccluded2020} proposed the OPIXray dataset, containing 8,885 X-ray images of 5 categories of cutters. The images of OPIXray dataset are artificial synthetic. Other relevant works \cite{akcay2017evaluation, akcay2018using, liu2019deep} have not made their data available to download. \textbf{Object Detection.} In computer vision area, object detection is one of important tasks, which underpins a few instance-level recognition tasks and many downstream applications. Here we review some works that is the closest to ours. Most of the CNN-based methods can be summarized into two general approaches: one-stage detectors and two-stage detectors. Recently one-stage methods have gained much attention over two-stage approaches due to their simpler design and competitive performance. SSD \cite{liu2016ssd} discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales. YOLO \cite{redmon2016you, redmon2017yolo9000, redmon2018yolov3, bochkovskiy2020yolov4, yolov5} is the collection of a series of well-known methods, which values both real-time and accuracy among one-stage detection algorithms. Moreover, FCOS \cite{tian2019fcos} proposes a fully convolutional one-stage object detector to solve object detection in a per-pixel prediction fashion, analogue to other dense prediction problems. \section{HiXray Dataset} As Table \ref{dataset-comparison} illustrates, the existing datasets are less than satisfactory and thus fail to meet the requirements in real-world applications. In this work, we construct a new high-quality dataset for X-ray prohibited items detection. Then we introduce the construction principles, data properties and potential tasks of the proposed HiXray dataset. \subsection{Construction Principles} We construct the HiXray dataset in accordance with the following five principles: \textbf{Realistic Source}. Considering realistic source can make the data more meaningful for research, we gather the images of the HiXray dataset from daily security inspections in an international airport to ensure the authenticity of data. \textbf{Data Privacy}. We strictly follow the standard de-privacy procedure by deleting private information (name, place, \etc), ensuring that nobody can connect the luggage with owners through the images to guarantee the privacy. \textbf{Extensive Diversity}. HiXray contains the 8 categories of prohibited items such as lithium battery, liquid, lighter, etc., all of which are frequently seen in daily life. \textbf{Professional Annotation}. Objects in X-ray images are difficult to be recognized for people without professional training. 
In HiXray, each instance is manually localized with a box-level annotation by professional security inspectors of the airport, who are very skillful in daily work. \textbf{Quality Control}. We followed the similar quality control procedure of annotation as the famous Pascal VOC \cite{everingham2010pascal}. All inspectors followed the same annotation guidelines including what to annotate, how to annotate bounding, how to treat occlusion, \etc. Besides, the accuracy of each annotation was checked by another inspector, including checking for omitted objects to ensure exhaustive labelling. \subsection{Data Details} \textbf{Instances per category}. HiXray contains 45,364 X-ray images, 8 categories of 102,928 common prohibited items. The statistics are shown in Table \ref{table-para}. \begin{table}[!ht] \begin{center} \setlength{\tabcolsep}{0.5mm}{ \small \begin{tabular}{lccccccccc} \toprule Category & PO1 & PO2 & WA & LA & MP & TA & CO & NL & Total \\ \midrule \footnotesize{Training} & \footnotesize{9,919} & \footnotesize{6,216} & \footnotesize{2,471} & \footnotesize{8,046} & \footnotesize{43,204} & \footnotesize{3,921} & \footnotesize{7,969} & \footnotesize{706} & \footnotesize{82,452} \\ \footnotesize{Testing} & \footnotesize{2,502} & \footnotesize{1,572} & \footnotesize{621} & \footnotesize{1,996} & \footnotesize{10,631} & \footnotesize{997} & \footnotesize{1,980} & \footnotesize{177} & \footnotesize{20,476} \\ \midrule \footnotesize{Total} & \footnotesize{12,421} & \footnotesize{7,788} & \footnotesize{3,092} & \footnotesize{10,042} & \footnotesize{53,835} & \footnotesize{4,918} & \footnotesize{9,949} & \footnotesize{883} & \footnotesize{102,928} \\ \bottomrule \end{tabular} } \end{center} \caption{The statistics of category distribution of HiXray dataset, where PO1, PO2, WA, LA, MP, TA, CO and NL denote ``Portable Charger 1 (lithium-ion prismatic cell)'', ``Portable Charger 2 (lithium-ion cylindrical cell)'', ``Water'', ``Laptop'', ``Mobile Phone'', ``Tablet'', ``Cosmetic'' and ``Nonmetallic Lighter''.} \label{table-para} \end{table} \textbf{Instances per image}. On average there are 2.27 instances per image in HiXray dataset. In comparison SIXray has 1.37 (in positive samples), both OPIXray and GDXray have 1 instance per image on average. Obviously, the larger average number of instances per image brings more contextual information, which is more valuable. The statistics is shown in Table \ref{table-para_items}. \begin{table}[!ht] \begin{center} \setlength{\tabcolsep}{0.9mm}{ \small \begin{tabular}{lcccccccccc} \toprule $N_{i}$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 &10 \\ \midrule \footnotesize{Training} & \footnotesize{12,726} & \footnotesize{10,905} & \footnotesize{6,860} & \footnotesize{3,286} & \footnotesize{1,521} & \footnotesize{602} & \footnotesize{254} & \footnotesize{91} & \footnotesize{35} &\footnotesize{11} \\ \footnotesize{Testing} & \footnotesize{3,227} & \footnotesize{2,722} & \footnotesize{1,705} & \footnotesize{810} & \footnotesize{354} & \footnotesize{145} & \footnotesize{54} & \footnotesize{41} & \footnotesize{8} &\footnotesize{2} \\ \midrule \footnotesize{Total} & \footnotesize{15,953} & \footnotesize{13,627} & \footnotesize{8,565} & \footnotesize{4,096} & \footnotesize{1,875} & \footnotesize{747} & \footnotesize{308} & \footnotesize{132} & \footnotesize{43} &\footnotesize{13} \\ \bottomrule \end{tabular} } \end{center} \caption{The quantity distribution of images containing different numbers of prohibited items. 
Note that $N_{i}$ refers to the number of prohibited items in each image.} \label{table-para_items} \end{table} \textbf{Division of training and testing}. The dataset is partitioned into a training set and a testing set, where the ratio is about 4 : 1. The statistics of category distribution of training set and testing set are also shown in Table \ref{table-para}. \textbf{Color Information}. Different X-ray machine models may have some differences in color imaging, and we adopt one of the most classic color imaging strategy. The colors of objects under X-ray are mainly determined by their chemical composition, which is introduced in detail in Table \ref{X-ray-color}. \begin{table}[!ht] \begin{center} \setlength{\tabcolsep}{4.1mm} \small \begin{tabular}{ccllllc} \toprule Color & \multicolumn{5}{c}{Material} & Typical examples \\ \midrule Orange & \multicolumn{5}{c}{Organic Substances} & Plastics, Clothes \\ Blue & \multicolumn{5}{c}{Inorganic Substances} & Irons, Coppers \\ Green & \multicolumn{5}{c}{Mixtures} & Edge of phones \\ \bottomrule \end{tabular} \end{center} \caption{The color information of different objects under X-ray.} \label{X-ray-color} \end{table} \textbf{Data Quality}. All images are stored in JPG format with a 1200*900 resolution, averagely. The maximum resolution of samples can reach 2000*1040. \subsection{Potential Tasks} Our HiXray dataset can further serve the evaluation of various detection tasks including small object detection, occluded object detection, \etc \textbf{Small Object Detection}. Security inspectors often struggle to find small prohibited items in baggage or suitcase. In our HiXray dataset, there are many small prohibited items. According to the definition of small by SPIE, the size of small object is usually no more than 0.12\% of entire image size. We thus define the small object as the object whose ground-truth bounding box accounts for less than 0.1\% of entire image, while the large object is defined as the object whose ground-truth bounding box takes up more than 0.2\% proportion in the entire image, and the rest is the medium. The images of ``Portable Charger 2'' and ``Mobile Phone'' can be divided into three subsets respectively. The categories distribution is illustrated in Table \ref{table-num}. \begin{table}[!ht] \begin{center} \setlength{\tabcolsep}{3.7mm} \small { \begin{tabular}{lcccc} \toprule Category & Total & Large & Medium & Small \\ \midrule PO2 &2,502 & 587 & 986 & 929 \\ \midrule MP & 10,631 & 3,547 & 4,248 & 2,836 \\ \bottomrule \end{tabular} } \end{center} \caption{The category distribution of ``Portable Charger 2'' and ``Mobile Phone'' (PO2 and MB for short) when the two thresholds are set as 0.1\% and 0.2\%.} \label{table-num} \end{table} \textbf{Occluded Object Detection}. The items in baggage or suitcase are often overlapped with each other, causing the occlusion problem in X-ray prohibited items detection. \cite{WeiOccluded2020} proposed the occluded prohibited items detection task in X-ray security inspection. The occlusion problem also exists in HiXray dataset with large-scale images with more categories and numbers. In order to study the impact brought by object occlusion levels, researchers can divide the HiXray dataset into three (or more) subsets according to different occlusion levels (illustrated in Figure \ref{fig:zhedang}). 
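Returning to the size-based split described under \textbf{Small Object Detection}, the subsets in Table \ref{table-num} can be reproduced from the ground-truth boxes with a simple area-ratio rule. The following Python sketch encodes the 0.1\%/0.2\% thresholds; the function and variable names are ours and are not part of any released toolkit.
\begin{verbatim}
def size_category(box_area, image_area,
                  small_thr=0.001, large_thr=0.002):
    """Assign a ground-truth box to a subset by its area ratio:
    small  : ratio < 0.1% of the image area
    large  : ratio > 0.2% of the image area
    medium : otherwise
    """
    ratio = box_area / float(image_area)
    if ratio < small_thr:
        return "small"
    if ratio > large_thr:
        return "large"
    return "medium"

# Example: a 40x35 px box in a 1200x900 image has a ratio of
# about 0.13%, so it falls into the "medium" subset.
print(size_category(40 * 35, 1200 * 900))
\end{verbatim}
An analogous rule can be used to organize the occlusion-level subsets of Figure \ref{fig:zhedang}, although the occlusion degree has to be annotated rather than computed from the boxes alone.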
\begin{figure}[!ht] \begin{center} \includegraphics[width=0.9\linewidth]{zhedang.pdf} \end{center} \caption{Three occlusion levels on prohibited items in the images of HiXray, where the numbers indicates occlusion levels.} \label{fig:zhedang} \end{figure} \begin{figure*}[!ht] \begin{center} \includegraphics[width=\linewidth]{framework_0315.pdf} \end{center} \caption{The network structure of Lateral Inhibition Module (LIM). Bidirectional Propagation filters noisy information to suppress the influence from neighbor regions to object regions and Boundary Activation activates the boundary as the identification property, respectively.} \label{fig:structure} \end{figure*} \section{The Lateral Inhibition Module} In neurobiology, lateral inhibition disables the spreading of action potentials from excited neurons to neighboring neurons in the lateral direction. We mimic this mechanism by designing a bidirectional propagation architecture to adaptively filter the noisy information generated by the neighboring regions of the prohibited items. Also, lateral inhibition creates contrast in stimulation that allows increased sensory perception, so we activate the boundary information by intensifying it from four directions inside each layer and aggregating them into a whole shape. Therefore, inspired by the mechanism that lateral suppression by neighboring neurons in the same layer making the network more efficient, we propose the Lateral Inhibition Module (LIM). In this section, we will first introduce the network architecture in Section \ref{sec:arch} and further the two core sub-modules, namely Bidirectional Propagation (BP) and Boundary Activation (BA), in Section \ref{sec:BA} and Section \ref{sec:BP}, respectively. \subsection{Network Architecture}\label{sec:arch} Figure \ref{fig:structure} illustrates the architecture of our LIM. It takes a single-scale image of an arbitrary size as input, and outputs proportionally sized feature maps at multiple levels. Similar to FPN \cite{lin2017feature} and some other varieties like PANet \cite{wang2019panet}, this process is independent of the backbone architectures. Specifically, suppose there are $N$ training images $\textbf{X} = \left\{\textbf{x}_1,\cdots,\textbf{x}_N\right\}$ and $L$ convolutional layers in the backbone network. One sample $\textbf{x}\in\textbf{X}$ is fed into the backbone network and computed feed-forwardly, which computes a feature hierarchy consisting of feature maps at several scales with a scaling step of 2. \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{algorithm}[!t] \caption{The Procedure of LIM.} \label{alg:1} \KwIn{The feature map set $\textbf{F}=\{\mathcal{F}^{1}(\textbf{x}),\cdots,\mathcal{F}^{L}(\textbf{x})\}$.} \KwOut{The refined feature map set $\textbf{C}=\{\mathbf{C}^{1},\cdots,\mathbf{C}^{L}\}$.} \For{all $l=1,2,\dots, L$} { Compute $\textbf{A}^l$ based on Eq. $\left(\ref{U_1}\right)$\; \For{direction in fourDirections} { \If{direction is horizontal} { // Avoid column loop for faster speed\; Rotate feature map\; } \For{row $\leftarrow$ 1 to heightOfMap} { Compute $\textbf{B}^l_{ijc}$ based on Eq. $\left(\ref{BA_formular}\right)$\; } } Generate $\textbf{B}^l$ by concatenating all $\textbf{B}^l_{ijc}$\; Compute $\textbf{C}_{\mathrm{t}}^{l}$ based on Eq. $\left(\ref{C_1}\right)$\; Compute $\textbf{C}^l$ based on Eq. $\left(\ref{C_2}\right)$\; } Obtain the feature map set $\textbf{C}$=$\{\mathbf{C}^{1},\cdots,\mathbf{C}^{L}\}$. 
\label{Alg} \end{algorithm} Suppose $\mathcal{F}(\cdot)$ as a composite function of three consecutive operations: Batch Normalization (BN) \cite{ioffe2015batch}, followed by a rectified linear unit (ReLU) \cite{glorot2011deep} and a 3 $\times$ 3 Convolution (Conv). $\mathcal{F}^{l}(\textbf{x})$ is the feature map generated by the $l$-th layer of the network. \textbf{Firstly}, in the left part of BP, the noisy information is adaptively filtered because their propagation is reduced from high-level to low-level feature maps in the top-down pathway. \textbf{Secondly}, the output feature maps are fed into BA. BA refines the feature maps to activate the boundary information by enhance it from four directions and outputs the refined feature maps. \textbf{Thirdly}, similar to the left, the right part of BP reduces the propagation of noisy information from low-level to high-level feature maps through the bottom-up pathway. \textbf{Finally}, the feature maps outputted by each layer of the right of BP combine those of the corresponding layer from the backbone network. And the combined feature maps is conveyed to the following prediction layers. Algorithm \ref{Alg} summarizes the whole process (We add explanations about acceleration operation in code implementation in Algorithm \ref{Alg}) and the details of modules are described in the following sections. \subsection{Bidirectional Propagation}\label{sec:BP} To disable the spreading of noisy information of neighboring regions, we mimic this mechanism by designing the bidirectional propagation architecture. Moreover, we add a dense mechanism to enhance the ability of BP to choose proper information to propagate. As shown in Figure \ref{fig:structure}, for the dense top-down pathway on the left of BP, up-sampling spatially coarser but semantically stronger feature maps from higher pyramid levels hallucinates higher resolution features. These feature maps are enhanced by the corresponding feature maps from the convolutional layers via lateral connections. Each lateral connection merges feature maps of the same spatial size from the convolutional layer and the top-down pathway. The feature map of low convolutional layer is of lower-level semantics, but its activation is more accurately localized as it was sub-sampled fewer times. Further, we construct the dense connections to ensure maximum the filter. Specifically, to preserve the feed-forward nature, $\mathcal{F}^{l}(\textbf{x})$ obtains additional inputs from the feature maps $\mathcal{F}^{l+1}(\textbf{x})$, $\cdots$, $\mathcal{F}^{L}(\textbf{x})$ of all preceding layers and passes on its own feature-maps to the feature maps $\mathcal{F}^{l-1}(\textbf{x})$, $\cdots$, $\mathcal{F}^{1}(\textbf{x})$ of all subsequent layers. Figure \ref{fig:structure} illustrates this layout schematically. We define $\mathcal{U}^m(\cdot)$ as the up-sampling operation ($2^m$ times) and $\mathcal{V}(\cdot)$ as a $1\times1$ convolutional layer to reduce channel dimensions. The process is formulated as follows: \begin{equation} \textbf{A}^{l}=\mathcal{V}\left(\mathcal{F}^{l}(\textbf{x})\right)+\sum_{m=1}^{L-l}\mathcal{U}^{m}\left(\textbf{A}^{l+m}\right) \label{U_1}, \end{equation} where $\textbf{A}^{l}$ refers to the feature map outputted by the $m$-th layer of the left part of BP. Regarding the right part of BP, as the Figure \ref{fig:structure} illustrated, suppose the input feature map $\textbf{B}^{l}$ refers to the feature map whose boundary has been activated in Eq. 
$\left(\ref{BA_formular}\right)$ (Boundary Activation will be introduced in the following section). Similar to the previous definition, $\mathcal{D}^m(\cdot)$ is the down-sampling operation ($2^m$ times). This process can be formulated as follows: \begin{equation} \textbf{C}_{\mathrm{t}}^{l}=\mathcal{V}\left(\textbf{B}^{l}\right)+\sum_{m=1}^{l-1} \mathcal{D}^{m}\left(\textbf{C}_{\mathrm{t}}^{l-m}\right) \label{C_1}, \end{equation} \begin{equation} \textbf{C}^{l}=\textbf{C}_{\mathrm{t}}^{l}+\mathcal{F}^{l}(\textbf{x}) \label{C_2}, \end{equation} where $\textbf{C}_{\mathrm{t}}^{l}$ refers to the output of $l$-th layer of the bottom-up pathway and $\textbf{C}^{l}$ refers to the feature map generated of the $l$-th layer of BP. Finally, we convey $\textbf{C}^{l}$, outputted by LIM, to the following prediction layers. \begin{figure}[tp!] \begin{center} \includegraphics[width=1.0\linewidth]{instance_1.pdf} \end{center} \caption{The schematic diagram of Boundary Activation.} \label{fig:theory_and_instance} \end{figure} \begin{table*}[!t] \begin{center} \setlength{\tabcolsep}{2mm}{ \small \begin{tabular}{l|ccccccccc|cccccc} \hline \multirow{2}{*}{Method} & \multicolumn{9}{c|}{HiXray Dataset (\textbf{Ours})} & \multicolumn{6}{c}{OPIXray Dataset \cite{WeiOccluded2020}}\\ \cline{2-16} & \textbf{AVG}& PO1 & PO2 & WA & LA & MP & TA & CO & NL & \textbf{AVG} & FO & ST & SC & UT & MU \\ \hline \footnotesize{SSD} \cite{liu2016ssd} & 71.4& 87.3 & 81.0 & 83.0 & 97.6 & 93.5 & 92.2 & 36.1 & 0.01 & 70.9 & 76.9 & 35.0 & 93.4 & 65.9 & 83.3 \\ \footnotesize{SSD+DOAM} \cite{WeiOccluded2020} & 72.1& 88.6 & 82.9 & 83.6 & 97.5 & 94.1 & 92.1 & 38.2 & 0.01 & 74.0 & 81.4 & 41.5 & 95.1 & 68.2 & \textbf{83.8} \\ \footnotesize{\textbf{SSD+LIM}} & \textbf{73.1} &\textbf{89.1}&\textbf{84.3}&\textbf{84.0} & \textbf{97.7} & \textbf{94.5} & \textbf{92.4} & \textbf{42.3} & \textbf{0.1} & \textbf{74.6} &\textbf{81.4}&\textbf{42.4} & \textbf{95.9} & \textbf{71.2} & 82.1 \\ \hline \footnotesize{FCOS} \cite{tian2019fcos} & 75.7& 88.6 & 86.4 & 86.8 & 89.9 & 88.9 & 88.9 & 63.0 & 13.3 & 82.0& 86.4 & 68.5 & 90.2 & 78.4 & 86.6 \\ \footnotesize{FCOS+DOAM} \cite{WeiOccluded2020} & 76.2 & 88.6& 87.5 & 87.8 & 89.9 & 89.7 & 88.8 & 63.5 & 12.7 & 82.4 & 86.5 & 68.6 & 90.2 & 78.8 & \textbf{87.7} \\ \footnotesize{\textbf{FCOS+LIM}} & \textbf{77.3} & \textbf{88.9}& \textbf{88.2} & \textbf{88.3} & \textbf{90.0} & \textbf{89.8} & \textbf{89.2} & \textbf{69.8} & \textbf{14.4} & \textbf{83.1} & \textbf{86.6} & \textbf{71.9} & \textbf{90.3} & \textbf{79.9} & 86.8 \\ \hline \footnotesize{YOLOv5} \cite{yolov5} & 81.7& 95.5 & 94.5 & 92.8 & 97.9 & 98.0 & 94.9 & 63.7 & 16.3 & 87.8 & 93.4 & 67.9 & 98.1 & 85.4 & 94.1 \\ \footnotesize{YOLOv5+DOAM} \cite{WeiOccluded2020} & 82.2& 95.9 & 94.7 & 93.7 & 98.1 & 98.1 & 95.8 & 65.0 & 16.1 & 88.0 & 93.3 & 69.3 & 97.9 & 84.4 & \textbf{95.0} \\ \footnotesize{\textbf{YOLOv5+LIM}} & \textbf{83.2}& \textbf{96.1} & \textbf{95.1} & \textbf{93.9} & \textbf{98.2} & \textbf{98.3} & \textbf{96.4} & \textbf{65.8} & \textbf{21.3} & \textbf{90.6} & \textbf{94.8} & \textbf{77.6} & \textbf{98.2} & \textbf{88.9} & 93.8 \\ \hline \end{tabular} } \end{center} \caption{Comparisons of common detection approaches SSD, FCOS and YOLOv5 (for simplicity, we use the lightest YOLOv5s model in the YOLOv5 experiment), and the latest related model DOAM on the HiXray dataset and OPIXray dataset, where the definition of the categories (PO1, PO2, \etc.) can be found in Table \ref{table-para}. 
FO, ST, SC, UT and MU donate ``Folding Knife'', ``Straight Knife'', ``Scissor'', ``Utility Knife'' and ``Multi-tool Knife'' in the OPIXray dataset, respectively. } \label{table-traditional} \end{table*} \subsection{Boundary Activation}\label{sec:BA} To mimic the mechanism that lateral inhibition creates contrast in stimulation that allows increased sensory perception, we activate the boundary information by intensifying it from four directions inside the feature maps outputted by each layer and aggregating them into a whole shape. The schematic diagram is shown in Figure \ref{fig:theory_and_instance}. As is shown in Figure \ref{fig:theory_and_instance}, the key to capturing the boundary of object is to determine whether a position is a boundary-point. Motivated by the schematic diagram, we design the module Boundary Activation to perceive the sudden changes of boundary and its surroundings. Suppose we want to capture the left boundary of the object in the feature map $\textbf{A}^{l} \in \mathbb{R}^{{H}\times{W}\times{C}}$ (the output of left part of Bidirectional Propagation). $\textbf{A}^{l}_{c}$ donates the $c$-th channel of $\textbf{A}^{l}$. Further, $\textbf{A}^{l}_{ijc}$ refers to the location ($i,j$) of the feature map $\textbf{A}^{l}_{c}$. To determine whether there is a sudden change between a position and the left of the point, the right-most point $\textbf{A}^{l}_{iWc}$ traverses to the left. The process of perceiving the left boundary can be formulated as Eq. $\left(\ref {BA_formular}\right)$. \begin{equation} \small \textbf{B}^{l}_{ijc}\! =\!\left\{\! \begin{array}{cc} \textbf{A}^{l}_{iWc}\quad &\text{if } j=W, \\ \\ \max \left\{\!\textbf{A}^{l}_{ijc},\textbf{A}^{l}_{i(j+1)c},\dots,\textbf{A}^{l}_{iWc}\right\}\!\!&\! \text {otherwise,} \end{array}\right. \label{BA_formular} \end{equation} where the $\textbf{B}^{l}_{ijc}$ refers to the location ($i,j$) of $c$-th channel of the feature map $\textbf{B}^{l}$ after Boundary Activation. \section{Experiments} In this section, we conduct comprehensive experiments on HiXray and OPIXray dataset to evaluate the effectiveness of LIM. To the best of our knowledge, HiXray and OPIXray \cite{WeiOccluded2020} are the only two datasets currently available for X-ray prohibited items detection (RGB). First, we verify the effectiveness of LIM by comparing the base and LIM-integrated classic or SOTA detection methods (SSD \cite{liu2016ssd}, FCOS \cite{tian2019fcos} and YOLOv5 \cite{yolov5}). We evaluate all the base detection methods and the LIM-integrated methods on HiXray and OPIXray datasets. Second, we evaluate the superiority of our LIM over other feature pyramid mechanisms by comparing two famous methods FPN \cite{lin2017feature} and PANet \cite{wang2019panet} on the HiXray dataset. Third, we perform an ablation study to thoroughly evaluate each part of LIM. Finally, we conduct the visualization experiment to demonstrate the performance improvement. \subsection{Experiment Setting Details} \textbf{LIM:} LIM is implemented by PyTorch for its high flexibility and powerful automatic differentiation mechanism. The LIM-integrated model refers to the model that we implement this mechanism inside (Section~\ref{s52}). Both FPN and PANet contain feature pyramid mechanism similar to LIM, but they are not plug-in model. Therefore, we refer to their published code and re-implemented the mechanism deployed in SSD (Section~\ref{s53}). Unless specified, we use the following implementation details. 
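Before listing those details, we note that since LIM is implemented in PyTorch, the core boundary-activation step of Eq. (\ref{BA_formular}) reduces to a reversed cumulative maximum along one spatial axis. The sketch below is our own minimal illustration of this step (function and variable names are ours) and is not necessarily identical to the released implementation.
\begin{verbatim}
import torch

def activate_left_boundary(feat):
    """Boundary activation toward the left: every position takes the
    maximum of itself and all positions to its right, i.e. a suffix
    maximum along the width axis.  feat has shape (N, C, H, W)."""
    flipped = torch.flip(feat, dims=[-1])        # scan from the right-most column
    cum = torch.cummax(flipped, dim=-1).values   # running maximum
    return torch.flip(cum, dims=[-1])            # restore the original order

# The other three directions are obtained by flipping or transposing
# the feature map before and after the same operation.
x = torch.randn(1, 8, 16, 16)
print(activate_left_boundary(x).shape)           # torch.Size([1, 8, 16, 16])
\end{verbatim}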
\textbf{Backbone Networks:} The backbone networks of SSD, FCOS and YOLOv5 are VGG16 \cite{simonyan2014very}, ResNet50 \cite{he2016deep} and CSPNet \cite{wang2020cspnet} respectively. For each backbone network, we modify the corresponding network architecture to implement LIM mechanism. \textbf{Parameters:} All experiments of LIM and baselines are optimized by the SGD optimizer and the initial learning rate is set to 0.0001. The momentum and weight decay are set to 0.9 and 0.0005 respectively. The batch size is set to 32 with shuffle strategy while training. We evaluate the mean Average Precision (mAP) of the object detection to measure the performance of all models fairly. Besides, the IOU threshold measuring the accuracy of the predicted bounding box against the ground-truth is set to 0.5. \subsection{Comparing with SOTA Detection Methods} \label{s52} We verify the effectiveness of LIM by implementing this mechanism to several detection approaches, including traditional SSD, the latest FCOS and YOLOv5. We integrate LIM to the three detection approaches and compare the LIM-integrated methods to the original baselines. In addition, we integrate the latest detection method for security inspection DOAM (in the work of OPIXray dataset) into the three detection approaches above and compare the results with our LIM. The experimental results on HiXray dataset and OPIXray dataset are shown in Table \ref{table-traditional}. Table \ref{table-traditional} demonstrates that in HiXray dataset, the LIM-integrated network improves the average performance by 1.7\%, 1.6\% and 1.5\% over the original base models SSD, FCOS and YOLOv5 respectively. Besides, LIM outperforms DOAM by 1\%, 1.1\% and 1\% with the base model SSD, FCOS and YOLOv5 respectively. In OPIXray dataset, the LIM-integrated network improves the mean performance by 3.7\%, 1.1\% and 2.8\% over the original models SSD, FCOS and YOLOv5 respectively. Besides, LIM outperforms DOAM by 0.6\%, 0.7\% and 2.6\% with the base models SSD, FCOS and YOLOv5 respectively. Note that the performances are particularly low in two classes (CO and NL) for all models in Table \ref{table-traditional}, it is mainly because that, compared to other categories, NL and CO are far more difficult to recognize. For NL, it is very small in size and composed of a small piece of iron and a plastic body. The plastic shown under X-ray appears orange, which almost blends in with the background. For CO, the main reason is that there are big differences in the shapes of cosmetics, such as round and square, which are easy to be confused with other kinds of items. \subsection{Comparing with Feature Pyramid Mechanisms} \label{s53} LIM can be regarded as another feature pyramid method with a novel dense connection mechanism with specific feature enhancement. Therefore, We compare our LIM with the classical feature pyramid mechanism FPN and the variety PANet in different base models. Note that there is the same feature pyramid mechanism of FPN in FCOS and a variety of PANet mechanism in YOLOv5, so we replace the feature pyramid mechanism with our LIM in FCOS and YOLOv5 to verify that our mechanism works better in the base model FCOS and YOLOv5 (the same as Section~\ref{s52}). The experimental results are shown in Table \ref{table-eBFPN}. \begin{figure*}[tp!] \begin{center} \includegraphics[width=\linewidth]{contradistinction_1.pdf} \end{center} \caption{Visualization of the performance of both the baseline SSD and the LIM-integrated model. We select one pair of images of each category. 
The LIM-integrated model clearly outperforms the baseline SSD in identifying and locating prohibited items.} \label{fig:contradistinction} \end{figure*} \begin{table}[!t] \begin{center} \setlength{\tabcolsep}{0.7mm} { \small \begin{tabular}{lccccccccc} \toprule
Method & \textbf{AVG}& PO1 & PO2 & WA & LA & MP & TA & CO & NL \\ \midrule
SSD \cite{liu2016ssd} & 71.4& 87.3 & 81.0 & 83.0 & 97.6 & 93.5 & 92.2 & 36.1 & 0.01 \\ \midrule
+FPN \cite{lin2017feature} & 72.0& 87.4 & 81.5 & 83.2 & 97.9 & 93.9 & 92.2 & 40.3 & 0.02 \\
+PANet \cite{wang2019panet} & 72.0 & 88.3 & 83.2 & 82.8 & \textbf{97.9} & 93.8 & \textbf{92.6}& 37.3 & 0.01 \\
\textbf{+LIM} & \textbf{73.1}& \textbf{89.1}& \textbf{84.3} &\textbf{84.0}& 97.7 & \textbf{94.5} & 92.4 & \textbf{42.3} & \textbf{0.1} \\ \bottomrule
\end{tabular} } \end{center} \caption{Comparison with feature pyramid mechanisms in the base SSD model, where the feature pyramid mechanisms and our LIM are each implemented inside the base model.} \label{table-eBFPN} \end{table} LIM improves over both FPN and PANet by 1.1\% in the base SSD model, over FPN by 1.1\% in the base FCOS model, and over the PANet variant by 1.5\% in the base YOLOv5 model. Further, we observe from Table \ref{table-eBFPN} that LIM improves significantly over FPN on categories such as ``Portable Charger 1'' (0.7\%), ``Portable Charger 2'' (1.9\%) and ``Water'' (1.1\%). The visual information of these three categories, such as object boundaries, is more abundant in their X-ray images, which demonstrates the effectiveness of Boundary Activation in our LIM and verifies the novel dense connection mechanism with specific feature enhancement. \subsection{Ablation Study} In this section, we conduct several ablation studies to investigate our method in depth. We first analyze the effectiveness of the Dense mechanism by implementing the Single-directional Propagation (the left part of the Boundary Propagation) inside the base model. Then, we evaluate the performance of Boundary Propagation alone, i.e., without the boundary information aggregation inside the feature map performed by Boundary Activation. Finally, we add the Boundary Activation module. The experimental results are shown in Table \ref{table-our}. In Table \ref{table-our}, we can observe that the performance of the network with only Single-directional Propagation improves by 0.7\% over the base model, verifying the effectiveness of our dense mechanism. After applying the propagation in the other direction as well, the performance improves by 1.2\% over the base model and by 0.5\% over Single-directional Propagation, which demonstrates the effectiveness of our bidirectional mechanism. Further, Table \ref{table-our} shows that after the integration of our Boundary Activation module, the performance improves by 1.7\% over the base model and by 0.5\% over Boundary Propagation alone, indicating the effectiveness of boundary information aggregation inside the feature map. In conclusion, the ablation studies have verified the validity of each part of our LIM model.
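For clarity, the optimization settings shared by all of the above experiments (the comparison with SOTA detectors, the feature pyramid comparison and the ablation study) can be summarized by a short PyTorch-style sketch. This is only an illustration of the hyper-parameters listed in the experimental settings: \texttt{build\_lim\_detector} and \texttt{XRayDetectionDataset} are hypothetical placeholders and do not refer to our released code.
\begin{verbatim}
import torch
from torch.utils.data import DataLoader

# Placeholders: the actual LIM-integrated detectors and the HiXray/OPIXray
# dataset classes are not reproduced here.
model = build_lim_detector(base="SSD")            # hypothetical helper
train_set = XRayDetectionDataset(split="train")   # hypothetical dataset class

# Optimization settings used for all models.
optimizer = torch.optim.SGD(model.parameters(),
                            lr=1e-4,              # initial learning rate
                            momentum=0.9,
                            weight_decay=5e-4)

# Batch size 32 with shuffling during training.
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Evaluation: mAP with an IoU threshold of 0.5 against the ground truth.
IOU_THRESHOLD = 0.5
\end{verbatim}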
\begin{table}[!t] \begin{center} \setlength{\tabcolsep}{1mm} { \small \begin{tabular}{lccccccccc} \toprule
Method& \textbf{AVG} & PO1 & PO2 & WA & LA & MP & TA & CO & NL \\ \midrule
SSD \cite{liu2016ssd} & 71.4& 87.3 & 81.0 & 83.0 & 97.6 & 93.5 & 92.2 & 36.1 & 0.01 \\ \midrule
+SP & 72.1& 87.9& 82.3 & 83.8& 97.9 & 92.4 & 92.6 & 38.8 & 0.63 \\
+BP & 72.6& 88.1& 83.4 & 83.9 & \textbf{97.8} & 93.8 & \textbf{92.8} & 40.3 & 0.03 \\
+BP+BA & \textbf{73.1}&\textbf{89.1}&\textbf{84.3}&\textbf{84.1} & 97.7 & \textbf{94.5} & 92.4 & \textbf{42.3} & \textbf{0.1} \\ \bottomrule
\end{tabular} } \end{center} \caption{Results of the ablation study. In this table, SP refers to the single-directional dense propagation (the left part of Boundary Propagation), BP refers to Boundary Propagation and BA refers to Boundary Activation.} \label{table-our} \end{table} \subsection{Visualization} In this section, we visualize the recognition and localization accuracy in Figure \ref{fig:contradistinction} and the effectiveness of LIM compared with traditional boundary-enhancement methods in Figure \ref{fig:visual_boundary}. Figure \ref{fig:contradistinction} shows that the LIM-integrated model achieves a significant improvement over the baseline. In columns 1, 2, 5, 6 and 8, the detection boundaries of prohibited items produced by the base SSD model are not precise enough, and the LIM-integrated model performs clearly better. In column 3, the cosmetic escapes detection by the base SSD model but is caught by the LIM-integrated model with a confidence of 91\%. In column 7, the base SSD model detects only one prohibited item while there are two, but both of them are detected by LIM. Figure \ref{fig:visual_boundary} illustrates the effectiveness of our LIM and of traditional boundary-enhancement methods, including DOAM \cite{WeiOccluded2020}, EEMEFN \cite{zhu2020eemefn}, \etc. \begin{figure}[!ht] \begin{center} \includegraphics[width=\linewidth]{visual_bound.pdf} \end{center} \caption{Visualization of the effectiveness of our LIM.} \label{fig:visual_boundary} \end{figure} \section{Conclusion} In this paper, we investigate prohibited items detection in X-ray security inspection, which plays an important role in protecting public safety. However, this area has not been widely studied due to the lack of specialized public datasets. To facilitate research in this field, we construct and release a dataset of high-quality X-ray images for prohibited items detection, namely HiXray, containing 102,928 common prohibited items in 8 categories. All images are gathered from real-world scenarios and manually annotated by professional inspectors. In addition, we propose the Lateral Inhibition Module (LIM) to address the problem that the items to be detected usually overlap with stacked objects during X-ray imaging. Inspired by the lateral suppression mechanism of neurobiology, LIM eliminates the influence of noisy neighboring regions on the object regions of interest and activates the boundary of items by intensifying it. We comprehensively evaluate LIM on the HiXray and OPIXray datasets and the results demonstrate that LIM can improve the performance of SOTA detection methods. We hope that this high-quality dataset and the LIM model can promote the rapid development of prohibited items detection in X-ray security inspection.
\section*{Acknowledgements} This work was supported by the National Natural Science Foundation of China (62022009, 61872021), the Beijing Nova Program of Science and Technology (Z191100001119050), and the Research Foundation of iFLYTEK, P.R. China. {\small \bibliographystyle{ieee_fullname}
\section{Introduction}\label{sec:intro} For a random variable $X$ with density $f$ its R\'enyi entropy of order $\alpha \in (0,\infty) \setminus \{1\}$ is defined as \[ h_\alpha(X)=h_\alpha(f) = \frac{1}{1-\alpha} \log\left( \int f^\alpha(x) \mathrm{d} x \right), \] assuming that the integral converges, see \cite{R61}. If $\alpha \to 1$ one recovers the usual Shannon differential entropy $h(f)=h_1(f)=-\int f \ln f$. Also, by taking limits one can define $h_0(f)=\log|\supp f|$, where $\supp f$ stands for the support of $f$, and $h_\infty(f)=-\log\|f\|_\infty$, where $\|f\|_\infty$ is the essential supremum of $f$. It is a well-known fact that for any random variable one has \[ h(X) \leq \frac12 \log \var(X) + \frac12 \log(2 \pi e) \] with equality only for Gaussian random variables, see e.g. Theorem 8.6.5 in \cite{CT06}. The problem of maximizing R\'enyi entropy under fixed variance has been considered independently by Costa, Hero and Vignat in \cite{CHV03} and by Lutwak, Yang and Zhang in \cite{LYZ05}, where the authors showed, in particular, that for $\alpha \in (\frac13,\infty) \setminus \{1\}$ the maximizer is of the form \[ f(x) = c_0(1+(1-\alpha)(c_1 x)^2)_+^{\frac{1}{\alpha-1}}, \] which will be called the \emph{generalized Gaussian density}. Any density satisfying $f(x) \sim x^{-3}(\log x)^{-2}$ shows that for $\alpha \leq \frac13$ the supremum of $h_\alpha$ under fixed variance is infinite. One may also ask for reverse bounds. However, the infimum of the functional $h_\alpha$ under fixed variance is $-\infty$ as can be seen by considering $f_n(x)=\frac{n}{2}\mathbf{1} _{[1,1+n^{-1}]}(|x|)$ for which the variance stays bounded whereas $h_\alpha(f_n) \to -\infty$ as $n \to \infty$. Therefore, it is natural to restrict the problem to a certain class of densities, in which the R\'enyi entropy remains lower bounded in terms of the variance. In this context it is natural to consider the class of log-concave densities, namely densities having the form $f=e^{-V}$, where $V:\mathbb{R} \to (-\infty,\infty]$ is convex. In \cite{MNT21} it was proved that for any symmetric log-concave random variable one has \[ h(X) \geq \frac12 \log \var(X) + \frac12 \log 12 \] with equality if and only if $X$ is a uniform random variable. In the present article we shall extend this result to general R\'enyi entropy. Namely, we shall prove the following theorem. \begin{theorem}\label{thm:main} Let $X$ be a symmetric log-concave random variable and $\alpha > 0$, $\alpha \neq 1$. Define $\alpha^*$ to be the unique solution to the equation $\frac{1}{\alpha-1}\log \alpha= \frac12 \log 6$ ($\alpha^* \approx 1.241$). Then \[ h_{\alpha}(X) \geq \frac12 \log \var(X) + \frac12 \log 12 \text{ \qquad for } \alpha \leq \alpha^* \] and \[ h_{\alpha}(X) \geq \frac12 \log \var(X) + \frac12 \log2+\frac{\log\alpha}{\alpha-1}\text{ \qquad for } \alpha \geq \alpha^*. \] For $\alpha < \alpha^*$ equality holds if and only if $X$ is a uniform random variable on a symmetric interval, while for $\alpha > \alpha^*$ the bound is attained only for the two-sided exponential distribution. When $\alpha=\alpha^*$, the two previously mentioned densities are the only cases of equality. \end{theorem} The above theorem for $\alpha <1$ trivially follows from the case $\alpha=1$ as already observed in \cite{MNT21} (see Theorem 5 therein). This is due to the monotonicity of R\'enyi entropy in $\alpha$.
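The value of $\alpha^*$ is easy to obtain numerically. The following short sketch (a numerical illustration only, which plays no role in the proofs) solves the defining equation and checks that the two lower bounds of Theorem \ref{thm:main} coincide at $\alpha=\alpha^*$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# alpha* solves log(alpha)/(alpha - 1) = (1/2) log 6
f = lambda a: np.log(a)/(a - 1.0) - 0.5*np.log(6.0)
alpha_star = brentq(f, 1.0001, 3.0)
print(alpha_star)                     # ~ 1.2411, i.e. alpha* ~ 1.241

# at alpha = alpha* the two bounds of the theorem agree:
bound_unif = 0.5*np.log(12.0)
bound_exp  = 0.5*np.log(2.0) + np.log(alpha_star)/(alpha_star - 1.0)
print(abs(bound_unif - bound_exp))    # ~ 0
\end{verbatim}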
As we can see, the case $\alpha \in [1,\alpha^*]$ of Theorem \ref{thm:main} is a strengthening of the main result of \cite{MNT21}, as in this case $h_\alpha(X) \leq h(X)$ and the right-hand sides are the same. This article is organized as follows. In Section \ref{sec:reduction} we reduce Theorem \ref{thm:main} to the case $\alpha=\alpha^*$. In Section \ref{sec:degrees-of-freedom} we further simplify the problem by reducing it to \emph{simple} functions via the concept of degrees of freedom. Section \ref{sec:proof} contains the proof for these simple functions. In the last section we derive two applications of our main result. \section{Reduction to the case $\alpha=\alpha^*$}\label{sec:reduction} The following lemma is well known. We present its proof for completeness. The proof of point (ii) is taken from \cite{FMW16}. As pointed out by the authors, it can also be derived from Theorem 2 in \cite{B73} or from Theorem VII.2 in \cite{BM11}. \begin{lemma}\label{lem:log-concavity} Suppose $f$ is a probability density in $\mathbb{R}^n$. \begin{itemize} \item[(i)] The function $p \mapsto \int f^p$ is log-convex on $(0,\infty)$. \item[(ii)] If $f$ is log-concave then the function $p \mapsto p^n \int f^p$ is log-concave on $(0,\infty)$. \end{itemize} \end{lemma} \begin{proof} (i) This is a simple consequence of H\"older's inequality. (ii) Let $\psi(p)=p^n \int f^p(x) \mathrm{d} x$. The function $f$ can be written as $f=e^{-V}$, where $V:\mathbb{R}^n \to (-\infty,+\infty]$ is convex. Changing variables we get $\psi(p)=\int e^{-p V(\frac{z}{p}) }\mathrm{d} z$. For any convex $V$ the so-called \emph{perspective function} $W(z,p)= pV(\frac{z}{p})$ is convex on $\mathbb{R}^n \times (0,\infty)$. Indeed, for $\lambda \in [0,1]$, $p_1,p_2 > 0$ and $z_1, z_2 \in \mathbb{R}^n$ we have \begin{align*} W(\lambda z_1 & + (1-\lambda) z_2, \lambda p_1+(1-\lambda)p_2) = (\lambda p_1+(1-\lambda)p_2) V\left( \frac{\lambda p_1 \frac{z_1}{p_1}+(1-\lambda)p_2 \frac{z_2}{p_2}}{\lambda p_1+(1-\lambda)p_2} \right) \\ & \leq \lambda p_1 V\left(\frac{z_1}{p_1} \right)+(1-\lambda) p_2 V\left(\frac{z_2}{p_2} \right) = \lambda W(z_1,p_1)+(1-\lambda) W(z_2,p_2). \end{align*} Since $\psi(p)=\int e^{-W(z,p)} \mathrm{d} z$, the assertion follows from Pr\'ekopa's theorem \cite{P73}, which says that a marginal of a log-concave function is again log-concave. \end{proof} \begin{remark*} The term \emph{perspective function} appeared in \cite{H-UL93}; however, the convexity of this function was known much earlier. \end{remark*} The next corollary is a simple consequence of Lemma \ref{lem:log-concavity}. The right inequality of this corollary appeared in \cite{FMW16}, whereas the left inequality is classical. \begin{corollary}\label{cor:monot-ent} Let $f$ be a log-concave probability density in $\mathbb{R}^n$. Then for any $p \geq q > 0$ we have \[ 0 \leq h_q(f)-h_p(f) \leq n \frac{\log q}{q-1} - n \frac{\log p}{p-1}. \] In fact the first inequality is valid without the log-concavity assumption. \end{corollary} \begin{proof} To prove the first inequality we observe that due to Lemma \ref{lem:log-concavity} the function defined by $\phi_1(p)=(1-p)h_p(f)$ is convex. From the monotonicity of slopes of $\phi_1$ we get that $\frac{\phi_1(p)-\phi_1(1)}{p-1} \geq \frac{\phi_1(q)-\phi_1(1)}{q-1}$, which together with the fact that $\phi_1(1)=0$ gives $h_p(f) \leq h_q(f)$. Similarly, to prove the right inequality we note that $\phi_2(p) = n \log p + (1-p)h_p(f)$ is concave with $\phi_2(1)=0$.
Thus $\frac{\phi_2(p)-\phi_2(1)}{p-1} \leq \frac{\phi_2(q)-\phi_2(1)}{q-1}$ gives $\frac{n \log p}{p-1}-h_p(f) \leq \frac{n \log q}{q-1}-h_q(f)$, which finishes the proof. \end{proof} Having Corollary \ref{cor:monot-ent} we can easily reduce Theorem \ref{thm:main} to the case $\alpha=\alpha^*$. Indeed, the case $\alpha < \alpha^*$ follows from the left inequality of Corollary \ref{cor:monot-ent} ($h_p$ is non-increasing in $p$). The case $\alpha>\alpha^*$ is a consequence of the right inequality of the above corollary, according to which the quantity $h_\alpha(X)-\frac{\log \alpha}{\alpha-1}$ is non-decreasing in $\alpha$. \section{Reduction to simple functions via degrees of freedom}\label{sec:degrees-of-freedom} The content of this section is a rather straightforward adaptation of the method from \cite{MNT21}. Therefore, we shall only sketch the arguments. \\ By a standard approximation argument it is enough to prove our inequality for functions from the set $\mc{F}_L$ of all continuous even log-concave probability densities supported on $[-L,L]$. Thus, it suffices to show that \begin{equation}\label{eq:inf} \inf \ \{ h_{\alpha^*}(f): \ f \in \mc{F}_L, \ \var(f)=\sigma^2 \} \geq \log \sigma + \frac12 \log 2 + \frac{\log \alpha^*}{\alpha^*-1}. \end{equation} Take $A=\{f \in \mc{F}_L: \var(f)=\sigma^2\}$. We shall show that $\inf_{f \in A} h_{\alpha^*}(f)$ is attained on $A$. Equivalently, since $\alpha^*>1$ it suffices to show that $M=\sup_{f \in A} \int f^{\alpha^*}$ is attained on $A$. We first argue that this supremum is finite. This follows from the estimate $\int f^{\alpha^*} \leq 2L f(0)^{\alpha^*}$ and from the inequality $f(0) \leq \frac{1}{\sqrt{2 \var(f)}} = \frac{1}{\sqrt{2} \sigma}$, see Lemma 1 in \cite{MNT21}. Next, let $(f_n)$ be a sequence of functions from $A$ such that $\int f_n^{\alpha^*} \to M$. According to Lemma 2 from \cite{MNT21}, by passing to a subsequence one can assume that $f_n \to f$ pointwise, where $f$ is some function from $A$. Since $f_n \leq f_n(0)\leq \frac{1}{\sqrt{2}\sigma}$, by the Lebesgue dominated convergence theorem we get that $\int f_n^{\alpha^*} \to \int f^{\alpha^*}=M$ and therefore the supremum is attained on $A$. Now, we say that $f \in A$ is an \emph{extremal point} in $A$ if $f$ cannot be written as a convex combination of two different functions from $A$, that is, if $f = \lambda f_1 + (1-\lambda) f_2$ for some $\lambda \in (0,1)$ and $f_1, f_2 \in A$, then necessarily $f_1=f_2$. It is easy to observe that if $f$ is not extremal, then it cannot be a maximizer of $\int f^{\alpha^*}$ on $A$. Indeed, if $f = \lambda f_1 + (1-\lambda) f_2$ for some $\lambda \in (0,1)$ and $f_1, f_2 \in A$ with $f_1 \ne f_2$, then the strict convexity of $x \to x^{\alpha^*}$ implies \[ \int f^{\alpha^*} = \int(\lambda f_1 + (1-\lambda) f_2)^{\alpha^*} < \lambda \int f_1^{\alpha^*} + (1-\lambda) \int f_2^{\alpha^*} \leq M. \] This shows that in order to prove \eqref{eq:inf} it suffices to consider only the functions $f$ being extremal points of $A$. Finally, according to Steps III and IV of the proof of Theorem 1 from \cite{MNT21} these extremal points are of the form \[ f(x)= c \mathbf{1} _{[0,a]}(|x|)+ce^{-\gamma(|x|-a)}\mathbf{1} _{[a,a+b]}(|x|), \qquad a+b=L, \ c>0, \ a,b,\gamma \geq 0, \] where it is also assumed that $\int f=1$. 
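Although the argument below is analytic, the reduced inequality \eqref{eq:inf} is straightforward to verify numerically for any particular density of the above form. The following sketch (an illustration based on numerical quadrature; it plays no role in the proof) evaluates $h_{\alpha^*}$ and the variance for a few choices of the parameters and checks the bound of Theorem \ref{thm:main}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

alpha = brentq(lambda a: np.log(a)/(a - 1) - 0.5*np.log(6), 1.0001, 3.0)

def deficit(a, b):
    # extremal density with gamma = 1 (allowed by scaling invariance):
    # f(x) = c for |x| <= a,  f(x) = c*exp(-(|x|-a)) for a <= |x| <= a+b
    c = 0.5/(a + 1.0 - np.exp(-b))               # normalization, int f = 1
    f = lambda x: np.where(np.abs(x) <= a, c, c*np.exp(-(np.abs(x) - a)))
    int_f_alpha, _ = quad(lambda x: f(x)**alpha, -(a + b), a + b)
    var, _ = quad(lambda x: x**2*f(x), -(a + b), a + b)
    h = np.log(int_f_alpha)/(1.0 - alpha)
    rhs = 0.5*np.log(var) + 0.5*np.log(2.0) + np.log(alpha)/(alpha - 1.0)
    return h - rhs                               # should be >= 0

for a, b in [(0.0, 1.0), (1.0, 0.5), (0.3, 2.0), (2.0, 3.0)]:
    print(a, b, deficit(a, b))
\end{verbatim}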
\section{Proof for the case $\alpha=\alpha^*$} \label{sec:proof} Due to the previous section, we can restrict ourselves to probability densities $f$ of the form \[ f(x)= c \mathbf{1} _{[0,a]}(|x|)+ce^{-\gamma(|x|-a)}\mathbf{1} _{[a,a+b]}(|x|), \qquad a,b,\gamma \geq 0. \] The inequality is invariant under scaling $f(x) \mapsto \lambda f(\lambda x)$ for any positive $\lambda$, so we can assume that $\gamma=1$ (note that in the case $\gamma=0$ we get equality). We have \[ \int_\mathbb{R} f^\alpha = c^\alpha \int_\mathbb{R} \mathbf{1} _{[0,a]}(|x|)+ c^\alpha \int_\mathbb{R} e^{-\alpha x}\mathbf{1} _{[0,b]}(|x|)=2c^\alpha \left(a+\frac{1-e^{-\alpha b}}{\alpha}\right) \] and thus \[h_\alpha(f)=\frac{1}{1-\alpha}\log{\int_\mathbb{R} f^{\alpha}}=\frac{1}{1-\alpha}\log \left(2c^\alpha \left(a+\frac{1-e^{-\alpha b}}{\alpha}\right) \right). \] Moreover, \[ \var(f)= 2c \int_\mathbb{R} x^2\mathbf{1} _{[0,a]}(x)\mathrm{d} x + 2c \int_\mathbb{R} (x+a)^2 e^{-x} \mathbf{1} _{[0,b]}\mathrm{d} x =2c\left (\frac{a^3}{3}+\int_0^b(x+a)^2e^{-x}\mathrm{d} x \right), \] so our inequality can be rewritten as \[ \frac{1}{1-\alpha^*}\log \left(2c^{\alpha^*} \left(a+\frac{1-e^{-\alpha^* b}}{\alpha^*}\right) \right) +\frac{\log \alpha^*}{1-\alpha^*} \geq \frac12 \log\left(2c\left (\frac{a^3}{3}+\int_0^b(x+a)^2e^{-x}\mathrm{d} x \right) \right) +\frac12 \log 2, \] which is \[ \frac{1}{1-\alpha^*}\log \left(2c^{\alpha^*} \left(a \alpha^* +1-e^{-\alpha^* b}\right) \right) \geq \frac12 \log\left(2c\left (\frac{a^3}{3}+\int_0^b(x+a)^2e^{-x}\mathrm{d} x \right) \right) +\frac12 \log 2. \] The constraint $\int_\mathbb{R} f=1$ gives $c=\frac12 (a+1-e^{-b})^{-1}$. After multiplying both sides by $2$, exponentiating both sides and plugging the expression for $c$ in, we get the equivalent form of the inequality, $G(a,b,\alpha^*) \geq 0$, where \begin{equation}\label{fundamental} G(a,b,\alpha)= 2 (a\alpha +1-e^{-\alpha b})^{\frac{2}{1-\alpha}}(a+1-e^{-b})^{\frac{1-3\alpha}{1-\alpha}} - \left(\frac{a^3}{3}+\int_0^b(x+a)^2e^{-x}\mathrm{d} x \right). \end{equation} We will also write $G(a,b)=G(a,b,\alpha^*)$. To finish the proof we shall need the following lemma. \begin{lemma}\label{lem:tech} The following holds: \begin{itemize} \item[(a)] $\frac{\partial^4 }{\partial a^4} G(a,b) \geq 0$ holds for every $a,b \geq 0$, \item[(b)] $\lim_{a \to \infty} \frac{\partial^3 }{\partial a^3} G(a,b) = 0$ for every $b \geq 0$, \item[(c)] $\lim_{a \to \infty} \frac{\partial^2 }{\partial a^2} G(a,b) \geq 0$ for every $b \geq 0$, \item[(d)] $\frac{\partial }{\partial a} G(a,b) \big|_{a=0} \geq 0$ for every $b \geq 0$, \item[(e)] $G(0,b) \geq 0$ for every $b \geq 0$. \end{itemize} \end{lemma} \noindent With these claims at hand it is easy to conclude the proof. Indeed, one easily gets, one by one, \[ \frac{\partial^3 }{\partial a^3} G(a,b) \leq 0, \qquad \frac{\partial^2 }{\partial a^2} G(a,b) \geq 0, \qquad \frac{\partial }{\partial a} G(a,b) \geq 0, \qquad G(a,b) \geq 0, \qquad b \geq 0. \] The proof of points (d) and (e) relies on the following simple lemma. \begin{lemma}\label{lem:sign} Let $f(x)=\sum_{n=0}^{\infty}a_n x^n$, where the series is convergent for every nonnegative $x$. If there exists a nonnegative integer $N$ such that $a_n \geq 0$ for $n<N$ and $a_n \leq 0$ for $n \geq N$, then $f$ changes sign on $(0,\infty)$ at most once. Moreover, if at least one coefficient $a_n$ is positive and at least one negative, then there exists $x_0$ such that $f(x) > 0$ on $[0,x_0)$ and $f(x) < 0$ on $(x_0,\infty)$. 
\end{lemma} \begin{proof} Clearly the function $f(x)x^{-N}$ is nonincreasing on $(0,\infty)$, so the first claim follows. To prove the second part we observe that for small $x$ the function $f$ must be strictly positive and $f(x)x^{-N}$ is strictly decreasing on $(0,\infty)$. \end{proof} With this preparation we are ready to prove Lemma \ref{lem:tech}. \begin{proof}[Proof of Lemma \ref{lem:tech}] \ (a) This point is the crucial observation of the proof. It turns out that \begin{align*} \frac{\partial^4 G}{\partial a^4}(a,b,\alpha) &= 8\alpha(\alpha+1)(3\alpha-1)(1+a-e^{-b})^{\frac{3\alpha-1}{\alpha-1}}(1+a\alpha-e^{-b\alpha})^{\frac{2}{1-\alpha}} \\ & \qquad \qquad \times \left(\frac{(e^b-\alpha e^{b\alpha}+(\alpha-1)e^{b+b\alpha})}{(\alpha-1)(e^b(a+1)-1)(e^{b\alpha}(a\alpha+1)-1)}\right)^4, \end{align*} which is nonnegative for $\alpha> \frac13$. \\ (b) By a direct computation we have \begin{align*} \frac{\partial^3 G(a,b,\alpha)}{\partial a^3} &= -2 - \frac{4\alpha}{(1-\alpha)^3}(1+a-e^{-b})^{\frac{2}{\alpha-1}}(1+a\alpha-e^{-b \alpha})^{\frac{1-3\alpha}{\alpha-1}} \\ & \quad \times [ (\alpha+1)(3\alpha-1)(1+a\alpha-e^{-b\alpha})^3-2\alpha^3(\alpha+1)(1+a-e^{-b})^3 \\& \qquad \qquad +3\alpha(\alpha+1)(3\alpha-1)(1+a-e^{-b})^2(1+a\alpha-e^{-b\alpha}) \\ & \qquad \qquad +6\alpha(1-3\alpha)(1+a-e^{-b})(1+a\alpha-e^{-b\alpha})^2 ]. \end{align*} When $a$ tends to infinity with $b$ fixed this converges to \[ -2 - \frac{4\alpha}{(1-\alpha)^3} \alpha^{\frac{1-3\alpha}{\alpha-1}}\left( (\alpha+1)(3\alpha-1)\alpha^3 - 2 \alpha^3 (\alpha+1)+3\alpha^2 (\alpha+1)(3\alpha-1) + 6 \alpha^3(1-3\alpha) \right), \] which is $ -2 + 12 \alpha^3 \alpha^{\frac{1-3\alpha}{\alpha-1}} $. If $\alpha=\alpha^*$, using the equality $(\alpha^*)^{\frac{2}{1-\alpha^*}}=\frac16$, we get that this expression is equal to $0$. \\ (c) Again a direct computation yields \begin{align*} \frac{\partial^2 G(a,b,\alpha)}{\partial a^2} &= \frac{4\alpha^2(\alpha+1)(a-e^{-b}+1)\left(1-e^{-b}a^{-1}+a^{-1}\right)^{\frac{2\alpha}{\alpha-1}}\left(\alpha-e^{-\alpha b}a^{-1}+a^{-1}\right)^{-\frac{2\alpha}{\alpha-1}}}{(\alpha-1)^2} \\ & \qquad +\frac{4\alpha(3\alpha-1)(a-e^{-b}+1)\left(1-e^{-b}a^{-1}+a^{-1}\right)^{\frac{2}{\alpha-1}}\left(\alpha-e^{-\alpha b}a^{-1}+a^{-1}\right)^{-\frac{2}{\alpha-1}}}{(\alpha-1)^2} \\ & \qquad +\frac{8\alpha(1-3\alpha)(a-e^{-b}+1)\left(1-e^{-b}a^{-1}+a^{-1}\right)^{\frac{\alpha+1}{\alpha-1}}\left(\alpha-e^{-\alpha b}a^{-1}+a^{-1}\right)^{-\frac{\alpha+1}{\alpha-1}}}{(\alpha-1)^2} \\ & \qquad -2a+2e^{-b}-2. \end{align*} As $a$ tends to infinity, we have \[ (1-e^{-b}a^{-1}+a^{-1})^w=1+w(1-e^{-b}) a^{-1}+o(a^{-1}) \] and \[ (\alpha-e^{-\alpha b}a^{-1}+a^{-1})^w=\alpha^{w}+w(1-e^{-\alpha b})\alpha^{w-1} a^{-1}+o(a^{-1}). \] Using these formulas together with the above expression for the second derivative easily gives \[ \frac{\partial^2 G(a,b,\alpha)}{\partial a^2} = h_1(\alpha)\, a + h_2(b,\alpha) + o(1), \] where \[ h_1(\alpha) = 12 \alpha^{-\frac{2}{\alpha-1}}-2 \] and \[ h_2(b, \alpha) =2(e^{-b}-1) + \frac{4 \alpha \left(\alpha^{\frac{1}{1-\alpha}}-\alpha^{\frac{\alpha}{1-\alpha}}\right)^2}{(\alpha-1)^3}\left( 2 \left(\alpha-1 -\alpha e^{-b}+e^{-b \alpha}\right)+3 \left(1-e^{-b}\right) \alpha (\alpha-1) \right). \] We have $h_1(\alpha^*)=0$.
Moreover, \[ \frac{4 \alpha^* \left((\alpha^*)^{\frac{1}{1-\alpha^*}}-(\alpha^*)^{\frac{\alpha^*}{1-\alpha^*}}\right)^2}{(\alpha^*-1)^3} = \frac{4\alpha^* \left( \frac{1}{\sqrt{6}}- \frac{1}{\sqrt{6} \alpha^*} \right)^2}{(\alpha^*-1)^3} = \frac{2}{3\alpha^*(\alpha^*-1)}. \] Hence, \[ \lim_{a \to \infty} \frac{\partial^2 G(a,b,\alpha^*)}{\partial a^2} = h_2(b,\alpha^*) = \frac{4}{3\alpha^*(\alpha^*-1)} \left( (1-e^{-b}) \alpha^* - (1-e^{-b \alpha^*}) \right) . \] This expression is nonnegative for $b \geq 0$ since the function $h_3(x) = 1-e^{-x}$ is concave, so we have $\frac{1-e^{-b}}{b} = \frac{h_3(b)}{b} \geq \frac{h_3(\alpha^* b)}{\alpha^* b} = \frac{1-e^{-\alpha^* b}}{\alpha^* b}$ as $\alpha^*>1$ (monotonicity of slopes). \\ (e) To illustrate our method, before proceeding with the proof of (d) we shall prove (e), as the idea of the proof of (d) is similar, but the details are more complicated. Our goal is to show the inequality \begin{equation} (1-e^{-\alpha^* b})^{\frac{2}{1-\alpha^*}}(1-e^{-b})^{\frac{1-3\alpha^*}{1-\alpha^*}} \geq 1-\frac{b^2+2b+2}{2}e^{-b}. \label{zero_b} \end{equation} After taking the logarithm of both sides, our inequality reduces to the nonnegativity of \[ \phi(b) = \frac{2}{1-\alpha^*} \log(1-e^{-\alpha^* b}) +\frac{1-3\alpha^*}{1-\alpha^*} \log(1-e^{-b}) - \log\left(1-\frac{b^2+2b+2}{2}e^{-b}\right). \] We have \begin{equation*} \phi'(b)=\frac{2\alpha^*}{(1-\alpha^*)(e^{\alpha^* b}-1)} +\frac{1-3\alpha^*}{(1-\alpha^*)(e^{b}-1)}+\frac{b^2}{b^2+2b-2e^{b}+2}. \end{equation*} It turns out that $\phi'(b)$ changes sign on $(0,\infty)$ at most once. To show this, we first clear out the denominators (they have a fixed sign on $(0,\infty)$), obtaining the expression \begin{equation} 2\alpha^*(b^2+2b-2e^{b}+2)(e^b-1)+ (1-3\alpha^*)(e^{\alpha^* b}-1)(b^2+2b-2e^b+2)+b^2(1-\alpha^*)(e^b-1)(e^{\alpha^* b}-1). \label{logderivative} \end{equation} Now we will apply Lemma $\ref{lem:sign}$ to $\eqref{logderivative}$. That expression can be rewritten as \[-4 \alpha^* \left(\sum_{n=3}^{\infty}\frac{b^n}{n!}\right)\left(\sum_{n=1}^{\infty} \frac{b^n}{n!}\right) +\left(6\alpha^*-2\right)\left(\sum_{n=1}^{\infty} \frac{(\alpha^*b)^n}{n!}\right)\left(\sum_{n=3}^{\infty} \frac{b^n}{n!}\right)+b^2(1-\alpha^*)\left(\sum_{n=1}^\infty \frac{b^n}{n!} \right)\left(\sum_{n=1}^\infty \frac{(\alpha^* b)^n}{n!}\right), \] so the $n$-th coefficient $a_{n}$ in the Taylor expansion is equal to \begin{align*} a_{n} &= (6\alpha^*-2)\left(\sum_{j=1}^{n-3} \frac{(\alpha^*)^j}{j!(n-j)!}\right) - 4\alpha^* \left(\sum_{j=1}^{n-3} \frac{1}{j!(n-j)!}\right) +(1-\alpha^*)\left(\sum_{j=1}^{n-3} \frac{(\alpha^*)^j}{j!(n-2-j)!}\right) \\ & \leq \frac{1}{n!} (6\alpha^*-2)(\alpha^*+1)^n +\frac{1-\alpha^*}{(n-2)!}\left( (\alpha^*+1)^{n-2} - 1-(\alpha^*)^{n-2} \right) \\ & \leq \frac{6}{n!}(\alpha^*+1)^n-\frac{n(n-1)}{30n!}(\alpha^*+1)^n+\frac{8n^2}{n!}(\alpha^*)^{n} . \end{align*} When $n \geq 17$, we have $\frac{n(n-1)}{30} > 7$ and $(\frac{\alpha^*+1}{\alpha^*})^n \geq (\frac85)^n \geq 8 n^2$, so $a_n$ is less than zero for $n \geq 17$. It can be checked (preferably using computational software) that the rest of the coefficients $a_n$ satisfy the pattern from Lemma $\ref{lem:sign}$, with $a_n=0$ for $n \leq 4$, $a_n>0$ for $n=5,6,7$ and $a_n<0$ for $n \geq 8$. This way we have proved that $\phi'(b)$ changes sign at exactly one point $x_0 \in (0, \infty)$. Thus, $\phi$ is first increasing and then decreasing. Since $\phi(0)=0$ and $\lim_{b \to \infty} \phi(b)=0$, the assertion follows.
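As an independent sanity check (not used in the argument), inequality \eqref{zero_b} can also be verified numerically on a grid of values of $b$; a minimal sketch:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# numerical value of alpha*, the solution of log(a)/(a - 1) = (1/2) log 6
a_s = brentq(lambda a: np.log(a)/(a - 1) - 0.5*np.log(6), 1.0001, 3.0)

for b in np.linspace(0.01, 20.0, 2000):
    lhs = (1 - np.exp(-a_s*b))**(2/(1 - a_s)) \
          * (1 - np.exp(-b))**((1 - 3*a_s)/(1 - a_s))
    rhs = 1 - 0.5*(b**2 + 2*b + 2)*np.exp(-b)
    assert lhs >= rhs    # the inequality holds at every grid point
\end{verbatim}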
\\ (d) We have to show that \[ \frac{(1-e^{-b})^{\frac{2\alpha^*}{\alpha^*-1}}(1-e^{-b\alpha^*})^{-\frac{1+\alpha^*}{\alpha^*-1}}}{\alpha^*-1}[(3\alpha^*-1)(1-e^{-b\alpha^*})-2\alpha^*(1-e^{-b})] \geq 1-(b+1)e^{-b}. \] Let $\varphi_1(b)$ be the expression on the left side and $\varphi_2(b)$ on the right side. Both $\varphi_1$ and $\varphi_2$ are positive for $b > 0$, so we can take the logarithm of both sides. We will now show that $(\log(\varphi_1))'-(\log(\varphi_2))'$ changes sign at most once on $(0,\infty)$. We have \begin{align*} (\log(\varphi_1))'-(\log(\varphi_2))'&=\frac{2\alpha^*}{\left(e^b-1\right)(\alpha^*-1)}-\frac{(\alpha^*+1)\alpha^*}{(\alpha^*-1)\left(e^{b\alpha^*}-1\right)} \\& \qquad + \frac{\alpha^*(3\alpha^*-1)e^{b}-2e^{b\alpha^*}\alpha^*}{e^b(1-3\alpha^*)+2\alpha^* e^{b\alpha^*}+(\alpha^*-1)e^{b\alpha^*+b}}-\frac{b}{e^b-b-1}. \end{align*} Multiplying the above expression by the product of denominators does not change the claim, since each of the denominators is positive. After this multiplication we get the expression \begin{align*} & [-\left(e^b-1\right)\left(e^b-1-b \right)(\alpha^*+ 1)\alpha^*+2\left(e^b-1-b\right)\alpha^*\left(e^{b\alpha^*}-1\right)-b\left(e^b-1\right)(\alpha^*-1)\left(e^{b\alpha^*}-1\right)] \\ & \qquad \qquad \times \left(e^b(1-3\alpha^*)+2\alpha^* e^{b\alpha^*}+(\alpha^*-1)e^{b\alpha^*+b}\right) \\ & \qquad + \alpha^* (\alpha^*-1) \left(e^b-1\right)\left(e^b-1-b\right)\left(e^{b\alpha^*}-1\right)\left(e^b(3\alpha^*-1)-2e^{b\alpha^*}\right). \end{align*} Let us consider the Taylor series $\sum_{n \geq 0} a_n b^n$ of this function (it is clear that the series converges to the function everywhere). It can be shown (again using computational software) that the coefficients of this series up to order $9$ are nonnegative and the coefficients of order greater than $9$, but less than $30$, are negative. Now we will show negativity of the coefficients of order at least $30$ (our bound is very crude, so it would not work if we replaced $30$ with a smaller number). Firstly we note that \[ e^b(1-3\alpha^*)+2\alpha^* e^{b\alpha^*}+(\alpha^*-1)e^{b\alpha^*+b} \] has $n$-th Taylor coefficient equal to \[ \frac{1-3\alpha^* + 2(\alpha^*)^{n+1}+(\alpha^*-1)(\alpha^*+1)^n}{n!} \geq \frac{1-3\alpha^* + 2\alpha^*+\alpha^*-1}{n!} = 0, \] so all its coefficients are nonnegative. Thus we can replace the expression in square brackets by $(e^b-1)(e^{b\alpha^*}-1)(5/2- b/5)$ (we discard the first term and bound from above the second and third one) to increase every Taylor coefficient of the main expression. Now we want to show the negativity of the coefficients of order at least $30$ for \[ (e^b-1)(e^{b\alpha^*}-1)[(5/2-b/5)(e^b(1-3\alpha^*)+2\alpha^* e^{b\alpha^*}+(\alpha^*-1)e^{b\alpha^*+b})+\alpha^*(\alpha^*-1)(e^b-b-1)((3\alpha^*-1)e^b-2e^{b\alpha^*})]. \] The expression in square brackets has $n$-th Taylor coefficient $c_n$ equal to zero for $n \in \{0,1\}$, while for $n \geq 2$ it is \begin{align*} c_n &= \frac{5(1-3\alpha^*)}{2n!}+\frac{3\alpha^*-1}{5(n-1)!}+\frac{5(\alpha^*)^{n+1}}{n!}-\frac{2(\alpha^*)^n}{5(n-1)!}+\frac{5(\alpha^*-1)(\alpha^*+1)^n}{2n!}-\frac{(\alpha^*-1)(\alpha^*+1)^{n-1}}{5(n-1)!} \\& \qquad + \alpha^*(\alpha^*-1)(3\alpha^*-1) \frac{2^n-n-1}{n!} -\frac{2\alpha^*(\alpha^*-1)}{n!}((\alpha^*+1)^n-(\alpha^*)^n-n(\alpha^*)^{n-1}) .
\end{align*} Using the bounds \[ \frac{5(1-3\alpha^*)}{2n!} \leq 0, \qquad -\frac{2(\alpha^*)^n}{5(n-1)!} \leq 0, \qquad \alpha^*(\alpha^*-1)(3\alpha^*-1) \frac{2^n-n-1}{n!} \leq \frac{2^n}{n!} \] and \[ \frac{2\alpha^*(\alpha^*-1)}{n!}((\alpha^*)^n+n (\alpha^*)^{n-1}) \leq \frac{(n+1)(\alpha^*)^n}{n!}, \qquad -\frac{(\alpha^*-1)(\alpha^*+1)^{n-1}}{5(n-1)!} \leq -\frac{\frac45 n}{10 n!}(\alpha^*-1)(\alpha^*+1)^n \] we get the following upper bound for $c_n$ for $n \geq 2$ \begin{align*} c_n &\leq \frac{(3\alpha^*-1)}{5(n-1)!}+\frac{5(\alpha^*)^{n+1}}{n!}+\frac{2^n}{n!}+\frac{(n+1)(\alpha^*)^n}{n!} +\frac{(\alpha^*+1)^n(\alpha^*-1)(25-20\alpha^*-4n/5)}{10n!} \\ & \leq \frac{(n+8)(\alpha^*)^n}{n!}+\frac{n+2^{n}}{n!} +\frac{(\alpha^*+1)^n(1-3n)}{200n!}, \end{align*} since $\frac{1}{10}(\alpha^*-1)(25-20 \alpha^*) \leq \frac{1}{200}$ and $\frac{4}{50}(\alpha^*-1) \geq \frac{3}{200}$. This bound works for $n \in \{0,1\}$, too. We have \[ (e^b-1)(e^{b\alpha^*}-1)=\sum_{n=2}^{\infty}b^n \frac{(\alpha^*+1)^n-(\alpha^*)^n-1}{n!}, \] so $(e^b-1)(e^{b\alpha^*}-1)$ has nonnegative coefficients. Now we can bound the Taylor series coefficients $d_n$ of the main expression as follows \begin{equation*} \begin{split} d_n &\leq \frac{1}{n!} \sum_{k=0}^{n-2} \binom{n}{k} \left( (k+8)(\alpha^*)^k + k+2^k +\left(\alpha^*+1\right)^k\frac{1-3k}{200}\right)\left(\left(\alpha^*+1\right)^{n-k}-(\alpha^*)^{n-k}-1\right) \end{split} \end{equation*} Changing the upper limit of the sum from $n-2$ to $n$ increases the sum for $n\geq 30$ -- for $k=n-1$ we have $(\alpha^*+1)^{n-k}-(\alpha^*)^{n-k}-1=0$ and the term for $k=n$ is surely positive for $ n \geq 30$, thus we have \begin{align*} n! d_n \leq \sum_{k=0}^{n} &\binom{n}{k} \left( (k+8)(\alpha^*)^k + k+2^k +\left(\alpha^*+1\right)^k\frac{1-3k}{200}\right)\left(\left(\alpha^*+1\right)^{n-k}-(\alpha^*)^{n-k}-1\right) \leq \\ &\leq (n+8)(2\alpha^*+1)^n+n(\alpha^*+2)^n+(\alpha^*+3)^n +\frac{1}{200}(2\alpha^*+2)^n \\ & \qquad - \frac{3n}{400}(2\alpha^*+2)^{n} + \frac{3n}{200}(2\alpha^*+1)^{n} + \frac{3n}{200}(\alpha^*+2)^{n}, \end{align*} where we neglected all the negative terms except for the term $\sum_{k=0}^n {n \choose k}\frac{-3k}{200} (\alpha^*+1)^n = - \frac{3n}{400}(2\alpha^*+2)^{n} $ and bounded $k$ by $n$ in all the positive terms (whenever $k$ appeared linearly). It is clear that the negative term $-\frac{3n}{400}(2\alpha^*+2)^{n}$ dominates, so $d_n$ is negative when $n$ is sufficiently large. In fact, the expression is negative for $n \geq 30$. It is not hard to prove (again by checking some concrete values numerically and using convexity arguments) that for $n \geq 30$ we have \[ n+8+\frac{3}{200} n < 0.104 \left( \frac{\alpha^*+3}{2\alpha^*+1} \right)^n, \qquad \left(1+\frac{3}{200} \right) n < 0.01 \left( \frac{\alpha^*+3}{2\alpha^*+1} \right)^n, \] so for $n \geq 30$ we have \[ n! d_n < 1.114 (\alpha^*+3)^n - \frac{3n-2}{400} (2\alpha^*+2)^n = (2\alpha^*+2)^n\left( 1.114 \left( \frac{\alpha^*+3}{2\alpha^*+2} \right)^n - \frac{3n-2}{400} \right)<0 . \] From Lemma \ref{lem:sign} we get that $(\log(\varphi_1))'-(\log(\varphi_2))'$ on $(0,\infty)$ is first positive and then negative. This means that $(\log(\varphi_1))-(\log(\varphi_2))$ is first increasing and then decreasing. In order to prove that it is everywhere nonnegative it suffices to check that it is nonnegative when $b \to 0^+$ and $b \to \infty$. The limit when $b \to \infty$ is easily seen to be $0$. To check the limit when $b \to 0^+$ it is enough to check the Taylor expansion of $\varphi_1(b)-\varphi_2(b)$.
Note that \begin{align*} \frac{\varphi_1(b)-\varphi_2(b)}{b^2} & = \left(1- \frac{b}{2} + O(b^2) \right)^{\frac{2\alpha^*}{\alpha^*-1}} (\alpha^*)^{-\frac{1+\alpha^*}{\alpha^*-1}}\left( 1- \frac12 b \alpha^* + O(b^2) \right)^{-\frac{1+\alpha^*}{\alpha^*-1}} \\ & \qquad \times \left(3\alpha^* - \frac{\alpha^*(2+3\alpha^*)}{2} b + O(b^2) \right) - \frac12 + \frac{b}{3} + O(b^2). \end{align*} By using the equality $(\alpha^*)^{\frac{2}{1-\alpha^*}} = \frac16$ we see that the constant term vanishes. In fact \[ \frac{\varphi_1(b)-\varphi_2(b)}{b^2} = \left( \frac13 - (\alpha^*)^{\frac{2}{1-\alpha^*}} \right)b + O(b^2) = \frac16 b + O(b^2). \] \end{proof} \section{Applications} \subsection{Relative $\alpha$-entropy}\label{sec:q-entropy} Recall that if $f_X$ denotes the density of a random variable $X$ then the relative $\alpha$-entropy studied by Ashok Kumar and Sundaresan in \cite{AS15} is defined as \[ I_{\alpha}(X \| Y) = \frac{\alpha}{1-\alpha} \log\left( \int \frac{f_X}{\| f_X\|_\alpha} \left( \frac{f_Y}{\|f_Y\|_\alpha} \right)^{\alpha-1} \right) \] for $\alpha \in (0,1) \cup (1,\infty)$, where $\|f\|_\alpha= (\int |f|^\alpha)^{1/ \alpha}$. We shall derive an analogue of Corollary 5 from \cite{MNT21}. To this end we shall need the following fact. \begin{proposition}[\cite{AS15}, Corollary 13]\label{prop:1} Suppose $\alpha>0$, $\alpha \neq 1$ and let $\mc{P}$ be the family of probability measures such that the mean of the function $T : \mathbb{R} \to \mathbb{R}$ under them is fixed at a particular value $t$. Let the random variable $X$ have a distribution from $\mc{P}$, and let $Z$ be a random variable that maximizes the R\'enyi entropy of order $\alpha$ over $\mc{P}$. Then \[ I_\alpha(X \| Z) \leq h_\alpha(Z) - h_\alpha(X). \] \end{proposition} Combining Proposition \ref{prop:1} with Theorem \ref{thm:main} and using expressions for the R\'enyi entropy and variance of a generalized Gaussian density derived in \cite{LYZ05}, one gets the following corollary. \begin{corollary}\label{cor:relatice-q-entropy} Suppose $\alpha>1$. Let $X$ be a symmetric log-concave real random variable. Let $Z$ be the random variable having generalized Gaussian density with parameter $\alpha$ and satisfying $\var(X)=\var(Z)$. Then $I_\alpha(X \| Z) \leq C(\alpha)$, where \[ C(\alpha) = \log\left( (2\alpha)^{\frac{1}{1-\alpha}} (3\alpha-1)^{- \frac{1}{1-\alpha}}(\alpha-1)^{-\frac12} B\left(\frac12, \frac{\alpha}{\alpha-1} \right) \right)- \min\left( \frac12 \log12 , \frac12 \log 2 + \frac{\log \alpha}{\alpha-1} \right). \] Here $B(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$ stands for the Beta function. \end{corollary} \subsection{Reverse entropy power inequality}\label{sec:reverse-epi} The R\'enyi entropy power of order $\alpha>0$ of a real random variable $X$ is defined as $N_\alpha(X)=\exp(2h_\alpha(X))$. If we combine our Theorem \ref{thm:main} with Theorem 2 from \cite{LYZ05} we get the following sandwich bound for $\alpha>1$ and a symmetric log-concave random variable $X$, \[ C_-(\alpha) \var(X) \leq N_\alpha(X) \leq C_+(\alpha) \var(X), \] where \[ C_-(\alpha) = \left\{\begin{array}{ll} 12 & \alpha \in (1,\alpha^*) \\ 2 \alpha^{\frac{2}{\alpha-1}} & \alpha \geq \alpha^* \end{array} \right., \qquad C_+(\alpha) = \frac{3\alpha-1}{\alpha-1} \left( \frac{2\alpha}{3\alpha-1} \right)^{\frac{2}{1-\alpha}} B\left(\frac12, \frac{\alpha}{\alpha-1}\right)^2 . \] Note that the case of $\alpha \in (\frac13,1]$ was discussed in \cite{MNT21}. We point out that for the upper bound the log-concavity assumption is not needed.
Nevertheless, note that for $\alpha>1$ the so-called generalized Gaussian density, for which the right inequality is saturated, is symmetric and log-concave. We can now easily derive an analogue of Corollary 6 from \cite{MNT21} for $\alpha>1$. \begin{corollary}\label{cor:reverse-epi} For $X,Y$ uncorrelated, symmetric real log-concave random variables one has \[ N_\alpha(X+Y) \leq \frac{C_+(\alpha)}{C_-(\alpha)} \left( N_\alpha(X)+ N_\alpha(Y) \right). \] \end{corollary} \begin{proof} We have \[ N_\alpha(X+Y) \leq C_+(\alpha) \var(X+Y) = C_+(\alpha)(\var(X)+\var(Y)) \leq \frac{C_+(\alpha)}{C_-(\alpha)} \left( N_\alpha(X)+ N_\alpha(Y) \right). \] \end{proof} More information on various forward and reverse forms of the entropy power inequality can be found in the survey article \cite{MMX17}. See also the recent articles \cite{L18} and \cite{MM19}.
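Finally, for orientation, the constants appearing in the sandwich bound are easy to evaluate. The short sketch below (an illustration only) computes $C_-(\alpha)$, $C_+(\alpha)$ and the factor appearing in Corollary \ref{cor:reverse-epi} for a few values of $\alpha$.
\begin{verbatim}
from math import gamma

def beta_fn(x, y):            # B(x,y) = Gamma(x)Gamma(y)/Gamma(x+y)
    return gamma(x)*gamma(y)/gamma(x + y)

ALPHA_STAR = 1.2411           # approximate solution of log(a)/(a-1) = (1/2) log 6

def C_minus(a):
    return 12.0 if a < ALPHA_STAR else 2.0*a**(2.0/(a - 1.0))

def C_plus(a):
    return (3*a - 1)/(a - 1)*(2*a/(3*a - 1))**(2.0/(1.0 - a)) \
           * beta_fn(0.5, a/(a - 1.0))**2

for a in [1.5, 2.0, 3.0]:
    print(a, C_minus(a), C_plus(a), C_plus(a)/C_minus(a))
# e.g. for alpha = 2: C_- = 8, C_+ = 125/9 ~ 13.9, ratio ~ 1.74
\end{verbatim}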
\section{Introduction} \label{introduction} A tidal disruption event (TDE) happens when a star gets so close to a supermassive black hole that it is destroyed by the strong tidal forces of the compact object. Following the disruption, the stellar debris evolves to form a thin and elongated stream that keeps orbiting the black hole. Roughly half of the gas within that stream is bound on highly eccentric orbits while the rest gets unbound and escapes on hyperbolic trajectories \citep{lacy1982,rees1988,phinney1989,evans1989}. As the bound part of the stream comes back to the disruption site, it undergoes complex interactions during which shocks eventually lead to the formation of an accretion disc. Disc formation is primarily driven by self-crossing shocks induced by relativistic apsidal precession \citep{hayasaki2013,dai2015,shiokawa2015,bonnerot2016-circ,sadowski2016}. In addition, this process depends strongly on the black hole spin that can delay the occurrence of the first orbit crossing through Lense-Thirring precession \citep{dai2013,guillochon2015,hayasaki2016-spin} and on the gas cooling efficiency, which determines the geometry of the newly-formed disc and the amount of escaping radiation \citep{jiang2016,bonnerot2017-stream}. The theoretical understanding of this phase of evolution remains however limited by the fact that its study in the most general case is numerically challenging. A few tens of TDE candidates have been discovered so far. They have been observed in various electromagnetic bands, in particular at optical/UV \citep{gezari2009,gezari2012,van_velzen2011,arcavi2014,holoien2014,holoien2016-14li,holoien2016-15oi,blagorodnova2017,hung2017-16axa} and X-ray wavelengths \citep{bade1996,komossa2004,komossa1999-ngc5905,esquej2008,maksym2010,saxton2017} as flares lasting from a few months to several years. While the X-ray component almost certainly comes from gas accreting onto the black hole, the nature of the lower energy signal remains debated with the main possible sources being shocks from the disc formation process \mbox{\citep{lodato2012,piran2015-circ,bonnerot2017-stream}} and reprocessed accretion luminosity by a surrounding gaseous envelope \citep{loeb1997,guillochon2014-10jh,metzger2016,roth2016}. Recently, observations of variability lags and delayed emission in X-ray with respect to the lower energy optical and UV signals have been attributed to the disc formation process, providing the first observational signatures of this phase of evolution \citep{pasham2017,gezari2017-15oi}. However, one major issue when attempting to constrain theoretical models with observational data relates to the lack of emission prior to the onset of disc formation. Several mechanisms have been proposed that produce earlier radiation, but they appear to be either too dim or too short-lived to be easily detected. A faint optical flare can for example result from the recombination of hydrogen within the tidal stream \citep{kasen2010}. Alternatively, radiation can emerge from the strong stellar compression happening at pericenter if the star is disrupted on a deeply plunging trajectory. In this situation, an X-ray shock breakout signal can be emitted but the associated burst has a duration of only a few tens of seconds \citep{kobayashi2004,brassart2008,guillochon2009,brassart2010}. 
In addition, nuclear reactions can be triggered by the compression whose radioactive output results in an optical flare upon reprocessing by the expanding gas distribution, a phenomenon especially promising for white dwarf disruptions \citep{rosswog2008,rosswog2009,macleod2016}. Nevertheless, even for these initially dense objects, it is still debated whether the conditions required for nuclear burning are actually met \citep{tanikawa2017}. Finally, further radiation can originate from strong relativistic precession at pericenter that results in shocks between the leading and trailing edges of an elongated white dwarf, leading to prompt gas accretion \citep{haas2012,evans2015}. TDEs are generally thought to originate from encounters between stars surrounding the black hole that occasionally scatter one of them on a trajectory entering the tidal sphere. For this mechanism, the disruption rate per galaxy is predicted to be $\dot{N} \gtrsim 10^{-4} \, \rm yr^{-1}$ by standard two-body relaxation calculations \citep{magorrian1999,wang2004,stone2016-rates}. However, more exotic dynamical processes exist that are expected to produce a much higher rate of disruptions up to $\dot{N} \approx 1 \, \rm yr^{-1}$. For instance, if the galaxy contains a binary black hole with about a parsec separation, stars can be efficiently funnelled into the disruption radius of the primary through a combination of secular Kozai interactions and scattering by the secondary compact object \citep{chen2009,chen2011}.\footnote{The resulting TDEs would however not be affected by the presence of the secondary black hole, which only happens if the binary reaches separations smaller than around a milli-parsec \citep{coughlin2017,vigneron2018}.} A high rate of TDEs can also be caused by the presence of an eccentric nuclear disc whose stabilizing mechanism involves strong torques able to efficiently deflect stars into plunging trajectories \citep{madigan2017}. Finally, a TDE boost is expected if the stars evolve in a triaxial potential owing to the possibility of chaotic orbits \citep{merritt2004}. Some of the above mechanisms may account for the preference of optical TDEs for rare E+A galaxies \citep{french2016,french2017}. Two TDEs can also happen shortly one after the other when a stellar binary approaches a black hole on a nearly radial orbit. In this situation, \citet{mandel2015} showed that the binary separation can be followed by the sequential tidal disruptions of the two stars and estimated that this mechanism represents around 10\% of all TDEs.\footnote{Recently, \citet{coughlin2018} showed that the same type of double disruption can occur if a stellar binary encounters a binary black hole.} It was proposed that such events could be identified through a double-peaked lightcurve created by the fallback of the two debris streams. However, this feature is unlikely to be observationally distinguishable because the time delay between the disruptions is generally small compared to the timespan of each individual TDE. It remains possible that the overall lightcurve displays a change of slope, but only if either the properties or the amount of mass loss differ significantly between the two stars \citep{mainetti2016}. The above mechanisms produce tidal disruptions with a time delay between them that approaches the duration of a single TDE. This implies that the two events may not be completely independent. 
In this paper, we focus on such double TDEs and explore the possibility of collision between the two streams produced by each individual disruption \textit{before} they come back to the black hole. Staying agnostic about the mechanism at the origin of the double TDE, we analytically derive conditions on the stellar trajectories for streams collision to occur. If the two disruptions follow the tidal separation of a binary, we find that streams collision can happen with a probability of up to $44\%$. Using smoothed-particle-hydrodynamics (SPH) simulations, we confirm the validity of our analytical estimates and demonstrate that streams collision results in the formation of shocks that heat the gas. If radiation is able to promptly escape, this interaction could be detected as a burst of radiation with a luminosity $\sim 10^{43} \, \rm erg\, s^{-1}$ lasting for at least a few days. We argue that this signal could act as a precursor of the main flare of TDEs and therefore be used to get a better handle on the different phases of these events such as the accretion disc formation process from observations. The paper is organized as follows. Section \ref{collision} starts by presenting analytical conditions for streams collision to occur during a double TDE without specifying the mechanism that creates it. These conditions are then used in Section \ref{binary_likelihood} to compute the likelihood of this outcome for a specific mechanism involving the tidal separation of a stellar binary. In Section \ref{simulations}, hydrodynamical simulations of double TDEs are presented to test our analytical conditions and determine the impact of streams collision on the gas evolution. Finally, we discuss the results and present our conclusions in Section \ref{discussion}. \begin{figure*} \includegraphics[width=\textwidth]{fig1.pdf} \caption{Sketch illustrating the analytical treatment used to estimate the conditions for streams collision. The black hole position is marked by the big black dot. The first star to reach its pericenter is depicted by a blue point while the second star is represented by a red point. Their center of mass approaches the black hole along the black solid line following the dotted arrow. After the disruptions, the two streams revolve around the black hole. The blue and red lines show the trajectories of an element of the first and second stream, respectively. In the situation depicted here, the pericenter shift satisfies the condition $\Delta \theta\geq0$ for streams collision. If they evolve on the same plane, the two elements collide at the location of the orange star. This collision point is situated at a true anomaly $\theta_{\rm col}$ and a distance $R_{\rm col}$ from the black hole. If the streams evolve on different orbital planes, these planes intersect along a line that passes through the black hole at a true anomaly $\theta_{\rm int}$. This intersection line makes an angle $\psi = |\theta_{\rm int}-\theta_{\rm col}|$ with the direction connecting the black hole and the collision point. The collision point is located a distance $d$ from the intersection line, moving perpendicular to it. The inset in the upper right corner shows the trajectory of the two streams seen along the direction of the plane intersection line. The two orbital planes are inclined by an angle $i$ that determines the vertical offset $\Delta z$ at the collision point. 
The two elements collide only if $\Delta z<H$, where $H$ denotes the width of the streams.} \label{sketch_collision} \end{figure*} \section{Streams collision} \label{collision} The sequential tidal disruption of two stars results in two debris streams that revolve around the black hole. These streams may interact with each other before they return to pericenter likely resulting in a modification of their dynamics and the emission of an electromagnetic signal that are specific to double TDEs. Remaining agnostic about the origin of the two disruptions, we start by analytically estimating the conditions for such a collision to happen (Section \ref{conditions}) and then determine its spatio-temporal evolution (Section \ref{evolution}) based on the incoming stellar trajectories. For simplicity, we assume that the disrupted stars are identical, with the same mass $M_{\star} = \, \mathrm{M}_{\hbox{$\odot$}} \, m_{\star}$ and radius $R_{\star} = \, \mathrm{R}_{\hbox{$\odot$}} \, r_{\star}$. They also follow parabolic orbits with the same direction of rotation. The stars are tidally disrupted if they reach a distance from the black hole smaller than their common tidal disruption radius \begin{equation} R^{\rm dis}_{\rm t} = R_{\star} \left(\frac{M_{\rm h}}{M_{\star}} \right)^{1/3} = 0.47 \, \, r_{\star} \, M_6^{1/3} \, m_{\star}^{-1/3} \, \, \rm{au}, \label{tidal_disruption_radius} \end{equation} where $M_{\rm h} = 10^6 \, M_6 \, \mathrm{M}_{\hbox{$\odot$}}$ denotes the black hole mass.\footnote{The variable representing the tidal disruption radius has a superscript `dis' to differentiate it from the tidal separation radius defined in equation \eqref{tidal_separation_radius} and whose corresponding variable has a superscript `sep'.} The depth of each encounter is given by the penetration factor \begin{equation} \beta = R^{\rm dis}_{\rm t}/R_{\rm p} \geq 1, \end{equation} where $R_{\rm p}$ denotes the pericenter distance. This factor can differ for the two disruptions, taking two distinct values $\beta_1$ and $\beta_2$. Upon each disruption, an energy spread \begin{equation} \Delta \epsilon = \frac{G M_{\rm h}}{{R^{\rm dis}_{\rm t}}^2}R_{\star}, \label{energy_spread} \end{equation} is imparted to the debris. As a result, the gas distributions evolve into elongated streams within which roughly half of the debris is unbound and escapes on hyperbolic orbits while the rest remains bound on elliptical orbits. The energy of the debris varies between $-\Delta \epsilon$ for the most bound one and $\Delta \epsilon$ for the most unbound. The former has a semi-major axis \begin{equation} a_{\rm min} = \frac{G M_{\rm h}}{2 \Delta \epsilon} = 23 \, \, M_6^{2/3} m^{-2/3}_{\star} r_{\star} \, \rm{au}, \label{semi_major_axis} \end{equation} and returns to pericenter a time \begin{equation} t_{\rm min} = 2 \pi \left(\frac{a_{\rm min}^3}{G M_{\rm h}}\right)^{1/2} = 41 \, M^{1/2}_6 m^{-1}_{\star} r^{3/2}_{\star} \, \mathrm{d}, \label{fallback_time} \end{equation} after the disruption of the star as predicted by Kepler's third law. The energy spread given by equation \eqref{energy_spread} does not depend on the penetration factor of the disruption because it is set at the tidal disruption radius \citep{sari2010,stone2013}. As a result, the range of energies of the debris is the same for the two streams, independently of the values of $\beta_1$ and $\beta_2$. 
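To fix ideas, these characteristic scales are easily evaluated numerically. The short sketch below (an illustration in cgs units, independent of the analysis that follows) reproduces the fiducial values of equations \eqref{tidal_disruption_radius}, \eqref{semi_major_axis} and \eqref{fallback_time} for $M_6 = m_{\star} = r_{\star} = 1$.
\begin{verbatim}
import numpy as np

G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10    # cgs units
au, day = 1.496e13, 86400.0

Mh, Mstar, Rstar = 1e6*Msun, 1.0*Msun, 1.0*Rsun

Rt    = Rstar*(Mh/Mstar)**(1.0/3.0)             # tidal disruption radius
de    = G*Mh*Rstar/Rt**2                        # energy spread Delta epsilon
a_min = G*Mh/(2.0*de)                           # semi-major axis of most bound debris
t_min = 2.0*np.pi*np.sqrt(a_min**3/(G*Mh))      # return time of most bound debris

print(Rt/au, a_min/au, t_min/day)               # ~ 0.47 au, ~ 23 au, ~ 41 d
\end{verbatim}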
A dependence on the penetration factor is however present in the eccentricity of the debris, given by \begin{equation} e_{\rm min} = 1-\frac{2}{\beta}\left(\frac{M_{\rm h}}{M_{\star}}\right)^{-1/3}, \label{most_bound_eccentricity} \end{equation} \begin{equation} e_{\rm max} = 1+\frac{2}{\beta}\left(\frac{M_{\rm h}}{M_{\star}}\right)^{-1/3}, \label{most_unbound_eccentricity} \end{equation} for the most bound and most unbound debris, respectively. This means that these eccentricities differ between the two stars if they have different pericenter distances. Except when otherwise specified, we nevertheless assume that the penetration factors have the same value $\beta$ in this section. \subsection{Conditions for collision} \label{conditions} We now derive analytical conditions for the two debris streams produced by the disruptions to collide with each other before they come back to the black hole. Here, we assume that the two stars have the same penetration factor $\beta =1$ and defer the complication associated with different pericenter distances to Appendix \ref{varying_beta} for clarity. \subsubsection{Coplanar streams} As a first step, we assume that the two streams evolve in the same orbital plane with aligned angular momentum vectors. In this case, the condition for streams collision can be expressed in terms of two quantities, which we refer to as $\Delta t$ and $\Delta \theta$. The first, $\Delta t$, is positive and denotes the time delay between the passages of the two stars at pericenter. In the remainder of the paper, we define the `first' star as the one that reaches pericenter first. Similarly, the star that reaches pericenter second is referred to as the `second' star. The associated streams are named in the same way. Using these definitions, the time delay can be written as \begin{equation} \Delta t = t_{\rm p,2} - t_{\rm p,1} \geq 0, \end{equation} where $t_{\rm p,1}$ and $t_{\rm p,2}$ are the times of arrival at pericenter of the first and second star, respectively. The second quantity, $\Delta \theta$, can either be positive or negative. It is an angle that measures the relative shift in the pericenter location of the two stars. It is computed from \begin{equation} \Delta \theta = \theta_2 - \theta_1, \label{dtheta} \end{equation} where the angles $\theta_1$ and $\theta_2$ measure the pericenter location of the first and second star, respectively, with respect to a reference direction. As a convention, we impose that these angles increase in the common direction of rotation of the stars. The angle given by equation \eqref{dtheta} is also the pericenter shift between any element of the first stream and any element of the second stream since the debris follows ballistic orbits.%
For penetration factors both varying between 1 and 3, the black and red lines move inside the hatched regions of the same colour. The characteristics of each line interior to these areas are explained in Appendix \ref{varying_beta}. If the streams evolve on different orbital planes, they can pass on top of each other instead of colliding. For streams confined by self-gravity ($m=1/4$), collision is avoided in this way inside the grey area of the right panel delimited by the black thick solid line if the collision radius is $R_{\rm col} = 2 \, a_{\rm min}$. The boundary of this region moves downward to the dotted black line if the collision radius is $R_{\rm col} = 10 \, a_{\rm min}$ and upward to the dashed black line if the streams expand homologously ($m=1$). The blue region in the left panel is covered for two disruptions resulting from a previous binary separation in the Keplerian case. The initial conditions of our simulations are indicated in the two planes by the orange circle (model R$\alpha$0) and purple triangles (model R$\alpha$0.1).} \label{planes} \end{figure*} A first condition for the two streams to interact is $\Delta \theta \geq 0$. This means that the pericenter location of the second star is further in the direction of motion than that of the first star. Equivalently, the second stream has a major axis more rotated than that of the first one in this forward direction. As a result, the second stream can catch up with the first stream resulting in a collision between a fraction of their elements before they come back to pericenter.\footnote{In fact, streams collision can also happen for $\Delta \theta<0$ in some special circumstances. One possibility is that the second star passes closer to the black hole than the first. In this case, the shorter time spent close to pericenter by the second stream can cause it to get ahead of the first stream even if the second star was initially delayed. This type of collisions is however less likely than the ones with $\Delta \theta\geq0$ and not considered in the following of the paper.} This is illustrated in Fig. \ref{sketch_collision} where the trajectories of elements of the first and second stream are shown with blue and red arrows, respectively. Since $\Delta \theta \geq 0$, these two trajectories cross at the location of the orange star. Additionally, it takes longer for the first element to reach that position than for the second. These two elements are therefore able to reach the collision point at the same time owing to $\Delta t\geq0$. Assuming that the inequality $\Delta \theta \geq 0$ is verified, streams collision can still be avoided if the pericenter shift $\Delta \theta$ is so large that the second stream is able to pass between the black hole and the most bound debris of the first stream.\footnote{In this case, a collision can still happen very close to pericenter when both streams come back to the black hole vicinity. We do not consider this type of collisions in the rest of the paper since it is unlikely to change the overall dynamics of the streams. However, it could affect the formation of an accretion disc from this gas.} This is only possible if this most bound element has not yet fallen back to pericenter when the second star is disrupted, that is if $\Delta t \leq t_{\rm min}$. The borderline case is a situation where the most bound debris of the first stream interacts with the most unbound debris of the second. 
This corresponds to a region of the $\Delta \theta$-$\Delta t$ plane delimited by a function parametrized by $\theta_{\rm col}$, which denotes the true anomaly at which the collision happens, measured from the pericenter location of the first star (see Fig. \ref{sketch_collision}) and varying between 0 and $2 \pi$. The associated pericenter shift $\Delta \theta_{\rm nc}$ is obtained by imposing that the most unbound element of the second stream reaches the same radial position as the most bound element of the first stream. This condition can be written $(1+e_{\rm min})/(1+e_{\rm min} \cos \theta_{\rm col}) = (1+e_{\rm max})/(1+e_{\rm max} \cos (\theta_{\rm col}-\Delta \theta_{\rm nc}))$, which uses the assumption of equal penetration factors for the two stars and the fact that the collision happens at a true anomaly $\theta_{\rm col}$ for the first stream and $\theta_{\rm col} - \Delta \theta_{\rm nc}$ for the second. The solution of the above condition is \begin{equation} \Delta \theta_{\rm nc} = \theta_{\rm col} - \arccos \bigg\{\frac{1}{e_{\rm max}} \left[\frac{(1+e_{\rm max})(1+e_{\rm min} \cos \theta_{\rm col})}{1+e_{\rm min}} -1\right]\bigg\}. \label{dthetanc} \end{equation} For pericenter shifts larger than this critical value, there is no collision. The solution can be written as in equation \eqref{dthetanc} because the true anomaly of the element of the second stream satisfies the condition $\theta_{\rm col}-\Delta \theta_{\rm nc}<\pi$ due to the fact that it is unbound and only moves outwards. The corresponding time delay is computed by imposing that the two stream elements reach the collision point at the same time, which gives \begin{equation} \Delta t_{\rm nc} = t_1(-\Delta \epsilon,\theta_{\rm col}) - t_2(\Delta \epsilon,\theta_{\rm col} - \Delta \theta_{\rm nc}), \label{dtnc} \end{equation} where $t_1 (\epsilon,\theta)$ and $t_2 (\epsilon,\theta)$ represent the time needed for a gas element of energy $\epsilon$ to reach a position on its orbit corresponding to a true anomaly $\theta$ if it belongs to the first and second stream, respectively. The parametric function defined by equations \eqref{dthetanc} and \eqref{dtnc} traces a line in the $\Delta \theta$-$\Delta t$ plane. It is represented by the thick solid black curve that delimits the grey area in the left panel of Fig. \ref{planes}. Inside this region, the streams do not collide. Outside the grey region, a collision takes place between the two streams. Its outcome then depends on the location in the $\Delta \theta$-$\Delta t$ plane. To understand how, it is first instructive to examine the situation where the most bound debris of the two streams collide with each other. It corresponds to a line in the plane given by a parametric function that can be derived in the same way as above. The condition for the two elements to reach the same radial position at the collision point reduces to $\cos \theta_{\rm col} = \cos (\theta_{\rm col}-\Delta \theta_{\rm mb})$, as obtained by replacing $e_{\rm max}$ by $e_{\rm min}$ in equation \eqref{dthetanc}. As before, $\theta_{\rm col}$ denotes the true anomaly of the first stream element at the collision point. The solution of this equality is \begin{equation} \Delta \theta_{\rm mb} = 2(\theta_{\rm col}-\pi), \label{dthetamb} \end{equation} that is the value of the pericenter shift for which the most bound parts of the two streams collide with each other. 
The solution takes this form because the collision occurs while the element of the first stream moves inwards with $\theta_{\rm col}>\pi$ and that of the second stream moves outwards with $\theta_{\rm col}-\Delta \theta_{\rm mb}<\pi$. The associated time delay is \begin{equation} \Delta t_{\rm mb} = t_1(-\Delta \epsilon,\theta_{\rm col}) - t_2(-\Delta \epsilon,\theta_{\rm col}-\Delta \theta_{\rm mb}), \label{dtmb} \end{equation} which is obtained by imposing that the two most bound elements reach the collision point at the same time. The function defined by equations \eqref{dthetamb} and \eqref{dtmb} is shown with a thick solid red line in the left panel of Fig. \ref{planes}. Along this curve, a collision takes place between the most bound debris of the streams. It is now possible to estimate the outcome of the streams collision as a function of $\Delta t$ and $\Delta \theta$. One particularly important characteristic is the collision strength, which depends on the fraction of streams involved and their relative speed when they collide. It varies with the position in the $\Delta \theta$-$\Delta t$ plane of Fig. \ref{planes} with respect to the red line determined above. Along this line, the two most bound elements collide with each other, implying that the bound fraction of streams involved in the collision is maximized at fixed $\Delta \theta$. Above it, part of the first stream bound debris avoids the collision because, owing to the larger $\Delta t$, it has already passed the collision region before the second stream arrives. Below the line, the opposite happens and some of the second stream bound elements do not interact. Depending on the location on that line, different regimes of collision also exist. On the right-hand side, where $\Delta t \approx t_{\rm min}$ and $\Delta \theta \approx 0.3 \pi$, the collision occurs between a still compact second stream that passes through a tenuous and extended first stream. The interaction therefore happens at high velocity but remains weak due to the large density ratio. On the left-hand side, where $\Delta t \ll t_{\rm min}$ and $\Delta \theta \ll \pi$, the two streams are moving on very similar trajectories. The collision therefore involves a large fraction of the two streams, but the relative velocity is small. This qualitative analysis suggests that the strongest collisions are to be expected close to the red line and in between these two regimes. \subsubsection{Effect of inclination} We now treat the more general case where the two streams do not evolve on the same orbital plane. In this situation, it is possible that they pass on top of each other instead of colliding. To estimate the condition for interaction, we compare the vertical offset induced by the orbital plane inclination to the width of the streams at the collision point. Note that we keep referring to this location as the `collision point' even though the collision may not happen due to the vertical offset between the streams. The different variables used to perform this estimate are shown in Fig. \ref{sketch_collision}. For orbital planes inclined by a small angle $i$, the vertical offset is given by $\Delta z = d i$, where $d$ is the distance of the collision point from the intersection line, measured perpendicular to it. Estimating this distance requires knowing the position of the collision point with respect to the intersection line of the two planes. The collision happens at a true anomaly $\theta_{\rm col}$. 
In addition, we define the true anomaly $\theta_{\rm int}$ of the plane intersection line, which is possible since it passes through the black hole. As for $\theta_{\rm col}$, this true anomaly is measured from the location of the first star pericenter and increases in the direction of motion.\footnote{Since the orbital planes of the streams are different, the true anomalies $\theta_{\rm col}$ and $\theta_{\rm int}$ can be measured in either plane. However, these two choices give essentially the same value because the planes are inclined by a small angle.} Using these definitions yields $d = R_{\rm col} \sin \psi$ where \begin{equation} \psi = |\theta_{\rm int}-\theta_{\rm col}|, \end{equation} is the positive angle that the plane intersection line makes with the direction connecting the collision point and the black hole. It is then possible to compute the vertical offset as \begin{equation} \Delta z = R_{\rm col} \, i \sin \psi. \end{equation} The next step is to evaluate the width of the streams at the collision point. This width can be estimated as \begin{equation} H = R_{\star} \left(\frac{R_{\rm col}}{R^{\rm dis}_{\rm t}}\right)^m, \end{equation} where $m$ depends on which mechanism sets it. If the width is determined by hydrostatic equilibrium between gas pressure and self-gravity, the slope is $m=1/4$. If it is instead set by tidal forces, the evolution is homologous with $m=1$ \citep{kochanek1994,coughlin2016-structure}. While hydrostatic equilibrium is maintained in most of the stream for weak encounters with $\beta \approx 1$, a homologous evolution is expected if thermal energy is injected into the gas during the disruption, which requires $\beta \gtrsim 3$. The ratio of vertical offset to streams width is then \begin{equation} \begin{split} \frac{\Delta z}{H} & = \left(\frac{M_{\rm h}}{M_{\star}} \right)^{(2-m)/3} \left(\frac{R_{\rm col}}{2 \, a_{\rm min}} \right)^{1-m} i \, \sin \psi \\ & = \begin{cases} \, M_6^{1/3} \, m_{\star}^{-1/3} \,\, \frac{i \sin \psi}{0.001 \pi^2}, & m=1\\ \, M_6^{7/12} \, m_{\star}^{-7/12} \,\, \left(\frac{R_{\rm col}}{2 \, a_{\rm min}}\right)^{3/4} \frac{i \sin \psi}{3 \times 10^{-5} \pi^2}. & m=1/4 \end{cases} \end{split} \label{condition_inclination} \end{equation} For $m =1$, the collision radius $R_{\rm col}$ cancels out. For $m =1/4$, the numerical estimate assumes that the collision happens close to the apocenter of the streams most bound debris, that is $R_{\rm col} \approx 2 \, a_{\rm min}$ using equation \eqref{semi_major_axis}. The borderline case $\Delta z=H$ is represented in the $\psi$-$i$ plane shown in the right panel of Fig. \ref{planes} by the black thick solid line for $m=1/4$ and assuming $R_{\rm col}=2\,a_{\rm min}$. It delimits the grey area inside which the collision is avoided if the streams expansion is confined by self-gravity. If $R_{\rm col} =10 \, a_{\rm min}$, this area extends downwards to the black dotted line. The boundary of that region is indicated by the dashed black line if the stream expands homologously with $m=1$. The hatched area denotes its location for an intermediate case with $1/4<m<1$, which is obtained for example when one stream evolves homologously while the other one is confined by self-gravity. As expected, the streams are more likely to interact if their width evolves homologously than if it is confined by self-gravity. Nevertheless, this interaction is weakened by the vertical offset between the streams because it prevents part of the gas from colliding. 
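For concreteness, the minimal Python sketch below evaluates the two quantitative conditions just derived, namely the critical pericenter shift of equation \eqref{dthetanc} and the offset-to-width ratio of equation \eqref{condition_inclination}. The mass ratio, penetration factor, collision radius, inclination and angle $\psi$ adopted here are illustrative assumptions rather than values derived in the text.
\begin{verbatim}
import numpy as np

# Illustrative assumptions: equal penetration factors and a 10^6 solar-mass
# black hole disrupting solar-type stars.
beta = 1.0
mass_ratio = 1.0e6                                       # M_h / M_star

e_min = 1.0 - (2.0 / beta) * mass_ratio**(-1.0 / 3.0)    # most bound debris
e_max = 1.0 + (2.0 / beta) * mass_ratio**(-1.0 / 3.0)    # most unbound debris

def delta_theta_nc(theta_col):
    """Critical pericenter shift of equation (dthetanc) above which the
    second stream slips between the first stream and the black hole."""
    arg = ((1.0 + e_max) * (1.0 + e_min * np.cos(theta_col))
           / (1.0 + e_min) - 1.0) / e_max
    return theta_col - np.arccos(np.clip(arg, -1.0, 1.0))

def offset_to_width(m, i, psi, x_col=1.0):
    """Delta z / H from equation (condition_inclination), where
    x_col = R_col / (2 a_min); m = 1/4 for self-gravity, m = 1 for
    homologous expansion."""
    return mass_ratio**((2.0 - m) / 3.0) * x_col**(1.0 - m) * i * np.sin(psi)

# Critical shift for a collision at theta_col = pi.
print(delta_theta_nc(np.pi) / np.pi)

# Offset-to-width ratio for an assumed inclination i = 0.01 pi and psi = pi/2.
for m in (1.0, 0.25):
    print(m, offset_to_width(m, 0.01 * np.pi, 0.5 * np.pi))
\end{verbatim}
Values of the last ratio above unity indicate that the vertical offset exceeds the streams width, in which case the collision is avoided.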
In summary, we have derived three conditions for streams produced by double disruptions to collide with each other. The first condition requires that the pericenter shift $\Delta \theta$ between the stars is positive in order for the second stream to be able to catch up with the first one. The second condition imposes that this shift is lower than the critical value $\Delta \theta_{\rm nc}$ if the time delay is smaller than $t_{\rm min}$ to prevent the second stream from entirely passing between the first stream and the black hole. The third condition applies if the two streams evolve on inclined orbital planes, in which case the vertical offset induced by this inclination must be smaller than the streams width for a collision to happen. \subsection{Evolution of the collision point} \label{evolution} One can also get insight into the spatio-temporal evolution of the collision point, which is continuously reached by different parts of each stream. This analysis assumes that the most bound debris of each stream collide with each other, which corresponds to the red line in the $\Delta \theta$-$\Delta t$ plane of Fig. \ref{planes}. Away from this line, it nevertheless provides a good estimate for the location of the collision point. Typically, the first collision happens between the most bound elements while the last one involves the most unbound element of the second stream. The true anomaly at the location of these collisions can be estimated as \begin{equation} \theta_{\rm col}^{\rm mb} \approx \pi+\Delta \theta/2, \label{thetacolmb} \end{equation} \begin{equation} \theta_{\rm col}^{\rm mu} \approx \pi +\Delta \theta - \gamma_{\rm mu}, \label{thetacolmu} \end{equation} for the first and last one, respectively. The first angle is obtained by inverting equation \eqref{dthetamb}. The second angle uses the fact that the second stream most unbound element reaches the collision point while its trajectory is already approximately straight. This trajectory is inclined with respect to the major axis of the second stream by an angle $\gamma_{\rm mu} = \arccos(1/e_{\rm max})$ given by \begin{equation} \gamma_{\rm mu} \approx 2 \, \beta^{-1/2} \left(\frac{M_{\rm h}}{M_{\star}}\right)^{-1/6} \approx 0.064 \, \pi \, \beta^{-1/2} \, M_6^{-1/6} \, m_{\star}^{1/6}, \label{gammamu} \end{equation} making use of the small angle approximation and equation \eqref{most_unbound_eccentricity}. As long as $\Delta \theta<2\gamma_{\rm mu} \approx 0.13 \pi$, the true anomaly of the collision point is therefore constrained to the interval \begin{equation} \theta_{\rm col}^{\rm mu} \leq \theta_{\rm col} \leq \theta_{\rm col}^{\rm mb}. \label{thetacol_range} \end{equation} For a pericenter shift $\Delta \theta \ll \pi$, equation \eqref{thetacolmb} implies that the most bound elements collide with each other a time \begin{equation} \frac{t_{\rm col}^{\rm mb}}{t_{\rm min}} \approx 0.5, \label{minimal_collision_time} \end{equation} after the disruption of the first star and at a distance from the black hole $R_{\rm col}^{\rm mb} \approx 2 \, a_{\rm min}$, that is close to the apocenter of the streams most bound debris. The associated emission could therefore act as a \textit{precursor} of the main flare accompanying the gas fallback at pericenter. The most unbound debris of the second stream collides when it reaches an element of the first stream. As long as $\Delta \theta \leq \gamma_{\rm mu}$, this element is also unbound but escapes with a slower velocity, which allows the second stream to catch up with it. 
The collision happens after a time $t_{\rm col}^{\rm mu}$ given by the condition $\Delta v (t_{\rm col}^{\rm mu}-\Delta t) = v^{\rm col}_1 t_{\rm col}^{\rm mu}$, which imposes that the two elements reach the same radial position. Here, $\Delta v$ and $v^{\rm col}_1$ denote the velocities of the second stream most unbound debris and of the colliding element of the first stream, respectively. Approximating these velocities by their values at infinity, we can relate them to their respective energies through $v^{\rm col}_1/\Delta v \approx (\epsilon^{\rm col}_1/\Delta \epsilon)^{1/2}$. This energy ratio can be computed from the fact that the two colliding elements must follow the same straight line as they escape from the black hole. Denoting by $e^{\rm col}_1$ the eccentricity of the first stream element, this condition translates into $\arccos(1/e^{\rm col}_1)=\arccos(1/e^{\rm max}_2)-\Delta \theta$, which gives $1-(\epsilon^{\rm col}_1/\Delta \epsilon)^{1/2} \approx (1/2) \beta^{1/2} \Delta \theta (M_{\rm h}/M_{\star})^{1/6}$ using the small angle approximation. The time at which the most unbound element of the second stream collides is therefore \begin{equation} \begin{split} \frac{t_{\rm col}^{\rm mu}}{t_{\rm min}} & = \frac{\Delta t}{t_{\rm min}} \left( 1- \frac{v^{\rm col}_1}{\Delta v} \right)^{-1} \\ & \approx \frac{2}{\Delta \theta \beta^{1/2}} \frac{\Delta t}{t_{\rm min}} \left( \frac{M_{\rm h}}{M_{\star}} \right)^{-1/6} \\ & \approx 6.4 \, \beta^{-1/2} \, M_6^{-1/6} \, m_{\star}^{1/6} \left( \frac{\Delta \theta}{10^{-2}\pi} \right)^{-1} \frac{\Delta t}{t_{\rm min}}, \end{split} \label{maximal_collision_time} \end{equation} which corresponds to a distance from the black hole of $R_{\rm col}^{\rm mu} \approx \Delta v \, t_{\rm col}^{\rm mu} = 40 \, a_{\rm min} \, \beta^{-1/2} \, M_6^{-1/6} \, m_{\star}^{1/6} (\Delta \theta/10^{-2} \pi)^{-1} \Delta t/t_{\rm min}$. This calculation demonstrates that streams collision can be sustained for a long duration and happen far away from the black hole. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{fig3.pdf} \vspace{0.2cm} \caption{Configuration of the binary at the moment of separation. The first star is represented by a blue point and the second star by a red point. The pitch angle $\alpha$ and yaw angle $\delta$ fix the orbital plane of the two stars while the phase angle $\phi$ defines the position of the first star in this plane. The vector $\vect{e}_{\parallel}$ indicates the direction of motion of the binary center of mass.} \label{sketch_binary} \end{figure} \section{Binary tidal separation} \label{binary_likelihood} We now want to evaluate the likelihood of streams collision in the particular case where the two disruptions are due to the tidal separation of a stellar binary. This is done by first evaluating the values of $\Delta t$, $\Delta \theta$, $i$ and $\psi$ and their dependence on the binary properties. The likelihood of streams collision is then obtained by using the conditions derived in the previous section and summarized in Fig. \ref{planes}. The two identical stars considered in Section \ref{collision} are now part of a binary that we assume for simplicity to be circular. 
These two binary components are separated by the tidal force of the black hole at the tidal separation radius \begin{equation} R^{\rm sep}_{\rm t} = a \left(\frac{M_{\rm h}}{2 \,M_{\star}} \right)^{1/3} = 370 \, \, a_3 \, M_6^{1/3} \, m_{\star}^{-1/3} \, \rm{au}, \label{tidal_separation_radius} \end{equation} where $a= 10^3 \, a_3 \, \mathrm{R}_{\hbox{$\odot$}}$ is the binary separation. It is possible to estimate the range of values that this separation can take. A lower limit is given by $a \gtrsim 2 R_{\star}$ to prevent the two stars from colliding with each other. An upper limit is set by ionization of the binary through two-body encounters with surrounding stars. To estimate this limit, we adopt the properties of the Milky Way nuclear star cluster described by \citet{antonini2011} in their section 2. In particular, we use the same density profile and velocity dispersion. The critical radius from which most tidally separated binaries originate can be evaluated using the loss cone theory, described for instance by \citet{syer1999} (section 3) in the context of stellar tidal disruptions. We find this radius to be $R_{\rm crit} \gtrsim 1 \, \rm kpc$ for separations $a \gtrsim 10^3 \, \mathrm{R}_{\hbox{$\odot$}}$, which is much larger than for single tidally disrupted stars. From equation (7.173) of \citet{binney2008}, the ionization timescale $t_{\rm ion}$ at that distance can be shown to be longer than a stellar lifetime, meaning that binaries survive ionization. Another possibility is that the binary is ionized while it is already on a highly eccentric orbit grazing the tidal separation radius. In this situation, the upper limit on the binary separation is reached when the integral $\int_{R_{\rm crit}}^{R^{\rm sep}_{\rm t}} \diff t/t_{\rm ion}$ increases to a value close to 1. This condition imposes that the binary is ionized after a few near-radial oscillations between the critical radius and the tidal separation radius. It yields an upper limit of $a \lesssim 10^5 \, \mathrm{R}_{\hbox{$\odot$}}$ for the binary separation. In the remainder of this section, we adopt $a=10^3 \, \mathrm{R}_{\hbox{$\odot$}}$ in numerical estimates as a typical binary separation value. The binary center of mass reaches the tidal separation radius given by equation \eqref{tidal_separation_radius} following a parabolic orbit with a pericenter $R_{\rm p} = R^{\rm dis}_{\rm t}/\beta$, where $\beta \geq 1$ is the penetration factor of the binary with respect to the tidal disruption radius of its components (equation \ref{tidal_disruption_radius}). The individual penetration factors of the stars $\beta_1$ and $\beta_2$ are set by the change in angular momentum imparted by the separation. This variation of angular momentum becomes similar to that of the center of mass only for wide binaries with $a/R_{\star} \gtrsim \beta^{-1} (M_{\rm h}/M_{\star})^{2/3} \approx 10^4 \beta^{-1} \, M_6^{2/3} \, m_{\star}^{-2/3}$, for which the orbit of one of the two stars can flip. Otherwise, the stars retain the same direction of rotation with similar penetration factors. The energy of the stars is also modified by the separation process. Nevertheless, this variation is smaller than the energy spread $\Delta \epsilon$ induced by the tidal disruptions (equation \ref{energy_spread}) by a factor $\sim a/R_{\star} = 10^3 \, a_3 \, r_{\star}^{-1}$. It therefore has a negligible influence on the dynamics of the debris streams and we can consider the two stars to be disrupted on parabolic orbits as assumed in Section \ref{conditions}. 
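As a consistency check of the numerical value quoted in equation \eqref{tidal_separation_radius}, the tidal separation radius can be evaluated with the short Python sketch below, using the fiducial separation $a=10^3 \, \mathrm{R}_{\hbox{$\odot$}}$ and mass ratio $M_{\rm h}/M_{\star}=10^6$ adopted in this section.
\begin{verbatim}
# Tidal separation radius of equation (tidal_separation_radius) for the
# fiducial values a = 10^3 R_sun and M_h / M_star = 10^6 used in the text.
R_SUN_AU = 1.0 / 215.032                 # solar radius in astronomical units

a = 1.0e3                                # binary separation in solar radii
mass_ratio = 1.0e6                       # M_h / M_star

r_sep = a * (mass_ratio / 2.0)**(1.0 / 3.0)       # in solar radii
print(r_sep, r_sep * R_SUN_AU)                    # ~7.9e4 R_sun, ~370 au
\end{verbatim}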
\subsection{Keplerian case} Assuming that the two stars follow perfectly Keplerian orbits, the quantities involved in the conditions for streams collision are entirely determined by the change in trajectory experienced by the stars during binary separation. This process happens on a time-span very short compared to the binary period. It can therefore be approximated as an instantaneous deflection of the two stars on new trajectories happening at the tidal separation radius. The position and velocity vectors of the binary with respect to the black hole at the moment of separation are given by \begin{equation} \vect{R} = R^{\rm sep}_{\rm t} \vect{e}_{\rm r}, \label{position_com} \end{equation} \begin{equation} \vect{v} = \left( \frac{2GM_{\rm h}}{R^{\rm sep}_{\rm t}} \right)^{1/2} \vect{e}_{\parallel}, \label{velocity_com} \end{equation} respectively. Here, $\vect{e}_{\rm r}$ and $\vect{e}_{\parallel}$ are unit vectors shown in Fig. \ref{sketch_collision} that indicate the radial direction of the binary center of mass and its direction of motion, respectively. The change in trajectory suffered by the first star is dictated by the displacement and velocity kick \begin{equation} \Delta \vect{R} = \frac{a}{2} \, \vect{u_{\rm r}}, \label{position_kick} \end{equation} \begin{equation} \Delta \vect{v} = \left(\frac{G M_{\star}}{2 \, a} \right)^{1/2} \vect{u_{\rm t}}, \label{velocity_kick} \end{equation} with respect to the center of mass trajectory. They are simply given by the position and velocity of this star with respect to the binary center of mass at the moment of separation. The second star experiences a displacement and velocity kick of the same magnitude but opposite direction, given by $-\Delta \vect{R}$ and $-\Delta \vect{v}$. As depicted in Fig. \ref{sketch_binary}, the directions of the unit vectors $\vect{u_{\rm r}}$ and $\vect{u_{\rm t}}$ are determined by three random angles $\alpha$, $\delta$ and $\phi$. The pitch angle $\alpha$ and yaw angle $\delta$ set the orientation of the plane in which the two stars rotate while the phase angle $\phi$ fixes the position of the first star in that plane. For example, $\alpha =0$ corresponds to a situation where the two stars move on a plane that is identical to that of the binary center of mass around the black hole. These two planes are instead perpendicular for $\alpha = \pi/2$. The fact that $|\Delta \vect{R}|/|\vect{R}| \approx |\Delta \vect{v}|/|\vect{v}| \approx (M_{\rm h}/M_{\star})^{-1/3} \ll 1$ implies that the change of trajectory induced by the separation process is small. This variation can therefore legitimately be computed at first order in the displacement and velocity kick, which we will do in the following. Using these definitions, we first derive the time delay $\Delta t$ given by the separation process. To reach pericenter first, the first star must be closer to the black hole than the second one at the moment of separation. The associated time delay can be evaluated as $\Delta t \approx a \eta /|\vect{v}|$, where $\eta =\vect{u_{\rm r}} \cdot \vect{e}_{\parallel}$ corresponds to the projection of the separation on the binary orbital plane. 
The ratio of this delay to the fallback time given by equation \eqref{fallback_time} is then \begin{equation} \begin{split} \frac{\Delta t}{t_{\rm min}} & \approx \frac{1}{2^{1/6} \, \pi} \, \eta \left( \frac{a}{R_{\star}} \right)^{3/2} \left( \frac{M_{\rm h}}{M_{\star}} \right)^{-5/6} \\ & = 0.09 \,\, \eta \, a_3^{3/2} \, r_{\star}^{-3/2} \, M_6^{-5/6} \, m_{\star}^{5/6}, \end{split} \label{time_delay} \end{equation} where $\eta = \cos \alpha \cos \delta \cos \phi - \sin \delta \sin \phi$ satisfies $0 \leq \eta \leq 1$. Note that, as expected, $\eta$ is independent of the direction of rotation of the two stars around each other. The upper limit $a \lesssim 10^5 \, \mathrm{R}_{\hbox{$\odot$}}$ on the binary separation leads to $\Delta t \lesssim 90 \, t_{\rm min}$. Note that the lower limit is irrelevant since $\eta$ can reach 0. This implies that the whole vertical extent of the $\Delta \theta$-$\Delta t$ plane is covered, as indicated with the blue region shown in the left panel of Fig. \ref{planes}. Evaluating the rest of the parameters requires knowing the variations in angular momentum and eccentricity vectors undergone by the binary components upon separation. The binary center of mass angular momentum and eccentricity vectors are \begin{equation} \vect{j} = \vect{R} \times \vect{v} = \left( 2 G M_{\rm h} R_{\rm p} \right)^{1/2} \vect{e}_{\rm z}, \end{equation} \begin{equation} \vect{e} = \frac{\vect{v} \times \vect{j}}{G M_{\rm h}} - \frac{\vect{R}}{|\vect{R}|} = \vect{e}_{\rm x}, \end{equation} respectively, where $\vect{e}_{\rm x}$ and $\vect{e}_{\rm z}$ are unit vectors shown in Fig. \ref{sketch_collision} that form an orthogonal basis with the unit vector $\vect{e}_{\rm y}$. The small displacement and velocity kick given by equations \eqref{position_kick} and \eqref{velocity_kick} translate into variations in the angular momentum and eccentricity vectors of the first star during the separation process. To first order, they are given by \begin{equation} \Delta \vect{j} \approx \Delta \vect{R} \times \vect{v} + \vect{R} \times \Delta \vect{v}, \end{equation} \begin{equation} \Delta \vect{e} \approx \frac{1}{G M_{\rm h}} \left( \Delta \vect{v} \times \vect{j} + \vect{v} \times \Delta \vect{j} \right) - \frac{\Delta \vect{R}}{|\vect{R}|}. \end{equation} The second star undergoes kicks $-\Delta \vect{j}$ and $-\Delta \vect{e}$ as required to conserve the angular momentum of the system. We now describe how to obtain $\Delta \theta$, $i$ and $\psi$. These quantities are computed at lowest order in $R_{\rm p}/R^{\rm sep}_{\rm t} = 10^{-3} \beta^{-1} \, a_3^{-1} \, r_{\star}$, which is valid except for very compact binaries and allows us to clearly reveal the dependence on the physical parameters of the problem. Physically, this simplification results from the fact that the parabolic trajectory of the binary is a straight line far from pericenter. This corresponds to the lowest order approximation in $R_{\rm p}/R^{\rm sep}_{\rm t}$ while higher order terms would take into account the curvature of the trajectory. The pericenter shift $\Delta \theta$ results from the change in eccentricity vector experienced by the stars. It is given by the angle between the eccentricity vector of the first star $\vect{e}+\Delta \vect{e}$ and that of the second one $\vect{e}-\Delta \vect{e}$, considering only the components of $\Delta \vect{e}$ along the orbital plane of the binary around the black hole. 
To first order, this results in \begin{equation} \begin{split} \Delta \theta & \approx 2 \left( \Delta \vect{e} \times \vect{e} \right) \cdot \vect{e}_{\rm z} \\ & \approx 2^{1/3} \, \xi \left( \frac{M_{\rm h}}{M_{\star}} \right)^{-1/3} \\ & = 0.0069 \, \pi \, (\xi/\sqrt{3}) \, M_6^{-1/3} \, m_{\star}^{1/3}, \end{split} \label{pericenter_shift} \end{equation} where $\xi = \cos \delta (\sin \phi \mp \sqrt{2} \cos \phi) + \cos \alpha \sin \delta ( \cos \phi \pm \sqrt{2} \sin \phi)$ obeys $-\sqrt{3} \leq \xi \leq \sqrt{3}$. Here and in the remainder of this section, the upper signs correspond to a binary rotating in the prograde direction compared to its motion around the black hole while the lower signs correspond to the retrograde case.\footnote{Going from the prograde to the retrograde case is equivalent to making the substitutions $\alpha \rightarrow \pi -\alpha$, $\delta \rightarrow \delta+\pi$ and $\phi \rightarrow -\phi$.} If the second star has its pericenter further in the direction of motion than the first star, $\Delta \vect{e}$ is directed approximately along $-\vect{e}_{\rm y}$ and the first equality of equation \eqref{pericenter_shift} gives $\Delta \theta >0$ as expected. The pericenter shift obeys $\Delta \theta/\pi \lesssim 0.0069$, implying that it is limited to the leftmost region of the $\Delta \theta$-$\Delta t$ plane, shown in blue in the left panel of Fig. \ref{planes}. This angle is small enough to stay outside the grey region of the plane, except for near-contact binaries with $\Delta t \lesssim 10^{-5} t_{\rm min}$. Streams collision is therefore unlikely to be avoided in this way. $\Delta \theta$ is largely independent of $\alpha$ because the first term of $\xi$ dominates as long as $\tan \delta \cos \alpha \lesssim 1$. One can therefore study the sign of $\Delta \theta$ for $\alpha=0$ without loss of generality. The condition $\Delta \theta \geq 0$ then translates into $\phi+\delta \geq \pm\arctan(\sqrt{2}) \approx \pm 0.3 \pi$. For the first star to be closer to the black hole than the second, this effective phase angle must additionally belong to the interval $-\pi/2\leq\phi+\delta\leq\pi/2$. The condition of positive pericenter shift is realized for less and for more than half of this allowed interval in the prograde and retrograde case, respectively. Taking into account the random distribution of the phase angle, $\Delta \theta \geq 0$ is satisfied with a probability of $0.5\mp\arctan(\sqrt{2})/\pi$. However, since the binary is as likely to be prograde as it is to be retrograde, the two contributions cancel out such that the overall probability of this condition reduces to exactly $50\%$. The orbital plane inclination $i$ is simply the angle between the angular momentum vector $\vect{j}+\Delta \vect{j}$ of the first star and that of the second, $\vect{j}-\Delta \vect{j}$. It is therefore given by \begin{equation} \begin{split} i & \approx 2\frac{|\Delta \vect{j} \times \vect{j}|}{\vect{j}^2} \\ & = 2^{-5/6} \chi |\sin \alpha| \, \beta^{1/2} \left( \frac{a}{R_{\star}} \right)^{1/2} \left( \frac{M_{\rm h}}{M_{\star}} \right)^{-1/3}\\ & = 0.14 \, \pi \, (\chi/\sqrt{6}) |\sin \alpha| \, \beta^{1/2} \, a_3^{1/2} \, r_{\star}^{-1/2} \, M_6^{-1/3} \, m_{\star}^{1/3}, \end{split} \label{inclination} \end{equation} where $\chi = \sqrt{3 + \cos (2 \phi) \pm 2 \sqrt{2} \sin (2 \phi)} \leq \sqrt{6}$. 
As expected, the two orbital planes are aligned if $\alpha =0$, which corresponds to the coplanar case, where the two stars rotate in the same plane as that of the binary around the black hole. Finally, we evaluate the sine of $\psi = |\theta_{\rm int}-\theta_{\rm col}|$. It is more practical to compute the true anomalies with the origin set at the pericenter location of the binary center of mass. These angles are denoted by $\bar{\theta}_{\rm int}$ and $\bar{\theta}_{\rm col}$ and the relation $\psi = |\bar{\theta}_{\rm int}-\bar{\theta}_{\rm col}|$ holds. The tangent of the true anomaly at the plane intersection line is \begin{equation} \begin{split} \tan \bar{\theta}_{\rm int} & = \frac{(\Delta \vect{j} \times \vect{j})\cdot \vect{e}_{\rm y}}{(\Delta \vect{j} \times \vect{j})\cdot \vect{e}_{\rm x}} \\ & = 2^{1/6} \zeta \, \beta^{-1/2} \left(\frac{a}{R_{\star}}\right)^{-1/2} \\ & = 0.011 \pi \, \zeta \, \beta^{-1/2} \, a_3^{-1/2} \, r_{\star}^{1/2}, \end{split} \label{intersection_line} \end{equation} where $\zeta = 2(\cos \phi \pm \sqrt{2} \sin \phi)/(2\cos \phi \pm \sqrt{2} \sin \phi)$. This factor obeys $\zeta \approx 1 >0$ for most values of the phase angle, meaning that the intersection line passes through the lower-left and upper-right quadrants with respect to the black hole. In this case, the true anomaly can also be safely approximated by $\bar{\theta}_{\rm int} \approx \pi + \tan \bar{\theta}_{\rm int}$. The relation $\tan \bar{\theta}_{\rm int} \approx (R_{\rm p}/R^{\rm sep}_{\rm t})^{1/2}$ shows that the angle $\bar{\theta}_{\rm int}$ is similar to the true anomaly at which the binary gets separated, when computed at lowest order in $R_{\rm p}/R^{\rm sep}_{\rm t}$. The plane intersection line therefore passes most of the time near the point of binary separation. This is unsurprising because the two stars are close together at that location such that their orbital planes are likely to cross in the vicinity. However, there also exists a small range of values for the phase angle $\phi$ for which this line can take any other direction. In particular, $\tan \bar{\theta}_{\rm int} = 0$ for $\phi = \mp\arctan(1/\sqrt{2}) \approx \mp 0.2 \pi$, meaning that the intersection line is directed along $\vect{e}_{\rm x}$. To determine the true anomaly at the collision point, we use the fact that it belongs to the interval given by equation \eqref{thetacol_range} to write \begin{equation} \begin{split} \bar{\theta}_{\rm col} & =\theta_{\rm col}^{\rm mb}-f(\theta_{\rm col}^{\rm mb}-\theta_{\rm col}^{\rm mu})-\Delta \theta/2\\ & \approx \pi -f(\gamma_{\rm mu}-\Delta \theta/2), \end{split} \label{collision_angle} \end{equation} where the $-\Delta \theta/2$ term in the first line accounts for the origin of $\bar{\theta}_{\rm col}$ at the pericenter location of the binary center of mass. The location of the collision point is parametrized by $f$, which satisfies $0\leq f \leq 1$. This parameter takes its lowest value $f=0$ for streams collision involving the most bound stream elements and increases to $f=1$ when the most unbound element of the second stream collides. 
It is then possible to compute the sine of $\psi$ using \begin{equation} \sin \psi \approx |\cos \bar{\theta}_{\rm int} \sin \bar{\theta}_{\rm col} - \sin \bar{\theta}_{\rm int} \cos \bar{\theta}_{\rm col}|, \label{sinpsi} \end{equation} in combination with equations \eqref{intersection_line} and \eqref{collision_angle}, making use of the relation $|\sin \bar{\theta}_{\rm int}| = |(\Delta \vect{j} \times \vect{j}) \cdot \vect{e}_{\rm y}| / |\Delta \vect{j} \times \vect{j}| \approx 2^{7/6} (\tilde{\zeta}/\chi) \beta^{-1/2} (a/R_{\star})^{-1/2}$ where $\tilde{\zeta} = |\cos \phi \pm \sqrt{2} \sin \phi| \leq \sqrt{3}$ is the numerator of $\zeta$ in absolute value. Because $\gamma_{\rm mu} > \Delta \theta/2$, the collision point is the closest to the plane intersection line for $f=0$, which corresponds to a collision involving the most bound stream elements. In this situation, the angle $\psi$ reaches its lowest value $\psi_{\rm min}$. The quantity involved in the condition $\Delta z \leq H$ for streams collision is then also minimum and given by \begin{equation} \begin{split} i \sin \psi_{\rm min} & = i \, |\sin \bar{\theta}_{\rm int} | \\ & \approx 2^{1/3} \tilde{\zeta} \, |\sin \alpha| \left(\frac{M_{\rm h}}{M_{\star}}\right)^{-1/3}\\ & = 0.0022 \, \pi^2 (\tilde{\zeta}/\sqrt{3}) \, |\sin \alpha| \, M_6^{-1/3} \, m_{\star}^{1/3}, \end{split} \label{sinpsimin} \end{equation} according to equation \eqref{sinpsi}. Remarkably, this term is independent of the binary separation, which cancels out in the product. Its numerical value can be injected in equation \eqref{condition_inclination} to evaluate whether a collision takes place between the streams most bound parts. One can already see that the condition $\Delta z < H$ is satisfied for $m=1$ as long as $\sin \alpha \lesssim 0.5$, meaning that the streams can collide with a significant likelihood for homologously expanding streams. However, a much smaller value of $\sin \alpha$ is required to reach this condition if $m=1/4$, implying that collisions are much less likely for streams confined by self-gravity. A quantitative estimate of the likelihood of streams collision can be obtained from an integral over the random binary angles restricted to a domain where the three conditions for streams collision are all satisfied. In particular, the condition $\Delta z \leq H$ is evaluated for a value of the parameter $f$ that minimizes the vertical offset, which amounts to considering the element of the second stream that is the most likely to collide with the first stream. This choice is legitimate since streams collision occurs if at least one element of each stream collides with the other. The collision probability is then \begin{equation} P_{\rm col}= \frac{1}{8\pi^2} \int_{D_{\rm col}} \cos \alpha \diff \alpha \diff \delta \diff \phi, \label{collision_probability} \end{equation} where $D_{\rm col}$ denotes the domain of integration described above. This integral can be calculated numerically. For simplicity, we compute $\Delta \theta_{\rm nc}$ assuming $\beta_1=\beta_2=1$ (grey region in the left panel of Fig. \ref{planes}) even though the actual penetration factors of the stars can differ from unity. The resulting probability is shown with the solid black line in Fig. \ref{probability} as a function of binary separation for a mass ratio set to $M_{\rm h}/M_{\star} = 10^6$. As expected, it is lower than 50\% due to the upper bound imposed by the requirement of a positive pericenter shift. 
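As an illustration of how this integral can be evaluated, the minimal Monte Carlo sketch below (in Python) samples the binary angles in the Keplerian case for homologously expanding streams ($m=1$). It only enforces the positive pericenter shift and the condition $\Delta z \leq H$ at its minimal value from equation \eqref{sinpsimin}, neglecting the $\Delta \theta_{\rm nc}$ condition, which only matters for near-contact binaries; the sample size and random seed are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Random binary angles: alpha weighted by cos(alpha), delta and phi uniform,
# prograde (+1) and retrograde (-1) rotations equally likely.
sin_alpha = rng.uniform(-1.0, 1.0, n)      # cos-weighted alpha <=> uniform sin(alpha)
cos_alpha = np.sqrt(1.0 - sin_alpha**2)
delta = rng.uniform(0.0, 2.0 * np.pi, n)
phi = rng.uniform(0.0, 2.0 * np.pi, n)
s = rng.choice([1.0, -1.0], n)

# eta and xi entering equations (time_delay) and (pericenter_shift).
eta = cos_alpha * np.cos(delta) * np.cos(phi) - np.sin(delta) * np.sin(phi)
xi = (np.cos(delta) * (np.sin(phi) - s * np.sqrt(2.0) * np.cos(phi))
      + cos_alpha * np.sin(delta) * (np.cos(phi) + s * np.sqrt(2.0) * np.sin(phi)))

# Relabel the stars so that the 'first' one reaches pericenter first (eta >= 0),
# which flips the sign of both eta and xi (phi -> phi + pi).
xi = np.where(eta < 0.0, -xi, xi)

# Condition 1: positive pericenter shift.
cond_shift = xi >= 0.0

# Condition on the vertical offset for homologous streams (m = 1), evaluated at
# the minimal offset f = 0 using equation (sinpsimin): in this case
# Delta z / H = 2^{1/3} |cos(phi) +/- sqrt(2) sin(phi)| |sin(alpha)|,
# independent of the binary separation and mass ratio.
zeta_tilde = np.abs(np.cos(phi) + s * np.sqrt(2.0) * np.sin(phi))
cond_offset = 2.0**(1.0 / 3.0) * zeta_tilde * np.abs(sin_alpha) <= 1.0

print(np.mean(cond_shift & cond_offset))
\end{verbatim}
In this approximation the estimate does not depend on the binary separation or on the mass ratio, and it can be compared with the plateau value of the Keplerian probability discussed below.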
The probability is in addition primarily constrained by the condition $\Delta z \leq H$, which leads to a further decrease to $P_{\rm col} \approx 36 \%$ for homologously expanding streams ($m=1$), as indicated by the horizontal purple dotted line. A more drastic decrease happens if the streams are confined by self-gravity ($m=1/4$), which leads to a significantly smaller collision probability. This is consistent with the predictions made based on equation \eqref{sinpsimin}, which corresponds to the value of the product $i \sin \psi$ giving the minimal vertical offset. The fact that this quantity is independent of $a$ also explains why the probability reduction is the same at all binary separations. The suppression for $a\lesssim 10 R_{\star}$ is due to the fact that $\Delta \theta > \Delta \theta_{\rm nc}$ for some values of the binary angles. This is expected from the $\Delta \theta$-$\Delta t$ plane of Fig. \ref{planes} because the blue and grey regions intersect if extrapolated downwards to such low binary separations, corresponding to $\Delta t\lesssim 10^{-5} t_{\rm min}$. The likelihood of collisions is therefore high as long as the binary is not close to contact and the streams are homologously expanding. \begin{figure} \includegraphics[width=0.47\textwidth]{fig4.pdf} \caption{Probability of streams collision as a function of binary separation for the Keplerian (black solid line) and relativistic (black dashed line) calculations assuming $\beta=1$ and that the streams expand homologously ($m=1$). The purple arrows represent the reduction of the Keplerian probability from the upper limit of $50\%$ due to the conditions $\Delta z \leq H$ (purple dotted line) and $\Delta \theta\leq\Delta \theta_{\rm nc}$. The green arrows show the same for the relativistic probability, for which the reduction results from the conditions $\Delta \theta_{\rm rel}\leq\Delta \theta_{\rm nc}$ (green dotted line) and $\Delta z_{\rm rel} \leq H$. The two segments indicate the ranges of binary separation for which $\psi_{\rm rel} \approx 0$ (blue) and $\Delta t \geq t_{\rm min}$ (red) while the grey arrow indicates $\Delta \omega_{\rm d} \approx \Delta \theta$. The relativistic probability is also shown for $\beta=2$ (dashed orange line) and assuming that the streams width is confined by self-gravity ($m=1/4$, dashed brown line) for a collision point at $R_{\rm col} = 2 a_{\rm min}$.} \label{probability} \end{figure} \subsection{Relativistic corrections} So far, the conditions for streams collision have been derived assuming perfectly Keplerian trajectories. At pericenter, the gas can in fact approach the gravitational radius of the black hole, implying that relativistic corrections must be accounted for. The main effect to deal with is relativistic apsidal precession, which causes the major axis of each star to rotate in the direction of motion when it passes at pericenter. To include it in our calculation, it is convenient to decompose the angle by which the stars precess into a net and differential component. The net component is the precession angle for a pericenter equal to that of the binary center of mass. It is given by \begin{equation} \Delta \omega_{\rm n} \approx \frac{3 \pi G M_{\rm h} }{R^{\rm dis}_{\rm t} c^2} \beta \approx 0.064 \pi \, \beta \, M_6^{2/3} \, m_{\star}^{1/3} \, r_{\star}^{-1}, \end{equation} using the first order approximation of the relativistic precession angle \citetext{equation 10.8 of \citealt{hobson2006}} for a nearly-parabolic orbit. 
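For reference, the numerical value quoted above can be reproduced with the short Python sketch below, which assumes $\beta=1$, a $10^6 \, \mathrm{M}_{\hbox{$\odot$}}$ black hole and a solar-type star as illustrative inputs.
\begin{verbatim}
import numpy as np

# Net relativistic precession angle for beta = 1, M_h = 1e6 M_sun and a
# solar-type star (illustrative values).
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m s^-1
M_SUN = 1.989e30       # kg
R_SUN = 6.957e8        # m

beta = 1.0
M_h = 1.0e6 * M_SUN
M_star, R_star = M_SUN, R_SUN

R_t_dis = R_star * (M_h / M_star)**(1.0 / 3.0)        # tidal disruption radius
d_omega_n = 3.0 * np.pi * G * M_h / (R_t_dis * c**2) * beta
print(d_omega_n / np.pi)                              # ~0.064 (in units of pi)
\end{verbatim}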
Differential precession is due to the fact that the stars have distinct pericenter distances that make them precess by different amounts. This pericenter variation is induced by the angular momentum kick experienced by the stars during the binary separation. The associated change in penetration factor $\Delta \beta = \beta_2 - \beta_1$ is \begin{equation} \begin{split} \frac{\Delta \beta}{\beta} & \approx 4 \, \frac{\vect{j} \cdot \Delta \vect{j}}{\vect{j}^2} \\ & = 2^{2/3} \kappa \, \beta^{1/2} \left(\frac{M_{\rm h}}{M_{\star}}\right)^{-1/3} \left(\frac{a}{R_{\star}}\right)^{1/2} \\ & \approx 0.87 (\kappa/\sqrt{3}) \, \beta^{1/2} \, M_6^{-1/3} \, m_{\star}^{1/3} \, a_3^{1/2} \, r_{\star}^{-1/2}, \end{split} \label{beta_variation} \end{equation} where $\kappa=\cos \alpha \sin \delta (\sqrt{2} \cos \phi \pm \sin \phi) + \cos \delta (\sqrt{2} \sin \phi \mp \cos \phi)$ satisfies $-\sqrt{3} \leq \kappa \leq \sqrt{3}$. This yields a differential precession angle of \begin{equation} \begin{split} \Delta \omega_{\rm d} & \approx \Delta \omega_{\rm n} \frac{\Delta \beta}{\beta}\\ & \approx 0.055 \pi \, (\kappa/\sqrt{3}) \, \beta^{3/2} \, M_6^{1/3} \, m_{\star}^{2/3} \, a_3^{1/2} \, r_{\star}^{-3/2}, \label{differential_precession} \end{split} \end{equation} which is by convention positive if the second star precesses more. This precession angle is therefore the largest for wide binaries. Net and differential relativistic precessions modify the pericenter shift angle and the true anomaly of the collision point. The relativistic versions of these quantities are given by \begin{equation} \Delta \theta_{\rm rel} = \Delta \theta+\Delta \omega_{\rm d}, \label{pericenter_shift_relativistic} \end{equation} \begin{equation} \bar{\theta}^{\rm rel}_{\rm col} = \pi -f(\gamma_{\rm mu}-\Delta \theta_{\rm rel}/2) + \Delta \omega_{\rm n}, \label{collision_angle_relativistic} \end{equation} which replace equations \eqref{pericenter_shift} and \eqref{collision_angle}, respectively. The location of the intersection line remains the same as in the Keplerian calculation since apsidal precession does not modify the orbital planes of the streams. The relativistic version of the angle $\psi$ is therefore given by $\psi_{\rm rel} = |\theta_{\rm col}^{\rm rel} - \theta_{\rm int}|$ where $\theta_{\rm int}$ is still that of equation \eqref{intersection_line} and the resulting vertical offset is denoted $\Delta z_{\rm rel}$. The pericenter shift is only affected by differential precession. Like $\xi$ in equation \eqref{pericenter_shift}, the function $\kappa$ of equation \eqref{differential_precession} is largely independent of $\alpha$. Adopting $\alpha =0$ without loss of generality, $\Delta \theta$ and $\Delta \omega_{\rm d}$ have different signs only if the effective phase angle obeys $\mp \arctan(1/\sqrt{2}) \leq \phi+\delta \leq \mp \arctan(\sqrt{2})$. In this interval, the pericenter shift $\Delta \theta$ is negative and positive in the prograde and retrograde case, respectively. This means that, in the prograde case, it is possible to have $\Delta \theta_{\rm rel}\geq0$ while $\Delta \theta\leq0$, making the relativistic condition of positive pericenter shift slightly more likely than the Keplerian one. The contrary is true in the retrograde case, where the relativistic condition is less likely. However, these opposite contributions cancel out when the equal likelihood of a binary to be prograde and retrograde is accounted for. 
As a result, the overall probability of $\Delta \theta_{\rm rel}\geq0$ remains exactly 50\%, as in the Keplerian calculation. More importantly, the increase in pericenter shift can result in $\Delta \theta_{\rm rel} \geq \Delta \theta_{\rm nc}$, which prevents streams collision if the time delay additionally satisfies $\Delta t \leq t_{\rm min}$ (grey region in the $\Delta \theta$-$\Delta t$ plane of Fig. \ref{planes}). The true anomaly of the collision point given by equation \eqref{collision_angle_relativistic} increases with both differential and net precession. This affects the condition $\Delta z_{\rm rel} \leq H$ for streams collision by changing the angle $\psi_{\rm rel}$. Interestingly, there exists a range of binary separations for which relativistic precession is such that this angle reaches a minimum of $\psi_{\rm rel} = 0$ for specific values of $f$. This means that the collision point is located exactly on the plane intersection line for certain elements of the second stream. For these elements, the condition $\Delta z_{\rm rel} \leq H$ is satisfied despite the inclination of the orbital planes, which makes streams collision more likely. Note that this effect is not to be expected in the Keplerian regime where the minimal value of $\psi$, obtained when $f=0$, is always larger than zero (equation \ref{sinpsimin}). These relativistic effects modify the probability of streams collision defined in equation \eqref{collision_probability} by changing the domain of integration. This relativistic probability is shown in Fig. \ref{probability} with a black dashed line for $\beta=1$ and a streams width evolving homologously ($m=1$). The condition of positive pericenter shift still imposes an upper bound of $50\%$. In addition, it is mostly constrained by the condition $\Delta \theta_{\rm rel} \leq \Delta \theta_{\rm nc}$, which leads to the reduction indicated by the green dotted line. Its evolution with $a$ originates mostly from the fact that the differential precession angle increases with binary separation as $\Delta \omega_{\rm d} \propto a^{1/2}$ (equation \ref{differential_precession}). For $a \lesssim 100 R_{\star}$, this precession does not affect the pericenter shift since $\Delta \omega_{\rm d} \lesssim \Delta \theta$. This shift is therefore limited to the blue region in the $\Delta \theta$-$\Delta t$ plane of Fig. \ref{planes}. Because the size of the grey region decreases with increasing $a \propto \Delta t^{2/3}$, the condition $\Delta \theta_{\rm rel} \approx \Delta \theta \leq \Delta \theta_{\rm nc}$ for streams collision becomes more likely, making $P_{\rm col}$ larger. The probability reaches a peak at $P_{\rm col} \approx 44\%$ but starts to decrease again for $a \gtrsim 100 R_{\star}$. This is due to $\Delta \omega_{\rm d} \gtrsim \Delta \theta$, which implies that the pericenter shift is not limited to the blue region anymore. Consequently, $\Delta \theta_{\rm rel}>\Delta \theta_{\rm nc}$ for some binary angles, which decreases $P_{\rm col}$. This decrease stops for $a \gtrsim 5000 R_{\star}$ (red segment) where the collision probability reaches a plateau at $P_{\rm col}\approx 26\%$. This is because $\Delta t \geq t_{\rm min}$, which makes the condition $\Delta \theta_{\rm rel} \leq \Delta \theta_{\rm nc}$ irrelevant, as can be seen from a sharp increase of the green dotted line. On top of this overall evolution, the relativistic probability features two breaks at the edges of the interval $20 \lesssim a/R_{\star} \lesssim 1000$ (blue segment). 
Inside this interval, it coincides with the green dotted line, meaning that the condition $\Delta z_{\rm rel} \leq H$ is satisfied for all binary angles. This strong reduction of the vertical offset results from $\psi_{\rm rel}\approx 0$, for which, as mentioned above, the location of the collision point determined by relativistic precession coincides with that of the plane intersection line. For binary separations $a \gtrsim 1000R_{\star}$, this condition becomes more constraining because relativistic precession makes the collision point move away from the plane intersection line, increasing $\psi_{\rm rel}$. Fig. \ref{probability} also shows the evolution of the relativistic probability for $\beta=2$ (dashed orange line) and a streams width confined by self-gravity ($m=1/4$, dashed brown line) keeping the other parameters fixed. Increasing the penetration factor leads to a global decrease of the probability since the condition $\Delta \theta_{\rm rel}>\Delta \theta_{\rm nc}$ becomes more constraining owing to an increase of the differential precession angle as $\Delta \omega_{\rm d} \propto \beta^{3/2}$ (equation \ref{differential_precession}). A similar decrease is seen for $m=1/4$ because the condition $\Delta z_{\rm rel} \leq H$ for streams collision is less likely to be satisfied owing to the thinner profile of the streams. Nevertheless, the probability is larger than in the Keplerian calculation owing to the reduction of $\psi_{\rm rel}$ by apsidal precession. In both cases, the probability also features two peaks that are due to $\Delta \omega_{\rm d} \approx \Delta \theta$ and $\Delta t\geq t_{\rm min}$ at short and wide binary separations, respectively. The probability of streams collision is therefore significant except for near-contact binaries and can be as high as $P_{\rm col} \approx 44\%$ in the most favourable configuration. \begin{table*} \begin{threeparttable} \centering \caption{Parameters of the different models and values of the quantities involved in the conditions for streams collision. The range of ratios $\Delta z/H$ corresponds to the parameter $f$ covering the interval $0\leq f\leq1$.} \begin{tabular}{@{}lclcrcc@{}} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{$\alpha/\pi$} & \multirow{2}{*}{Rotation} & \multirow{2}{*}{$\Delta t/t_{\rm min}$} & \multirow{2}{*}{$\Delta \theta/\pi$} & $\Delta z/H$ & $\Delta z/H$ \\ & & & & & ($m=1$) & ($m=1/4$) \\ \hline R$\alpha$0 & 0 & retrograde & 0.09 & 0.0057 & 0 & 0 \\ P$\alpha$0 & 0 & prograde & 0.09 & $-$0.0057 & 0 & 0 \\ R$\alpha$0.1 & 0.1 & retrograde & 0.085 & 0.0057 & 0.39-2.4 & 12-78 \\ \hline \end{tabular} \label{param} \end{threeparttable} \end{table*} \section{Numerical simulations} \label{simulations} We now present numerical simulations carried out in order to demonstrate the validity of the conditions for streams collision derived in Section \ref{conditions} and to study the hydrodynamics of the interaction. The simulations focus on a double TDE produced by the previous tidal separation of a binary star, for which the initial conditions leading to a collision have been determined in Section \ref{binary_likelihood}. For simplicity, we assume that the gas evolves in a Keplerian gravity and do not investigate the relativistic effects presented above. \subsection{Setup} We simulate a double TDE produced by a previous binary separation. The two stars have solar masses and radii. 
The black hole has a mass $M_{\rm h}=10^6\, \mathrm{M}_{\hbox{$\odot$}}$ and the binary separation is $a=1000\, \mathrm{R}_{\hbox{$\odot$}}$. The numerical simulation is initialized with the binary center of mass located at the tidal separation radius\footnote{A proper treatment of the tidal separation process would require starting the numerical calculation far away from $R^{\rm sep}_{\rm t}$ where the tidal force on the binary is negligible compared to the gravitational attraction between the stars. Instead, the binary center of mass is initially positioned exactly at the tidal separation radius. This choice is made so that the binary angles defined in Section \ref{binary_likelihood} can be used directly to initialize the calculation, which facilitates the comparison between our analytical predictions and the numerical computation.} and following a parabolic orbit with penetration factor $\beta=2$. This choice of pericenter is made so that the two stellar components enter the tidal disruption radius despite the small angular momentum kick experienced during their separation. The binary angles and the direction of rotation specify the initial positions and velocities of each star, which are computed using equations \eqref{position_com}, \eqref{velocity_com}, \eqref{position_kick} and \eqref{velocity_kick}. The yaw and phase angles are set to $\delta=\phi=0$ for all models. The first two models both have a pitch angle $\alpha=0$, implying that the streams produced by the disruptions move on the same plane. These two models only differ by the direction of rotation, which is retrograde for model R$\alpha$0 and prograde for model P$\alpha$0. The time delay between the passage at pericenter of the stars is $\Delta t = 0.09 \, t_{\rm min}$ for both models according to equation \eqref{time_delay}. The pericenter shift is $\Delta \theta = 0.0057 \pi >0$ for model R$\alpha$0 and $\Delta \theta = -0.0057 \pi <0$ for model P$\alpha$0, as obtained using equation \eqref{pericenter_shift}. A collision between streams is therefore only expected in the former case. The last model R$\alpha$0.1 also assumes a retrograde rotation. Additionally, it has $\alpha=0.1 \pi$, which implies two different orbital planes for the streams. The pericenter shift is $\Delta \theta = 0.0057 \pi >0$, which is the same as for model R$\alpha$0 due to the fact that $\delta =0$. Due to the positive $\alpha$, the time delay is slightly reduced to $\Delta t = 0.085 \, t_{\rm min}$. The ratio of vertical offset to streams width induced by the inclination of orbital planes satisfies $12 \leq \Delta z/H \leq 78$ for $m=1/4$ and $0.39 \leq \Delta z/H \leq 2.4$ for $m=1$ according to equations \eqref{condition_inclination}, \eqref{inclination} and \eqref{sinpsi}. This range of values corresponds to different $f$, the lowest one being reached for $f=0$ and the largest for $f=1$. The fact that $\Delta z > H$ for most values of $f$ indicates that streams collision is expected to be weakened for model R$\alpha$0.1 compared to model R$\alpha$0 due to the passage of a large fraction of one stream above the other. Table \ref{param} summarizes the parameters used in each model along with the values of the quantities involved in the condition for streams collision. These quantities are also indicated in the $\Delta \theta$-$\Delta t$ plane of Fig. \ref{planes} for models R$\alpha$0 (orange circle) and R$\alpha$0.1 (purple triangle). 
They are located in the red hatched region, implying that, if a collision takes place, the most bound parts of each stream are expected to interact with each other. In the $\psi$-$i$ plane, the purple line corresponds to model R$\alpha$0.1 for the parameter $f$ varying between 0 (leftmost triangle) and 1 (rightmost triangle). The leftmost triangle is below the dashed line, indicating that the most bound part of the second stream is expected to collide with the first stream if the width evolves homologously, which is consistent with the fact that $\Delta z/H = 0.39<1$ for $f=0$ and $m=1$. \begin{figure*} \centering \includegraphics[width=\textwidth]{fig5.pdf} \caption{Snapshots showing the gas evolution for model R$\alpha$0 at different times $t/t_{\rm min} =$ 0, 0.07, 0.15, 0.3, 0.4 and 0.5 during the two stellar disruptions and subsequent evolution of the debris streams. The colours show the gas column density, increasing from blue to yellow as indicated in the colour bar. The black hole is represented by the white dot on the right-hand side of each panel. In the first three panels, the location of the first and second star or stream is indicated with blue and red arrows, respectively. The direction of motion of the second star is shown with a dashed white arrow in the first two panels. All panels use the same scale, indicated by the segment in the first panel, which corresponds to ten tidal disruption radii. After the disruptions, the second stream catches up with the first one owing to its positive pericenter shift. This results in a streams collision at $t/t_{\rm min} \approx 0.3$ and an associated gas expansion at later times.} \label{snap-times} \end{figure*} The trajectories of the two stars are followed with a three-body calculation performed with the code \textsc{rebound} using the IAS15 integrator \citep{rein2012,rein2015} until the first one reaches a distance of $3R^{\rm dis}_{\rm t}$ from the black hole.\footnote{\textsc{rebound} can be downloaded freely at \url{http://github.com/hannorein/rebound}.} The positions and velocities of the stars at this point are used to initialize a hydrodynamical simulation, carried out with the SPH code \textsc{phantom} \citep{price2017}. The stars are modelled by polytropic spheres with exponent $\gamma=5/3$ containing $10^5$ SPH particles that we create using the same procedure as \citet{lodato2009}. The black hole gravity is modelled with an external Keplerian potential. Self-gravity is included through a k-D tree algorithm \citep{gafton2011}. Direct summation is used to treat short-range interactions with a critical value of 0.5 in the opening angle criterion. An adiabatic equation of state is assumed for the gas thermodynamical evolution. Shocks are handled with a standard artificial viscosity prescription combined with a switch that strongly reduces its value away from shocks \citep{cullen2010}. Our simulations aim at investigating only the first revolution of the streams around the black hole. For this reason, we remove the SPH particles that come back after the disruptions within a radius of $30 R^{\rm dis}_{\rm t}$ from the black hole. The size of this region is set such that any particle that falls back enters it before reaching pericenter. Note that this area extends significantly further than the tidal disruption radius, at which gas elements are expected to come back according to angular momentum conservation. 
Removing gas at this comparatively large radius is necessary because streams collision can increase the angular momentum of a fraction of the debris resulting in an increased pericenter distance. Nevertheless, the mass of accreted gas is always negligible within the duration of the simulations. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig6.pdf} \caption{Gas distribution at $t/t_{\rm min}=0.3$ for models R$\alpha$0 (upper panel), P$\alpha$0 (middle panel) and R$\alpha$0.1 (lower panel) shown along a line of sight perpendicular to the initial binary orbital plane. The colours represent the gas column density, increasing from blue to yellow as indicated in the colour bar. In the two lowermost panels, the first and second stream are indicated by blue and red arrows, respectively. Streams collision occurs around that time for model R$\alpha$0 but is avoided for models P$\alpha$0 and R$\alpha$0.1.} \label{snap-plane} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{fig7.pdf} \caption{Gas distribution at $t/t_{\rm min}=0.3$ for models R$\alpha$0 (upper panel) and R$\alpha$0.1 (lower panel) shown along a line of sight parallel to the initial binary orbital plane. The colours represent the gas column density, increasing from blue to yellow as indicated in the colour bar. In the lowermost panel, the first and second stream are indicated by a blue and red arrow, respectively. Most of the streams avoid collision for model R$\alpha$0.1 due to the vertical offset induced by their orbital plane inclination.} \label{snap-vertical} \end{figure*} \subsection{Results} We now present the results of the SPH simulations for the three models considered.\footnote{Movies of the simulations presented in this paper are available at \url{http://www.tapir.caltech.edu/~bonnerot/double-tdes.html}.} The gas evolution is shown in Fig. \ref{snap-times} for model R$\alpha$0. The black hole is represented by a white dot on the right-hand side of each panel. The blue and red arrows in the first three panels indicate the first and second star or stream, respectively. They are otherwise difficult to identify due to their compactness. The first star is initially closer to the black hole and gets disrupted earlier than the second. The disruptions happen with penetration factors $\beta_1 \approx 1.5$ and $\beta_2 \approx 2.9$ for the first and second star. As expected, these factors differ slightly from the value $\beta=2$ of the binary center of mass due to the angular momentum kick given during the separation process. At $t/t_{\rm min} = 0.07$, both stars have been disrupted and the streams start their revolution around the black hole. The second stream is still lagging behind the first at $t/t_{\rm min} = 0.15$ but is catching up with it owing to the positive pericenter shift. A large fraction of the two streams collides at $t/t_{\rm min} \approx 0.3$, leading to an expansion of the gas distribution. At later times, the two streams have partially merged and keep orbiting the black hole, with the bound gas falling back in its vicinity while the unbound part escapes. The difference in the streams evolution between the models can be understood by looking at Fig. \ref{snap-plane}, which shows the gas distribution at a fixed time $t/t_{\rm min} = 0.3$ for models R$\alpha$0 (upper panel), P$\alpha$0 (middle panel) and R$\alpha$0.1 (lower panel). As explained above, the two gas streams collide around that time for model R$\alpha$0. The streams remain instead far apart for model P$\alpha$0 and the collision does not happen.
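The departure of the individual penetration factors from the centre-of-mass value $\beta = 2$ quoted above can be recovered with a simple estimate: assuming solar-type stars and a kick that is applied tangentially at the tidal separation radius, the specific angular momentum of each star changes by roughly the tidal separation radius times half the binary orbital velocity, and $r_{\rm p} \propto L^2$ for near-parabolic orbits. The snippet below is an order-of-magnitude check of ours, not part of the simulation analysis, and the assumption of a fully tangential kick is ours as well.
\begin{verbatim}
# Back-of-envelope estimate (ours, not from the simulation): shift of the
# per-star penetration factors caused by the angular momentum kick at binary
# separation, assuming the kick is applied tangentially.
import numpy as np

G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10      # cgs
Mh, m, a_bin, beta = 1.0e6 * Msun, 1.0 * Msun, 1000.0 * Rsun, 2.0

Rt_dis = Rsun * (Mh / m) ** (1.0 / 3.0)           # ~100 Rsun
Rt_sep = a_bin * (Mh / (2.0 * m)) ** (1.0 / 3.0)  # binary tidal separation radius
rp = Rt_dis / beta
L = np.sqrt(2.0 * G * Mh * rp)                    # parabolic specific ang. momentum
dL = Rt_sep * 0.5 * np.sqrt(G * 2.0 * m / a_bin)  # tangential kick of each star

for sign in (+1.0, -1.0):
    rp_star = (L + sign * dL) ** 2 / (2.0 * G * Mh)   # rp ~ L^2 / (2 G Mh)
    print("beta_star ~ %.1f" % (Rt_dis / rp_star))    # roughly 1.4 and 3.0
\end{verbatim}
The resulting values bracket $\beta = 2$ and are close to the $\beta_1 \approx 1.5$ and $\beta_2 \approx 2.9$ measured in the simulation.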
The absence of a collision for model P$\alpha$0 is a consequence of the negative pericenter shift, which prevents the second stream from catching up with the first one. For model R$\alpha$0.1, the second stream is able to catch up thanks to the positive pericenter shift. However, the streams do not strongly collide due to the fact that they evolve on different planes. This situation can be seen more clearly in Fig. \ref{snap-vertical} which shows the gas distribution in the vertical direction for models R$\alpha$0 (upper panel) and R$\alpha$0.1 (lower panel) at the same time of $t/t_{\rm min} = 0.3$. The streams collision makes the gas expand vertically for model R$\alpha$0. For model R$\alpha$0.1, the second stream passes above the first one, which prevents most of the debris from interacting. Nevertheless, the most bound parts of the streams undergo a mild encounter due to their smaller offset, as can be seen on the right-hand side of the lower panel of Fig. \ref{snap-vertical}. As explained above, this interaction is expected from the fact that $\Delta z/H \approx 0.39 <1$ for the most bound element of the second stream if the streams evolve homologously. This homologous evolution is caused by the deep disruption of the second star with $\beta_2 \approx 2.9$, which heats the gas at pericenter. A collision between streams is expected to heat the gaseous debris. This effect can be evaluated from Fig. \ref{internal-energy}, which shows the internal energy evolution for models R$\alpha$0 (solid black line), P$\alpha$0 (red dashed line) and R$\alpha$0.1 (blue long-dashed line). The early evolution is similar for all models with two sharp drops in thermal energy corresponding to the sequential disruptions. This evolution differs for $t/t_{\rm min} \gtrsim 0.2$ where the thermal energy increases for both models R$\alpha$0 and R$\alpha$0.1 while it keeps decreasing for model P$\alpha$0. The thermal energy increase results from the formation of shocks during the streams collision where a fraction of the gas kinetic energy is dissipated. It peaks at $E_{\rm int} \approx 10^{48} \, \rm erg$ for model R$\alpha$0 but at a lower value of $E_{\rm int} \approx 10^{46} \, \rm erg$ for model R$\alpha$0.1 owing to the smaller amount of gas involved in the collision in the latter case (see lower panel of Fig. \ref{snap-vertical}). At $t/t_{\rm min} \gtrsim 0.3$, the thermal energy decreases as the gas expands and cools. For model P$\alpha$0, there is no sharp increase in thermal energy since the streams avoid collision (see middle panel of Fig. \ref{snap-plane}). However, a slow gain in thermal energy can be seen for $t/t_{\rm min} \gtrsim 0.3$ until the end of the simulation. This is due to an interaction of the two streams near pericenter. However, the associated shocks are weak since the streams are smoothly joining each other with a small collision angle. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig8.pdf} \caption{Gas internal energy evolution for models R$\alpha$0 (black solid line), P$\alpha$0 (red dashed line) and R$\alpha$0.1 (blue long-dashed line).} \label{internal-energy} \end{figure} \section{Discussion and conclusion} \label{discussion} Several dynamical mechanisms predict TDEs happening at high rates such that two subsequent disruptions may not be independent of each other. In this paper, we investigate the possibility of streams collision resulting from such a double TDE before the debris comes back to pericenter.
We start by analytically deriving three conditions for such a collision to happen remaining agnostic about the mechanism at the origin of the TDEs. If the two streams evolve in the same orbital plane, a necessary condition for collision is a positive shift between the pericenter location of the stars, that allows the second stream to catch up with the first one despite its original time delay. However, this pericenter shift must also be lower than a critical value $\Delta \theta_{\rm nc}$ if the time delay is shorter than $t_{\rm min}$. Otherwise, collision can be avoided with the second stream passing between the most bound part of the first stream and the black hole. If the orbital planes of the streams are inclined, the collision can also be prevented with one stream passing above the other. In this case, an additional condition for streams collision is that the vertical offset induced by the plane inclination is smaller than the streams width. Using this analytical study, we compute the likelihood of streams collision for a double TDE resulting from a binary separation, treating this process as instantaneous. The collision probability is significant as long as the binary is not near-contact and reaches $P_{\rm col}\approx 44\%$ in the most favourable configuration due to an enhancement induced by relativistic precession. We then perform numerical simulations of a double TDE produced by a binary separation that confirm our analytical conditions for streams collision. If the streams collide, shocks form that result in a sharp increase in thermal energy and a subsequent expansion of the gas distribution. The fact that streams collision can start before the fallback of the most bound debris at pericenter (equation \ref{minimal_collision_time}) implies that the associated emission represents a precursor to the main flare from TDEs. This early emission could be used to better constrain theoretical models observationally. For example, its detection can help pinpoint the beginning of gas fallback at pericenter in order to get a better handle on the efficiency of disc formation. It is possible to estimate the properties of this signal from our simulations. According to Fig. \ref{internal-energy}, the strongest collision (model R$\alpha$0) leads to a burst of radiation, most likely in the optical band, lasting a few days with a luminosity of $\sim 10^{43} \, \rm erg\, s^{-1}$ if the internal energy is promptly radiated. This signal could however last up to a few months if the ratio $\Delta t/\Delta \theta$ is increased (equation \ref{maximal_collision_time}). According to \citet{mandel2015}, double tidal disruptions resulting from binary separation represent $\sim$10\% of all TDEs. Since the resulting streams have a collision probability of a few tens of percent (see Fig. \ref{probability}), we expect that a precursor is powered through this mechanism in a few percent of TDEs. Furthermore, double disruptions can be produced by other processes that increases the chance of producing this early emission. The black hole spin has been neglected in our treatment of streams collision. Its main effect is Lense-Thirring precession that causes the angular momentum of the streams to precess around the direction of the black hole spin. 
As for relativistic apsidal precession, it is convenient to decompose the Lense-Thirring precession into a net and a differential component with associated precession angles given by $\Delta \Omega_{\rm n} \approx 2^{1/2} \pi \, a_{\rm h} \, \beta^{3/2} (G M_{\rm h} / R^{\rm dis}_{\rm t} c^2)^{3/2} \approx 0.0044 \pi \, a_{\rm h} \, \beta^{3/2} \, M_6 \, m_{\star}^{1/2} \, r_{\star}^{-3/2}$ \citetext{equation 4.220a of \citealt{merritt2013}} and $\Delta \Omega_{\rm d} = (3/2) \Delta \Omega_{\rm n} \Delta \beta/\beta$. Here, $a_{\rm h}$ denotes the black hole spin parameter. Net precession modifies the position of the plane intersection line. However, the fact that $\Delta \Omega_{\rm n} \ll \Delta \omega_{\rm n}$ implies that the impact on the angle $\psi_{\rm rel}$ is negligible compared to that induced by apsidal precession. Differential precession changes the inclination angle between the two orbital planes. This modification is nevertheless unable to significantly change the maximal collision probability since it corresponds to $\psi_{\rm rel} \approx 0$ as imposed by apsidal precession. This analysis shows that streams collisions in double TDEs following binary separation are likely less sensitive to Lense-Thirring precession than stream self-crossing shocks occurring after the gas falls back to pericenter. Our prediction of a bright TDE precursor associated with streams collision encourages observational attempts to search back for emission, most likely in the optical, during the weeks to months preceding a TDE detection. Such a discovery would, for the first time, set the timescales of the phases following the stellar disruptions, putting unique constraints on long-standing questions. \section*{Acknowledgments} CB and EMR acknowledge the help from NOVA. The research of CB was funded in part by the Gordon and Betty Moore Foundation through Grant GBMF5076. We also thank Yuri Levin, Giuseppe Lodato, Ilya Mandel and Re'em Sari for insightful discussions. Finally, we acknowledge the use of SPLASH \citep{price2007} for generating the figures of Section \ref{simulations}. \bibliographystyle{mnras}
\section{Counting zigzag covers} \label{sec:countingzigzags} In this section we discuss existence and asymptotic behaviour of zigzag covers. We start with a simple observation. \begin{proposition} \label{prop:necessaryconditionexistence} If $Z_g(\lambda,\mu) > 0$, then the number of odd elements which appear an odd number of times in $\lambda$ plus the number of odd elements which appear an odd number of times in $\mu$ is $0$ or $2$. \end{proposition} \begin{proof} It follows immediately from the three types of tails allowed in a zigzag cover that this number is at most $2$. Since it is even, the statement follows. \end{proof} For existence statements we need slightly stronger assumptions. It is useful to introduce some notation for partitions first. The number of parts in a partition is called the \emph{length} and denoted by $l(\lambda)$. The sum of the parts is denoted by $|\lambda|$. A partition $\lambda = (\lambda_1, \dots, \lambda_n)$ is called \emph{even} or \emph{odd} if all parts $\lambda_i$ are even or odd numbers, respectively. We denote \begin{align*} 2 \lambda &:= (2 \lambda_1, \dots, 2 \lambda_n), \\ \lambda^2 &:= (\lambda_1, \lambda_1, \dots, \lambda_n, \lambda_n), \\ (\lambda, \mu) &:= (\lambda_1, \dots, \lambda_n, \mu_1, \dots, \mu_m), \end{align*} where $\mu = (\mu_1, \dots, \mu_m)$ is a second partition. Any partition $\lambda$ can be uniquely decomposed into \[ \lambda = (2 \lambda_{2e}, 2 \lambda_{2o}, \lambda_{o,o}^2, \lambda_0) \] such that \begin{itemize} \item the partition $\lambda_{2e}$ is even, \item the partitions $\lambda_{2o}$ and $\lambda_{o,o}$ are odd, \item the partition $\lambda_{0}$ is odd and does not have any multiple entries. \end{itemize} We call this the \emph{tail decomposition} of $\lambda$. Note that $l(\lambda_0) \equiv |\lambda| \mod 2$. In terms of this notation, the necessary condition of \autoref{prop:necessaryconditionexistence} can be stated as $l(\lambda_0, \mu_0) = 0,2$. Note that $l(\lambda_0, \mu_0)$ is even since $l(\lambda_0, \mu_0) \equiv |\lambda| + |\mu| = 2d \mod 2$. \begin{proposition} \label{prop:sufficientconditionexistence} If $l(\lambda_0, \mu_0) \leq 2$ and $(\lambda_{o,o}, \mu_{o,o}) \neq \emptyset$, then there exist zigzag covers of that type, i.e.\ $Z_g(\lambda,\mu) > 0$. \end{proposition} \begin{remark} \label{rem:combconstructionofcovers} In the following proof, we use a more combinatorial description of tropical covers. Let $C$ be a graph of genus $g$ with only one- and three-valent vertices. Fix an orientation on $C$ with no oriented loops and pick positive integer weights for the edges of $C$ such that the balancing condition holds (i.e., for each inner vertex, the sum of outgoing weights is equal to the sum of incoming weights). Finally, fix a set ${\mathcal P} = \{x_1 < \dots < x_r\} \subset {\mathbf R}$, where $r$ is the number of inner vertices of $C$. Then for any choice of total order $v_1, \dots, v_r$ on the inner vertices of $C$ extending the partial order induced by the orientation, there exists a unique tropical cover $\varphi : C \to \mathbf{TP}^1$ such that for each edge the orientation agrees with the orientation of $\mathbf{TP}^1$ (from $-\infty$ to $+\infty$) under $\varphi$, the given weights agree with the ones induced by $\varphi$, and $\varphi(v_i) = x_i$. \end{remark} \begin{proof} We distinguish the two cases $l(\lambda_0, \mu_0) = 0$ and $l(\lambda_0, \mu_0) = 2$. Case $l(\lambda_0, \mu_0) = 2$: We first construct the underlying abstract graph of a zigzag cover.
We start from a string graph $S$ and attach tails to $S$: A tail of the first, second or third type for each part of $(\lambda_{o,o}, \mu_{o,o})$, $(\lambda_{2o}, \mu_{2o})$ or $(\lambda_{2e}, \mu_{2e})$, respectively. The order of appearance of these tails on $S$ can be chosen arbitrarily. Moreover, since $(\lambda_{o,o}, \mu_{o,o}) \neq \emptyset$, there is at least one tail of the first type, on which we place $g$ balanced cycles. We obtain a graph $C$ of genus $g$. and the next step is to equip $C$ with an orientation and weights. By construction the leaves of $C$ are labelled by parts of $\lambda$ and $\mu$ (the leaves of $S$ are assigned to the parts of $(\lambda_0, \mu_0)$). We use each part as weight of the corresponding leaf. Moreover, leaves associated to $\lambda$ are oriented towards the inner vertex while leaves associated to $\mu$ are oriented towards the end. By the balancing condition, there is a unique extension of the orientation and the weight function to all of $C$. Note that indeed all these weights on $S$ turn out to be odd (and, in particular, non-zero). Then, by \autoref{rem:combconstructionofcovers}, any choice of total order on the inner vertices extending the orientation order gives rise to a zigzag cover $\varphi : C \to \mathbf{TP}^1$. Case $l(\lambda_0, \mu_0) = 0$: If $l(\lambda_{o,o}, \mu_{o,o}) > 1$, we proceed as before with the following changes. We remove an arbitrary part $\alpha$ from $(\lambda_{o,o}, \mu_{o,o})$ and use it as weight for both leaves of $S$ (instead of a tail of weight $(\alpha, \alpha)$). If $l(\lambda_{o,o}, \mu_{o,o}) = 1$, we need to distinguish two subcases. If $g=0$, we can proceed as in the previous step. This could leave us with tails only of the third type, but since no balanced cycles have to be added, this is not a problem. If $g>0$, we instead do the following. We construct the abstract graph $C$ as before, but now we glue the two ends of $S$ to a single inner edge $e$, obtaining a graph $C'$ with a (closed) string $S'$. We assign weights to the tail leaves as before. Then again by the balancing condition any choice of an odd weight $\omega(e)$ (and orientation) for the gluing edge $e$ fixes an orientation and the weights on all of $C'$. Moreover, there is a (finite) range of choices for $\omega(e)$ such that $S'$ does not turn into an oriented loop and we can proceed as before. \end{proof} \begin{remark} \label{rem:exactconditionexistence} The exact conditions for existence of zigzag covers are as follows. We have $Z_g(\lambda,\mu) > 0$ if and only if $l(\lambda_0, \mu_0) \leq 2$ and none of the following three cases occurs: \begin{itemize} \item $g=0$, $(\lambda_{o,o}, \lambda_0, \mu_{o,o}, \mu_0) = \emptyset$ and $l(\lambda_{2e}, \lambda_{2o}, \mu_{2e}, \mu_{2o}) > 3$. \item $g=1$, $(\lambda_{2o}, \lambda_{o,o}, \mu_{2o}, \mu_{o,o}) = \emptyset$ and $(\lambda_0, \mu_0) \neq \emptyset$. \item $g>1$, $(\lambda_{2o}, \lambda_{o,o}, \mu_{2o}, \mu_{o,o}) = \emptyset$. \end{itemize} \end{remark} Our next goal is to give lower bounds for $Z_g(\lambda,\mu)$. Let $\varphi$ be a zigzag cover. We denote by \\ \begin{tabular}{rp{.8\textwidth}} $a_l, a_r$ & the number of tails of type $o,o$ to the left and right, \\ $b_l, b_r$ & the number of bends/orientation changes in $S$, with the peak of the bend pointing to the left and right, \\ $c$ & the number of unbent vertices of $S$, \\ $g_l, g_r$ & the number of symmetric cycles located on tails to the left and right, \end{tabular} \\ respectively. 
If $S$ is a vertex, we use the values $b_l = b_r = 0$, $c=1$. \begin{definition} \label{def:unmixed} A zigzag cover $\varphi$ is \emph{unmixed} if its simple branch points $x_1 < \dots < x_s$, grouped in segments of length $a_l, 2g_l, b_l, c, b_r, 2g_r, a_r$, occur as images of \begin{itemize} \item the symmetric fork vertices of tails of type $o,o$ to the left, \item the vertices of symmetric cycles located on tails to the left, \item bends of $S$ with peaks to the left, \item unbent vertices of $S$, \item the symmetric pattern for bends/tails to the right. \end{itemize} \end{definition} \newfigure{unmixedzigzag} {A schematic picture of an unmixed zigzag cover with its various groups of branch points in ${\mathbf R}$. Any permutation of the vertices on top of the branch points belonging to $a_l, b_l, b_r, a_r$, respectively, gives rise to another unmixed zigzag cover.} \begin{lemma} \label{prop:extendpartialorder} Given a weighted and oriented graph $C$ as constructed in \autoref{prop:sufficientconditionexistence}, there are at least \[ a_l! \cdot a_r! \cdot b_l! \cdot b_r! \] possibilities to turn $C$ into an unmixed zigzag cover $\varphi : C \to \mathbf{TP}^1$. \end{lemma} \begin{proof} We are interested in the number of total orders on the vertices of $C$ which extend the partial order given by the orientation and, when grouped from least to greatest in segments of length $a_l, 2g_l, b_l, c, b_r, 2g_r, a_r$, produces the groups of vertices described in \autoref{def:unmixed}. It is obvious that such orders exist. Moreover, since the partial order restricts to the empty order on the subgroups of vertices corresponding to $a_l, b_l, b_r, a_r$, respectively, any permutation on these subgroups provides another valid ``unmixed'' order. This proves the statement. \end{proof} If $\lambda = (\lambda_1, \lambda_2)$, we write $\lambda_1 = \lambda \setminus \lambda_2$. We write $k \in \lambda$ if $k$ is a part of the partition $\lambda$. Given a number $k \in {\mathbf N}$ and two partitions $\lambda, \mu$, consider all (finite) sequences whose first entry is $k$ and all subsequent entries are obtained by either adding a part of $\lambda$ or subtracting a part of $\mu$, using each part exactly once. We denote by $\Bends(k, \lambda, \mu)$ the maximal number of sign changes that occur in such a sequence. \begin{theorem} \label{prop:lowerbound} Fix $\lambda, \mu, g$ such that $l(\lambda_0, \mu_0) \leq 2$ and $(\lambda_{o,o}, \mu_{o,o}) \neq \emptyset$. We set $\lambda_{\text{tail}} = (\lambda_{2e}, \lambda_{2o}, \lambda_{o,o})$ and $\mu_{\text{tail}} = (\mu_{2e}, \mu_{2o}, \mu_{o,o})$. \begin{enumerate} \item If $k \in \lambda_0$, then \[ Z_g(\lambda,\mu) \geq l(\lambda_{o,o})! \cdot l(\mu_{o,o})! \cdot \lfloor \Bends/2 \rfloor! \cdot \lceil \Bends/2 \rceil!, \] where $\Bends = \Bends(k, 2 \lambda_{\text{tail}}, 2 \mu_{\text{tail}})$. \item If $(\lambda_0, \mu_0) = \emptyset$, $g=0$ and $k \in \lambda_{o,o}$, then \[ Z_g(\lambda,\mu) \geq (l(\lambda_{o,o})-1)! \cdot l(\mu_{o,o})! \cdot \lfloor \Bends/2 \rfloor! \cdot \lceil \Bends/2 \rceil!, \] where $\Bends = \Bends(k, 2 (\lambda_{\text{tail}} \setminus (k)), 2 \mu_{\text{tail}})$. \item If $(\lambda_0, \mu_0) = \emptyset$, $g>0$, then \[ Z_g(\lambda,\mu) \geq l(\lambda_{o,o})! \cdot l(\mu_{o,o})! \cdot (\Bends/2)!^2, \] where $\Bends = \Bends(1, 2 \lambda_{\text{tail}}, 2 \mu_{\text{tail}})$. 
\end{enumerate} \end{theorem} \begin{proof} The three cases are in correspondence with the three types of constructions in \autoref{prop:sufficientconditionexistence}. Here, we may assume by symmetry that $\lambda_0 \neq \emptyset$ whenever $(\lambda_0, \mu_0) \neq \emptyset$ and $\lambda_{o,o} \neq \emptyset$ whenever $(\lambda_{o,o}, \mu_{o,o}) \neq \emptyset$. To recall, we summarize the three cases in terms of $S$: \begin{itemize} \item The leaves of $S$ are weighted by $(\lambda_0, \mu_0)$. \item The leaves of $S$ are weighted by a part $k \in \lambda_{o,o}$, \item $S$ is a loop. \end{itemize} In each case, $\Bends$ appearing in the statement is the maximal number of bends we can create in $S$ in the corresponding construction. In the first two cases this is straightforward. In the loop case, consider a weighted oriented graph $C$ with maximal number of bends on $S$. Among the edges of $S$ choose an edge of minimal weight. Following $S$ in the direction of $e$ we subtract $\omega(e) - 1$ from the weights of the edges oriented coherently, and add $\omega(e) - 1$ to the edges oriented oppositely. In this way we obtain a new balanced weight function on $C$ with the same number of bends, but an edge of weight $1$. Hence the maximal number of bends is given by $\Bends(1, 2 \lambda_{\text{tail}}, 2 \mu_{\text{tail}})$. Back to all three cases, we pick a graph $C$ reaching the maximal number of bends $\Bends$. It follows that $b_l = \lfloor \Bends/2 \rfloor$ and $b_r = \lceil \Bends/2 \rceil$, respectively. One should note here that $\Bends$ is even if $S$ is a loop, or otherwise, at least one of the ends of $S$ maps to $-\infty$ by our convention from above. Moreover, the number of tails of type $o,o$ is $a_l = l(\lambda_{o,o})$ and $a_r = l(\mu_{o,o})$, except for the second case, where the ends of $S$ occupy a pair of tail weights and hence $a_l = l(\lambda_{o,o})-1$. The statement then follows from \autoref{prop:extendpartialorder}. \end{proof} The lower bounds from \autoref{prop:lowerbound} can be used to derive statements about the asymptotic growth of the numbers $Z_g(\lambda,\mu)$. \begin{definition} Given $g \in {\mathbf N}$ and partitions $\lambda, \mu$ with $|\lambda| = |\mu|$, we set \begin{align*} z_{\lambda, \mu, g}(m) &= Z_g((\lambda, 1^{2m}),(\mu, 1^{2m})), \\ h^{\mathbf C}_{\lambda, \mu, g}(m) &= H^{\mathbf C}_g((\lambda, 1^{2m}),(\mu, 1^{2m})). \end{align*} \end{definition} \begin{proposition} \label{prop:asymptoticszigzag} Fix $g \in {\mathbf N}$, partitions $\lambda, \mu$ with $|\lambda| = |\mu|$ and assume that $l(\lambda_0, \mu_0) \leq 2$. Then there exists $m_0 \in {\mathbf N}$ such that \[ z_{\lambda, \mu, g}(m) \geq (m-m_0)!^4 \] for all $m > m_0$. \end{proposition} \begin{proof} Let $\lambda', \mu'$ be some even partitions of the same integer and $k$ an odd integer. We consider the sequence \[ \Bends(m) = \Bends(k, (\lambda', 1^{2m}),(\mu', 1^{2m})). \] We claim that there exists $m_0 \in {\mathbf N}$ such that $\Bends(m) \geq 2(m-m_0)$ for $m \geq m_0$. Indeed, for sufficiently large $m_0$ we can assume that there exists a maximal sequence (for $B(m_0)$) containing an entry $\pm 1$. For $m = m_0 + 1$ we insert a piece of the form $\pm 1 \to \mp 1 \to \pm 1$ at the position of $\pm 1$, and so on, showing that $\Bends(m) \geq 2(m-m_0)$. It follows that $\lfloor \Bends(m)/2 \rfloor, \lceil \Bends(m)/2 \rceil \geq m-m_0$ for $m \geq m_0$. Note that $l((\lambda, 1^{2m})_{o,o}) = l(\lambda_{o,o}) + m$ and $l((\mu, 1^{2m})_{o,o}) = l(\mu_{o,o}) + m$. 
By use of \autoref{prop:lowerbound} and the previous argument we conclude that \begin{align} \nonumber z_{\lambda, \mu, g}(m) &\geq m! \cdot m! \cdot (m - m_0)! \cdot (m - m_0)! \\ &\geq (m-m_0)!^4 \nonumber \end{align} for all $m \geq m_0$. \end{proof} \begin{theorem} \label{thm:logasymptotics} Fix $g \in {\mathbf N}$, partitions $\lambda, \mu$ with $|\lambda| = |\mu|$ and assume that $l(\lambda_0, \mu_0) \leq 2$. Then $z_{\lambda, \mu, g}(m)$ and $h^{\mathbf C}_{\lambda, \mu, g}(m)$ are logarithmically equivalent. More precisely, we have \[ \log z_{\lambda, \mu, g}(m) \sim 4 m \log m \sim \log h^{\mathbf C}_{\lambda, \mu, g}(m). \] \end{theorem} \begin{remark} Let $d$ and $r$ be the degree and number of simple branch points, respectively, of the covers contributing to $z_{\lambda, \mu, g}(m)$. Note that the ratio of growth is $4m \sim 2d \sim r$ and therefore \[ 4m \log m \sim 2d \log d \sim r \log r. \] The variables $d,r$ are more commonly used in the literature, e.g.\ \cite{ItZv, HiRa}. \end{remark} \begin{remark} When choosing all simple branch points to lie on the positive half axis (i.e., for $p=r$), the logarithmic growth of real double Hurwitz numbers like $H^{\mathbf R}_0(\lambda,\mu; r)$ can be computed from \cite[Section 5, e.g.\ Theorem 5.7]{GuMaRa}. \end{remark} \begin{proof} In consideration of \autoref{thm:zigzaglowerbounds}, it suffices to show that $\log z_{\lambda, \mu, g}(m)$ grows at least as fast and $\log h^{\mathbf C}_{\lambda, \mu, g}(m)$ grows at most as fast as $4 m \log m$, respectively. The estimate regarding $\log z_{\lambda, \mu, g}(m)$ follows from \autoref{prop:asymptoticszigzag} since $\log ((m-m_0)!) \sim m \log m$. The estimate for $\log h^{\mathbf C}_{\lambda, \mu, g}(m)$ (probably classical) can be deduced from the following argument. Let \[ H^{\mathbf C}_g(d) := H^{\mathbf C}_g((1^d), (1^d)) \] be the complex Hurwitz numbers associated to covers with only simple branch points. The asymptotics of these numbers is computed in \cite[Equation 5]{DuYaZa} as \[ H^{\mathbf C}_g(d) \sim C_g \left( \frac{4}{e} \right)^d d^{2d - 5 + \frac{9}{2} g}. \] Here, $C_g$ is a constant only depending on $g$. It follows that \[ \log H^{\mathbf C}_g(d) \sim 2 d \log d. \] We finish by showing $H^{\mathbf C}_g(\lambda', \mu') \leq H^{\mathbf C}_g(d)$ for arbitrary partitions $\lambda', \mu'$ of $d$. Then \[ \log h^{\mathbf C}_{\lambda, \mu, g}(m) \leq \log H^{\mathbf C}_g(|\lambda| + 2m) \sim 4 m \log m \] and the claim follows. To prove $H^{\mathbf C}_g(\lambda', \mu') \leq H^{\mathbf C}_g(d)$ we can use monodromy representations. Let ${\mathcal G}$ and ${\mathcal H}$ be the set of monodromy representations corresponding to $H^{\mathbf C}_g(d)$ and $H^{\mathbf C}_g(\lambda', \mu')$. In particular, $|{\mathcal G}| = d! \cdot H^{\mathbf C}_g(d)$ $|{\mathcal H}| = d! \cdot H^{\mathbf C}_g(\lambda', \mu')$. Let ${\mathcal F} \subset {\mathcal G}$ be the subset of representations such that \begin{itemize} \item the product of the first $d - l(\lambda')$ transpositions gives a permutation of cycle type $\lambda'$, \item the product of the last $d - l(\mu')$ transpositions gives a permutation of cycle type $\mu'$. \end{itemize} We consider the map ${\mathcal F} \to {\mathcal H}$ given by using as \enquote{special} permutations the products of transpositions as suggested by the definition of ${\mathcal F}$. 
Since any permutation of cycle type $\lambda'$ and $\mu'$ can be factored into a product of $d - l(\lambda')$ and $d - l(\mu')$ transpositions, respectively, the map ${\mathcal F} \to {\mathcal H}$ is surjective and hence $|{\mathcal H}| \leq |{\mathcal G}|$. This proves the claim. \end{proof} The argument can be adapted to prove analogous statements for different types of asymptotics. For example, set \begin{align*} z'_{\lambda, \mu, g}(m) &= Z_g((\lambda, 2^{m}),(\mu, 1^{2m})), \\ z''_{\lambda, \mu, g}(m) &= Z_g((\lambda, 2^{m}),(\mu, 2^{m})), \end{align*} and assume in the $z''$ case that $(\lambda_{o,o}, \lambda_0, \mu_{o,o}, \mu_0) \neq \emptyset$ or $g>0$. A straightforward adaptation of \autoref{prop:asymptoticszigzag} shows the following. \begin{proposition} \label{prop:asymptoticszigzag2} Under the above assumptions, there exists $m_0 \in {\mathbf N}$ such that \begin{align*} z'_{\lambda, \mu, g}(m) &\geq (m-m_0)!^3, \\ z''_{\lambda, \mu, g}(m) &\geq (m-m_0)!^2, \end{align*} for all $m > m_0$. \end{proposition} The corresponding series of complex Hurwitz numbers are denoted by \begin{align*} h'_{\lambda, \mu, g}(m) &= H^{\mathbf C}_g((\lambda, 2^{m}),(\mu, 1^{2m})), \\ h''_{\lambda, \mu, g}(m) &= H^{\mathbf C}_g((\lambda, 2^{m}),(\mu, 2^{m})). \end{align*} \begin{theorem} \label{thm:logasymptotics2} Under the above assumptions, we have \begin{align*} \log z'_{\lambda, \mu, g}(m) &\sim 3 m \log m \sim \log h'_{\lambda, \mu, g}(m), \\ \log z''_{\lambda, \mu, g}(m) &\sim 2 m \log m \sim \log h''_{\lambda, \mu, g}(m). \end{align*} \end{theorem} \begin{remark} The statements can be unified by the observation that the logarithmic growth is equal to \[ r \log r \] for all $z,z',z'',h,h',h''$, where $r$ denotes the number of simple branch points. \end{remark} \begin{proof} We can proceed exactly as for \autoref{thm:logasymptotics} replacing \autoref{prop:asymptoticszigzag} by \autoref{prop:asymptoticszigzag2}. It remains to prove that for fixed $k$ the logarithmic growth of \begin{align*} H'_g(m) &:= H^{\mathbf C}_g((1^k, 2^m), (1^{k +2m})), \\ H''_g(m) &:= H^{\mathbf C}_g((1^k, 2^m), (1^k,2^m)) \end{align*} is bounded by $3 m \log m$ and $2 m \log m$, respectively. Adapting the previous argument, let ${\mathcal G}, {\mathcal H}', {\mathcal H}''$ be the sets of monodromy representations corresponding to $H^{\mathbf C}_g(k+2m)$, $H'_g(m)$ and $H''_g(m)$, respectively. Let ${\mathcal F}',{\mathcal F}'' \subset {\mathcal G}$ denote the subsets of representations for which the first $m$ transpositions (the first $m$ and the last $m$ transpositions, respectively) are pairwise disjoint. We have surjections ${\mathcal F}' \to {\mathcal H}'$ and ${\mathcal F}'' \to {\mathcal H}''$ whose fibers have size $m!$ and $(m!)^2$, since the factors in a product of $m$ pairwise disjoint transpositions can be permuted freely. Hence \begin{align*} \log H'_g(m) &\leq \log (H^{\mathbf C}_g(k+2m) / m!) \sim 4 m \log m - m \log m = 3 m \log m, \\ \log H''_g(m) &\leq \log (H^{\mathbf C}_g(k+2m) / (m!)^2) \sim 4 m \log m - 2m \log m = 2 m \log m, \end{align*} which proves the claim. \end{proof} \section{Introduction} \label{sec:introduction} Let $H^{\mathbf C}_g(\lambda, \mu)$ denote the \emph{complex} (i.e., usual) double Hurwitz numbers. They count holomorphic maps $\varphi$ from a compact Riemann surface $C$ of genus $g$ to $\mathbf{CP}^1$ with $r$ given simple branch points and two additional branch points of ramification profile $\lambda, \mu$. Here, $r = l(\lambda) + l(\mu) + 2g - 2$.
A \emph{real structure} $\iota$ for $\varphi : C \to \mathbf{CP}^1$ is an anti-holomorphic involution on $C$ such that $\varphi \circ \iota = \text{conj} \circ \varphi$. The \emph{real} double Hurwitz numbers $H^{\mathbf R}_g(\lambda, \mu; p)$ count tuples $(\varphi, \iota)$ of holomorphic maps as above together with a real structure. Here, we assume that the two special branch points are $0, \infty \in \mathbf{RP}^1$, all branch points lie in $\mathbf{RP}^1$, and $0 \leq p \leq r$ denotes the number of simple branch points on the positive half axis of $\mathbf{RP}^1 \setminus \{0,\infty\}$. In this paper, we define numbers $Z_g(\lambda, \mu)$ such that \[ Z_g(\lambda,\mu) \leq H^{\mathbf R}_g(\lambda,\mu; p) \leq H^{\mathbf C}_g(\lambda,\mu) \] and \[ Z_g(\lambda,\mu) \equiv H^{\mathbf R}_g(\lambda,\mu; p) \equiv H^{\mathbf C}_g(\lambda,\mu) \mod 2 \] for all $0 \leq p \leq r$. The main results of this paper state that, under certain conditions, these lower bounds are non-zero and have logarithmic asymptotic growth equal to $H^{\mathbf C}_g(\lambda, \mu)$ (see \autoref{prop:sufficientconditionexistence}, \autoref{prop:lowerbound}, \autoref{thm:logasymptotics}). The definition of $Z_g(\lambda, \mu)$ is based on the tropical computation of real double Hurwitz numbers in \cite{MaRa:TropicalRealHurwitzNumbers}. Note that the real double Hurwitz numbers $H^{\mathbf R}_g(\lambda, \mu; p)$ indeed depend on $p$, or in other words, on the position of the branch points. This is the typical behaviour of enumerative problems over ${\mathbf R}$ (instead of ${\mathbf C}$). It is therefore of interest to find lower bounds for real enumerative problems (with respect to the choice of conditions, the branch points here) and use these bounds to prove existence of real solutions or to compare the number of real and complex solutions of the problem. Such investigations have been carried out e.g.\ for real Schubert calculus \cite{So:RealEnumerative, MuTa}, counts of algebraic curves in surfaces passing through points \cite{We, ItKhSh:LogarithmicEquivalence} (see also \cite{GeZi:Construction}) and counts of polynomials/simple rational functions with given critical levels \cite{ItZv, HiRa}. In most of these examples, a lower bound is constructed by defining a \emph{signed} count of the real solutions (i.e., each real solution is counted with $+1$ or $-1$ according to some rule) and showing that this signed count is invariant under change of the conditions. In this paper, we prove similar results for double Hurwitz numbers \emph{without} the explicit constructions of a signed count. We hope that this rather simple approach can be extended to other situations using sufficiently nice combinatorial descriptions of the counting problem. One way of defining $Z_g(\lambda, \mu)$ is as follows: It is the number of those tropical covers which contribute to the tropical count of $H^{\mathbf R}_g(\lambda, \mu; p)$ with \emph{odd} multiplicity. We prove in \autoref{thm:zigzaglowerbounds} that these numbers provide lower bounds for $H^{\mathbf R}_g(\lambda,\mu; p)$ as explained above. Next, we give exact numerical criteria on $\lambda, \mu$ in order for $Z_g(\lambda,\mu)$ to be non-zero, proving existence of real Hurwitz covers in these case (see \autoref{prop:necessaryconditionexistence}, \autoref{prop:sufficientconditionexistence} and \autoref{rem:exactconditionexistence}). We study the asymptotic behaviour of real Hurwitz numbers when the degree is increased and only simple ramification points are added. 
For example, consider the sequences \begin{align*} z_{\lambda, \mu, g}(m) &= Z_g((\lambda, 1^{2m}),(\mu, 1^{2m})), \\ h^{\mathbf C}_{\lambda, \mu, g}(m) &= H^{\mathbf C}_g((\lambda, 1^{2m}),(\mu, 1^{2m})), \end{align*} where $(\lambda, 1^{2m})$ stands for adding $2m$ ones to $\lambda$. We prove: \begin{theorem}[\autoref{thm:logasymptotics}] \label{thm:logasymptoticsIntro} Under the existence assumptions for zigzag covers, the two sequences are logarithmically equivalent, \[ \log z_{\lambda, \mu, g}(m) \sim 4 m \log m \sim \log h^{\mathbf C}_{\lambda, \mu, g}(m). \] \end{theorem} This is consistent with the best known results for Welschinger invariants and the Hurwitz-type counts of polynomials and rational functions mentioned before. For better comparison, let us recall the main asymptotic statements from \cite{ItZv, HiRa}. Let $S_{\text{pol}}(\lambda_1,\dots,\lambda_k)$ and $S_{\text{rat}}(\lambda_1,\dots,\lambda_k)$ denote the signed counts of real polynomials $f(x) \in {\mathbf R}[x]$ and real simple rational functions $\frac{f(x)}{x-p}$, $f \in {\mathbf R}[x], p \in {\mathbf R}$, respectively, with prescribed critical levels and ramification profiles as defined in \cite{ItZv, HiRa}. Set \begin{align*} s_\text{pol}(m) &= S_{\text{pol}}((\lambda_1, 1^{2m}),\dots,(\lambda_k, 1^{2m})), \\ s_\text{rat}(m) &= S_{\text{rat}}((\lambda_1, 1^{2m}),\dots,(\lambda_k, 1^{2m})). \end{align*} Denote by $h^{\mathbf C}_\text{pol}(m)$ and $h^{\mathbf C}_\text{rat}(m)$ the corresponding counts of complex polynomials/complex rational functions. \begin{theorem}[{\cite[Theorem 5]{ItZv}}] Assume that each partition $\lambda_i$ satisfies the properties: \begin{enumerate} \item[(O)] At most one odd number appears an odd number of times in $\lambda_i$. \item[(E)] At most one even number appears an odd number of times in $\lambda_i$. \end{enumerate} Then we have \[ \log s_\text{pol}(m) \sim 2 m \log m \sim \log h^{\mathbf C}_\text{pol}(m). \] \end{theorem} \begin{theorem}[{\cite[Theorem 1.3]{HiRa}}] Assume that each partition $\lambda_i$ satisfies (O) and (E) and that $d = |\lambda_i|$ is even (or, an extra parity condition). Then \[ \log s_\text{rat}(m) \sim 2 m \log m \sim \log h^{\mathbf C}_\text{rat}(m). \] \end{theorem} To compare this to our result, note that the non-vanishing assumption for \autoref{thm:logasymptoticsIntro} is satisfied if both $\lambda$ and $\mu$ satisfy (O). \paragraph*{Acknowledgements} The author would like to thank Renzo Cavalieri, Boulos El Hilany, Ilia Itenberg, Maksim Karev, Lionel Lang and Hannah Markwig for helpful discussions. Parts of this work were carried out at Institut Mittag-Leffler during my visit of the research program \enquote{Tropical Geometry, Amoebas and Polytopes}. Many thanks for the great hospitality and atmosphere! \section{Real double Hurwitz numbers} \label{sec:realDHN} Fix two integers $d > 0$, $g \geq 0$, and two partition $\lambda, \mu$ of $d$. Set $r := l(\lambda) + l(\mu) + 2g - 2$ and fix a collection of $r$ points ${\mathcal P} = \{x_1, \dots, x_r\} \subset \mathbf{CP}^1 \setminus \{0,\infty\}$. \begin{definition} A \emph{complex ramified cover} of genus $g$, type $(\lambda, \mu)$ and simply branched at ${\mathcal P}$ is a holomorphic maps $\varphi : C \to \mathbf{CP}^1$ of degree $d$ such that \begin{itemize} \item $C$ is a Riemann surface of genus $g$, \item the ramification profiles of $\varphi$ at $0$ and $\infty$ are $\lambda$ and $\mu$, respectively, \item the points in ${\mathcal P}$ are simple branch points of $\varphi$. 
\end{itemize} Let $\psi : D \to \mathbf{CP}^1$ be another complex ramified cover. An \emph{isomorphism} of complex ramified covers is an isomorphism of Riemann surfaces $\alpha : C \to D$ such that $\varphi = \psi \circ \alpha$. The \emph{complex double Hurwitz number} \[ H^{\mathbf C}_g(\lambda, \mu) = \sum_{[\varphi]} \frac1{|\Aut^{\mathbf C}(\varphi)|} \] is the number of isomorphism classes of complex ramified covers $\varphi$ of genus $g$, type $(\lambda, \mu)$ and simply branched at ${\mathcal P}$, counted with weight $1 / \Aut^{\mathbf C}(\varphi)$. \end{definition} \begin{definition} Given a complex ramified cover $\varphi : C \to \mathbf{CP}^1$, a \emph{real structure} on $\varphi$ is a antiholomorphic involution $\iota : C \to C$ such that \[ \varphi \circ \iota = \text{conj} \circ \varphi. \] The tuple $(\varphi, \iota)$ is a \emph{real ramified cover}. An isomorphism of real ramified covers $(\varphi : C \to \mathbf{CP}^1, \iota)$ and $(\psi : D \to \mathbf{CP}^1, \kappa)$ is an isomorphism of complex ramified covers $\alpha : C \to D$ such that \[ \alpha \circ \iota = \kappa \circ \alpha. \] \end{definition} \bigskip As above, fix $d > 0$, $g \geq 0$ and two partition $\lambda, \mu$ of $d$, and set $r := l(\lambda) + l(\mu) + 2g - 2$. We now fix a collection of $r$ \emph{real} points ${\mathcal P} \subset \mathbf{RP}^1 \setminus \{0,\infty\}$. We denote by ${\mathbf R}_+ \subset \mathbf{RP}^1 \setminus \{0,\infty\}$ the positive half of the real projective line and set $p := |{\mathcal P} \cap {\mathbf R}_+|$ the number of positive branch points. \begin{definition} The \emph{real double Hurwitz number} \[ H^{\mathbf R}_g(\lambda, \mu; p) = \sum_{[(\varphi, \iota)]} \frac1{|\Aut^{\mathbf R}(\varphi,\iota)|} \] is the number of isomorphism classes of real ramified covers $(\varphi, \iota)$ of genus $g$, type $(\lambda, \mu)$ and simply branched at ${\mathcal P}$, counted with weight $1/|\Aut^{\mathbf R}(\varphi,\iota)|$. \end{definition} \begin{remark} Let $\varphi : C \to \mathbf{CP}^1$ be a complex ramified cover. Let $\RS(\varphi)$ be the set of real structures for $\varphi$. The group $\Aut^{\mathbf C}(\varphi)$ acts on $\RS(\varphi)$ by conjugation. The orbits of this action are the isomorphism classes of real ramified covers supported on $\varphi$, and the stabilizer $\Stab(\iota)$ is isomorphic to the group of real automorphisms $\Aut^{\mathbf R}(\varphi,\iota)$. It follows $H^{\mathbf R}_g(\lambda, \mu; p)$ can be described alternatively as \[ H^{\mathbf R}_g(\lambda, \mu; p) = \sum_{[\varphi]} \frac{|\RS(\varphi)|}{|\Aut^{\mathbf C}(\varphi)|} \] where the sum runs through all isomorphism classes of \emph{complex} ramified covers. Hence $H^{\mathbf R}_g(\lambda, \mu; p)$ is the number of complex ramified covers which admit a real structure, counted with multiplicity $|\RS(\varphi)| / |\Aut^{\mathbf C}(\varphi)|$. Note also that we can construct an injection $\RS(\varphi) \hookrightarrow \Aut^{\mathbf C}(\varphi)$ by fixing any $\iota_0 \in \RS(\varphi)$ and setting $\iota \mapsto \iota \circ \iota_0$. Hence $0 \leq |\RS(\varphi)| / |\Aut^{\mathbf C}(\varphi)| \leq 1$. \end{remark} \begin{remark} \label{con:noautom} Non-trivial automorphisms only occur under rather particular circumstances. 
In fact, it is easy to check that the only complex ramified covers with $\Aut^{\mathbf C}(\varphi) \neq 1$ are $\varphi_d : z \mapsto z^d$ (with $\Aut^{\mathbf C}(\varphi_d) = {\mathbf Z} / d {\mathbf Z}$) and compositions $\varphi_d \circ \psi$ where $\psi : C \to \mathbf{CP}^1$ is a hyperelliptic map (here, $\Aut^{\mathbf C}(\varphi_d \circ \psi) = {\mathbf Z} / 2 {\mathbf Z}$). We can exclude this by assuming $r>0$ and $\{\lambda, \mu \} \not\subset \{(2k), (k, k)\}$. It follows $|\RS(\varphi)| = 0,1$, so in this case real and complex double Hurwitz numbers are actual counts of covers and, in particular, \[ 0 \leq H^{\mathbf R}_g(\lambda, \mu; p) \leq H^{\mathbf C}_g(\lambda, \mu). \] \end{remark} \begin{remark} For the sake of completeness, let us briefly discuss the cases with non-trivial automorphisms. \begin{enumerate} \item $g=0, \lambda=\mu=(d)$: The only ramified cover in this situation is given by $x \mapsto x^d$. Let $\chi$ be a primitve $d$-th root of unity. There are $d$ automorphisms generated by $x \mapsto \chi x$, hence $H^{\mathbf C}_0((d), (d)) = 1/d$. There are also $d$ real structures given by $x \mapsto \chi^i \bar{x}, i=0, \dots, d-1$. Hence $H^{\mathbf R}_0((d), (d); 0) = 1$. Note that for $d$ odd all real structures are isomorphic and do not have extra real automorphisms. If $d$ is even, the real structures fall into two isomorphism classes with real automorphism $x \mapsto - x$. \item $\{\lambda, \mu \} \subset \{(2k), (k, k)\}$ and $r > 0$: There exist exactly $k^{r-1}$ ramified covers with non-trivial automorphisms, and they can be described in the form $C = \{y^2 = f(x)\}, (x,y) \mapsto x^k$. The polynomial $f(x)$ is chosen such that, for any simple branch point $p_i \in {\mathbf C} \setminus \{0\}$, it has exactly one root among the $k$-th roots of $p_i$. This gives $k^r$ choices for $f$, but $k$ of them are isomorphic via $(x,y) \mapsto (\chi x,y)$ where $\chi$ is a $k$-th primitive root. Each of these covers has exactly one extra automorphism $(x,y) \mapsto (x,-y)$, hence the total contribution to $H^{\mathbf C}_g(\lambda, \mu)$ is $k^{r-1} / 2$. \\ If $k$ is odd, there is exactly one choice to make $f(x)$ real (choosing only real roots), and we have two real structures $(x,y) \mapsto (\bar{x},\bar{y})$ and $(x,y) \mapsto (\bar{x},-\bar{y})$ (alternatively, use $y^2 = \pm f(x)$). Hence, this cover contributes $1$ to $H^{\mathbf R}_g(\lambda, \mu; p)$. If $k$ is even, then $f(x)$ can be chosen real only if $p=0$ or $p=r$. Under this assumption, we now have $2^{r-1}$ choices for $f$ (choosing among the two real roots each time, up to switching all of them). There are two real structures as before and therefore the total contribution to $H^{\mathbf R}_g(\lambda, \mu; 0/r)$ is $2^{r-1}$. \end{enumerate} It follows that $H^{\mathbf R}_g(\lambda, \mu; p) \leq H^{\mathbf C}_g(\lambda, \mu)$ holds as long as we exclude: \begin{itemize} \item $g= 0, \{\lambda, \mu \} \subset \{(d), (\frac{d}{2}, \frac{d}{2})\}$, \item $g>0, \{\lambda, \mu \} \subset \{(d), (\frac{d}{2}, \frac{d}{2})\}$, and $d=2$ or $d=4$. \end{itemize} \end{remark} \section{Tropical double Hurwitz numbers} \label{sec:tropicalDHN} In this section we recall the tropical graph counts from \cite{CaJoMa:TropicalHurwitzNumbers, BeBrMi:TropicalOpenHurwitzNumbers, GuMaRa, MaRa:TropicalRealHurwitzNumbers} which compute complex resp.\ real Hurwitz numbers. 
We include this summary here for the reader's convenience, using the occasion to adapt the definitions to the case of double Hurwitz numbers and introducing a different convention regarding multiplicities of real tropical covers. Throughout the following, the word \emph{graph} stands for a finite connected graph $G$ without two-valent vertices. The one-valent vertices of $G$ are called \emph{ends}, the higher-valent vertices are \emph{inner vertices}. The edges adjacent to an end are called \emph{leaves}, other edges are \emph{inner edges}. We use the same letter $G$ to denote the topological space obtained by gluing intervals $[0,1]$ according to the graph structure. We denote by $G^\circ$ the space obtained by removing the one-valent vertices, called the \emph{inner part} of $G$. The \emph{genus} of $G$ is the first Betti number $g(G) := b_1(G)$. A \emph{(smooth, compact) tropical curve} $C$ is a graph together with a length $l(e) \in (0, +\infty)$ assigned to any inner edge $e$ of $C$. This induces a complete inner metric on $C^\circ$ such that each half-open leaf is isometric to $[0, +\infty)$ and each inner edge $e$ is isometric to $[0, l(e)]$. There is one exception: The graph $\mathbf{TP}^1$, which consists of a single edge with two one-valent endpoints, in which case $C^\circ$ is isometric to ${\mathbf R}$. By use of this construction, the data of lengths $l(e)$ is equivalent to the data of a complete inner metric on $C^\circ$. A \emph{piecewise ${\mathbf Z}$-linear map} between two tropical curves $C,D$ is a continuous map $\varphi : C \to D$ such that for any edge $e \subset C$ and any pair $x,y \in e \cap C^\circ$ there exists $\omega \in {\mathbf Z}$ such that \[ \dist(\varphi(x), \varphi(y)) = \omega \dist(x, y). \] In particular, $\varphi(C^\circ) \subset D^\circ$. By continuity it follows that $\omega=:\omega(e)$ only depends on $e$; we call it the \emph{weight} of $e$ (under $\varphi$). An \emph{isomorphism} $\varphi : C \to D$ is a bijective piecewise ${\mathbf Z}$-linear map whose inverse is also piecewise ${\mathbf Z}$-linear. Equivalently, isomorphisms between $C$ and $D$ can be described by isometries between $C^\circ$ and $D^\circ$. Given a piecewise ${\mathbf Z}$-linear map, by momentarily allowing two-valent vertices, we can find subdivisions of $C$ and $D$ such that for any edge $e$ of $C$, $\varphi(e)$ is either a vertex or an edge of $D$. For any vertex $x$ of $C$ and edge $e'$ of $D$ such that $\varphi(x) \in e'$, we can define \begin{equation} \label{eq:localdegree} \deg_{e'}(\varphi,x) := \sum_{\substack{e \text{ edge of } C \\ x \in e, \varphi(e) = e'}} \omega(e). \end{equation} The map $\varphi$ is a \emph{tropical morphism} if $\text{deg}(\varphi,x) := \text{deg}_{e'}(\varphi,x)$ does not depend on $e'$, for $x$ fixed. In this case, the sum \[ \deg(\varphi) := \sum_{\substack{e \text{ edge of } C \\ \varphi(e) = e'}} \omega(e) \] is also independent of the choice of $e'$ and is called the \emph{degree} of $\varphi$. Note that isomorphisms are tropical morphisms of degree one. Fix two integers $d > 0$, $g \geq 0$, and two partitions $\lambda, \mu$ of $d$. Set $r := l(\lambda) + l(\mu) + 2g - 2$ and fix $r$ points ${\mathcal P} \subset {\mathbf R}$. We assume $r > 0$, i.e., we exclude the exceptional case $g=0, \lambda=\mu=(d)$.
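For covers of $\mathbf{TP}^1$, the condition of being a tropical morphism amounts to the balancing condition at inner vertices already used in \autoref{rem:combconstructionofcovers}: with respect to the orientation induced from $\mathbf{TP}^1$, the total incoming and outgoing weights at every inner vertex agree. Purely as an illustration (the encoding of the weighted, oriented graph below is ad hoc and not part of the formal development), this discrete part of the data can be verified by a short script:
\begin{verbatim}
# Sketch (illustration only, ad hoc encoding): check the balancing condition
# for a weighted, oriented graph underlying a tropical cover of TP^1, and read
# off its degree and the ramification profiles.  Each edge is a triple
# (tail, head, weight), oriented from -infinity towards +infinity; end vertices
# are labelled "in:*" (over -infinity) and "out:*" (over +infinity).
from collections import defaultdict

def check_cover(edges):
    w_in, w_out = defaultdict(int), defaultdict(int)
    for tail, head, w in edges:
        assert w > 0, "weights must be positive integers"
        w_out[tail] += w
        w_in[head] += w
    inner = {v for v in set(w_in) | set(w_out)
             if not v.startswith(("in:", "out:"))}
    for v in inner:
        assert w_in[v] == w_out[v], "balancing fails at vertex %s" % v
    lam = sorted((w for t, _, w in edges if t.startswith("in:")), reverse=True)
    mu = sorted((w for _, h, w in edges if h.startswith("out:")), reverse=True)
    assert sum(lam) == sum(mu)
    return sum(lam), lam, mu

# genus-0 example of degree 4 with lambda = (3, 1) and mu = (2, 2)
edges = [("in:1", "v1", 3), ("in:2", "v1", 1), ("v1", "v2", 4),
         ("v2", "out:1", 2), ("v2", "out:2", 2)]
print(check_cover(edges))        # -> (4, [3, 1], [2, 2])
\end{verbatim}
Only the discrete data enters this check; the metric on $C$ and the positions of the branch points play no role here.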
\begin{definition} A \emph{tropical cover} of genus $g$, type $(\lambda, \mu)$ and simply branched at ${\mathcal P}$ is a tropical morphism $\varphi : C \to \mathbf{TP}^1$ of degree $d$ such that \begin{itemize} \item $C$ is a tropical curve of genus $g$, \item $\varphi^{-1}({\mathbf R}) = C^\circ$, or equivalently, $\varphi$ is non-constant on leaves, \item $\lambda = (\omega(e), e \in L_{-\infty})$ and $\mu = (\omega(e), e \in L_{+\infty})$, where $L_{\pm \infty}$ denotes the set of leaves such that $\pm \infty \in \varphi(e)$, respectively, \item each $x \in {\mathcal P}$ is the image of an inner vertex of $C$. \end{itemize} Let $\psi : D \to \mathbf{TP}^1$ be another such tropical cover. An \emph{isomorphism} of the tropical covers is an isomorphism $\alpha : C \to D$ of tropical curves such that $\varphi = \psi \circ \alpha$. The \emph{complex multiplicity} of $\varphi$ is \[ \mult^{\mathbf C}(\varphi) := \frac{1}{|\Aut(\varphi)|} \prod_{\substack{e \text { inner} \\ \text{edge of } C}} \omega(e). \] \end{definition} \bigskip The following properties are easy to check (see e.g.\ \cite[Section 5]{CaJoMa:TropicalHurwitzNumbers}). Given $g, (\lambda, \mu)$ and ${\mathcal P}$, there is a finite number of isomorphism classes of tropical covers of that type. Moving the points $x_1, \dots, x_r$ changes the metric structure of these covers, but not their combinatorial or weight structure. Moreover, any tropical cover $\varphi : C \to \mathbf{TP}^1$ satisfies the following properties: \begin{itemize} \item The curve $C$ is three-valent, i.e., all vertices are either ends or three-valent. \item The map $\varphi$ is non-constant on every edge. In particular, $\mult^{\mathbf C}(\varphi) \neq 0$. \item The curve $C$ has $l(\lambda) + l(\mu)$ ends and $r$ inner vertices, one in the preimage of each $x \in {\mathcal P}$. \item All automorphisms of $\varphi$ are generated by symmetric cycles or symmetric forks of $\varphi$. A symmetric cycle\fshyp{}fork is a pair of inner edges\fshyp{}leaves, respectively, which share endpoints, have equal weights and the same image under $\varphi$. \end{itemize} \begin{theorem}[\cite{CaJoMa:TropicalHurwitzNumbers}] \label{thm:complexcorrespondence} The complex Hurwitz number $H^{\mathbf C}_g(\lambda, \mu)$ is equal to \[ H^{\mathbf C}_g(\lambda, \mu) = \sum_{[\varphi]} \mult^{\mathbf C}(\varphi), \] where the sum runs through all isomorphism classes $[\varphi]$ of tropical covers of genus $g$, type $(\lambda, \mu)$ and simply branched at ${\mathcal P} \subset {\mathbf R}$. \end{theorem} We will now present the corresponding statement for \emph{real} double Hurwitz numbers. It is convenient to use slightly different definitions than in \cite{MaRa:TropicalRealHurwitzNumbers}; see \autoref{rem_differenceOlderPaper} for a comparison. Let $\varphi : C \to \mathbf{TP}^1$ be a tropical cover. An edge is \emph{even} or \emph{odd} if its weight is even or odd, respectively. For a symmetric cycle or fork $s$, we denote by $\omega(s)$ the weight of one of its edges, and $s$ is \emph{even} or \emph{odd} if $\omega(s)$ is even or odd, respectively. Moreover, we will use the following notation. \begin{itemize} \item $\SCF(\varphi)$ is the set of symmetric cycles and symmetric \emph{odd} forks. \item $\SC(\varphi) \subset \SCF(\varphi)$ is the set of symmetric cycles. \item For $T \subset \SCF(\varphi)$, $C \setminus T^\circ$ is the subgraph of $C$ obtained by removing the interior of the edges contained in the cycles\fshyp{}forks of $T$.
\item $\EI(\varphi)$ is the set of even inner edges in $C \setminus \SCF(\varphi)^\circ$, i.e., those which are not contained in a symmetric cycle. \end{itemize} \newfigure{PosVertices}{The four types of positive vertices. Odd edges are drawn in black, even edges in colours. Dotted edges are part of a symmetric fork or cycle contained in $T$.} \newfigure{NegVertices}{The four types of negative vertices.} \begin{definition} Let $\varphi : C \to \mathbf{TP}^1$ be a tropical cover. A \emph{colouring} $\rho$ of $\varphi$ consists of a subset $T_\rho := T \subset \SCF(\varphi)$ and the choice of a colour \emph{red} or \emph{green} for each component of the subgraph of even edges of $C \setminus T^\circ$. The tuple $(\varphi, \rho)$ is a \emph{real tropical cover}. An \emph{isomorphism} of real tropical covers is an isomorphism of tropical covers which respects the colouring. The \emph{real multiplicity} of a real tropical cover is \begin{equation} \label{eq:realmult} \mult^{\mathbf R}(\varphi, \rho) = 2^{|\EI(\varphi)| - |\SCF(\varphi)|} \prod_{s \in \SC(\varphi)} \mult_T(s), \end{equation} where \begin{equation} \label{eq:multsymmetriccyclefork} \mult_T(s) := \begin{cases} \omega(s) & s \in T, \\ 4 & s \notin T, s \text{ even}, \\ 1 & s \notin T, s \text{ odd}. \end{cases} \end{equation} Given a real tropical cover, a branch point $x_i \in {\mathcal P}$ is \emph{positive} or \emph{negative} if it is the image of a three-valent vertex as displayed in \autoref{PosVertices} or \autoref{NegVertices}, respectively, up to reflection along a vertical line. This induces a splitting of ${\mathcal P} = {\mathcal P}_+ \sqcup {\mathcal P}_-$ into positive and negative branch points. \end{definition} \bigskip Fix $d,g$, $(\lambda, \mu)$ and ${\mathcal P} \subset {\mathbf R}$ as before. We now additionally fix a sign for each point in ${\mathcal P}$. In other words, we fix $0 \leq p \leq r$ and choose a splitting ${\mathcal P} = {\mathcal P}_+ \sqcup {\mathcal P}_-$ with $|{\mathcal P}_+| = p$. \begin{theorem}[\cite{MaRa:TropicalRealHurwitzNumbers}] \label{thm:realcorrespondence} The real Hurwitz number $H^{\mathbf R}_g(\lambda, \mu; p)$ is equal to \[ H^{\mathbf R}_g(\lambda, \mu; p) = \sum_{[(\varphi,\rho)]} \mult^{\mathbf R}(\varphi,\rho), \] where the sum runs through all isomorphism classes $[(\varphi, \rho)]$ of real tropical covers of genus $g$, type $(\lambda, \mu)$, and with positive and negative branch points given by ${\mathcal P} = {\mathcal P}_+ \sqcup {\mathcal P}_-$. \end{theorem} \begin{remark} \label{rem_differenceOlderPaper} The present definitions differ from \cite{MaRa:TropicalRealHurwitzNumbers} where $T$ was allowed to contain even symmetric forks as well. Since this choice does not affect the sign of the adjacent vertex (see second and fourth vertex in \autoref{PosVertices} resp.\ \autoref{NegVertices}), this leads to a factor of $2$ in the number of possible colourings for each such fork. We compensate this by multiplying the real multiplicity from \cite{MaRa:TropicalRealHurwitzNumbers} by the same factor. This follows from \[ |\Aut(\varphi)| = 2^{|S(\varphi)| + k}, \] where $k$ is the number of even symmetric forks. The present convention describes somewhat larger packages of real ramified covers and is more convenient in the discussion that follows.
\end{remark} \section{Zigzag covers} \label{sec:zigzagcovers} In this section we will focus on real tropical covers with odd multiplicity $\mult^{\mathbf R}(\varphi, \rho)$ and use them to establish a lower bound for the numbers $H^{\mathbf R}_g(\lambda, \mu; p)$ for $0 \leq p \leq r$. We restrict our attention to the automorphism-free case, i.e., we assume $r>0$ and $\{\lambda, \mu \} \not\subset \{(2k), (k, k)\}$ from now on (cf.\ \autoref{con:noautom}). The philosophy behind our approach is as follows: On one hand, in real enumerative geometry, lower bounds for the counts in question are typically established by introducing a \emph{signed} count and showing that this alternative count is invariant under change of the continuous parameters of the given problem (as long as chosen generically), see e.g.\ \cite{We,ItZv}. On the other hand, a real tropical cover $\varphi : C \to \mathbf{TP}^1$ corresponds to a certain package of $\mult^{\mathbf R}(\varphi, \rho)$ real ramified covers. Therefore, from the tropical point of view, the easiest conceivable notion of signs is to ask for maximal cancellation in the tropical packages, meaning that the signed count of the package of ramified covers tropicalizing to $\varphi$ is \[ \begin{cases} 0 & \mult^{\mathbf R}(\varphi, \rho) \text{ even}, \\ 1 & \mult^{\mathbf R}(\varphi, \rho) \text{ odd}, \end{cases} \] respectively. In the following, we show that this approach indeed provides an invariant and analyse under which conditions this lower bound is non-trivial. The first step is to prove some properties of $\mult^{\mathbf R}(\varphi, \rho)$ and to express the condition of having odd multiplicity in combinatorial terms. \begin{definition} Let $\varphi : C \to \mathbf{TP}^1$ be a tropical cover. The tropical cover $\varphi^\text{red} : C^\text{red} \to \mathbf{TP}^1$ obtained by replacing each symmetric cycle or odd fork (i.e., all $s \in \SCF(\varphi)$) by an edge or leaf of weight $2 \omega(s)$, respectively, is the \emph{reduced} tropical cover of $\varphi$. \end{definition} In general, $\varphi^\text{red}$ has a smaller branch locus ${\mathcal P}' \subset {\mathcal P}$. Note that $\SCF(\varphi^\text{red}) = \emptyset$ and hence $\mult^{\mathbf R}(\varphi^\text{red}) = 2^{|\EI(\varphi^\text{red})|}$ for any colouring. \begin{lemma} \label{multreducedcover} For any tropical cover $\varphi$ we have $|\EI(\varphi^\text{red})| = |\EI(\varphi)| - |\SCF(\varphi)|$. \end{lemma} \begin{proof} From our assumptions $r>0$ and $\{\lambda, \mu \} \not\subset \{(2k), (k, k)\}$ it follows that $C^\text{red} \neq \mathbf{TP}^1$. Therefore, $C^\text{red}$ contains at least one inner vertex and all its edges are isometric either to $[0,l]$ or $[0,\infty]$. Let $\varphi^\text{red} =: \varphi_0, \varphi_1, \dots, \varphi_n := \varphi$ denote the sequence of tropical covers obtained by reinserting the cycles\fshyp{}forks of $C$, one by one. Since in each step a new even inner edge is created, the difference $|\EI(\varphi_i)| - |\SCF(\varphi_i)|$ is constant throughout the process. Moreover $\SCF(\varphi^\text{red}) = \emptyset$ by construction, which proves the claim. \end{proof} \begin{lemma} \label{multintegeroddeven} For any real tropical cover $(\varphi, \rho)$ the multiplicity $\mult^{\mathbf R}(\varphi,\rho)$ is an integer whose parity is independent of the colouring $\rho$. \end{lemma} \begin{proof} By \autoref{eq:multsymmetriccyclefork} we have $\mult_T(s) \equiv \omega(s) \mod 2$ and, in particular, the parity of $\mult_T(s)$ does not depend on the colouring.
Moreover $|\EI(\varphi)| - |\SCF(\varphi)| \geq 0$ by \autoref{multreducedcover}. Hence both claims follow from the definition of $\mult^{\mathbf R}(\varphi,\rho)$ in \autoref{eq:realmult}. \end{proof} Given a tropical curve $C$, a \emph{string} $S$ in $C$ is a connected subgraph such that $S \cap C^\circ$ is a closed submanifold of $C^\circ$. In other words, $S$ is either a simple loop or a simple path with endpoints in $C \setminus C^\circ$. \hyphenation{sub-graph} Let $\varphi$ be a tropical cover. Note that any connected component of the subgraph of odd edges is a string in $C$. This follows from the definition of tropical morphisms, cf.\ \autoref{eq:localdegree}, which implies that at each inner vertex of $C$ the number of odd edges is either $0$ or $2$. \begin{definition} A \emph{zigzag cover} is a tropical cover $\varphi : C \to \mathbf{TP}^1$ such that there exists a subset $S \subset C \setminus \SCF(\varphi)$ with the following properties: \begin{itemize} \item $S$ is either a string of odd edges or consists of a single inner vertex, \item the connected components of $C \setminus S$ are of the form depicted in \autoref{zigzagends}. Here, all occurring cycles and forks are symmetric and of odd weight. \end{itemize} \end{definition} \newfigure{zigzagends}{The admissible tails for zigzag covers. The properties of $S$ (turning or not) are not important here. In the first two cases the number of symmetric cycles can be anything including zero. In the third case, no cycles or forks are allowed.} \begin{remark} In \autoref{zigzagends} as well as in the following, we will use the variables $o, o_1, o_2, \dots$ for odd integers and $e, e_1, e_2, \dots$ for even integers, respectively. \end{remark} \begin{remark} Obviously, the set $S$ in the definition of zigzag cover is unique. Moreover, note that the case where $S$ is a single vertex only occurs for special ramification profiles. More precisely, we need $\lambda \in \{(e), (o,o)\}$ and $\mu\in \{(e_1,e_2),(e_1, o_2, o_2),(o_1, o_1, o_2, o_2)\}$, or vice versa. \end{remark} \begin{proposition} \label{prop:zigzagcoversoddcovers} The real tropical cover $(\varphi, \rho)$ is of odd multiplicity if and only if $\varphi$ is a zigzag cover. Moreover, in this case the multiplicity can be expressed as \[ \mult^{\mathbf R}(\varphi,\rho) = \prod_{s \in \SC(\varphi) \cap T} \omega(s). \] \end{proposition} \begin{proof} By what has been said so far it follows that $\varphi$ has odd multiplicity if and only if $|\EI(\varphi)| - |\SCF(\varphi)| = 0$ and $\varphi$ does not contain even symmetric cycles. Zigzag covers obviously satisfy these properties, so it remains to show the other implication. By \autoref{multreducedcover} the assumption $|\EI(\varphi)| - |\SCF(\varphi)| = 0$ implies that $\varphi^\text{red}$ does not contain even inner edges. Since $C^\text{red}$ is connected, it follows that there is at most one connected component of odd edges in $C^\text{red}$. Therefore $C^\text{red}$ is either an even tripod (three even leaves meeting in one inner vertex) or consists of a single odd component $S$ with some even leaves attached to it. By construction $\varphi$ can be obtained from $\varphi^\text{red}$ by inserting symmetric cycles and symmetric odd forks in the even leaves of $C^\text{red}$. Since $C$ does not contain even symmetric cycles, this leads exactly to the three types of tails displayed in \autoref{zigzagends}.
\end{proof} \begin{proposition} \label{prop:zigzagworksforallsigns} Let $\varphi$ be a zigzag cover simply branched at ${\mathcal P}$ and choose an arbitrary splitting ${\mathcal P} = {\mathcal P}_+ \sqcup {\mathcal P}_-$ into positive and negative branch points. Then there exists a unique colouring $\rho$ of $\varphi$ such that the real tropical cover $(\varphi, \rho)$ has positive and negative branch points as required. \end{proposition} \begin{proof} Let $v \in S$ be the vertex from which a given tail $Y$ emanates. Note that if $S = \{v\}$, the vertex cannot be part of a symmetric fork since this would imply $\{\lambda, \mu\} = \{(2k), (k,k)\}$. Hence, the colour rules from \autoref{PosVertices} and \autoref{NegVertices} impose a unique colouring around $v$ as follows. All even edges are coloured in red if $\varphi(v) \in {\mathcal P}_-$ and the two edges on the same side of $v$ are both odd (i.e., the string $S$ bends at $v$), or if $\varphi(v) \in {\mathcal P}_+$ and at least one of the two edges on the same side of $v$ is even. In the two opposite cases, all even edges around $v$ are coloured in green. The next vertex $w$ on $Y$, if it exists, splits the tail into an odd symmetric cycle or fork $s$. We include $s$ in $T$ if the incoming even edge is red and $\varphi(v) \in {\mathcal P}_+$ or if the incoming even edge is green and $\varphi(v) \in {\mathcal P}_-$. Otherwise, we set $s \notin T$. The next vertex $u$ on $Y$, if it exists, closes up an odd symmetric cycle $s$. We colour the outgoing even edge in red if $s \in T$ and $\varphi(v) \in {\mathcal P}_+$ or if $s \notin T$ and $\varphi(v) \in {\mathcal P}_-$. Again, in both cases this process describes the unique local colouring around $v$ compatible with ${\mathcal P} = {\mathcal P}_+ \sqcup {\mathcal P}_-$. Therefore, by continuing this process, we arrive at a compatible colouring of $Y$ and hence of all of $C$, and uniqueness follows. \end{proof} \begin{definition} \label{def:zigzagnumber} The \emph{zigzag number} $Z_g(\lambda,\mu)$ is the number of zigzag covers of genus $g$, type $(\lambda, \mu)$ and simply branched at ${\mathcal P} \subset {\mathbf R}$. \end{definition} It is easy to check that $Z_g(\lambda,\mu)$ does not depend on the choice of ${\mathcal P} \subset {\mathbf R}$, see e.g.\ \autoref{rem:combconstructionofcovers}. \begin{theorem} \label{thm:zigzaglowerbounds} Fix $g$, $(\lambda, \mu)$ and $0 \leq p \leq r$ as before. Then the number of real ramified covers is bounded from below by the number of zigzag covers and they have the same parity, i.e.\ \[ Z_g(\lambda,\mu) \leq H^{\mathbf R}_g(\lambda,\mu; p) \leq H^{\mathbf C}_g(\lambda,\mu), \] \[ Z_g(\lambda,\mu) \equiv H^{\mathbf R}_g(\lambda,\mu; p) \equiv H^{\mathbf C}_g(\lambda,\mu) \mod 2. \] \end{theorem} \begin{proof} The statements involving $Z_g(\lambda,\mu)$ and $H^{\mathbf R}_g(\lambda,\mu; p)$ follow from \autoref{thm:realcorrespondence} together with \autoref{prop:zigzagcoversoddcovers} and \autoref{prop:zigzagworksforallsigns}. The inequality $H^{\mathbf R}_g(\lambda,\mu; p) \leq H^{\mathbf C}_g(\lambda,\mu)$ is explained in \autoref{con:noautom}, while $Z_g(\lambda,\mu) \equiv H^{\mathbf C}_g(\lambda,\mu) \mod 2$ is proved separately in \autoref{rem:complexmultodd} below for ease of reference.
\end{proof} \begin{remark} \label{rem:complexmultodd} The first part of \autoref{prop:zigzagcoversoddcovers} holds analogously for complex multiplicities: A tropical cover $\varphi$ is of odd multiplicity $\mult^{\mathbf C}(\varphi)$ if and only if $\varphi$ is a zigzag cover. It follows that $Z_g(\lambda,\mu) \equiv H^{\mathbf C}_g(\lambda,\mu) \mod 2$ by \autoref{thm:complexcorrespondence}. To prove the claim, we copy the proof of \autoref{prop:zigzagcoversoddcovers} replacing (momentarily) $\varphi^\text{red}$ by the ``full reduction'' $\varphi^{\text{red}'}$ in which also \emph{even} symmetric forks are removed. Note that when reinserting a symmetric cycle or fork, the complex multiplicity changes by $\omega(s)^3$ or $\omega(s)$, respectively (a factor $2$ is cancelled by the automorphism). Hence, if $\mult^{\mathbf C}(\varphi)$ is odd, $C$ does not contain even symmetric cycles and forks and $\varphi^{\text{red}'}$ does not contain even inner edges. The remaining argument is as before. \end{remark} \begin{remark} This result is \enquote{optimal} in the following sense. \begin{itemize} \item In principle, we could count the zigzag covers with their multiplicities as calculated in \autoref{prop:zigzagcoversoddcovers}, but these multiplicities do depend on the colouring and hence on the signs ${\mathcal P} = {\mathcal P}_+ \sqcup {\mathcal P}_-$. In particular, there is one choice of signs for which $T = \emptyset$ and hence $\mult^{\mathbf R}(\varphi,\rho) = 1$. \item No other covers contribute for every choice of sign distribution, as the next lemma shows. \end{itemize} \end{remark} \begin{remark} \label{rem:RefinedInvariants} It is possible to define (several) \emph{refined} invariants $R_g(\lambda, \mu) \in {\mathbf Z}[q^{\pm}]$ in the sense of \cite{BlGoeFockSpaces,BlGoe}. These are counts of tropical covers with polynomial multiplicities such that the specializations \begin{align*} R_g(\lambda, \mu)(1) &= H^{\mathbf C}_g(\lambda,\mu), \\ R_g(\lambda, \mu)(-1) &= Z_g(\lambda,\mu) \end{align*} hold. To understand the properties of these refined counts is work in progress together with Boulos El Hilany and Maksim Karev. \end{remark} \begin{lemma} Let $\varphi$ be a tropical cover which admits a colouring compatible with ${\mathcal P} = {\mathcal P}_+ \sqcup {\mathcal P}_-$ for all possible splittings ${\mathcal P}_+ \sqcup {\mathcal P}_-$. Then $\varphi$ is a zigzag cover. \end{lemma} \begin{proof} Let us consider yet another version of reducing $C$, denoted by $\varphi^{\text{red}''}$, where only \emph{odd} symmetric cycles and odd symmetric forks are removed. By our previous considerations, zigzag covers can be equivalently described by the property that $\varphi^{\text{red}''}$ does not contain even inner edges. Let $\varphi$ be a non-zigzag cover and let $e$ be an even inner edge of $C^{\text{red}''}$ with endpoints $v_1, v_2$. In $C$, the edge $e$ corresponds to a sequence of even edges and odd symmetric cycles. We claim that the signs at the branch points of this sequence cannot be chosen independently. Indeed, fix the signs for all branch points except, say, $\varphi(v_2)$. Then the same process as in the proof of \autoref{prop:zigzagworksforallsigns}, starting at $v_1$, shows that there is a unique colouring of the sequence compatible with the chosen signs. In particular, the sign at $\varphi(v_2)$ is already determined by this data. Hence $\varphi$ does not satisfy the condition of the statement, which proves the claim.
\end{proof} \newfigure{CycleLift}{The four possible real structures corresponding to a symmetric cycle (up to switching colours). Here, the green and red curves represent parts of $\Fix(\iota)$ which are mapped to ${\mathbf R}_+$ and ${\mathbf R}_-$, respectively.} \begin{remark} Recall that a Riemann surface with real structure $(C,\iota)$ is called \emph{maximal} if $b_0(\Fix(\iota)) = g + 1$. Following the correspondence of real tropical covers and (classical) real ramified covers from \cite{MaRa:TropicalRealHurwitzNumbers}, it is easy to check that all real structures obtained from zigzag covers are maximal. Indeed, note that $S$ accounts for two connected components of the real part if it is a loop (mapping to ${\mathbf R}_+$ and ${\mathbf R}_-$, respectively), and for one connected component otherwise. Additionally, each symmetric cycle produces another connected component as displayed in \autoref{CycleLift}. Hence all real ramified covers contributing to the zigzag count are maximal. It might also be interesting to specialize to the following type of covers: A real ramified cover $(\psi : C \to \mathbf{CP}^1, \iota)$ is called \emph{of weak Harnack type} if it is maximal and if there exists a single component $H \subset \Fix(\iota)$ such that $\psi^{-1}(\{0,\infty\}) \subset H$ (i.e., the fibers of $0$ and $\infty$ are totally real and lie in a single component of $\Fix(\iota)$ --- we do not impose a condition on the order of appearance of the ramification points in $H$, however). For zigzag covers, this corresponds to only allowing the upper right lifting shown in \autoref{CycleLift} (in particular, $T = \emptyset$ and $\mult^{\mathbf R}(\varphi,\rho) = 1$). This case occurs for example if $p=0$ or $p=r$ and if all vertices of $S$ are bends. We plan to address these questions and possible connections to refined invariants (see \autoref{rem:RefinedInvariants}) in future work. \end{remark}
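For readers who wish to experiment with the quantities introduced above, the following short Python sketch evaluates the multiplicities of \autoref{eq:realmult} and \autoref{eq:multsymmetriccyclefork} for a cover whose combinatorial data is entered by hand. It is purely illustrative: the function names are ad hoc, the data is supplied manually, and no enumeration of covers is performed.
\begin{verbatim}
from fractions import Fraction

def complex_mult(inner_edge_weights, num_sym_cycles_and_forks):
    # mult^C(phi) = (1/|Aut(phi)|) * product of inner edge weights,
    # with |Aut(phi)| generated by symmetric cycles/forks.
    prod = Fraction(1)
    for w in inner_edge_weights:
        prod *= w
    return prod / 2 ** num_sym_cycles_and_forks

def real_mult(num_EI, num_SCF, cycles_in_T, cycles_not_in_T):
    # mult^R(phi, rho) following eq:realmult / eq:multsymmetriccyclefork:
    #   num_EI          number of even inner edges outside SCF(phi),
    #   num_SCF         number of symmetric cycles and odd forks,
    #   cycles_in_T     weights of symmetric cycles belonging to T,
    #   cycles_not_in_T weights of symmetric cycles not belonging to T.
    m = 2 ** (num_EI - num_SCF)
    for w in cycles_in_T:
        m *= w                         # mult_T(s) = omega(s) for s in T
    for w in cycles_not_in_T:
        m *= 4 if w % 2 == 0 else 1    # 4 for even, 1 for odd cycles outside T
    return m

# Example: one even inner edge outside SCF, one odd symmetric cycle of
# weight 3 placed in T; the multiplicity is odd, as expected for zigzag
# covers by prop:zigzagcoversoddcovers.
print(real_mult(num_EI=1, num_SCF=1, cycles_in_T=[3], cycles_not_in_T=[]))  # 3
\end{verbatim}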
\subsection{Introduction} The fractional quantum Hall (FQH) state is a topological state of matter, and therefore it is described by universal and topological properties~\cite{Prange,Wen}. Two such properties are the Hall conductance and the thermal Hall conductance~\cite{Kane1995}. In Abelian FQH states, which are described by an integer-valued symmetric matrix, termed the $K$ matrix, these two topological properties relate to the edge modes and the $K$ matrix in different ways. The Hall conductance, given by $\sigma_{H}=\nu\frac{e^{2}}{h}$, where $\nu$ is the filling fraction, is governed by the downstream charge mode~\cite{Wen,Kane1995,Stern2005,Kane1994}. However, the thermal Hall conductance, given by \begin{equation} \kappa_{H}=n_{\text{net}}\kappa_{0}T,\label{eq:K_xy} \end{equation} where $\kappa_{0}=\frac{\pi^{2}k_{B}^{2}}{3h}$ is the quantum of thermal conductance~\cite{Pendry1983} and $T$ is the temperature, is governed by the net number of edge modes, $n_{\text{net}}=n_{d}-n_{u}$, which is the difference between the number of downstream and upstream modes in the edge theory~\cite{Kane1997}. An interesting phenomenon occurs in hole-like states, which have $\frac{1}{2}<\nu<1$. Theory suggests that these states are characterized by $n_{u}\geq n_{d}$. In the case of $n_{u}>n_{d}$, when the modes are at equilibrium, charge and heat flow on different edges of the FQH liquid. In the case of $n_{u}=n_{d}$, on the other hand, the heat flow is diffusive~\cite{Kane1997,Nosiglia2018,Protopopov}. The thermal Hall conductance has only recently been measured in IQH states~\cite{Jezouin2013,Banerjee2017}, in FQH states~\cite{Banerjee2017,Banerjee2018} and in the magnetic material $\text{\ensuremath{\alpha}-RuC\ensuremath{l_{3}}}$~\cite{Kasahara2018}. However, the current setups used in these experiments cannot determine which edge carries the heat current. Hence, the thermal Hall conductance holds more information about the $K$ matrix than has been extracted so far. Furthermore, it was shown experimentally that there is energy dissipation from the edge modes of a QH state~\cite{Venkatachalam2012a}. Energy dissipation can arise from different mechanisms. Electron-electron interaction, for example, which is responsible for the appearance of the charge and neutral modes~\cite{Meir1994,LeSueur2010,Altimiras2010,Dolev2011,Bid2011,Gross2012,Gurman2012,Wang2013,Grivnin2014,Inoue2014}, may cause inter-edge-mode energy relaxation~\cite{LeSueur2010,Altimiras2010}, but may also account for energy loss to puddles in the bulk. Electron-phonon interaction may lead to energy dissipation from the edge modes~\cite{LeSueur2010,Altimiras2012,Venkatachalam2012a}. Nonetheless, the present theories~\cite{Kane1997,Simon2018,Feldman2018,Mross} neglect such contributions to the heat transport of the QH state. Such energy dissipation, which may alter the thermal Hall conductance of the state, should therefore be incorporated into the theory. In this manuscript we develop a phenomenological theory for the heat transport in the edge modes of a FQH state, which elaborates on the phenomenological equations derived in Ref.~\cite{Banerjee2017}, and on the theoretical analysis performed in Refs.~\cite{Nosiglia2018,Protopopov}, in order to include dissipation from the edge modes to an external thermal bath.
We note that recent theoretical analyses of the observation of quantized thermal Hall conductance in the magnetic material $\text{\ensuremath{\alpha}-RuC\ensuremath{l_{3}}}$~\cite{Vinkler-Aviv2018,Balents2018} take into account coupling to phonons. However, in this three dimensional system, and in the experimental set-up employed in Ref.~\cite{Kasahara2018}, energy transferred to the phonons is not lost, and is included in the measured heat current. In this manuscript, and in the experiment carried out in Refs.~\cite{Banerjee2017,Banerjee2018}, energy transferred to phonons, or any other mode of dissipation, leaks out of the system and is not measured. By solving the heat transport rate equations for small temperature difference between the two sides of the FQH liquid, we find the temperature profiles of the edge modes, as a function of the equilibration length, $\xi_{e}$, between the modes, the dissipation length, $\xi_{d}$, for energy dissipation to the external bath, and the system size, $L$. We then define and calculate the two terminal thermal conductance, and show that its measurement may strongly depend on the dissipation length. Since we are interested in exploring the effect of dissipation on the topological thermal Hall conductance, it is required that $L\gg\xi_{e}$. While $L$ usually varies between tens and hundreds of $\mu m$, it was found experimentally that typical $\xi_{e}$ can vary between $3\mu m$~\cite{LeSueur2010} and $20\mu m$~\cite{Banerjee2017}, depending on the temperature, and that $\xi_{d}$ is bounded from below by $30\mu m$~\cite{LeSueur2010}. We find that for $\xi_{d}\gg\xi_{e}$ the two terminal conductance approaches the topological and universal value of Eq.~\eqref{eq:K_xy}, whereas for $\xi_{d}\ll\xi_{e}$ the two terminal thermal conductance is not universal anymore, and is sensitive to edge reconstruction processes. Furthermore, we propose an experimental setup to test this phenomenological theory, and to determine the sign of the thermal Hall conductance. This experimental setup relies on quantum dots (QDs) which are coupled to the edges of a FQH state. By exploiting a relation between the thermoelectric coefficient and the electric conductance of the QDs~\cite{Roura-Bas2017}, we point out that they can be used as local thermometers for electrons on the FQH edge state. The local temperatures of the edge states can be deduced from a measurement of the thermoelectric current through the QDs, and thus the temperature profiles can be measured. We show that the sign of the thermal Hall conductance may be determined from the measured temperature profiles. \subsection{Phenomenological heat transport theory in Abelian FQH states} \begin{figure} \centering{}\includegraphics[bb=170bp 230bp 630bp 390bp,clip,scale=0.5]{lower_edge_schematic_modes_picture.pdf}\caption{\textbf{\label{fig:two terminal schematic}}An illustration of the lower edge of a FQH liquid in a two terminal system, with $n_{d}$ downstream modes (solid lines) and $n_{u}$ upstream modes (dashed lines), propagating in opposite directions. A temperature difference is applied between the right and left contacts, $\Delta T=T_{m}-T_{0}>0$. The black vertical arrows represent the equilibration current density between the edge modes on the same edge. The blue wiggly arrows represent the dissipation current density to the external thermal bath. } \end{figure} Without loss of generality, we assume the directions of flow of the edge modes on the lower edge of a FQH state are as depicted in Fig.
\eqref{fig:two terminal schematic}. On the upper edge $n_{u}\leftrightarrow n_{d}$ and the directions of flow of the edge modes are reversed. In this situation the FQH state has $n_{d}$ downstream modes and $n_{u}$ upstream modes, which for this analysis are assumed to obey $n_{d}\neq n_{u}$. We shall consider the case $n_{u}=n_{d}=1$ when we discuss the $\nu=\frac{2}{3}$ FQH state. The downstream modes on the lower edge are emanating from an Ohmic contact at position $x_{m}$ at temperature $T_{m}$, and the upstream modes are emanating from another Ohmic contact at position $x_{0}$ at temperature $T_{0}$. Both Ohmic contacts are at the same chemical potential. For $T_{m}>T_{0}$, the downstream modes are expected to be hotter than the upstream modes. Thus energy will be transferred from the downstream to the upstream modes, through a heat current density $j_{t}$, in order to achieve equilibration. In addition, in order to model dissipation, we assume the edge modes are coupled to an external thermal bath kept at temperature $T_{0}$, such that there are dissipation current densities $j_{b}^{d}$ and $j_{b}^{u}$ from the downstream and upstream modes to the bath. Assuming that energy is conserved in the system composed of the edge modes and the external bath, the heat currents flowing through the 1D downstream modes and the 1D upstream modes, denoted by $J_{d}$ and $J_{u}$ respectively, are described by the following rate equations: \begin{equation} \begin{aligned}J_{d}\left(x+\delta x\right) & =J_{d}\left(x\right)-j_{t}\left(x\right)\delta x-j_{b}^{d}\left(x\right)\delta x\\ J_{u}\left(x\right) & =J_{u}\left(x+\delta x\right)+j_{t}\left(x\right)\delta x-j_{b}^{u}\left(x\right)\delta x. \end{aligned} \label{eq:CurrentsEq} \end{equation} \subsubsection*{Temperature profiles} The temperature dependencies of the heat currents in Eq.~\eqref{eq:CurrentsEq} are modeled as follows. The heat current flowing in the 1D downstream and upstream edge modes is modeled as $J_{i}(x)=\frac{1}{2}\kappa_{0}n_{i}T_{i}^{2}(x)$~\cite{Kane1997}, where $i=d,u$. The equilibration current density $j_{t}$ is modeled by Newton's law of cooling, $j_{t}\left(x\right)=\frac{1}{2}\frac{\kappa_{0}}{\xi_{e}}\left[T_{d}^{2}\left(x\right)-T_{u}^{2}\left(x\right)\right]$, where $\xi_{e}$ is the relaxation length, similarly to Ref.~\cite{Banerjee2017}. The dissipation current to the external thermal bath is modeled by a temperature power law relative to the bath temperature: $j_{b}^{i}(x)=\frac{1}{2}\kappa_{0}n_{i}B(T_{i}^{\alpha}(x)-T_{0}^{\alpha})$. The exponent $\alpha$ has different values depending on the mechanism of dissipation. Energy transfer from electron to phonons, for example, may lead to $\alpha=5$, but also to smaller values depending on details~\cite{Wellstood1994}. Electron-electron interaction gives $1<\alpha<2$, depending on the extent to which impurities are involved~\cite{Imry}. To simplify the solution and further treatment, we write the equations using the dimensionless parameter: $\tau_{i}\left(x\right)=\frac{T_{i}^{2}\left(x\right)}{T_{0}^{2}}$, and we denote: $\beta=BT_{0}^{\alpha-2}$. 
Then, the equations can be written as a set of coupled differential equations for $\tau_{u}$ and $\tau_{d}$: \begin{equation} \begin{aligned}\frac{d\tau_{d}}{dx} & =-\frac{1}{n_{d}\xi_{e}}\left(\tau_{d}\left(x\right)-\tau_{u}\left(x\right)\right)-\beta\left(\tau_{d}^{\frac{\alpha}{2}}\left(x\right)-1\right)\\ \frac{d\tau_{u}}{dx} & =-\frac{1}{n_{u}\xi_{e}}\left(\tau_{d}\left(x\right)-\tau_{u}\left(x\right)\right)+\beta\left(\tau_{u}^{\frac{\alpha}{2}}\left(x\right)-1\right). \end{aligned} \label{eq:GeneralEq} \end{equation} The temperature dependence of the heat currents to the thermal bath and the exchange current are expected to hold for small temperature difference, $\Delta T=T_{m}-T_{0}$. The boundary conditions are: \begin{eqnarray} \tau_{d}\left(x_{m}\right)=\tau_{m}=\frac{T_{m}^{2}}{T_{0}^{2}} & \quad;\quad & \tau_{u}\left(x_{0}\right)=1=\frac{T_{0}^{2}}{T_{0}^{2}}.\label{eq:BoundaryC} \end{eqnarray} An analytic solution to Eqs.~\eqref{eq:GeneralEq}, with the boundary conditions given by Eq.~\eqref{eq:BoundaryC} can be obtained for small temperature difference from $T_{0}$, i.e. $\Delta T\ll T_{0}$, such that $\tau_{i}\left(x\right)=1+\delta\tau_{i}\left(x\right)$. Linearizing the equations, we find a new interaction parameter, $\frac{1}{\xi_{d}}=\frac{\beta\alpha}{2}$, which we call the dissipation length. Integrating the linearized differential equations with the appropriate boundary conditions, $\tau_{d}\left(x\right)$ and $\tau_{u}\left(x\right)$ of the lower edge are obtained: \begin{widetext}\begin{subequations} \begin{align} \tau_{d}^{\text{lower}}\left(x\right) & =1+\frac{\left(\frac{N}{2\bar{n}}+\frac{\xi_{e}}{\xi_{d}}\right)\sinh\left[\Lambda\left(x_{0}-x\right)\right]+\Lambda\xi_{e}\cosh\left[\Lambda\left(x_{0}-x\right)\right]}{\left(\frac{N}{2\bar{n}}+\frac{\xi_{e}}{\xi_{d}}\right)\sinh\left[\Lambda L\right]+\Lambda\xi_{e}\cosh\left[\Lambda L\right]}e^{-\frac{x-x_{m}}{2\bar{n}\xi_{e}}}\left(\tau_{m}-1\right)\label{eq:TemprofileTd}\\ \tau_{u}^{\text{lower}}\left(x\right) & =1+\frac{1}{n_{u}}\frac{\sinh\left[\Lambda\left(x_{0}-x\right)\right]}{\left(\frac{N}{2\bar{n}}+\frac{\xi_{e}}{\xi_{d}}\right)\sinh\left[\Lambda L\right]+\Lambda\xi_{e}\cosh\left[\Lambda L\right]}e^{-\frac{x-x_{m}}{2\bar{n}\xi_{e}}}\left(\tau_{m}-1\right),\label{eq:TemprofileTu} \end{align} \end{subequations}\end{widetext} where $L=x_{0}-x_{m}$, $\bar{n}=\frac{n_{u}n_{d}}{n_{u}-n_{d}}$, $N=\frac{n_{u}+n_{d}}{n_{u}-n_{d}}$ and $\Lambda=\frac{1}{2\bar{n}\xi_{e}}\sqrt{1+4\bar{n}^{2}\frac{\xi_{e}}{\xi_{d}}\left(\frac{N}{\bar{n}}+\frac{\xi_{e}}{\xi_{d}}\right)}$. To determine $\tau_{d/u}\left(x\right)$ on the upper edge, the number of edge modes needs to be interchanged, $n_{u}\leftrightarrow n_{d}$, and for consistency with the direction of chirality also $\tau_{d}\leftrightarrow\tau_{u}$, such that $\tau_{d}^{\text{lower}}\left(x;n_{d},n_{u}\right)=\tau_{u}^{\text{upper}}\left(x;n_{u},n_{d}\right)$. Numerically we can go beyond the linearized regime, however in doing so we found that small deviations from that regime do not change the qualitative picture. \subsubsection*{Normalized two terminal thermal conductance} Assuming that heat can be transported from the hot contact to the system only through the edge modes, the normalized two terminal thermal conductance, $\kappa$, is defined according to: \begin{equation} J_{Q}=\frac{1}{2}\kappa_{0}\kappa\left(T_{m}^{2}-T_{0}^{2}\right), \end{equation} where $J_{Q}$ is the total heat current emanating from the hot contact to the system, due to $\Delta T$. 
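As a simple illustration of these profiles, the short script below evaluates Eqs.~\eqref{eq:TemprofileTd} and \eqref{eq:TemprofileTu} numerically. It is only a sketch: it assumes the coordinates $x_{m}=0$ and $x_{0}=L$, the case $n_{u}\neq n_{d}$, and illustrative parameter values (the function name, the chosen dissipation length and the chosen $\tau_{m}$ are not taken from the experiments).
\begin{verbatim}
import numpy as np

def profiles(x, n_d, n_u, xi_e, xi_d, L, tau_m):
    # tau_d(x), tau_u(x) on the lower edge; hot contact at x = 0,
    # cold contact at x = L; valid for n_u != n_d and small bias.
    nbar = n_u * n_d / (n_u - n_d)
    N = (n_u + n_d) / (n_u - n_d)
    Lam = np.sqrt(1 + 4 * nbar**2 * (xi_e / xi_d) * (N / nbar + xi_e / xi_d)) \
          / (2 * nbar * xi_e)
    den = (N / (2 * nbar) + xi_e / xi_d) * np.sinh(Lam * L) \
          + Lam * xi_e * np.cosh(Lam * L)
    damp = np.exp(-x / (2 * nbar * xi_e)) * (tau_m - 1)
    tau_d = 1 + ((N / (2 * nbar) + xi_e / xi_d) * np.sinh(Lam * (L - x))
                 + Lam * xi_e * np.cosh(Lam * (L - x))) / den * damp
    tau_u = 1 + (1 / n_u) * np.sinh(Lam * (L - x)) / den * damp
    return tau_d, tau_u

# Example: nu = 3/5 edge (n_d = 1, n_u = 2) with the illustrative values
# of Fig. 2a (L = 300 um, xi_e = 20 um) and a long dissipation length.
x = np.linspace(0.0, 300.0, 7)
td, tu = profiles(x, n_d=1, n_u=2, xi_e=20.0, xi_d=3000.0, L=300.0, tau_m=1.1)
print(np.round(td, 4))   # decays from tau_m = 1.1 toward the upstream value
print(np.round(tu, 4))   # close to 1 and exactly 1 at the cold contact
\end{verbatim}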
This $\kappa$ is composed of two parts, corresponding to the heat flowing along the upper and lower edges, which by assumption do not interact. Due to energy conservation, the sum of the heat flowing in the edge modes and the integrated heat dissipated to the thermal bath should not depend on the position along the edge. Therefore, the contribution of the lower edge to the two terminal thermal conductance is: \begin{equation} \kappa_{\text{lower}}=\frac{J_{d}\left(x\right)-J_{u}\left(x\right)-J_{p}+\intop_{x_{m}}^{x}\left[j_{b}^{d}\left(x'\right)+j_{b}^{u}\left(x'\right)\right]dx'}{\frac{1}{2}\kappa_{0}\left(T_{m}^{2}-T_{0}^{2}\right)},\label{eq:general_K} \end{equation} where $J_{p}=\frac{1}{2}\kappa_{0}\left(n_{d}-n_{u}\right)T_{0}^{2}$ is the persistent heat current in the system at equilibrium, which has no divergence because the upper edge has an opposite term. It is subtracted from both edges in order to expose the net current above the equilibrium current flowing in the system due to the chirality. The normalized two terminal thermal conductance of the system is obtained by summing the contributions from both edges. Plugging the temperature dependencies, given by Eqs.~\eqref{eq:TemprofileTd} and~\eqref{eq:TemprofileTu}, the two terminal thermal conductance is readily obtained: \begin{widetext} \begin{equation} \kappa\left(\xi_{d},\xi_{e},L\right)=\kappa_{\text{lower}}+\kappa_{\text{upper}}=\frac{1}{2\bar{n}}\frac{n_{u}e^{\Lambda L}+n_{d}e^{-\Lambda L}}{\Lambda\xi_{e}\cosh\left(\Lambda L\right)+\left(\frac{N}{2\bar{n}}+\frac{\xi_{e}}{\xi_{d}}\right)\sinh\left(\Lambda L\right)}+\left(n_{u}+n_{d}\right)\frac{\left(\Lambda\xi_{e}-\frac{1}{2\bar{n}}\right)\cosh\left(\Lambda L\right)+\frac{\xi_{e}}{\xi_{d}}\sinh\left(\Lambda L\right)}{\Lambda\xi_{e}\cosh\left(\Lambda L\right)+\left(\frac{N}{2\bar{n}}+\frac{\xi_{e}}{\xi_{d}}\right)\sinh\left(\Lambda L\right)}.\label{eq:2termThermalCond} \end{equation} \end{widetext} There are three competing length scales in our problem: the system size $L$, the equilibration length $\xi_{e}$, and the dissipation length $\xi_{d}$. Since we wish to discuss the thermal Hall conductance, defined for a fully equilibrated edge system, it is required that $L\gg\xi_{e}$, so that the edge modes are able to equilibrate over the length of the system. Let us now elaborate more on the temperature profiles and $\kappa$ of the hole like states, for both cases: (i) $n_{u}>n_{d}$ and (ii) $n_{d}=n_{u}=1$ (corresponding to the $\nu=\frac{2}{3}$ state). \begin{figure*} \centering{}\includegraphics[bb=0bp 120bp 792bp 470bp,clip,scale=0.55]{temperature_profiles_35_23.pdf}\caption{\textbf{\label{fig:Temperature-profiles}a. }Temperature profiles of the downstream and upstream modes on the upper and lower edges of a $\nu=\frac{3}{5}$ FQH state, described by $n_{d}=1$ and $n_{u}=2$. The temperature profiles are plotted for different values of $\xi_{d}$ relative to $L$ and $\xi_{e}$, where $L=300\mu m$ and $\xi_{e}=20\mu m$~\cite{Banerjee2017}. \textbf{b.} Temperature profiles of the downstream and upstream modes of a $\nu=\frac{2}{3}$ FQH state, described by $n_{d}=n_{u}=1$. The upper and lower edges exhibit the same temperature profile. } \end{figure*} \paragraph{Hole-like states with $n_{u}>n_{d}$ - \protect \\ } The temperature profiles of the edge modes are given by Eqs~\eqref{eq:TemprofileTd} and~\eqref{eq:TemprofileTu}, and $\kappa$ is given by Eq.~\ref{eq:2termThermalCond}. To illuminate the physics let us discuss the temperature profiles {[}Fig. 
(2.a){]} and $\kappa$ in the following regimes: \uline{Topological regime ($\xi_{d}\gg L\gg\xi_{e}$) -} The edge modes exchange energy with one another, and equilibrate to the temperature of the upstream modes. In this regime their dissipation of energy to the thermal bath is small. The normalized two terminal thermal conductance acquires the absolute value of the topological value~\cite{Kane1997} with two corrections to leading orders: $\kappa=\left(n_{u}-n_{d}\right)\left[1+2\frac{n_{d}}{n_{u}}e^{-\frac{L}{\bar{n}\xi_{e}}}\right]+4\bar{n}n_{d}\frac{\xi_{e}}{\xi_{d}}$. The first exponential correction is due to the finite system size $L$, and the second algebraic correction is due to dissipation to the bath, that happens all along the edge. \uline{Intermediate regime ($L\gg\xi_{d}\gg\xi_{e}$) -} Most energy is dissipated to the thermal bath before arrival to the cold contact, therefore the temperature profiles decrease to $T_{0}$ on both edges. However, the edge modes exchange energy before dissipating it all to the thermal bath. Thus, to leading order, $\kappa$ acquires the absolute value of the topological value, with an algebraic correction due to dissipation: $\kappa=\left(n_{u}-n_{d}\right)+4\bar{n}n_{d}\frac{\xi_{e}}{\xi_{d}}$. This correction can be of the order of $\left(n_{u}-n_{d}\right)$, so $\kappa$ is not universal in this case. \uline{Non-universal regime ($L\gg\xi_{e}\gg\xi_{d}$) -} The edge modes dissipate all their energy to the thermal bath and therefore the temperature profiles decrease to $T_{0}$ very close to the hot contact. The thermal conductance, $\kappa$, in this case is the total number of edge modes leaving the hot contact, $n=n_{u}+n_{d}$, with a correction due to a competition between $\xi_{e}$ and $\xi_{d}$: $\kappa=\left(n_{u}+n_{d}\right)-\frac{\xi_{d}}{\xi_{e}}$. This happens because the modes emanating from the hot contact on both edges dissipate all the energy to the external thermal bath, thus the heat conductance is limited by the total number of modes emanating the hot contact. The number $n=n_{u}+n_{d}$ is not universal, due to processes such as edge reconstruction \cite{Wan2003,Sabo2017}. This limit and the limit of a very short system, i.e. $L\ll\xi_{e}$, are qualitatively similar. \paragraph{$\nu=\frac{2}{3}$ state - \protect \\ } The temperature profiles of the edge modes of the $\nu=\frac{2}{3}$ state are obtained by taking the limit of $n_{u}\rightarrow n_{d}=1$ in Eqs.~\eqref{eq:TemprofileTd},~\eqref{eq:TemprofileTu}. Substituting the temperature profiles into Eq. \ref{eq:general_K} we obtain $\kappa$ for the $\nu=\frac{2}{3}$ state: \begin{equation} \kappa=2\left[1-\frac{1}{1+\frac{\xi_{e}}{\xi_{d}}+\Lambda_{\frac{2}{3}}\xi_{e}\coth\left(\Lambda_{\frac{2}{3}}L\right)}\right], \end{equation} where $\Lambda_{\frac{2}{3}}=\frac{1}{\xi_{e}}\sqrt{\frac{\xi_{e}}{\xi_{d}}\left(2+\frac{\xi_{e}}{\xi_{d}}\right)}$. To illuminate the physics, let us discuss the temperature profiles of the edge modes {[}Fig. (2.b){]} and $\kappa$ in the corresponding three regimes: \uline{The topological regime ($\xi_{d}\gg\nicefrac{L^{2}}{\xi_{e}}\gg\xi_{e}$) -} The system is diffusive, therefore the temperature profiles are linear along the edges, with a constant difference. The thermal conductance, $\kappa$, approaches the absolute value of the topological value~\cite{Kane1997} with a leading order algebraic correction, due to a competition between the equilibration length and the finite system size: $\kappa=\frac{2}{1+\frac{L}{\xi_{e}}}$. 
\uline{The intermediate regime ($\nicefrac{L^{2}}{\xi_{e}}\gg\xi_{d}\gg\xi_{e}$) -} The system dissipates energy to the thermal bath, therefore the temperature profiles are exponential, rather than linear. The thermal conductance, $\kappa$, approaches the absolute value of the topological value, with a leading order algebraic correction, due to the competition between the equilibration and dissipation lengths: $\kappa=\frac{2}{1+\sqrt{2\frac{\xi_{d}}{\xi_{e}}}}$. \uline{The non-universal regime ($\nicefrac{L^{2}}{\xi_{e}}\gg\xi_{e}\gg\xi_{d}$) -} The edge modes dissipate all their energy to the thermal bath, so the temperature profiles decrease to $T_{0}$ very close to the hot contact. The thermal conductance, $\kappa$, approaches the non-universal value of the total number of modes, with an algebraic correction due to the competition between equilibration and dissipation: $\kappa=2-\frac{\xi_{d}}{\xi_{e}}$. \subsection{Proposed experimental setup} This phenomenological theory may be tested by employing quantum dots (QDs) as thermometers~\cite{Hoffmann2007,Venkatachalam2012a,Viola2012,Maradan2014} for the temperature at various points along the edge. The proposed experimental setup, depicted in Fig. \eqref{fig:schematic pic}, couples QDs to the edges of FQH liquids, and is based on measuring the resulting thermoelectric current. To deduce the temperature profiles from the thermoelectric current, the thermoelectric coefficient needs to be known. Following Furusaki~\cite{Furusaki1998}, the thermoelectric coefficient, $G_{T}$, of a QD in a normal state, weakly coupled to two FQH liquids, can be calculated to linear order in the temperature difference between the two FQH states. In this regime, the thermoelectric coefficient is found to be related to the conductance of the QD~\cite{Roura-Bas2017} as \begin{equation} G_{T}=\frac{\epsilon}{e}G, \end{equation} where $G$ is the linear electric conductance of the QD, $e$ is the electron charge and $\epsilon$ is the energy difference between the many body ground state energies of $N+1$ electrons and $N$ electrons in the QD. Using this relation, the thermoelectric coefficient of the QDs can be measured without applying a temperature bias. Thus the temperature profiles can be deduced from the thermoelectric current through the QDs, upon introduction of a temperature difference $\Delta T$. A measurement of the temperature profiles allows for the extraction of the dissipation length, the equilibration length and the sign of the thermal Hall conductance. For extraction of the latter, the system needs to be in the topological regime ($\xi_{d}\gg L\gg\xi_{e}$). In this regime, the edges are distinguished by their temperature profiles, such that the edge which is expected to carry the heat current according to Ref.~\cite{Kane1997} is hotter {[}Fig. \eqref{fig:Temperature-profiles}{]}. \begin{figure} \centering{}\includegraphics[bb=220bp 180bp 565bp 450bp,clip,scale=0.45]{proposed_experimental_setup_QDs.pdf}\caption{\label{fig:schematic pic}A schematic picture of the proposed experimental setup. Two Ohmic contacts are connected to a Hall bar at a FQH state, with a chirality defined by the solid black arrow. A temperature difference $\Delta T=T_{m}-T_{0}$ is imposed between the contacts (Ref.~\cite{Banerjee2017,Jezouin2013}). Several QDs, depicted by the full circles, are coupled to both edges of the Hall bar.
They are connected to Ohmic contacts, thus enabling measurement of the thermoelectric currents that pass through them.} \end{figure} \subsection{Conclusions} To conclude, the thermal Hall conductance is predicted to be a universal and topological property of a FQH state, and therefore can help to identify the states more accurately. A recent experiment has measured the absolute value of the thermal Hall conductance of Abelian FQH states~\cite{Banerjee2017}, consistent with the prediction of Kane and Fisher~\cite{Kane1997} regarding these states. It should be noted, however, that Ref.~\cite{Kane1997} assumes the edge is a closed system with respect to energy, while it was shown experimentally that there can be energy dissipation from the edge \cite{Venkatachalam2012a}. In this paper we elaborated on the phenomenological picture of the temperature profiles of the edge modes of a FQH state with $n_{d}$ downstream modes and $n_{u}$ upstream modes described in Ref.~\cite{Banerjee2017}, by writing rate equations for heat transport through the edges, including a dissipation term to an external thermal bath. By solving the phenomenological equations, we found that the two terminal thermal conductance depends on the coupling strength to the external thermal bath: when the coupling is extremely weak, the two terminal thermal conductance acquires the universal topological value, whereas when the coupling is very strong it is no longer universal, and is subject to the influence of edge reconstruction effects~\cite{Wan2003,Sabo2017}. Furthermore, we proposed to use QDs coupled to the edges of a FQH state to, first, test the above theory and measure the dissipation length and the equilibration length, and second, to determine the sign of the thermal Hall conductance. \begin{acknowledgments} We would like to thank Mitali Banerjee, Dima E. Feldman, Moty Heiblum, Tobias Holder, Gilad Margalit, David F. Mross, Amir Rosenblatt, Steven H. Simon, Kyrylo Snizhko and Vladimir Umansky for constructive discussions. We acknowledge support of the Israel Science Foundation; the European Research Council under the European Community's Seventh Framework Program (FP7/2007-2013)/ERC Project MUNATOP; the DFG (CRC/Transregio 183, EI 519/7-1). YO acknowledges support of the Binational Science Foundation. AS acknowledges support of Microsoft Station Q. \end{acknowledgments} \bibliographystyle{apsrev4-1}
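The limiting expressions for the two terminal thermal conductance quoted in the text can be cross-checked numerically. The following is a minimal sketch that evaluates Eq.~\eqref{eq:2termThermalCond} directly for $n_{u}\neq n_{d}$; the function name and the sample parameter values are illustrative and are not taken from the experiments.
\begin{verbatim}
import numpy as np

def kappa(n_d, n_u, xi_e, xi_d, L):
    # Normalized two terminal thermal conductance, Eq. (eq:2termThermalCond),
    # for n_u != n_d.
    nbar = n_u * n_d / (n_u - n_d)
    N = (n_u + n_d) / (n_u - n_d)
    Lam = np.sqrt(1 + 4 * nbar**2 * (xi_e / xi_d) * (N / nbar + xi_e / xi_d)) \
          / (2 * nbar * xi_e)
    D = Lam * xi_e * np.cosh(Lam * L) \
        + (N / (2 * nbar) + xi_e / xi_d) * np.sinh(Lam * L)
    term1 = (n_u * np.exp(Lam * L) + n_d * np.exp(-Lam * L)) / (2 * nbar * D)
    term2 = (n_u + n_d) * ((Lam * xi_e - 1 / (2 * nbar)) * np.cosh(Lam * L)
                           + (xi_e / xi_d) * np.sinh(Lam * L)) / D
    return term1 + term2

# nu = 3/5 (n_d = 1, n_u = 2), L = 300 um, xi_e = 20 um
for xi_d in (1e6, 100.0, 2.0):   # topological / intermediate / non-universal
    print(xi_d, kappa(1, 2, 20.0, xi_d, 300.0))
# Trends expected from the text: close to n_u - n_d = 1 for very large xi_d,
# and approaching n_u + n_d = 3 (minus xi_d/xi_e) when xi_d << xi_e.
\end{verbatim}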
\section{Introduction} Let $q$ be a power of a prime $p$, $\mathbb{F}_{q^2}$ be the finite field with $q^2$ elements, and ${\mathcal X}$ be a projective, absolutely irreducible, non-singular algebraic curve defined over $\mathbb{F}_{q^2}$. The curve ${\mathcal X}$ is called $\mathbb{F}_{q^2}$-maximal if the number $|{\mathcal X}(\mathbb{F}_{q^2})|$ of its $\mathbb{F}_{q^2}$-rational points attains the Hasse-Weil upper bound $q^2+1+2gq$, where $g$ denotes the genus of ${\mathcal X}$. Maximal curves have been investigated for their applications in Coding Theory; see \cite{Sti,vdG2}. Surveys on maximal curves are found in \cite{FT,G,G2,GS,vdG} and \cite[Chapter 10]{HKT}. A well-known and important example of an $\mathbb{F}_{q^2}$-maximal curve is the Hermitian curve ${\mathcal H}_q$. It is defined as any $\mathbb{F}_{q^2}$-rational curve which is projectively equivalent to the plane curve $X^{q+1}+Y^{q+1}+Z^{q+1}=0$. For fixed $q$, the curve ${\mathcal H}_q$ has the largest genus $g({\mathcal H}_q)=q(q-1)/2$ that an $\mathbb{F}_{q^2}$-maximal curve can have. The automorphism group ${\rm Aut}({\mathcal H}_q)$ is defined over $\mathbb{F}_{q^2}$ and isomorphic to the projective general unitary group ${\rm PGU}(3,q)$, the group of projectivities of ${\rm PG}(2,q^2)$ commuting with the unitary polarity associated with ${\mathcal H}_q$. The automorphism group ${\rm Aut}({\mathcal H}_q)$ has size bigger than $16g({\mathcal H}_q)^4$, while any curve of genus $g\geq2$ not isomorphic to the Hermitian curve has less than $16g^4$. By a result commonly referred to as the Kleiman-Serre covering result (see \cite{KS} and \cite[Proposition 6]{L}) a curve ${\mathcal X}$ defined over $\mathbb{F}_{q^2}$ and $\mathbb{F}_{q^2}$-covered by an $\mathbb{F}_{q^2}$-maximal curve is $\mathbb{F}_{q^2}$-maximal as well. In particular, $\mathbb{F}_{q^2}$-maximal curves can be obtained as Galois $\mathbb{F}_{q^2}$-subcovers of an $\mathbb{F}_{q^2}$-maximal curve ${\mathcal X}$, that is, as quotient curves ${\mathcal X}/G$ where $G$ is a finite $\mathbb{F}_{q^2}$-automorphism group of ${\mathcal X}$. Most of the known maximal curves are Galois covered by the Hermitian curve; see for instance \cite{GSX,CKT2,GHKT2,MZ,MZRS} and the references therein. A challenging open problem is the determination of the spectrum $\Gamma(q^2)$ of genera of $\mathbb{F}_{q^2}$-maximal curves, for given $q$; some values in $\Gamma(q^2)$ arise from $\mathbb{F}_{q^2}$-maximal curves which are not covered or Galois covered by the Hermitian curve. The first example of a maximal curve not Galois covered by the Hermitian curve was discovered by Garcia and Stichtenoth \cite{GS3}. This curve is $\mathbb{F}_{3^6}$-maximal and it is not Galois covered by ${\mathcal H}_{27}$. It is a special case of the $\mathbb{F}_{q^6}$-maximal GS curve, which was later shown not to be Galois covered by ${\mathcal H}_{q^3}$ for any $q>3$, \cite{GMZ,Mak}. Giulietti and Korchm\'aros \cite{GK} provided an $\mathbb{F}_{q^6}$-maximal curve, nowadays referred to as the GK curve, which is not covered by the Hermitian curve ${\mathcal H}_{q^3}$ for any $q>2$, like some of its Galois subcovers \cite{GQZ,TTT}. Two generalizations of the GK curve were introduced by Garcia, G\"uneri and Stichtenoth \cite{GGS} and by Beelen and Montanucci in \cite{BM}. Both these two generalizations are $\mathbb{F}_{q^{2n}}$-maximal curves, for any $q$ and odd $n \geq 3$. 
Also, they are not Galois covered by the Hermitian curve ${\mathcal H}_{q^n}$ for $q>2$ and $n \geq 5$, see \cite{DM,BM}; the Garcia-G\"uneri-Stichtenoth generalization is also not Galois covered by ${\mathcal H}_{2^n}$ for $q=2$, see \cite{GMZ}. Apart from the examples listed above, most of the known values in $\Gamma(q^2)$ have been obtained from quotient curves ${\mathcal H}_q/G$ of the Hermitian curve, which have been investigated in many papers. Therefore, towards the determination of $\Gamma(q^2)$ it is important to study the genera $g({\mathcal H}_q/G)$ for all $G\leq{\rm Aut}({\mathcal H}_q)$. We list some known partial results: \begin{itemize} \item $g({\mathcal H}_q/G)$ for some $G$ fixing a point of ${\mathcal H}_q(\mathbb{F}_{q^2})$; see \cite{BMXY,GSX,AQ}. \item $g({\mathcal H}_q/G)$ when $G$ normalizes a Singer subgroup of ${\rm Aut}({\mathcal H}_q)$ fixing three non-collinear points of ${\mathcal H}_q(\mathbb{F}_{q^6})\setminus{\mathcal H}_q(\mathbb{F}_{q^2})$; see \cite{GSX, CKT1,CKT2}. \item $g({\mathcal H}_q/G)$ when $G$ has prime order; see \cite{CKT2}. \item $g({\mathcal H}_q/G)$ when $G$ fixes neither points nor triangles in ${\rm PG}(2,\bar{\mathbb{F}}_{q^2})$; see \cite{MZultimo}. \end{itemize} Only the following cases for $G\leq{\rm Aut}({\mathcal H}_q)$ still have to be analyzed: \begin{enumerate} \item $G$ fixes a self-polar triangle in ${\rm PG}(2,q^2)$; \item $G$ fixes an $\mathbb{F}_{q^2}$-rational point $P\notin{\mathcal H}_q$; \item $G$ fixes a point $P \in {\mathcal H}_q(\mathbb{F}_{q^2})$, $p=2$ and $|G|=p^\ell d$ where $p^\ell \leq q$ and $d \mid (q-1)$. \end{enumerate} We observe that some partial results are known in the literature for the cases 1 and 2. In this paper a complete analysis of Case 1 is given, as well as a complete analysis of Case 2 when $p=2$. More precisely, this paper is organized as follows. Section \ref{sec:preliminari} recalls necessary preliminary results on the Hermitian curve and its automorphism group. In Section \ref{sec:triangolo} a complete analysis of Case 1 is given. Section \ref{sec:polopolare} contains a complete analysis of Case 2 under the assumption $p=2$. Finally, Section \ref{sec:nuovigeneri} collects some new genera of maximal curves obtained for fixed values of $q$. \section{Preliminary results}\label{sec:preliminari} Throughout the paper $p$ is a prime number, $n$ is a positive integer, $q=p^n$, and $\mathbb{F}_q$ is the finite field with $q$ elements. The Deligne-Lusztig curves defined over $\mathbb{F}_q$ were originally introduced in \cite{DL}. Other than the projective line, there are three families of Deligne-Lusztig curves, named Hermitian curves, Suzuki curves and Ree curves. The Hermitian curve $\mathcal H_q$ arises from the algebraic group $^2A_2(q)={\rm PGU}(3,q)$ of order $(q^3+1)q^3(q^2-1)$, that is, ${\rm Aut}({\mathcal H}_q)={\rm PGU}(3,q)$. It has genus $q(q-1)/2$ and is $\mathbb F_{q^2}$-maximal. Three $\mathbb{F}_{q^2}$-isomorphic models of ${\mathcal H}_q$ are given by the following equations: \begin{equation} \label{M1} X^{q+1}+Y^{q+1}+Z^{q+1}=0; \end{equation} \begin{equation} \label{M2} Y^{q}Z+YZ^{q}- X^{q+1}=0; \end{equation} \begin{equation} \label{M3} XY^{q}-X^{q}Y+\omega Z^{q+1}=0, \end{equation} where $\omega\in\mathbb{F}_{q^2}$ satisfies $\omega^{q+1}=-1$. The curves \eqref{M1} and \eqref{M2} are known as the Fermat and the Norm-Trace model of the Hermitian curve, respectively.
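As a sanity check on these models, the $\mathbb{F}_{q^2}$-maximality of ${\mathcal H}_q$ can be verified by brute force for very small $q$. The following Python sketch is illustrative only: $q=3$ is hard-coded and $\mathbb{F}_9$ is realised as $\mathbb{F}_3[i]$ with $i^2=-1$. It counts the projective $\mathbb{F}_{9}$-rational points of the Fermat model \eqref{M1} and returns $q^3+1=28=q^2+1+2gq$, with $g=q(q-1)/2=3$.
\begin{verbatim}
q = 3
F9 = [(a, b) for a in range(3) for b in range(3)]   # a + b*i, i^2 = -1

def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def pow_q1(x):            # x^(q+1) = x^4, the norm down to F_3
    y = x
    for _ in range(q):
        y = mul(y, x)
    return y

count = 0
for X in F9:
    for Y in F9:
        for Z in F9:
            if X == Y == Z == (0, 0):
                continue
            s0, s1 = 0, 0
            for t in (X, Y, Z):
                u = pow_q1(t)
                s0, s1 = (s0 + u[0]) % 3, (s1 + u[1]) % 3
            if (s0, s1) == (0, 0):
                count += 1
# each projective point corresponds to q^2 - 1 = 8 nonzero vector solutions
print(count // (q ** 2 - 1))    # prints 28 = q^3 + 1
\end{verbatim}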
The action of ${\rm Aut}({\mathcal H}_q)$ on the set ${\mathcal H}_q(\mathbb{F}_{q^2})$ of all $\mathbb{F}_{q^2}$-rational points of ${\mathcal H}_q$ is equivalent to the $2$-transitive permutation representation of ${\rm PGU}(3,q)$ on the isotropic points with respect to the unitary form. The combinatorial properties of ${\mathcal H}_q(\mathbb{F}_{q^2})$ can be found in \cite{HP}. The size of ${\mathcal H}_q(\mathbb{F}_{q^2})$ is equal to $q^3+1$, and a line of $PG(2,q^2)$ has either $1$ or $q+1$ common points with ${\mathcal H}_q(\mathbb{F}_{q^2})$, that is, it is either a tangent line or a chord of ${\mathcal H}_q(\mathbb{F}_{q^2})$. The following classification of maximal subgroups of the projective special subgroup ${\rm PSU}(3,q)$ of ${\rm PGU}(3,q)$ goes back to Mitchell and Hartley; see \cite{M}, \cite{H}, \cite{HO}. Actually, in \cite{KL} it is possible to find a classification of maximal subgroups of ${\rm PSU}(n,q)$, $n\geq 3$, in the context of Aschbacher's classes of subgroups, after the theorem of classification for finite simple groups. Here we prefer to go back to the results of Mitchell and Hartley where it is more readable the relation between subgroups and action on ${\rm PG}(2,\bar{\mathbb{F}}_{q^2})$ and ${\mathcal H}_q$. \begin{theorem} \label{Mit} Let $d={\rm gcd}(3,q+1)$. Up to conjugacy, the subgroups below give a complete list of maximal subgroups of ${\rm PSU}(3,q)$. \begin{itemize} \item[(i)] the stabilizer of an ${\mathbb F}_{q^2}$-rational point of ${\mathcal H}_q$. It has order $q^3(q^2-1)/d$; \item[(ii)] the stabilizer of an ${\mathbb F}_{q^2}$-rational point off ${\mathcal H}_q$ $($equivalently the stabilizer of a chord of ${\mathcal H}_q(\mathbb{F}_{q^2}))$. It has order $q(q-1)(q+1)^2/d$; \item[(iii)] the stabilizer of a self-polar triangle with respect to the unitary polarity associated to ${\mathcal H}_q(\mathbb{F}_{q^2})$. It has order $6(q+1)^2/d$; \item[(iv)] the normalizer of a (cyclic) Singer subgroup. It has order $3(q^2-q+1)/d$ and preserves a triangle in ${\rm PG}(2,q^6)\setminus{\rm PG}(2,q^2)$ left invariant by the Frobenius collineation $\Phi_{q^2}:(X,Y,T)\mapsto (X^{q^2},Y^{q^2},T^{q^2})$ of ${\rm PG}(2,\bar{\mathbb{F}}_{q})$; {\rm for $p>2$:} \item[(v)] ${\rm PGL}(2,q)$ preserving a conic; \item[(vi)] ${\rm PSU}(3,p^m)$ with $m\mid n$ and $n/m$ odd; \item[(vii)] subgroups containing ${\rm PSU}(3,p^m)$ as a normal subgroup of index $3$, when $m\mid n$, $n/m$ is odd, and $3$ divides both $n/m$ and $q+1$; \item[(viii)] the Hessian groups of order $216$ when $9\mid(q+1)$, and of order $72$ and $36$ when $3\mid(q+1)$; \item[(ix)] ${\rm PSL(2,7)}$ when $p=7$ or $-7$ is not a square in $\mathbb{F}_q$; \item[(x)] the alternating group $\mathbf{A}_6$ when either $p=3$ and $n$ is even, or $5$ is a square in $\mathbb{F}_q$ but $\mathbb{F}_q$ contains no cube root of unity; \item[(xi)] an extension of order $720$ of $\mathbf{A}_6$, when $p=5$ and $n$ is odd; \item[(xii)] the alternating group $\mathbf{A}_7$ when $p=5$ and $n$ is odd; {\rm for $p=2$:} \item[(xiii)] ${\rm PSU}(3,2^m)$ with $m\mid n$ and $n/m$ an odd prime; \item[(xiv)] subgroups containing ${\rm PSU}(3,2^m)$ as a normal subgroup of index $3$, when $n=3m$ with $m$ odd; \item[(xv)] a group of order $36$ when $n=1$. \end{itemize} \end{theorem} We also need the classification of subgroups of the projective general linear group ${\rm PGL}(2,q)$: \begin{theorem}\label{Di}{\rm (see \cite[Chapt. XII, Par. 260]{D}, \cite[Kap. II, Hauptsatz 8.27]{Hup}, \cite[Thm. 
A.8]{HKT})} The following is the complete list of subgroups of ${\rm PGL}(2,q)$ up to conjugacy: \begin{itemize} \item[(i)] the cyclic group of order $h$ with $h\mid(q\pm1)$; \item[(ii)] the elementary abelian $p$-group of order $p^f$ with $f\leq n$; \item[(iii)] the dihedral group of order $2h$ with $h\mid(q\pm1)$; \item[(iv)] the alternating group $\mathbf{A}_4$ for $p>2$, or $p=2$ and $n$ even; \item[(v)] the symmetric group $\mathbf{S}_4$ for $16\mid(q^2-1)$; \item[(vi)] the alternating group $\mathbf{A}_5$ for $p=5$ or $5\mid(q^2-1)$; \item[(vii)] the semidirect product of an elementary abelian $p$-group of order $p^f$ by a cyclic group of order $h$, with $f\leq n$ and $h\mid\gcd(p^f-1,q-1)$; \item[(viii)] ${\rm PSL}(2,p^f)$ for $f\mid n$; \item[(ix)] ${\rm PGL}(2,p^f)$ for $f\mid n$. \end{itemize} \end{theorem} In our investigation it is useful to know how an element of ${\rm PGU}(3,q)$ of a given order acts on ${\rm PG}(2,\bar{\mathbb{F}}_{q^2})$, and in particular on ${\mathcal H}_q$. This can be obtained as a corollary of Theorem \ref{Mit}, and is stated in Lemma $2.2$ with the usual terminology of collineations of projective planes; see \cite{HP}. In particular, a linear collineation $\sigma$ of ${\rm PG}(2,\bar{\mathbb{F}}_{q^2})$ is a $(P,\ell)$-\emph{perspectivity} if $\sigma$ preserves each line through the point $P$ (the \emph{center} of $\sigma$), and fixes each point on the line $\ell$ (the \emph{axis} of $\sigma$). A $(P,\ell)$-perspectivity is either an \emph{elation} or a \emph{homology} according to $P\in \ell$ or $P\notin\ell$, respectively. A $(P,\ell)$-perspectivity is in ${\rm PGL}(3,q^2)$ if and only if its center and its axis are in ${\rm PG}(2,\mathbb{F}_{q^2})$. Lemma \ref{classificazione} gives a classification of nontrivial elements in ${\rm PGU}(3,q)$, we will refer to throughout the paper. \begin{lemma}\label{classificazione}{\rm (\!\!\cite[Lemma 2.2]{MZRS})} For a nontrivial element $\sigma\in{\rm PGU}(3,q)$, one of the following cases holds. \begin{itemize} \item[(A)] ${\rm ord}(\sigma)\mid(q+1)$ and $\sigma$ is a homology, whose center $P$ is a point of ${\rm PG}(2,q^2)\setminus{\mathcal H}_q$ and whose axis $\ell$ is a chord of ${\mathcal H}_q(\mathbb{F}_{q^2})$ such that $(P,\ell)$ is a pole-polar pair with respect to the unitary polarity associated to ${\mathcal H}_q(\mathbb{F}_{q^2})$. \item[(B)] $p\nmid{\rm ord}(\sigma)$ and $\sigma$ fixes the vertices $P_1$, $P_2$, $P_3$ of a non-degenerate triangle $T\subset{\rm PG}(2,q^6)$. \begin{itemize} \item[(B1)] ${\rm ord}(\sigma)\mid(q+1)$, $P_1,P_2,P_3\in{\rm PG}(2,q^2)\setminus{\mathcal H}_q$, and $T$ is self-polar with respect to the unitary polarity associated to ${\mathcal H}_q(\mathbb{F}_{q^2})$. \item[(B2)] ${\rm ord}(\sigma)\mid(q^2-1)$, ${\rm ord}\nmid(q+1)$, $P_1\in{\rm PG}(2,q^2)\setminus{\mathcal H}_q$, and $P_2,P_3\in{\mathcal H}_q({\mathbb F_{q^2}})$. \item[(B3)] ${\rm ord}(\sigma)\mid(q^2-q+1)$, and $P_1,P_2,P_3\in{\mathcal H}_q(\mathbb{F}_{q^6})\setminus{\mathcal H}_q(\mathbb{F}_{q^2})$. \end{itemize} \item[(C)] ${\rm ord}(\sigma)=p$ and $\sigma$ is an elation, whose center $P$ is a point of ${\mathcal H}_q({\mathbb F_{q^2}})$ and whose axis $\ell$ is tangent to ${\mathcal H}_q$ at $P$ such that $(P,\ell)$ is a pole-polar pair with respect to the unitary polarity associated to ${\mathcal H}_q(\mathbb{F}_{q^2})$. 
\item[(D)] either ${\rm ord}(\sigma)=p$ with $p\ne2$, or ${\rm ord}(\sigma)=4$ with $p=2$; $\sigma$ fixes a point $P\in{\mathcal H}_q({\mathbb F_{q^2}})$ and a line $\ell$ which is tangent to ${\mathcal H}_q$ at $P$, such that $(P,\ell)$ is a pole-polar pair with respect to the unitary polarity associated to ${\mathcal H}_q(\mathbb{F}_{q^2})$. \item[(E)] ${\rm ord}(\sigma)=p\cdot d$, where $1\ne d\mid(q+1)$; $\sigma$ fixes two points $P\in{\mathcal H}_q({\mathbb F_{q^2}})$ and $Q\in{\rm PG}(2,q^2)\setminus{\mathcal H}_q$; $\sigma$ fixes the line $PQ$ which is the tangent to ${\mathcal H}_q$ at $P$, and another line through $P$ which is the polar of $Q$. \end{itemize} \end{lemma} From Function Field Theory we need the Riemann-Hurwitz formula; see \cite[Theorem 3.4.13]{Sti}. Every subgroup $G$ of ${\rm PGU}(3,q)$ produces a quotient curve ${\mathcal H}_q/G$, and the cover ${\mathcal H}_q\rightarrow{\mathcal H}_q/G$ is a Galois cover defined over $\mathbb{F}_{q^2}$. Also, the degree $\Delta$ of the different divisor is given by the Riemann-Hurwitz formula, namely $$\Delta=(2g({\mathcal H}_q)-2)-|G|(2g({\mathcal H}_q/G)-2).$$ On the other hand, $\Delta=\sum_{\sigma\in G\setminus\{id\}}i(\sigma)$ where $i(\sigma)\geq0$ is given by Hilbert's different formula \cite[Thm. 3.8.7]{Sti}, namely \begin{equation}\label{contributo} \textstyle{i(\sigma)=\sum_{P\in{\mathcal H}_q(\bar{\mathbb F}_{q^2})}v_P(\sigma(t)-t),} \end{equation} where $v_P(\cdot)$ denotes the valuation function at $P$ and $t$ is a local parameter at $P$, i.e. $v_P(t)=1$. By analyzing the geometric properties of the elements $\sigma \in {\rm PGU}(3,q)$, it turns out that there are only a few possibilities for $i(\sigma)$. This is obtained as a corollary of Lemma \ref{classificazione}. \begin{theorem}\label{caratteri}{\rm (\!\!\cite[Theorem 2.7]{MZRS})} For a nontrivial element $\sigma\in {\rm PGU}(3,q)$, one of the following cases occurs. \begin{enumerate} \item If ${\rm ord}(\sigma)=2$ and $2\mid(q+1)$, then $\sigma$ is of type {\rm(A)} and $i(\sigma)=q+1$. \item If ${\rm ord}(\sigma)=3$, $3 \mid(q+1)$, and $\sigma$ is of type {\rm(B3)}, then $i(\sigma)=3$. \item If ${\rm ord}(\sigma)\ne 2$, ${\rm ord}(\sigma)\mid(q+1)$, and $\sigma$ is of type {\rm(A)}, then $i(\sigma)=q+1$. \item If ${\rm ord}(\sigma)\ne 2$, ${\rm ord}(\sigma)\mid(q+1)$, and $\sigma$ is of type {\rm(B1)}, then $i(\sigma)=0$. \item If ${\rm ord}(\sigma)\mid(q^2-1)$ and ${\rm ord}(\sigma)\nmid(q+1)$, then $\sigma$ is of type {\rm(B2)} and $i(\sigma)=2$. \item If ${\rm ord}(\sigma)\ne3$ and ${\rm ord}(\sigma)\mid(q^2-q+1)$, then $\sigma$ is of type {\rm(B3)} and $i(\sigma)=3$. \item If $p=2$ and ${\rm ord}(\sigma)=4$, then $\sigma$ is of type {\rm(D)} and $i(\sigma)=2$. \item If $p\ne2$, ${\rm ord}(\sigma)=p$, and $\sigma$ is of type {\rm(D)}, then $i(\sigma)=2$. \item If ${\rm ord}(\sigma)=p$ and $\sigma$ is of type {\rm(C)}, then $i(\sigma)=q+2$. \item If ${\rm ord}(\sigma)=p\cdot d$ where $d$ is a nontrivial divisor of $q+1$, then $\sigma$ is of type {\rm(E)} and $i(\sigma)=1$. \end{enumerate} \end{theorem} \begin{remark}\label{abuso} With a small abuse of notation, in the paper we denote by $\sigma$ the element $\bar{\sigma}=\sigma Z({\rm GU}(3,q))\in{\rm GU}(3,q)/Z({\rm GU}(3,q))={\rm PGU}(3,q)$.
\end{remark} \section{$G$ stabilizes a self-polar triangle $T\subset {\rm PG}(2,q^2)\setminus{\mathcal H}_q$}\label{sec:triangolo} In this section we take $G$ to be contained in the maximal subgroup $M$ of ${\rm PGU}(3,q)$ stabilizing a self-polar triangle $T=\{P_1,P_2,P_3\}$ with respect to the unitary polarity associated to ${\mathcal H}_q({\mathbb F_{q^2}})$. Recall that $M\cong (C_{q+1}\times C_{q+1})\rtimes \mathbf{S}_3$, where $C_{q+1}\times C_{q+1}$ is the pointwise stabilizer of $T$ and $\mathbf{S}_3$ acts faithfully on $T$. We denote by $G_T=G\cap(C_{q+1}\times C_{q+1})$ the pointwise stabilizer of $T$ in $G$. We use the plane model \eqref{M1} of ${\mathcal H}_q$. Up to conjugation in ${\rm PGU}(3,q)$, we can assume that $T$ is the fundamental triangle with $P_1=(1,0,0)$, $P_2=(0,1,0)$, $P_3=(0,0,1)$. For each $\sigma\in C_{q+1}\times C_{q+1}$ we will consider as a representative of $\sigma$ the diagonal matrix having $(q+1)$-th roots of unity in the diagonal entries and $1$ in the third row and column. We can choose as complement $\mathbf{S}_3$ the monomial group made by matrices with exactly one nonzero entry equal to $1$ in each row and column; any other complement $\mathbf{S}_3$ has $(q+1)$-th roots of unity as nonzero entries. Partial results when the order of $G$ is coprime to $p$ were obtained in \cite{RobinHood} for the Fermat curve $\mathcal{F}_m : X^{m}+Y^{m}+Z^{m}=0$ with $p \nmid m$. Each result in Sections \ref{sec:punt} to \ref{sec:indice6} is stated as follows: firstly, sufficient numerical conditions for the existence of groups $G$ with a certain action on $T$ are given, and the genus of ${\mathcal H}_q/G$ is provided; secondly, it is shown that groups $G$ with that given action on $T$ satisfy those numerical conditions. In our investigation, the numerical conditions arise as necessary conditions from the analysis of a putative group $G$ with a given action on $T$, and only afterwards do they turn out to be sufficient as well; nevertheless, we choose to start the exposition of each result with the numerical conditions, showing firstly that they are sufficient and then that they are necessary, in order to make the proofs shorter and to highlight the purely numerical characterization that we obtained. \subsection{$G$ stabilizes $T$ pointwise}\label{sec:punt} In this section $G=G_T$, that is, $G\leq C_{q+1}\times C_{q+1}$. \begin{theorem}\label{fissatorepuntuale} Let $q+1=\prod_{i=1}^{\ell}p_i^{r_i}$ be the prime factorization of $q+1$. \begin{itemize} \item[(i)] For any divisors $a=\prod_{i=1}^{\ell}p_i^{s_i}$ and $b=\prod_{i=1}^{\ell}p_i^{t_i}$ of $q+1$ (with $0\leq s_i,t_i\leq r_i$), let $c=\prod_{i=1}^{\ell}p_i^{u_i}$ be such that, for all $i=1,\ldots,\ell$, we have $u_i=\min\{s_i,t_i\}$ if $s_i\ne t_i$, and $s_i\leq u_i\leq r_i$ if $s_i=t_i$. Define $d=a+b+c-3$. Let $e=\frac{abc}{\gcd(a,b)}\cdot\prod_{i=1}^{\ell}p_i^{v_i}$, where, for each $i$, $v_i$ satisfies $0\leq v_i\leq r_i-\max\{s_i,t_i,u_i\}$. We also require that, if $p_i=2$ and either $2\nmid abc$ or $2\mid\gcd(a,b,c)$, then $v_i=0$. Then there exists a subgroup $G$ of $C_{q+1}\times C_{q+1}$ such that \begin{equation}\label{generefissatore} g({\mathcal H}_q/G)=\frac{(q+1)(q-2-d)+2e}{2e}. \end{equation} \item[(ii)] Conversely, if $G\leq C_{q+1}\times C_{q+1}$, then the genus of ${\mathcal H}_q/G$ is given by Equation \eqref{generefissatore}, where $e=|G|$ and $d$ is the number of elements of type {\rm (A)} in $G$; $d,e$ satisfy the numerical assumptions in point {\it (i)}, for some $a,b,c$.
\end{itemize} \end{theorem} \begin{proof} If $G\leq C_{q+1}\times C_{q+1}$ has order $e$ and contains exactly $d$ homologies, then any other $\sigma\in G$ satisfies $i(\sigma)=0$ by Theorem \ref{caratteri}; hence, Equation \eqref{generefissatore} follows from the Riemann-Hurwitz formula. Thus, the claim is proved once we characterize the possible values of $d$ and $e$. To this aim, we first study the subgroup $K$ of $G$ which is generated by the homologies of $G$; then we characterize the order of $G/K$ such that $G$ contains no homologies out of $K$. {\it (i)}: let the parameters $a,b,c,e$ be as in point {\it (i)}. Define $A=\{{\rm diag}(\lambda,1,1)\mid\lambda^{a}=1\}$, $B=\{{\rm diag}(1,\lambda,1)\mid\lambda^{b}=1\}$, and $C=\{{\rm diag}(\lambda,\lambda,1)\mid\lambda^{c}=1\}$. For any $i=1,\ldots,\ell$, let $m_i=\max\{s_i,t_i,u_i\}$, and $\lambda_i,\mu_i\in\mathbb{F}_{q^2}$ have order $o(\lambda_i)=p_i^{v_i}$ and $o(\mu_i)=p_i^{v_i+m_i}$. Define $H_i\leq C_{q+1}\times C_{q+1}$ by $$ H_i=\begin{cases} \langle{\rm diag}(\mu_i,\lambda_i,1)\rangle & \textrm{if }\; m_i=s_i>0; \\ \langle{\rm diag}(\lambda_i,\mu_i,1)\rangle & \textrm{if }\; m_i\ne s_i,\, m_i=t_i>0 ;\\ \langle{\rm diag}(\lambda_i\mu_i,\lambda_i^{-1}\mu_i,1)\rangle & \textrm{otherwise.} \end{cases} $$ Now choose $G=ABC\cdot\prod_{i=1}^{\ell}H_i$. It is not difficult to check that the homologies in $G$ are exactly the nontrivial elements of $(A\cup B\cup C)\setminus\{id\}$ and their number is $d=a+b+c-3$. Also, the order of $K=ABC$ is $\frac{abc}{\gcd(a,b)}$. The order of $\frac{G}{K}$ is $\prod_{i=1}^{\ell}p_i^{v_i}$, since $|H_i|=p_i^{v_i+m_i}$ and $|H_i\cap K|=p_i^{m_i}$. Hence, $G$ has order $e$ and contains exactly $d$ homologies; the first part of the claim is then proved. {\it (ii)}: let $G\leq C_{q+1}\times C_{q+1}$ with $|G|=e$. Define the subgroups $A=\{\sigma\in G\mid\sigma={\rm diag}(\lambda,1,1),\lambda^{q+1}=1\}$, $B=\{\sigma\in G\mid\sigma={\rm diag}(1,\mu,1),\mu^{q+1}=1\}$, and $C=\{\sigma\in G\mid\sigma={\rm diag}(\nu,\nu,1),\nu^{q+1}=1\}$ of $G$, of order $a=\prod_{i=1}^{\ell}p_i^{s_i}$, $b=\prod_{i=1}^{\ell}p_i^{t_i}$, and $c=\prod_{i=1}^{\ell}p_i^{u_i}$, respectively. Since $|A|$, $|B|$, and $|C|$ divide $q+1$, we have $s_i,t_i,u_i\leq r_i$. The homologies of $G$ are exactly the nontrivial elements of $A\cup B\cup C$, and their number is $d=a+b+c-3$. We have $\gcd(a,b)\mid c$, since the product of ${\rm diag}(\lambda,1,1)\in A$ and ${\rm diag}(1,\lambda,1)\in B$ is ${\rm diag}(\lambda,\lambda,1)\in G$; hence $\min\{s_i,t_i\}\leq u_i$. If $s_i\ne t_i$, say for instance $s_i<t_i$, then $u_i=s_i$; otherwise, $BC$ would contain a subgroup of $A$ of order $p_i^{\min\{t_i,u_i\}}>p_i^{s_i}$, a contradiction. Thus, $a,b,c$ satisfy the restrictions of point {\it (i)}. Consider the group $G/K$, where $K=ABC$ has order $\frac{abc}{\gcd(a,b)}$. Let $i\in\{1,\ldots,\ell\}$ be such that $v_i>0$, and $P_i$ be a Sylow $p_i$-subgroup of $G/K$, of order $|P_i|=p_i^{v_i}$. \begin{itemize} \item Suppose $p_i\nmid abc$ and $p_i\ne2$. Then $P_i$ is also a Sylow $p_i$-subgroup of $G$. If $v_i>r_i$, then $P_i$ is not cyclic and contains a subgroup $C_{p_i}\times C_{p_i}$; hence $|P_i\cap K|\geq p_i^2$, a contradiction. Hence, $v_i\leq r_i=r_i-\max\{s_i,t_i,u_i\}$. \item Suppose $p_i=2\nmid abc$. Then $P_i$ is a Sylow $2$-subgroup of $G$ having trivial intersection with $K$ and containing an involution $\sigma$. By Theorem \ref{caratteri}, $\sigma$ is a homology, a contradiction. \item Suppose $p_i\mid abc$ and $p_i\ne2$. Let $\alpha K\in G/K$ be a $p_i$-element, say $o(\alpha K)=p_i^{f}$. We can assume that $\alpha$ is a $p_i$-element.
In fact, if $\alpha^{p_i^{f}}=\beta \gamma$ with $\beta$ a $p_i$-element and $p_i\nmid o(\gamma)$, then choose $\bar\gamma\in K$ with ${\bar\gamma}^{p_i^{f}}=\gamma^{-1}$ and replace $\alpha$ with $\alpha\bar\gamma$; now, $\alpha^{p_i^f}=\beta$ where $\beta\in K$ is a $p_i$-element. We show that $o(\beta)=p_i^{m_i}$, where $m_i=\max\{s_i,t_i,u_i\}$. Let $o(\beta)=p_i^{n_i}$, and suppose by contradiction that $n_i<m_i$. If $m_i=\min\{s_i,t_i,u_i\}$, that is $s_i=t_i=u_i$, then $K$ contains all $p_i$-elements of $C_{q+1}\times C_{q+1}$ of order smaller than or equal to $p_i^{m_i}$, a contradiction to $\alpha^{p_i^{f-1}}\notin K$. If $m_i>\min\{s_i,t_i,u_i\}$, then $K$ contains all $p_i$-elements of $C_{q+1}\times C_{q+1}$ of order smaller than or equal to $p_i^{\min\{s_i,t_i,u_i\}}$, and hence $m_i>n_i>\min\{s_i,t_i,u_i\}$. Let $\delta\in K$ have order $o(\delta)=p_i^{m_i}$. Then the elements $\delta^{p_i^{m_i-n_i-1}}\in K$ and $\alpha^{p_i^{f-1}}\notin K$ satisfy $o(\delta^{p_i^{m_i-n_i-1}})=o(\alpha^{p_i^{f-1}})=p_i^{n_i+1}$, $\delta^{p_i^{m_i-n_i-1}}\notin\langle\alpha^{p_i^{f-1}}\rangle$, and $\alpha^{p_i^{f-1}}\notin\langle\delta^{p_i^{m_i-n_i-1}}\rangle$. This implies that $\langle\delta^{p_i^{m_i-n_i-1}},\alpha^{p_i^{f-1}}\rangle\cong C_{p_i^{n_i+1}}\times C_{p_i^{n_i+1}}$ by direct counting on the number of elements of order $p_i^{n_i+1}$. Thus, $G$ contains all $p_i$-elements of order at most $p_i^{n_i+1}$, a contradiction to $n_i>\min\{s_i,t_i,u_i\}$. Therefore, $o(\beta)=p_i^{m_i}$. We show that $P_i$ is cyclic. Suppose by contradiction that this is not the case, so that $P_i$ has a subgroup $\alpha_1 K\times\alpha_2 K\cong C_{p_i}\times C_{p_i}$. As above, we can assume that $o(\alpha_1)=o(\alpha_2)=p_i^{m_i+1}$. Also, $\alpha_1\notin\langle\alpha_2\rangle$ and $\alpha_2\notin\langle\alpha_1\rangle$. Hence, $G$ has a subgroup $\langle\alpha_1,\alpha_2\rangle\cong C_{p_i^{m_i+1}}\times C_{p_i^{m_i+1}}$, a contradiction to $m_i+1>\min\{s_i,t_i,u_i\}$. Therefore, $P_i$ is cyclic. Let $\alpha$ be a generator of $P_i$. As shown above, $\alpha^{p_i^{v_i}}$ has order $p_i^{m_i}$. Since the Sylow $p_i$-subgroup of $C_{q+1}\times C_{q+1}$ has exponent $p_i^{r_i}$, this proves that $0\leq v_i\leq r_i-m_i$. \item Suppose $p_i=2\mid abc$ and $2\nmid\gcd(a,b,c)$. Then $2$ divides exactly one between $a$, $b$, and $c$. Arguing as in the previous point, one can show that the Sylow $2$-subgroup of $G$ is cyclic of order $2^{v_i+\max\{s_i,t_i,u_i\}}$ with $0\leq v_i\leq r_i-\max\{s_i,t_i,u_i\}$. \item Suppose $p_i=2\mid abc$ and $2\mid\gcd(a,b,c)$. Arguing as above, one can show that the Sylow $2$-subgroup $P_i$ of $G/K$ is cyclic and generated by $\alpha K$ where $\alpha\in G$ is a $2$-element; if $\alpha K$ has order $2^f$, then $\alpha^{2^{f}}$ has maximal order in the Sylow $2$-subgroup of $K$. Let $\gamma=\alpha^{2^{f-1}}={\rm diag}(\lambda,\mu,1)$. Suppose $s_i=t_i=u_i$. If $o(\mu)\leq 2^{s_i}$, then $\sigma={\rm diag}(1,\mu^{-1},1)\in K$ and $\gamma\sigma$ is a homology of order $2^{s_i+1}$, a contradiction; analogously if $o(\lambda)\leq 2^{s_i}$. Then $o(\lambda)=o(\mu)=2^{s_i+1}$, so that $\mu=\lambda^i$ with $i$ odd. Hence, $o(\lambda^{1-i})\leq 2^{s_i}$ and $\sigma={\rm diag}(1,\lambda^{1-i},1)\in K$; thus, $\gamma\sigma$ is a homology of order $2^{s_i+1}$, a contradiction. Suppose that $s_i,t_i,u_i$ are not equal, so that two of them are equal and smaller than the third one, say $s_i=t_i<u_i$. 
As shown above, this implies that $\gamma^2$ is a homology and $\gamma^2\in C$; otherwise, new homologies arise. Then $\lambda^2=\mu^2$ and hence $\mu=-\lambda$, as $\gamma$ is of type (B1). Since $\sigma={\rm diag}(1,-1,1)\in K$, $\gamma\sigma$ is a homology of order $2^{u_i+1}$ in $G$, a contradiction. The argument is analogous if $\max\{s_i,t_i,u_i\}=s_i$ or $\max\{s_i,t_i,u_i\}=t_i$. \end{itemize} \end{proof} \begin{remark} From the proof of Theorem {\rm \ref{fissatorepuntuale}} it follows that the group-theoretic structure of $G$ is uniquely determined by the parameters $e=|G|$, $a=|\{{\rm diag}(\lambda,1,1)\in G\;\textrm{for some}\;\lambda\}|$, $b=|\{{\rm diag}(1,\lambda,1)\in G\;\textrm{for some}\;\lambda\}|$, and $c=|\{{\rm diag}(\lambda,\lambda,1)\in G\;\textrm{for some}\;\lambda\}|$. In fact, write $G$ as a direct product $H_1\times\cdots\times H_{\ell}$ of its Sylow subgroups. If $p_i$ divides at most one between $a$, $b$, and $c$, then $H_i$ is cyclic, of order $p_i^{v_i+\max\{s_i,t_i,u_i\}}$. Otherwise, $p_i$ divides $a$, $b$, and $c$; in this case, $H_i$ is the direct product $C_1\times C_2$ of two cyclic groups of order $p_i^{v_i+\max\{s_i,t_i,u_i\}}$ and $p_i^{\min\{s_i,t_i,u_i\}}$. \end{remark} \subsection{The pointwise stabilizer of $T$ has index $2$ in $G$}\label{sec:indice2} In this section $[G:G_T]=2$. We consider separately the cases $q$ even and $q$ odd. \begin{proposition}\label{indice2pari} Let $q$ be even. \begin{itemize} \item[(i)] Let $a$, $c$, and $e$ be positive integers satisfying $e\mid(q+1)^2$, $c\mid(q+1)$, $a\mid c$, $ac\mid e$, $\frac{e}{a}\mid(q+1)$, and $\gcd\left(\frac{e}{ac},\frac{c}{a}\right)=1$. Then there exists a subgroup $G\leq (C_{q+1}\times C_{q+1})\rtimes \mathbf{S}_3$ of order $2e$ such that $|G_T|=e$ and \begin{equation}\label{genereindice2} g({\mathcal H}_q/G)=\frac{(q+1)\left(q-2a-c-\frac{e}{c}+1\right)+3e}{4e}. \end{equation} \item[(ii)] Conversely, if $G\leq(C_{q+1}\times C_{q+1})\rtimes \mathbf{S}_3$ and $G_T$ has index $2$ in $G$, then the genus of ${\mathcal H}_q/G$ is given by Equation \eqref{genereindice2}, where: $e=|G|/2$; without loss of generality, $a-1$ is the number of homologies in $G$ with center $P_1$, which is equal to the number of homologies in $G$ with center $P_2$, and $c-1$ is the number of homologies in $G$ with center $P_3$; $a,c,e$ satisfy the numerical assumptions in point {\it (i)}. \end{itemize} \end{proposition} \begin{proof} {\it (i)}: let $a,c,e$ be positive integers satisfying the assumptions in point {\it (i)}. Choose $\lambda,\mu,\rho\in\mathbb{F}_{q^2}$ such that $o(\lambda)=a$, $o(\mu)=c$, and $o(\rho)=e/a$. Define $A=\langle{\rm diag}(\lambda,1,1)\rangle$, $C=\langle{\rm diag}(\mu,\mu,1)\rangle$, $D=\langle{\rm diag}(\rho,\rho^{-1},1)\rangle$, $S=ACD$, and $$ \beta=\begin{pmatrix} 0 & \gamma & 0 \\ \gamma^{-1} & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, $$ for some $\gamma\in\mathbb{F}_{q^2}$ with $\gamma^{q+1}=1$. Let $G=\langle S,\beta\rangle$. Note that $S\cong A\times CD$, $G\leq{\rm PGU}(3,q)$, $G$ stabilizes $T$, and $G_T=S$. By direct checking, the assumptions imply that $\beta$ normalizes $S$, so that $G\cong S\rtimes \langle\beta\rangle$. The order of $G$ is $2e$. We show that $g({\mathcal H}_q/G)$ is given by Equation \eqref{genereindice2}. The homologies of $S$ are contained in $K=A\times C$ and their number is $(a-1)+(a-1)+(c-1)$ (note that $S$ has the role of $G$ in Theorem \ref{fissatorepuntuale}, where $b=a$). 
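Indeed, $A$ and $C$ provide the $a-1$ and the $c-1$ homologies with center $P_1$ and $P_3$ respectively, while the elements ${\rm diag}(\lambda^i\mu^j,\mu^j,1)\in K$ with $\mu^j=\lambda^{-i}\ne1$ (one for each nontrivial power $\lambda^i$, since $a\mid c$) are the remaining $a-1$ homologies, with center $P_2$.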
By Theorem \ref{caratteri}, we have $2a+c-3$ elements $\sigma\in S$ such that $i(\sigma)=q+1$; for any other $\sigma\in S$, $i(\sigma)=0$. Let $\sigma\in S$, $\sigma={\rm diag}(\delta,\epsilon,1)$. If $\epsilon=\delta^{-1}$, then $o(\sigma\beta)=2$; by Theorem \ref{caratteri}, $i(\sigma\beta)=q+2$. If $\epsilon\ne\delta^{-1}$, then $o(\sigma\beta)=2d$ where $d$ is a nontrivial divisor of $q+1$; by Theorem \ref{caratteri}, $i(\sigma\beta)=1$. The number of elements $\sigma={\rm diag}(\lambda\mu,\mu,1)\in K$ such that $\mu=(\lambda\mu)^{-1}$ is equal to $a$; it follows that the number of elements $\sigma={\rm diag}(\delta,\epsilon,1)\in S$ such that $\epsilon=\delta^{-1}$ is equal to $a\cdot [S:K]=\frac{e}{c}$, because any element of $S/K$ is of type $\sigma K$ with $\sigma\in D$. Thus, by the Riemann-Hurwitz formula, $q^2-q-2=2e(2g({\mathcal H}_q/G)-2)+\Delta$, where \begin{equation}\label{boh} \Delta= (2a+c-3)(q+1) + \frac{e}{c}(q+2) + \left(e-\frac{e}{c}\right)\cdot1. \end{equation} Equation \eqref{genereindice2} now follows by direct computation. {\rm (ii)}: let $G\leq{\rm PGU}(3,q)$ stabilize $T$ with $[G:G_T]=2$. Let $e$ be the order of $G_T$. Since $G_T$ has odd order, $G_T$ has a complement in $G$, say $G=G_T\rtimes C_2$ with $C_2=\langle\beta\rangle$. Without restriction, we assume that $\beta$ stabilizes $P_3$ and interchanges $P_1$ and $P_2$, so that $$ \beta=\begin{pmatrix} 0 & \gamma & 0 \\ \gamma^{-1} & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, $$ with $\gamma^{q+1}=1$. For any $\sigma={\rm diag}(\delta,\epsilon,1)\in G_T$, we have $\beta^{-1}\sigma\beta={\rm diag}(\epsilon,\delta,1)\in G_T$. Hence, the subgroup $A\leq G_T$ of elements of type ${\rm diag}(\lambda,1,1)$ has the same order $a\mid(q+1)$ of the subgroup $B=\beta^{-1}A\beta\leq G_T$ made by the elements of type ${\rm diag}(1,\lambda,1)$. The order $c\mid(q+1)$ of the subgroup $C\leq G_T$ of elements of type ${\rm diag}(\lambda,\lambda,1)$ is a multiple of $a$; note that $C$ is the center of $G$. The number of homologies in $G_T$ is $d=2a+c-3$, and the homologies generate $K=ABC= A\times C$. Let $\bar p$ be a prime divisor of $|G_T/K|=\bar{p}^v f$ with $\bar{p}\nmid f$. As in the proof of Theorem \ref{fissatorepuntuale}, the Sylow $\bar p$-subgroup $\bar P$ of $G_T/K$ is cyclic; $\bar P=\langle\alpha K\rangle$, where $\alpha={\rm diag}(\delta,\epsilon,1)\in G_T$ is a $\bar p$-element and $o(\alpha^{\bar p^{v}})=\bar p^{m}$ is maximal in the Sylow $\bar p$-subgroup of $K$. We can also assume $o(\delta)=o(\alpha)=\bar p^{v+m}$ and $\epsilon=\delta^j$ for some $j\in\{2,\ldots,\bar p^{v+m}-1\}$; otherwise, $o(\epsilon)=o(\alpha)$, $\delta=\epsilon^j$, and analogous arguments hold. The element $\beta^{-1}\alpha\beta={\rm diag}(\delta^j,\delta,1)$ must be contained in $\langle\alpha\rangle$; otherwise, $\langle\alpha,\beta^{-1}\alpha\beta\rangle$ is a subgroup $C_{\bar p^{v+m}}\times C_{\bar p^{v+m}}$ of $G$ containing homologies out of $K$, a contradiction. Thus, $\beta^{-1}\alpha\beta=\alpha^k$ for some $k\in\{2,\ldots,\bar p^{v+m}-1\}$, that is, $\delta^{k}=\delta^{j}$ and $\delta^{jk}=\delta$. This implies $k=j=\bar p^{v+m}-1$ and $\alpha={\rm diag}(\delta,\delta^{-1},1)$. Hence, $\langle\alpha\rangle$ has order a divisor of $q+1$; the same argument for all prime divisors $\bar p$ of $|G_T/K|$ proves that $\frac{e}{a}\mid(q+1)$. Also, no nontrivial power $\alpha^r={\rm diag}(\delta^{r},\delta^{-r},1)\ne id$ of $\alpha$ can be contained in $C$. 
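Indeed, $\alpha^r\in C$ would force $\delta^{r}=\delta^{-r}$, that is $\delta^{2r}=1$, whence $\delta^{r}=1$ and $\alpha^r=id$, because $q+1$ is odd.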
Hence, $\gcd(o(\alpha),a)=\gcd(o(\alpha),c)$; otherwise, the product $\langle\alpha\rangle \cdot C$ would contain new homologies in $A$ whose order does not divide $a$, a contradiction. Therefore, $\gcd(\frac{e}{ac},\frac{c}{a})=1$. Now the value of $\Delta$ is computed as in Equation \eqref{boh} and provides Equation \eqref{genereindice2}, that is the thesis. \end{proof} \begin{proposition}\label{indice2dispari} Let $q$ be odd. \begin{itemize} \item[(i)] Let $\ell$, $a$, $c$, and $e$ be positive integers satisfying $e\mid(q+1)^2$, $c\mid(q+1)$, $\ell\mid c$, $a\mid c$, $ac\mid e$, $\frac{e}{a}\mid(q+1)$, and $\gcd(\frac{e}{ac},\frac{c}{a})=1$. If $2\mid a$ or $2\nmid c$, we also require that $2\nmid \frac{e}{ac}$. Then there exists a subgroup $G\leq (C_{q+1}\times C_{q+1})\rtimes \mathbf{S}_3$ of order $2e$ such that $|G\cap(C_{q+1}\times C_{q+1})|=e$ and \begin{equation}\label{genereindice2dispari} g({\mathcal H}_q/G)=\frac{(q+1)\left(q-2a-c+1-h\right)-2k + 4e}{4e}, \end{equation} where $$ (h,k)=\begin{cases} \left(\frac{e}{c},\frac{e}{2}\right) & \qquad \textrm{if} \qquad 2a\nmid (q+1) \,;\\ \left(\frac{e}{c},0\right) & \qquad \textrm{if} \qquad 2a \mid (q+1),\; 2a\nmid c\;; \\ \left(0,e\right) & \qquad \textrm{if} \qquad 2a\mid c,\;2\ell\nmid(q+1)\;; \\ \left(0,0\right) & \qquad \textrm{if} \qquad 2a\mid c,\; 2\ell\mid (q+1),\; 2\ell\nmid c\;; \\ \left(\frac{2e}{c},0\right) & \qquad \textrm{if} \qquad 2a\mid c,\; 2\ell\mid c\;. \\ \end{cases} $$ \item[(ii)] Conversely, if $G\leq(C_{q+1}\times C_{q+1})\rtimes \mathbf{S}_3$ and $G\cap(C_{q+1}\times C_{q+1})$ has index $2$ in $G$, then the genus of ${\mathcal H}_q/G$ is given by Equation \eqref{genereindice2dispari}, where: $e=|G|/2$; without loss of generality, $a-1$ is the number of homologies in $G$ with center $P_1$ which is equal to the number of homologies in $G$ with center $P_2$, and $c-1$ is the number of homologies in $G$ with center $P_3$; $\ell=\frac{o(\beta)}{2}$ for some $\beta\in G\setminus G_T$; $\ell,a,c,e$ satisfy the numerical assumptions in point {\it (i)}. \end{itemize} \end{proposition} \begin{proof} {\it (i)}: let $\ell,a,c,e$ be positive integers satisfying the assumptions in point {\it (i)}. Choose $\lambda,\mu,\rho\in\mathbb{F}_{q^2}$ such that $o(\lambda)=a$, $o(\mu)=c$, and $o(\rho)=e/a$. Define $A=\langle{\rm diag}(\lambda,1,1)\rangle$, $C=\langle{\rm diag}(\mu,\mu,1)\rangle$, $D=\langle{\rm diag}(\rho,\rho^{-1},1)\rangle$, $S=ACD= (A\times C)D$, and $$ \beta=\begin{pmatrix} 0 & t & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, $$ where $t\in\mathbb{F}_{q^2}$ has order $\ell$. Let $G=\langle S,\beta\rangle$. We have that $G\leq {\rm PGU}(3,q)$, $G$ stabilizes $T$, and $G_T=S$. The numerical assumptions imply that, $\beta^2={\rm diag}(t,t,1)\in C$, $\beta$ normalizes $G_T$, and $|G|=2e$. The homologies in $G_T$ are in $K=A\times C$, and their number is $2a+c-3$; any other nontrivial element in $G_T$ is of type (B1). Let $\sigma\in G\setminus G_T$, which is uniquely written as $\tau\beta$ with $\tau\in G_T$. We have $\tau=\zeta\alpha\xi$, where $\zeta\in D$, $\alpha$ is a fixed element of $A$ and $\xi$ is a fixed element of $C$; write $\alpha={\rm diag}(\lambda^r,1,1)$ and $\xi={\rm diag}(\mu^s,\mu^s,1)$, where $r\in\{0,\ldots,a-1\}$ and $s\in\{0,\ldots,c-1\}$ are uniquely determined. By direct checking, $\sigma^2={\rm diag}(\lambda^r \mu^{2s}t,\lambda^r \mu^{2s}t,1)$. 
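In fact, writing $\tau={\rm diag}(x,y,1)$, we have $\beta\tau\beta^{-1}={\rm diag}(y,x,1)$ and $\beta^2={\rm diag}(t,t,1)$, so that $\sigma^2=\tau(\beta\tau\beta^{-1})\beta^2={\rm diag}(xyt,xyt,1)$ with $xy=\lambda^r\mu^{2s}$.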
Hence, $\sigma$ is either of type (A), or (B1), or (B2), according to $\lambda^r \mu^{2s}t=1$, or $1\ne o(\lambda^r \mu^{2s}t)\mid\frac{q+1}{2}$, or $o(\lambda^r \mu^{2s}t)\nmid \frac{q+1}{2}$, respectively. Note that the type of $\sigma$ does not depend on $\zeta$. Thus, to find the number of elements $\sigma=\tau\beta\in G\setminus G_T$ of a certain type, we must count the number of elements $\tau\in K$ such that $\sigma$ is of that type, and then multiply this number by the index $\frac{e}{ac}$ of $K$ in $G_T$; to this aim, we distinguish different cases arising from the numerical conditions on $\ell,a,c$. Recall that $\beta$ is fixed as above. \begin{itemize} \item $a\nmid\frac{q+1}{2}$: this implies $c\nmid\frac{q+1}{2}$. The number of elements of type (B2) in $G\setminus G_T$ is equal to $\frac{a}{2}\cdot c\cdot\frac{e}{ac}=\frac{e}{2}$. In fact, if $\ell\mid\frac{q+1}{2}$, then $o(\lambda^r\mu^{2s}t)\nmid\frac{q+1}{2}$ if and only if $r$ is odd; if $\ell\nmid\frac{q+1}{2}$, then $o(\lambda^r\mu^{2s}t)\nmid\frac{q+1}{2}$ if and only if $r$ is even. Suppose $\ell\nmid\frac{q+1}{2}$ and $r$ is even, or $\ell\mid\frac{q+1}{2}$ and $r$ is odd. As already observed, $\sigma$ is of type (A) if and only if $\mu^{2s}=(\lambda^r t)^{-1}$. Thus, $G\setminus G_T$ has $\frac{a}{2}\cdot 2\cdot\frac{e}{ac}=\frac{e}{c}$ elements of type (A), and $\frac{a}{2}\cdot (c-2)\cdot\frac{e}{ac}$ elements of type (B1). \item $a\mid\frac{q+1}{2}$ and $a\nmid\frac{c}{2}$: in this case $o(\lambda^r\mu^{2s}t)\mid\frac{q+1}{2}$ and $G\setminus G_T$ has no elements of type (B2). For any $\ell$ there exist exactly $\frac{a}{2}$ values of $r$ such that $o((\lambda^r t)^{-1})\mid \frac{c}{2}$ (as above, we consider the cases $r$ even and $r$ odd separately) and there are exactly $2$ values of $s$ for which $\lambda^r\mu^{2s}t=1$. In this way we get $\frac{a}{2}\cdot2\cdot\frac{e}{ac}=\frac{e}{c}$ elements of $G\setminus G_T$ are of type (A), the other ones are of type (B1). \item $a\mid\frac{c}{2}$ and $\ell\nmid\frac{q+1}{2}$: in this case $o(\lambda^r\mu^{2s}t)\nmid\frac{q+1}{2}$ since $o(\lambda^r\mu^{2s})\mid\frac{q+1}{2}$ and $o(t)\nmid\frac{q+1}{2}$; hence, all elements of $G\setminus G_T$ are of type (B2). \item $a\mid\frac{c}{2}$ and $\ell\mid\frac{q+1}{2}$ and $\ell\nmid\frac{c}{2}$: in this case $o(\lambda^r\mu^{2s}t)\mid\frac{q+1}{2}$ and $o(\lambda^r\mu^{2s})\ne o(t)$, so that $\lambda^r\mu^{2s}t\ne1$. Hence all elements of $G\setminus G_T$ are of type (B1). \item $a\mid\frac{c}{2}$ and $\ell\mid\frac{c}{2}$: in this case $o(\lambda^r\mu^{2s}t)\mid\frac{q+1}{2}$ and $G\setminus G_T$ contains no elements of type (B2). Also, $o((\lambda^r t)^{-1})\mid o(\mu^2)$ and $G\setminus G_T$ contains exactly $a\cdot2\cdot\frac{e}{ac}=\frac{2e}{c}$ elements of type (A). \end{itemize} Denote by $h$ and $k$ the number of elements of $G\setminus G_T$ of type (A) and (B2), respectively. By direct checking, the Riemann-Hurwitz formula and Theorem \ref{caratteri} provide the value given in Equation \eqref{genereindice2dispari} for the genus of ${\mathcal H}_q/G$. {\it (ii)}: let $G\leq{\rm PGU}(3,q)$ stabilize $T$ with $[G:G_T]=2$. Put $e=|G_T|$. For any $\beta\in G\setminus G_T$, $\beta^2\in G_T$, and we can assume without loss of generality that $\beta$ stabilizes $P_3$ and interchanges $P_1$ and $P_2$, so that $$ \beta=\begin{pmatrix} 0 & \gamma_1 & 0 \\ \gamma_2 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} $$ for some $\gamma_1,\gamma_2$ with $\gamma_1^{q+1}=\gamma_2^{q+1}=1$. 
With same arguments as in the proof of Proposition \ref{indice2pari}, one can show that: the number of homologies with center $P_1$, say $a-1$, is equal to the number of homologies with center $P_2$; the number of homologies with center $P_3$, say $c-1$, satisfies $a\mid c$ and $c\mid(q+1)$; the subgroup of $G_T$ generated by the homologies of $G_T$ is $K=A\times C$, where $A$ and $C$ are given by the identity together with the homologies having center $P_1$ and $P_3$, respectively; from $K\leq G_T$ follows that $ac\mid e$; $\frac{e}{a}\mid(q+1)$; and $\gcd(\frac{e}{ac},\frac{c}{a})=1$. Since there are no homologies in $G_T\setminus K$, same arguments as in Theorem \ref{fissatorepuntuale} give $2\nmid \frac{e}{ac}$ when $2\mid a$ or $2\nmid c$. The element $\beta^2={\rm diag}(\gamma_1\gamma_2,\gamma_1\gamma_2,1)$ is either trivial or a homology with center $P_3$; hence, $\beta^2\in C$ and $\ell:=o(\gamma_1\gamma_2)=\frac{o(\beta)}{2}$ is a divisor of $c$. Now the value of $\Delta$ in the Riemann-Hurwitz formula $$2g({\mathcal H}_q)-2=|G|(2g({\mathcal H}_q/G)-2)+\Delta$$ can be computed as above and provides Equation \eqref{genereindice2dispari}, so that the claim follows. \end{proof} \subsection{The pointwise stabilizer of $T$ has index $3$ in $G$}\label{sec:indice3} In this section $[G:G_T]=3$. We consider the cases $3\nmid(q+1)$ and $3\mid(q+1)$ separately. \begin{proposition}\label{indice3caso1} Let $q$ be such that $3\nmid(q+1)$. \begin{itemize} \item[(i)] Let $a$ and $e$ be positive integers satisfying $e\mid(q+1)^2$, $a^2\mid e$, $\frac{e}{a}\mid(q+1)$, $2\nmid\frac{e}{a^2}$, and $\gcd(\frac{e}{a^2},a)=1$. We also require that there exists a positive integer $m\leq \frac{e}{a^2}$ such that $\frac{e}{a^2}\mid(m^2-m+1)$. Then there exists a subgroup $G\leq(C_{q+1}\times C_{q+1})\rtimes \mathbf{S}_3$ of order $3e$ such that $|G\cap(C_{q+1}\times C_{q+1})|=e$ and \begin{equation}\label{genereindice3caso1} g({\mathcal H}_q/G)=\frac{(q+1)(q-3a+1)+2e}{6e}. \end{equation} \item[(ii)] Conversely, if $G\leq(C_{q+1}\times C_{q+1})\rtimes \mathbf{S}_3$ and $G\cap(C_{q+1}\times C_{q+1})$ has index $3$ in $G$, then the genus of ${\mathcal H}_q/G$ is given by Equation \eqref{genereindice3caso1}, where: $e=|G|/3$; the number of homologies in $G$ with center $P_i$ is $a-1$ for $i=1,2,3$; there exists $m$ such that $a,e,m$ satisfy the numerical assumptions in point {\it (i)}. \end{itemize} \end{proposition} \begin{proof} {\it (i)}: let $a$, $e$, and $m$ satisfy the assumptions in point {\it (i)}. Define the following elements and subgroups of ${\rm {\rm PGU}}(3,q)$: $K=\{{\rm diag}(\lambda,\mu,1)\mid \lambda^a=\mu^a=1\}$, $\alpha={\rm diag}(\rho,\rho^m,1)$ where $\rho\in\mathbb{F}_{q^2}$ is an element of order $\frac{e}{a^2}$, $S=\langle K,\alpha\rangle=K\times\langle\alpha\rangle$ as $\gcd\left(\frac{e}{a^2},a\right)=1$, $$ \beta=\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}, $$ and $G=\langle S,\beta \rangle\leq{\rm PGU}(3,q)$. Note that $S$ stabilizes $T$ pointwise, $\beta$ has order $3$, acts on $T$, and normalizes $K$. Also, $\beta^{-1}\alpha\beta=\alpha^{-m}$, so that $\beta$ normalizes $S$. Thus, $G_T=S$ has order $e$, and $G=S\rtimes\langle\beta\rangle$ has order $3e$. We show that $g({\mathcal H}_q/G)$ is given by Equation \eqref{genereindice3caso1}. The subgroup $K$ contains exactly $3(a-1)$ elements of type (A); the other nontrivial elements of $K$ are of type (B1). Suppose that $G_T\ne K$, that is, $\alpha$ is not trivial. 
Let $\sigma\in G_T\setminus K$ and write $\sigma={\rm diag}(\lambda\rho^{j},\mu\rho^{jm},1)$ with $\lambda^a=\mu^a=1$ and $0<j<\frac{e}{a^2}$. The element $\sigma$ is of type (A) if and only if either $\lambda\rho^j=1$, or $\mu\rho^{jm}=1$, or $\lambda\rho^j=\mu\rho^{jm}$. By direct checking using the numerical conditions on the parameters, none of these conditions holds. Therefore, every element in $G_T\setminus K$ is of type (B1). Let $\sigma\in G\setminus G_T$, so that $\sigma={\rm diag}(\delta,\epsilon,1)\cdot\beta$ or $\sigma={\rm diag}(\delta,\epsilon,1)\cdot\beta^2$ for some $(q+1)$-th roots of unity $\delta$ and $\epsilon$. By direct checking $\sigma$ has order $3$. If $3\mid(q-1)$, then $\sigma$ is of type (B2). If $3\mid q$, then $\sigma$ is of type (D), because no element of type (C) can act transitively on the vertices of $T$ (as it stabilizes each line through its center). Equation \eqref{genereindice3caso1} now follows from Theorem \ref{caratteri} and the Riemann-Hurwitz formula. In fact $G$ has $3(a-1)$ elements of type (A), $e-3a+2$ elements of type (B1), and $2e$ elements of type either (B2) or (D) according to $3\mid(q-1)$ or $3\mid q$, respectively. {\it (ii)}: let $G\leq{\rm PGU}(3,q)$ stabilize $T$ with $[G:G_T]=3$, $e$ be the order of $G_T$, and $\beta$ be an element of $G\setminus G_T$. Then $\beta$ acts transitively on $T$ and we can assume that $$ \beta=\begin{pmatrix} 0 & \gamma_1 & 0 \\ 0 & 0 & \gamma_2 \\ 1 & 0 & 0 \end{pmatrix} $$ for some $(q+1)$-th roots of unity $\gamma_1,\gamma_2$. The element $\beta$ has order $3$ and normalizes $G_T$, so that $G=G_T\rtimes \langle\beta\rangle$. Since $\beta$ is a $3$-cycle on $T$, the number of elements of type (A) with center $P_i$ is the same for $i=1,2,3$, say $a-1$; clearly, $a$ divides $q+1$. Then the elements of type (A) in $G_T$ generate the subgroup $K=\{{\rm diag}(\lambda,\mu,1)\mid\lambda^a=\mu^a=1\}$ of order $a^2$; this implies $a^2\mid e$. From the proof of Theorem \ref{fissatorepuntuale} applied to $G_T$ follows that $\frac{e}{a}\mid(q+1)$, $\frac{e}{a^2}$ is odd, and $G_T/K$ is cyclic; say $G_T/K=\langle\alpha K\rangle$. Suppose that $|G_T/K|=\frac{e}{a^2}\ne1$, that is, $\alpha\notin K$ and $\alpha$ is of type (B1). We can assume that $\alpha={\rm diag}(\rho,\rho^m,1)$ with $o(\alpha)=o(\rho)=\frac{e}{a^2}$ and $1<m<\frac{e}{a^2}$. If $d=\gcd(o(\rho),|K|)$, then $\alpha^d\in K$ because $K$ contains all elements of order $d$ in $C_{q+1}\times C_{q+1}$. Thus, we can replace $\alpha$ with $\alpha^d$ and assume that $\gcd(o(\alpha),|K|)=1$; hence, $\gcd(\frac{e}{a^2},a)=1$ and $G_T=K\times\langle\alpha\rangle$. Consider $\tilde\alpha=\beta^{-1}\alpha\beta={\rm diag}(\rho^{-m},\rho^{1-m},1)\in G_T$. If $\tilde\alpha\notin\langle\alpha\rangle$, then as in the proof of Theorem \ref{fissatorepuntuale} we have that $\langle\alpha,\tilde\alpha\rangle\cong C_{\frac{e}{a^2}}\times C_{\frac{e}{a^2}}$ contains elements of type (A) not in $K$, a contradiction. Hence, $\tilde \alpha=\alpha^j$ for some $j$ with $1<j<\frac{e}{a^2}$. By direct computation, this is equivalent to require that $j=\frac{e}{a^2}-m$ and $\frac{e}{a^2}\mid(m^2-m+1)$. The same condition is required for $(\beta^2)^{-1}\alpha\beta^2\in\langle\alpha\rangle$. Now the same argument as in point {\it (i)} provides the type of the elements of $G$ and therefore the genus of ${\mathcal H}_q/G$ by means of Theorem \ref{caratteri} and the Riemann-Hurwitz formula. \end{proof} \begin{proposition}\label{indice3caso2} Let $q$ be such that $3\mid(q+1)$. 
\begin{itemize} \item[(i)] Let $a$, $e$, and $\ell$ be positive integers satisfying $e\mid(q+1)^2$, $a^2\mid e$, $\frac{e}{a}\mid(q+1)$, $2\nmid\frac{e}{a^2}$, $\gcd(\frac{e}{a^2},a)=1$, and $\ell\mid(q+1)$. We also require that there exists a positive integer $m\leq \frac{e}{a^2}$ such that $\frac{e}{a^2}\mid(m^2-m+1)$. Then there exists a subgroup $G\leq(C_{q+1}\times C_{q+1})\rtimes \mathbf{S}_3$ of order $3e$ such that $|G\cap(C_{q+1}\times C_{q+1})|=e$ and \begin{equation}\label{genereindice3caso2} g({\mathcal H}_q/G)=\frac{(q+1)(q-3a+1)+h\cdot e}{6e}, \qquad\textrm{with}\qquad h=\begin{cases} 2 & \textrm{if}\quad a\nmid\frac{q+1}{3}, \\ 0 & \textrm{if}\quad a\mid\frac{q+1}{3},\;\ell\nmid\frac{q+1}{3}, \\ 6 & \textrm{if}\quad a\mid\frac{q+1}{3},\;\ell\mid\frac{q+1}{3}. \\ \end{cases} \end{equation} \item[(ii)] Conversely, if $G\leq(C_{q+1}\times C_{q+1})\rtimes \mathbf{S}_3$ and $G\cap(C_{q+1}\times C_{q+1})$ has index $3$ in $G$, then the genus of ${\mathcal H}_q/G$ is given by Equation \eqref{genereindice3caso2}, where: $e=|G|/3$; the number of homologies in $G$ with center $P_i$ is $a-1$ for $i=1,2,3$; there exist $\ell$ and $m$ such that $a,e,\ell,m$ satisfy the numerical assumptions in point {\it (i)}. \end{itemize} \end{proposition} \begin{proof} {\it (i)}: let $a,e,\ell,m$ satisfy the assumptions in point {\it (i)}. Define $K,\alpha,S$ as in the proof of Proposition \ref{indice3caso1} and $\beta\in{\rm PGU}(3,q)$ by $$ \beta=\begin{pmatrix} 0 & t & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}, $$ where $t\in\mathbb{F}_{q^2}$ has order $\ell$. Let $G=\langle S,\beta\rangle$. Arguing as in the proof of Proposition \ref{indice3caso1} we have that $S=G_T$ has order $e$, $G=G_T\rtimes\langle\beta\rangle$ has order $3e$, $G_T$ has $3(a-1)$ elements of type (A) and any other nontrivial element of $G_T$ is of type (B1). Also, every element $\sigma\in G\setminus G_T$ has order $3$ and can be uniquely written as $\sigma=\kappa\alpha^j\beta$ or $\sigma=\kappa\alpha^j\beta^2$, where $\kappa={\rm diag}(\lambda,\mu,1)\in K$ and $\alpha^j={\rm diag}(\rho^j,\rho^{jm},1)$. The element $\sigma$ has exactly $3$ fixed points in ${\rm PG}(2,\bar{\mathbb{F}}_{q^2})$, namely $(\bar x,(t\lambda\rho^{j})^{-1}\bar{x}^2,1)$ where $\bar{x}^3=t\lambda\mu\rho^{j(m+1)}$. If the fixed points of $\sigma$ are $\mathbb{F}_{q^2}$-rational then $\sigma$ is of type (B1), otherwise $\sigma$ is of type (B3). Then $\sigma$ is either of type (B1) or of type (B3), according to $o(t\lambda\mu\rho^{j(m+1)})\mid\frac{q+1}{3}$ or $o(t\lambda\mu\rho^{j(m+1)})\nmid\frac{q+1}{3}$, respectively. If $3\mid o(\rho)=\frac{e}{a^2}$, then $3\mid(m^2-m+1)$ and hence $3\mid(m+1)$, so that $o(\rho^{j(m+1)})\mid\frac{q+1}{3}$; if $3\nmid o(\rho)$, then again $o(\rho^{j(m+1)})\mid\frac{q+1}{3}$. In any case, $\sigma$ is of type (B1) or (B3) according to $o(t\lambda\mu)\mid\frac{q+1}{3}$ or $o(t\lambda\mu)\nmid\frac{q+1}{3}$, respectively. We make clear now the type of $\sigma$ in relation to the assumptions on $a$ and $\ell$. \begin{itemize} \item If $a\mid\frac{q+1}{3}$ and $\ell\mid\frac{q+1}{3}$, then the $2e$ elements of $G\setminus G_T$ are all of type (B1). \item If $a\mid\frac{q+1}{3}$ and $\ell\nmid\frac{q+1}{3}$, then the $2e$ elements of $G\setminus G_T$ are all of type (B3). \item If $a\nmid\frac{q+1}{3}$, then $G\setminus G_T$ contains $\frac{a^2}{3}\cdot[G_T:K]\cdot 2=\frac{2e}{3}$ elements of type (B1) and $2e-\frac{2e}{3}=\frac{4e}{3}$ elements of type (B3). 
This number can be obtained counting the elements of type (B1) as follows. Consider a primitive $(q+1)$-th root of unity $\epsilon$ and write $t=\epsilon^r$, $\lambda\mu=\epsilon^s$. Then $\sigma$ is of type (B1) if and only if $s\equiv-r\pmod3$. When $s$ is given, we have $\frac{a}{3}$ choices for $s$ such that $s\equiv-r\pmod3$. Hence, we have $\frac{a^2}{3}$ choices for the couple $(\lambda,\mu)$ such that $\sigma$ is of type (B1). \end{itemize} Then Equation \eqref{genereindice3caso2} follows by Theorem \ref{caratteri} and the Riemann-Hurwitz formula. {\it (ii)}: let $G\leq{\rm PGU}(3,q)$ stabilize $T$ with $[G:G_T]=3$ and $|G_T|=e$. Then we can argue as in the proof of Proposition \ref{indice3caso1} to prove that $G=G_T\rtimes\langle\beta\rangle$ with $$ \beta=\begin{pmatrix} 0 & \gamma_1 & 0 \\ 0 & 0 & \gamma_2 \\ 1 & 0 & 0 \end{pmatrix} $$ and $\ell:=o(\gamma_1\gamma_2)\mid(q+1)$, $G_T=K\times\langle\alpha\rangle$ where $K$ is the subgroup generated by the elements of type (A), $|K|=a^2$, $\alpha={\rm diag}(\rho,\rho^{m},1)$, and the parameters $e,a,m,\ell$ satisfy the assumptions in point {\it (i)}. Now the genus of ${\mathcal H}_q/G$ is obtained as in point {\it (i)} by means of Theorem \ref{caratteri} and the Riemann-Hurwitz formula. \end{proof} \subsection{The pointwise stabilizer of $T$ has index $6$ in $G$}\label{sec:indice6} In this section $[G:G_T]=6$. \begin{proposition}\label{indice6} \begin{itemize} \item[(i)] Let $a$ be a divisor of $q+1$. We choose $e=a^2$ if $3\nmid(q+1)$ or $3\mid a$; $e\in\{a^2,3a^2\}$ if $3\mid(q+1)$ and $3\nmid a$. Then there exists a subgroup $G\leq(C_{q+1}\times C_{q+1})\rtimes \mathbf{S}_3$ of order $6e$ such that $|G\cap(C_{q+1}\times C_{q+1})|=e$ and \begin{equation}\label{genereindice6} g({\mathcal H}_q/G)=\frac{(q+1)(q-3a+1-\frac{3e}{a})-2r-3s-t+12e}{12e}, \end{equation} where $$ r=\begin{cases} \frac{7e}{2} & \textrm{if}\quad q\equiv0\,\textrm{ or }\,1\!\!\!\pmod3,\;\textrm{$q$ is odd},\;a\nmid\frac{q+1}{2}, \\ \frac{3e}{2} & \textrm{if}\quad q\equiv2\!\!\!\pmod3,\;\textrm{$q$ is odd},\;a\nmid\frac{q+1}{2}, \\ 2e & \textrm{if}\quad q\equiv0\,\textrm{ or }\,1\!\!\!\pmod3,\;\textrm{either $q$ is even or}\;a\mid\frac{q+1}{2}, \\ 0 & \textrm{if}\quad q\equiv2\!\!\!\pmod3,\;\textrm{either $q$ is even or}\;a\mid\frac{q+1}{2}; \end{cases} $$ $$ s= \begin{cases} \frac{4e}{3} & \textrm{if}\quad q\equiv2\!\!\!\pmod3\;\textrm{ and }\; a\nmid\frac{q+1}{3}, \\ 0 & \textrm{otherwise}; \end{cases} \quad t=\begin{cases} 0 & \textrm{if $q$ is odd,}\quad \\ 3e & \textrm{if $q$ is even.} \end{cases} $$ \item[(ii)] Conversely, if $G\leq(C_{q+1}\times C_{q+1})\rtimes \mathbf{S}_3$ and $G\cap(C_{q+1}\times C_{q+1})$ has index $6$ in $G$, then the genus of ${\mathcal H}_q/G$ is given by Equation \eqref{genereindice6}, where: $e=|G|/6$; the number of homologies in $G$ with center $P_i$ is $a-1$ for $i=1,2,3$; $a$ and $e$ satisfy the numerical assumptions in point {\it (i)}. \end{itemize} \end{proposition} \begin{proof} {\it (i)}: let $a$ and $e$ satisfy the numerical assumptions in point {\it (i)}. In ${\rm PGU}(3,q)$ define $$ K=\left\{\begin{pmatrix} \lambda & 0 & 0 \\ 0 & \mu & 0 \\ 0 & 0 & 1 \end{pmatrix}:\lambda^a=\mu^a=1\right\}, \; \varphi=\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}, \; \psi=\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},\; C=\langle\varphi,\psi\rangle \cong S_3. 
$$ If $e=3a^2$, define $\alpha={\rm diag}(\rho,\rho^{-1},1)\in{\rm PGU}(3,q)$ with $\rho^3=1$ and $S=\langle K,\alpha\rangle\cong K\times\langle\rho\rangle$; if $e=a^2$, define $S=K$. Let $G=\langle S,C\rangle$. By direct checking, $\varphi$ and $\psi$ normalize $K$ and $\alpha$, so that $G$ is a semidirect product $S\rtimes C$. Also, $G_T=S$ and $G_T$ has index $6$ in $G$. As usual, we count the elements of different type in $G$ as described in Lemma \ref{classificazione}. The subgroup $G_T$ contains exactly $3(a-1)$ elements of type (A); any other nontrivial element of $G_T$ is of type (B1). The elements in $G\setminus G_T$ are contained in subgroups $L$ such that either $[L:G_T]=2$ or $[L:G_T]=3$; here, $L\cap(C_{q+1}\times C_{q+1})=G_T$. Thus, we can apply either Propositions \ref{indice2pari} and \ref{indice2dispari}, or Propositions \ref{indice3caso1} and \ref{indice3caso2}. Equation \eqref{genereindice6} then follows by the Riemann-Hurwitz formula and Theorem \ref{caratteri}. {\it (ii)}: let $G\leq{\rm PGU}(3,q)$ stabilize $T$ with $[G:G_T]=6$ and $|G_T|=e\mid(q+1)^2$. The factor group $G/G_T$ acts faithfully on $T$, hence $G/G_T\cong \mathbf{S}_3$. Let $G/G_T=\langle\varphi G_T\rangle\rtimes\langle\psi G_T\rangle$ where $\varphi,\psi\in G\setminus G_T$ satisfy $\varphi^3\in G_T$ and $\psi^2\in G_T$. We can assume that $$ \varphi=\begin{pmatrix} 0 & \delta_1 & 0 \\ 0 & 0 & \delta_2 \\ 1 & 0 & 0 \end{pmatrix}\quad \textrm{and}\quad \psi=\begin{pmatrix} 0 & \gamma_1 & 0 \\ \gamma_2 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ for some $(q+1)$-th roots of unity $\delta_1,\delta_2,\gamma_1,\gamma_2$. As $(\psi^{-1}\varphi\psi) G_T=\varphi^{-1}G_T$, we have that $\gamma:=\gamma_1\gamma_2$ and $\delta:=\frac{\delta_1^2\gamma_2}{\delta_2\gamma_1^2}$ are $a$-th roots of unity. Then ${\rm diag}(1,\gamma^{-1},1)\in G_T$; hence, we can replace $\psi$ with ${\rm diag}(1,\gamma^{-1},1)\cdot\psi$ and assume that $\gamma_2=\gamma_1^{-1}$ and $\delta_1\delta_2=\frac{\delta_1^3}{\gamma_1^3\delta}$. This yields that $C=\langle\varphi,\psi\rangle\cong \mathbf{S}_3$ is a complement for $G_T$ and $G=G_T\rtimes C$. From $\varphi^{-1}G_T\varphi=G_T$ follows as in the proof of Proposition \ref{indice3caso1} that the elements of type (A) in $G_T$ with center $P_1,P_2,P_3$ are in the same number $a-1$, $a\mid(q+1)$, and generate a subgroup $K\leq G_T$ of order $|K|=a^2$. Suppose that there exists $\alpha\in G_T\setminus K$, that is, $\frac{e}{a^2}>1$. From $\psi^{-1}G_T\psi=G_T$ follows as in the proof of Proposition \ref{indice2pari} that $\alpha={\rm diag}(\lambda,\lambda^{-1},1)$ for some $\lambda$ with $\lambda^e=1$. From $\varphi^{-1}G_T\varphi=G_T$ follows as in the proof of Proposition \ref{indice3caso1} that $\frac{e}{a^2}\mid(\ell^2-\ell+1)$ with $\ell=-1$. Hence $o(\alpha)=3$, $e=3a^2$, and $3\nmid a$, since there are no elements of order $3$ in $(C_{q+1}\times C_{q+1})\setminus K$ if $3\mid a$. Since now the structure of $G$ has been determined, the genus of ${\mathcal H}_q/G$ can be computed arguing as above. Here, use the following fact: if $3\mid(q+1)$ and $a\mid\frac{q+1}{3}$, then $o(\delta_1\delta_2)\mid\frac{q+1}{3}$; if $3\mid(q+1)$ and $a\nmid\frac{q+1}{3}$, then $o(\delta_1\delta_2)\nmid\frac{q+1}{3}$.
\end{proof} \section{$G$ stabilizes a point $P\in{\rm PG}(2,q^2)\setminus{\mathcal H}_q$ with $q$ even}\label{sec:polopolare} In this section $q$ is even and $G$ is supposed to be contained in the maximal subgroup $M$ of ${\rm PGU}(3,q)$ of order $|M|=q(q-1)(q+1)^2$ stabilizing a pole-polar pair $(P,\ell)$ with respect to the unitary polarity associated to ${\mathcal H}_q(\mathbb{F}_{q^2})$; here $P\in{\rm PG}(2,q^2)\setminus{\mathcal H}_q$ and $|\ell\cap{\mathcal H}_q|=|\ell\cap{\mathcal H}_q(\mathbb{F}_{q^2})|=q+1$. Following \cite[Section 3]{CKT2}, we use the plane model \eqref{M3} of ${\mathcal H}_q$ and assume up to conjugation in ${\rm PGU}(3,q)$ that $P=(0,0,1)$ and $\ell$ is the line at infinity $Z=0$. Note that the $\mathbb{F}_{q}$-rational points of ${\mathcal H}_q$ are exactly the $q+1$ points of $\ell\cap{\mathcal H}_q$. Then $$ M=\left\{\begin{pmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & 1 \end{pmatrix} \mid a,b,c,d\in\mathbb{F}_{q^2},\, ac^q-a^qc=0,\,bd^q-b^qd=0,\,bc^q-a^qd=-1,\,ad^q-b^qc=1 \right\}. $$ If $\sigma\in M$, we denote by $\det(\sigma)$ the determinant of the representative of $\sigma$ with entry $1$ on the third row and column; see Remark \ref{abuso}. Let $$ H=\left\{\begin{pmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & 1 \end{pmatrix}\in M \mid a,b,c,d\in\mathbb{F}_{q},\, ad-bc=1 \right\}\leq M,\quad \Omega=\left\{\begin{pmatrix} \lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & 1 \end{pmatrix}\right\}\leq M. $$ The group $H$ is isomorphic to ${\rm SL}(2,q)$, and its action on $\ell\cap{\mathcal H}_q$ is equivalent to the action of ${\rm SL}(2,q)$ in its usual permutation representation on $\mathbb{F}_{q}$. The group $\Omega$ is cyclic of order $q+1$ and made by the homologies of ${\rm PGU}(3,q)$ with center $P$. As $q$ is even, $H \cap \Omega$ is trivial and hence $$M=H\times\Omega.$$ Let $G\leq M$, $G(H)=G\cap H$, and $G(\Omega)=G\cap \Omega$. Then $G/G(\Omega)$ acts faithfully on $\ell\cap{\mathcal H}_q$ and is isomorphic to a subgroup of ${\rm PSL}(2,q)$. Throughout this section, we will denote by $\omega\mid(q+1)$ the order of $G(\Omega)$. \begin{remark}\label{spezzabile} Let $G\leq M$ be such that $G/G(\Omega)$ is generated by elements whose order is coprime to $q+1$. Then $G=G(H)\times G(\Omega)$. \end{remark} \begin{proof} Let $\alpha_1 G(\Omega),\ldots,\alpha_r G(\Omega)$ be generators of $G/G(\Omega)$ of order $o_1,\ldots,o_r$, respectively. Then $\beta_1=\alpha_1^{q+1},\ldots,\beta_r=\alpha_r^{q+1}$ have the same orders $o_1,\ldots,o_r$; also, $\det(\beta_1)=\cdots=\det(\beta_r)=1$ and hence $\beta_i\in H$ for all $i$'s. The subgroup $L=\langle\beta_1,\ldots,\beta_r\rangle$ of $G$ induces the whole group $\langle\alpha_1\ldots,\alpha_r\rangle=G/G(\Omega)$ and $L\cap G(\Omega)=\{id\}$, so that $L\cong G/G(\Omega)$. Thus, $L=G(H)$ and claim follows. \end{proof} We now compute the genus of ${\mathcal H}_q/G$ for any $G\leq M$, using Theorem \ref{Di} for $G/G(\Omega)$; recall that ${\rm SL}(2,q)\cong{\rm PGL}(2,q)\cong{\rm PSL}(2,q)$ since $q$ is even. If $G=G(\Omega)$, then $g({\mathcal H}_q/G)$ is computed in Theorem \ref{fissatorepuntuale} (see also \cite[Theorem 5.8]{GSX}) as $g({\mathcal H}_q/G)=1+\frac{(q+1)(q-|G|-1)}{2|G|}$; hence, in the following we will assume that $G/G(\Omega)$ is not trivial. Let $G/G(\Omega)$ be cyclic of order a divisor of $q+1$, say $G/G(\Omega)=\langle\alpha G(\Omega)\rangle$ with $\alpha^{q+1}\in G(\Omega)$. From Lemma \ref{classificazione}, $\alpha$ is either of type (A) or of type (B1). 
If $\alpha$ is of type (A), then the center of $\alpha$ is a point of $\ell$, because $\alpha\notin G(\Omega)$ and $\alpha$ commutes with $G(\Omega)$ elementwise. In any case, $\alpha$ stabilizes pointwise a self-polar triangle $\{P_0,P_1,P_2\}\subset{\rm PG}(2,q^2)\setminus{\mathcal H}_q$. For any $\beta\in G$ we have $\beta=\alpha^d \bar g$ for some integer $d$ and some $\bar g\in G(\Omega)$; hence, $\beta$ stabilizes $\{P_0,P_1,P_2\}$ pointwise. Therefore, the groups $G\leq M$ such that $|G/G(\Omega)|\mid(q+1)$ are exactly the groups considered in Theorem \ref{fissatorepuntuale}, and the genus of ${\mathcal H}_q/G$ is characterized by Equation \eqref{generefissatore}. \begin{proposition} Let $G\leq M$ be such that $G/G(\Omega)\cong{\rm PSL}(2,2)$. If $n$ is even, then $$ g({\mathcal H}_q/G)= \frac{q^2-\omega q-3q+4\omega-4}{12\omega}. $$ If $n$ is odd, either \begin{equation}\label{primocasocheaccade} g({\mathcal H}_q/G)= \frac{(q+1)(q-\omega-8)+9\omega}{12\omega}, \end{equation} or \begin{equation}\label{secondocasocheaccade} g({\mathcal H}_q/G)= \frac{(q+1)(q-2\cdot 3^k-\omega-2)+9\omega}{12\omega} \end{equation} where $3^k$ is the maximal power of $3$ dividing $\omega$ and $k\geq1$. Both cases \eqref{primocasocheaccade} and \eqref{secondocasocheaccade} occur for some $G\leq M$ with $G/G(\Omega)\cong{\rm PSL}(2,2)$. \end{proposition} \begin{proof} Suppose that $3\mid(q-1)$, i.e. $q$ is an even power of $2$. Then $G=G(H)\times G(\Omega)$ with $G(H)\cong{\rm PSL}(2,2)\cong\mathbf{S}_3$. By Lemma \ref{classificazione}, the nontrivial elements of $G$ are as follows: $2\omega$ elements of order $3$ times a divisor of $\omega$, of type (B2); $3$ involutions, of type (C); $\omega-1$ elements in $G(\Omega)$, of type (A); $3(\omega-1)$ elements of order $2$ times a nontrivial divisor of $\omega$, of type (E). The claim follows from the Riemann-Hurwitz formula and Theorem \ref{caratteri}. Suppose that $3\mid(q+1)$, i.e. $q$ is an odd power of $2$. The group $G/G(\Omega)\cong{\rm PSL}(2,2)\cong\mathbf{S}_3$ contains a cyclic normal subgroup $\langle\alpha G(\Omega)\rangle$ of order $3$. Since $G(\Omega)$ is central in $G$, $\langle\alpha\rangle$ is normal in $G$. As $\alpha^3\in G(\Omega)$ and $\alpha$ fixes $P$, Lemma \ref{classificazione} implies that $o(\alpha)\mid(q+1)$ and $\alpha$ is of type (A) or (B1). We show that $\alpha$ is of type (B1). Suppose by contradiction that $\alpha$ is of type (A). As $\alpha\notin \Omega$, the axis of $\alpha$ passes through $P$ and the center of $\alpha$ lies on $\ell$. Hence, $\alpha$ has exactly $2$ fixed points $Q$ and $R$ on $\ell$, and $Q,R\notin {\mathcal H}_q$; we can assume that $PQ$ is the axis and $R$ is the center of $\alpha$. Let $\beta$ be an involution of $G$ (for instance, choose $\beta=\gamma^{q+1}$ for any involution $\gamma G(\Omega)$ of $G/G(\Omega)$). Then $\beta(Q)=R$ and $\beta(R)=Q$, so that $\beta^{-1}\alpha\beta$ has type (A) with axis $PR$ and center $Q$; this is a contradiction, because $\beta$ normalizes $\langle\alpha\rangle$ and $\langle\alpha\rangle$ does not contain elements of type (A) with center different from $R$. Thus, $\alpha$ is of type (B1). Hence, $G$ acts on the self-polar triangle $T\subset{\rm PG}(2,q^2)\setminus{\mathcal H}_q$ fixed pointwise by $\alpha$, and the pointwise stabilizer of $T$ in $G$ has index $2$ in $G$. Then $g({\mathcal H}_q/G)$ can be computed by means of Proposition \ref{indice2pari}. If $G=G(H)\times G(\Omega)$, then we apply Proposition \ref{indice2pari} {\it (ii)}, where $a=3$, $c=\omega$, $e=3\omega$.
In fact, $c=|G(\Omega)|$, and $a=3$ because the elements of type (A) with axis passing through $P$ are obtained as the product of an element of order $3$ in $G(H)$ by an element of order $3$ in $G(\Omega)$. If $G\ne G(H)\times G(\Omega)$, then $3\mid|G(\Omega)|$ and $G=(C_{3^{k+1}}\rtimes C_2)\times C_{\omega/3^k}$, where: $3^k$ is the maximal power of $3$ which divides $\omega$; $C_{3^k}$ is generated by an element of type (B1) and order $3^k$ whose cube lies in $G(\Omega)$; $C_2$ is generated by any involution of $G$; $C_{\omega/3^k}$ is the subgroup of $G(\Omega)$ of order $\frac{\omega}{3^k}$. Such a group $G$ actually exists in $M$. To see this fact, let ${\mathcal H}_q$ have equation \eqref{M1} and assume $P=(0,0,1)$, $\ell:Z=0$; define in $M$ the elements $$\alpha=\begin{pmatrix} \lambda\mu & 0 & 0 \\ 0 & \lambda^{-1}\mu & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \beta=\begin{pmatrix} 0 & \rho & 0 \\ \rho^{-1} & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \delta=\begin{pmatrix} \xi & 0 & 0 \\ 0 & \xi & 0 \\ 0 & 0 & 1 \end{pmatrix}, $$ with $o(\lambda)=3$, $o(\mu)=3^{k+1}$, $o(\rho)\mid(q+1)$, $o(\xi)=\frac{\omega}{3^k}$. Then $G=\langle\alpha,\beta,\delta\rangle\cong (C_{3^{k+1}}\rtimes C_2)\times C_{\omega/3^k}$ is the desired group. We apply Proposition \ref{indice2pari} {\it (ii)}, where $a=3^k$, $c=\omega$, $e=3\omega$. \end{proof} Apart from the cases considered above of $G/G(\Omega)$ cyclic of order dividing $q+1$ and $G/G(\Omega)\cong{\rm PSL}(2,2)$, we will find that the subgroup $G/G(\Omega)$ of ${\rm PSL}(2,q)$ is generated by elements of order coprime to $q+1$. Hence, in the proofs of Propositions \ref{ciclicoq-1} to \ref{conPSL} we make use of Remark \ref{spezzabile} to get $G=G(H)\times G(\Omega)$. \begin{proposition}\label{ciclicoq-1} Let $G\leq M$ be such that $G/G(\Omega)$ is cyclic of order $d\mid(q-1)$. Then $$ g({\mathcal H}_q/G)=\frac{(q+1)(q-\omega-1)+2\omega}{2d\omega}. $$ \end{proposition} \begin{proof} We have $G=G(H) \times G(\Omega)$ with $G(H)$ cyclic of order $d$. From Theorem \ref{caratteri}, $G$ has $\omega-1$ elements of type (A) and $(d-1)\omega$ elements of type (B2). By the Riemann-Hurwitz formula, $(q+1)(q-2)=d\omega(2g({\mathcal H}_q/G)-2)+\Delta$ with $\Delta=(q+1)(\omega-1)+2(d-1)\omega$. \end{proof} \begin{proposition} Let $G\leq M$ be such that $G/G(\Omega)$ is elementary abelian of order $2^f$, $f\leq n$. Then $$ g({\mathcal H}_q/G)=\frac{(q+1)(q-\omega-2^f)+\omega(2^f+1)}{2^{f+1}\omega}. $$ \end{proposition} \begin{proof} We have $G=G(H)\times G(\Omega)$ with $G(H)$ elementary abelian of order $2^f$. A nontrivial element $\sigma\in G$ is of type (C) if $o(\sigma)=2$, of type (A) if $o(\sigma)\mid(q+1)$, and of type (E) if $o(\sigma)$ is $2$ times a nontrivial divisor of $q+1$. The claim follows from the Riemann-Hurwitz formula and Theorem \ref{caratteri}. \end{proof} Let $G\leq M$ be such that $G/G(\Omega)$ is dihedral of order $2d$ with $d\mid(q+1)$. We have $G=G(H)\times G(\Omega)$, where $G(H)=\langle\alpha\rangle\rtimes\langle\beta\rangle$ is dihedral of order $2d$, $o(\alpha)=d$, $o(\beta)=2$. Since $\beta^{-1}\alpha\beta=\alpha^{-1}$, we have that $\alpha$ is of type (B1) and stabilizes pointwise a self-polar triangle $\{P_0,P_1,P_2\}$, while $\beta$ is of type (C), fixes $P_0$, and interchanges $P_1$ and $P_2$. Therefore $G$ is already considered in Proposition \ref{indice2pari} and the genus of ${\mathcal H}_q/G$ is given by Equation \eqref{genereindice2}, with $a=1$, $c=\omega$, $e=d\omega$. 
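Explicitly, with these values of the parameters Equation \eqref{genereindice2} reads $$g({\mathcal H}_q/G)=\frac{(q+1)(q-\omega-d-1)+3d\omega}{4d\omega}.$$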
\begin{proposition} Let $G\leq M$ be such that $G/G(\Omega)$ is dihedral of order $2d$ with $d\mid(q-1)$. Then $$ g({\mathcal H}_q/G)= \frac{q^2-q\omega-qd+\omega d+\omega-d-1}{4d\omega}. $$ \end{proposition} \begin{proof} We have $G=G(H)\times G(\Omega)$ with $G(H)$ dihedral of order $2d$. From Lemma \ref{classificazione}, the nontrivial elements of $G$ are as follows: $(d-1)\omega$ elements of order a divisor of $q^2-1$ but not of $q+1$, of type (B2); $\omega-1$ elements in $G(\Omega)$ of order a divisor of $q+1$, of type (A); $d$ involutions, of type (C); $d(\omega-1)$ elements of order $2$ times a nontrivial divisor of $q+1$, of type (E). Then the claim follows from the Riemann-Hurwitz formula and Theorem \ref{caratteri}. \end{proof} \begin{proposition} Let $G\leq M$ be such that $G/G(\Omega)\cong \mathbf{A}_4$, with $n$ even. Then $$ g({\mathcal H}_q/G)= \frac{q^2-q\omega+4\omega-3q-4}{24\omega}. $$ \end{proposition} \begin{proof} We have $G=G(H)\times G(\Omega)$ with $G(H)\leq H$, $G(H)\cong \mathbf{A}_4$. By Lemma \ref{classificazione}, $G$ contains $3$ elements of type (C), $8\omega$ elements of type (B2), $\omega-1$ elements of type (A), and $3(\omega-1)$ elements of type (E). The claim follows from the Riemann-Hurwitz formula and Theorem \ref{caratteri}. \end{proof} The case $G/G(\Omega)\cong \mathbf{S}_4$ does not occur, since $16\nmid(q^2-1)$. \begin{proposition} Let $G\leq M$ be such that $G/G(\Omega)\cong\mathbf{A}_5$, with $n$ even. Then $$ g({\mathcal H}_q/G)= \frac{(q+1)(q-\omega-16)+65\omega-48\epsilon}{120\omega}, $$ where $$ \epsilon=\begin{cases} \omega & \textrm{if}\quad 5\mid(q-1); \\ 0 & \textrm{if} \quad 5\mid(q+1),\; 5\nmid\omega; \\ q+1 & \textrm{if} \quad 5\mid\omega. \end{cases} $$ \end{proposition} \begin{proof} Since $p=2$, the assumption that $n$ is even is equivalent to requiring $5\mid(q^2-1)$, so that ${\rm PSL}(2,q)$ admits a subgroup isomorphic to $\mathbf{A}_5$ by Theorem \ref{Di}. We have $G=G(H)\times G(\Omega)$ with $G(H)\cong \mathbf{A}_5$. By Lemma \ref{classificazione}, $G$ contains: $15$ elements of order $2$, which are of type (C); $20\omega$ elements of order $3$ times a divisor of $q+1$, which are of type (B2); $\omega-1$ nontrivial elements in $G(\Omega)$, which are of type (A); $24$ elements of order $5$ in $G(H)$, which are of type (B1) if $5\mid(q+1)$ and of type (B2) if $5\mid(q-1)$; $24(\omega-1)$ elements of order divisible by $5$ in $G\setminus (G(H)\cup G(\Omega))$. Consider the $24(\omega-1)$ elements $\sigma_i$ of order divisible by $5$ in $G\setminus (G(H)\cup G(\Omega))$. If $5\mid(q-1)$, then all $\sigma_i$'s are of type (B2); if $5\mid(q+1)$ and $5\nmid\omega$, then all $\sigma_i$'s are of type (B1). Suppose that $5\mid\omega$. Let $\sigma_i=\alpha_i\beta_i$ with $\alpha_i\in G(H)$, $o(\alpha_i)=5$, $\beta_i\in G(\Omega)\setminus\{id\}$. Using the plane model \eqref{M1} we can assume up to conjugation that $\alpha_i$ and $\beta_i$ stabilize pointwise the reference triangle, $\alpha_i={\rm diag}(\lambda,\lambda^{-1},1)$ where $\lambda^5=1$, and $\beta_i={\rm diag}(\mu,\mu,1)$. The type of $\sigma_i$ is either (A) or (B1), and when $\alpha_i$ is given there are exactly $2$ choices of $\beta_i$ for which $\sigma_i$ is of type (A), namely $\beta_i={\rm diag}(\lambda,\lambda,1)$ or $\beta_i={\rm diag}(\lambda^{-1},\lambda^{-1},1)$. Then $24\cdot2$ elements are of type (A), and $24(\omega-3)$ are of type (B1). Now the claim follows from the Riemann-Hurwitz formula and Theorem \ref{caratteri}.
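Explicitly, the degree of the different divisor is $$\Delta=15(q+2)+40\omega+(\omega-1)(q+1)+15(\omega-1)+48\epsilon,$$ and the stated formula follows from $(q+1)(q-2)=120\omega\,(g({\mathcal H}_q/G)-1)+\Delta$.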
\end{proof} \begin{proposition} Let $G\leq M$ be such that $G/G(\Omega)$ is the semidirect product of an elementary abelian $2$-group of order $2^f$ by a cyclic group of order $d$, with $f\leq n$ and $d\mid \gcd(2^f-1,q-1)$. Then $$ g({\mathcal H}_q/G)=\frac{(q+1)(q-\omega-2^f)+\omega (2^f+1)}{2^{f+1}d\omega}. $$ \end{proposition} \begin{proof} We have $G=G(H)\times G(\Omega)$ with $G(H)\leq H$ and $G(H)=E_{2^f}\rtimes C_d$, where $E_{2^f}$ is elementary abelian of order $2^f$ and $C_d$ is cyclic of order $d$. By Lemma \ref{classificazione}, $G$ contains: $2^f-1$ involutions, which are of type (C); $\omega-1$ nontrivial elements of $G_\Omega$, which are of type (A); $(2^f-1)(\omega-1)$ elements of order $2$ times a nontrivial divisor of $q+1$, which are of type (E); $2^f (d-1) \omega$ elements whose order divides $q^2-1$ but does not divide $q+1$, which are of type (B2). Then the claim follows from the Riemann-Hurwitz formula and Theorem \ref{caratteri}. \end{proof} \begin{proposition}\label{conPSL} Let $G\leq M$ be such that $G/G(\Omega)\cong{\rm PSL}(2,2^f)$ with $f\mid n$ and $f>1$. Then $$ g({\mathcal H}_q/G)=\frac{(q+1)\left[q-\omega-2^f(2^f-1)\gcd(2^f+1,\omega)-2^f\right]+(2^f+1)\omega(2^{2f}-2^f+1)}{2^{f+1}(2^f+1)(2^f-1)\omega}$$ if $n/f$ is odd, while $$g({\mathcal H}_q/G)=\frac{(q+1)(q-2^{2f}-\omega)-\omega(2 \cdot 2^{3f}-2^{2f}-2 \cdot 2^{f}-1)}{2^{f+1}(2^f+1)(2^f-1)\omega}+1$$ if $n/f$ is even. \end{proposition} \begin{proof} From Theorem \ref{Di} follows that the elements in ${\rm PSL}(2,2^f)$ of order coprime to $|G(\Omega)|$ generate ${\rm PSL}(2,2^f)$; hence $G=G(H)\times G(\Omega)$, where $G(H)\cong{\rm PSL}(2,2^f)$. Now we use the order statistics and the subgroup lattice of ${\rm PSL}(2,q)$, as decribed in \cite[Chapter II.8]{Hup}. \begin{itemize} \item $G(\Omega)$ contains $\omega-1$ elements of type (A). \item ${\rm PSL}(2,2^f)$ contains exactly $(2^f-1)(2^f+1)$ elements of order $2$, which are of type (C). The product of one of them with a nontrivial element of $G(\Omega)$ has order $2$ times a nontrivial divisor of $q+1$; thus, we have $(2^f-1)(2^f+1)(\omega-1)$ elements of type (E). \item ${\rm PSL}(2,2^f)$ contains exactly $\binom{2^f+1}{2}(2^f-2)=\frac{2^f(2^f+1)(2^f-2)}{2}$ nontrivial elements whose order divides $2^f-1$ and hence also $q-1$. The product of one of them with a nontrivial element of $G(\Omega)$ has order a divisor of $q^2-1$ not dividing $q+1$. Thus, we have $\frac{2^f(2^f+1)(2^f-2)}{2}\cdot\omega$ elements of type (B2). \item ${\rm PSL}(2,2^f)$ contains exactly $\frac{2^f(2^{2f}-2^f)}{2}$ nontrivial elements whose order divides $2^f+1$. Suppose that $n/f$ is odd. Then $2^f+1$ divides $q+1$. Since $H$ contains no elements of type (A), any such element is of type (B1). Together with the identity, they form $\frac{2^{2f}-2^f}{2}$ distinct cyclic groups of order $2^f+1$ which intersect pairwise trivially. Consider a cyclic subgroup $C\leq G(H)$ of order $2^f+1$. We use the plane model \eqref{M1} of ${\mathcal H}_q$, and assume up to conjugacy that the self-polar triangle fixed pointwise by $C$ is the reference triangle and the center of the homologies in $G(\Omega)$ is $(0,0,1)$. In this way, $C=\langle\alpha={\rm diag}(\lambda,\lambda^{-1},1)\rangle$ with $o(\lambda)=2^f+1$, while $G(\Omega)=\langle\beta={\rm diag}(\mu,\mu,1)\rangle$ with $o(\mu)=\omega$. The element $\alpha^i\beta^j$ is either of type (A) or of type (B1); for given $\alpha^i$, $\alpha^i\beta^j$ is of type (A) if and only if $\mu^j=\lambda$ or $\mu^j=\lambda^{-1}$. 
Hence, there are exactly $\left(\gcd(2^f+1,\omega)-1\right)\cdot2$ elements of type (A) in $C\times G(\Omega)\setminus G(\Omega)$, and exactly $\frac{2^{2f}-2^f}{2}\cdot\left(\gcd(2^f+1,\omega)-1\right)\cdot2$ elements of type (A) in $G\setminus G(\Omega)$. If $n/f$ is even, then $2^f+1$ divides $q-1$; hence, all the $\frac{2^f(2^{2f}-2^f)\omega}{2}$ elements in this class are of type (B2). \end{itemize} In the Riemann-Hurwitz formula $(q+1)(q-2)=|G|(2g({\mathcal H}_q/G)-2)+\Delta$, we have by Theorem \ref{caratteri} that $$\Delta= (\omega-1)(q+1)+(2^f-1)(2^f+1)(q+2)+(2^f-1)(2^f+1)(\omega-1)\cdot1$$ $$ +\frac{2^f(2^f+1)(2^f-2)\omega}{2}\cdot2+\delta,$$ where $$\delta=(2^{2f}-2^f)\left(\gcd(2^f+1,\omega)-1\right)\cdot(q+1), $$ if $n/f$ is odd, while $$\delta=\frac{2^f(2^{2f}-2^f)\omega}{2} \cdot 2,$$ otherwise. The claim follows by direct computation. \end{proof} The case $G/G(\Omega)\cong {\rm PGL}(2,2^f)$ is given by Proposition \ref{conPSL}, since ${\rm PGL}(2,q)\cong{\rm PSL}(2,q)$ when $q$ is even. \section{New genera for maximal curves}\label{sec:nuovigeneri} The results of Sections \ref{sec:triangolo} and \ref{sec:polopolare} provide many genera for maximal curves over finite fields. It is possible to compare these values with the ones previously given in \cite{ABB,ATT,BMXY,CO,CO2,CKT1,CKT2,DO,DO2,FG,GGS,GSX}. We list here some new genera, for some values of $q$. \begin{center} \begin{table}[H] \begin{small} \caption{New genera $g$ for $\mathbb{F}_{q^2}$-maximal curves} \label{table1} \begin{center} \begin{tabular}{|c|c|} \hline $q$ & $g$ \\ \hline $13$ & $1$\\ \hline $2^5$ & $20$, $55$ \\ \hline $2^7$ & $22$, $133$, $287$, $420$, $903$, $904$ \\ \hline $3^5$ & $10$, $161$, $280$, $590$, $1180$, $2420$ \\ \hline $3^7$ & $91$, $1457$, $24661$, $49595$, $99190$, $198926$ \\ \hline $5^3$ & $17$, $39$, $46$, $63$, $91$, $134$, $210$, $211$, $273$, $274$, $369$, $630$, $631$, $861$ \\ \hline \end{tabular} \end{center} \end{small} \end{table} \end{center} \begin{remark} Table {\rm \ref{table1}} gives a partial answer to {\rm \cite[Remark 4.4]{ATT}}; namely, Propositions {\rm \ref{indice3caso1}} and {\rm \ref{indice6}} yield examples of $\mathbb{F}_{13^2}$-maximal curves of genus $1$. \end{remark}
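As an illustrative aside (not needed for the results above), the Riemann-Hurwitz bookkeeping behind a statement such as Proposition \ref{conPSL} can also be checked symbolically. The following minimal \texttt{sympy} sketch is ours and uses only the data stated in that proof for the case $n/f$ even, namely $|G|=2^f(2^{2f}-1)\omega$, the contribution $\Delta$ computed there, and the displayed genus; the symbol \texttt{F} is a notational shorthand for $2^f$, treated as a free parameter.
\begin{verbatim}
# Minimal sympy sketch: check that the genus claimed in Proposition conPSL
# (case n/f even) satisfies (q+1)(q-2) = |G|(2g-2) + Delta identically.
# Here F stands for 2^f; q, omega and F are treated as free symbols.
import sympy as sp

q, w, F = sp.symbols('q omega F', positive=True)

G_order = F*(F**2 - 1)*w
Delta = ((w - 1)*(q + 1)            # elements of type (A) in G(Omega)
         + (F**2 - 1)*(q + 2)       # involutions, type (C)
         + (F**2 - 1)*(w - 1)       # type (E)
         + F*(F + 1)*(F - 2)*w      # type (B2), order dividing 2^f - 1
         + F*(F**2 - F)*w)          # type (B2), order dividing 2^f + 1

g = ((q + 1)*(q - F**2 - w) - w*(2*F**3 - F**2 - 2*F - 1))/(2*F*(F**2 - 1)*w) + 1

print(sp.simplify((q + 1)*(q - 2) - (G_order*(2*g - 2) + Delta)))  # expected: 0
\end{verbatim}
The expected output is $0$, confirming that the displayed genus satisfies the Riemann-Hurwitz relation with the stated ramification contribution.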
\section{Introduction} The concept of stability of equilibrium is central to the study of differential equations. By using the techniques of linearization and transforming the equilibrium to zero, the stability problem is reduced to $u'=Au$, where $u\in {\mathbb R}^n$ and $A$ is a real-valued $n\times n$ matrix. The equilibrium $u=0$ is asymptotically stable if each solution $u$ of $u'=Au$ converges to zero as $t\to\infty$. From the theory of linear differential equations, this is equivalent to each eigenvalue of $A$ having negative real part. Hence it is desirable to know what kinds of matrices are stable, and how to design a matrix to be stable \cite{Maybee1969}. Let $M_n$ be the set of all $n\times n$ matrices with real-valued entries. A matrix $A\in M_n$ is said to be \textit{stable} if, for each of its eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_n$, ${\rm Re}(\lambda_i)<0$. A system which is modeled by such a matrix $A$ has stable equilibria, and given small perturbations of its initial conditions the system will return to these equilibrium points. We define the \textit{sign pattern} of a matrix $A=[a_{ij}]$ to be an $n\times n$ matrix $S(A)=[s_{ij}]$ such that, for $i,j\in\{1,\ldots,n\}$, $s_{ij}=0$ when $a_{ij}=0$, $s_{ij}=-$ when $a_{ij}<0$, and $s_{ij}=+$ when $a_{ij}>0$. If some matrix $A\in M_n$ is found to be stable, then the sign pattern $S(A)$ is said to be \textit{potentially stable}, or \textit{PS} for short. In the case where $A\in M_n$ is an upper triangular, lower triangular, or diagonal matrix, or when $A$ is permutationally similar to such a matrix, the problem becomes trivial due to the ease of calculating the eigenvalues of these matrices. Therefore we restrict our examination to \textit{irreducible} matrices, that is, the matrices $A\in M_n$ for which there does not exist a permutation matrix $P$ such that $$PAP^T= \begin{bmatrix} A_{11} & 0\\ A_{12} & A_{22} \end{bmatrix}, \quad A_{11}\in M_k,\ A_{22}\in M_{n-k},\ 1\leq k\leq n-1.$$ The following result has been proved in \cite{Grundy}. \begin{theorem}\label{thm1} Let the minimum number of nonzero entries required for an $n\times n$ irreducible sign pattern to be potentially stable be given by $m_n$. Then $$ \left\lbrace\begin{array}{l l} m_n=2n-1, & n=2,3,\\ m_n=2n-2, & n=4,5,\\ m_n=2n-3, & n=6,\\ m_n\leq 2n-(\lfloor\frac{n}{3}\rfloor+1), & n\geq 7. \end{array}\right. $$ \end{theorem} Hence the value of $m_n$ for $n=2,3,4,5,6$ was determined in Theorem \ref{thm1}, as well as an upper bound for $m_n$ for any $n\geq 6$ via an explicit construction. Previously, other partial results have been obtained for the cases $3\le n\le 5$ \cite{Johnson1989,Johnson1997}. In this paper we prove the following theorem: \begin{theorem}\label{thm2} $$ m_7=2(7)-3=11. $$ \end{theorem} Note that Theorem \ref{thm1} has shown that $11$ is an upper bound. So here, in order to prove this minimum, we need only show that there cannot exist a potentially stable $7\times7$ sign pattern with only $10$ nonzero entries. Note that if there were a potentially stable $7\times7$ sign pattern with fewer than $10$ nonzero entries, then we could construct a potentially stable pattern with $10$ nonzero entries by adding additional nonzero entries to it, since sufficiently small perturbations of a stable matrix remain stable. Thus it is sufficient to prove that no potentially stable pattern with only $10$ nonzero entries exists. 
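To make the notions of stability and sign pattern concrete, here is a minimal \texttt{numpy} sketch (illustrative only; the helper functions and the $2\times 2$ example matrix are ours and are not part of the constructions below). It tests stability through the eigenvalue criterion recalled above and reads off the sign pattern entrywise.
\begin{verbatim}
# Minimal numpy sketch: stability via eigenvalues, and the sign pattern S(A).
import numpy as np

def is_stable(A):
    """True iff every eigenvalue of A has strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def sign_pattern(A):
    """Entrywise sign pattern of A, with entries '+', '-' or '0'."""
    return np.vectorize(lambda x: '+' if x > 0 else ('-' if x < 0 else '0'))(A)

A = np.array([[-2.0,  1.0],
              [-1.0, -0.5]])       # trace < 0 and det > 0, hence stable
print(is_stable(A))                # True
print(sign_pattern(A))             # [['-' '+'] ['-' '-']]
\end{verbatim}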
In order to prove that no such sign pattern exists, we first construct a list of all digraphs with $7$ vertices and $10$ edges which allow for correct minors (as defined by the relationship between cycles in the graph and the minors of the associated matrix in subsection 2.2). This construction is given in Section 3. Once we have constructed this list of digraphs, we will construct the associated set of nonequivalent matrix sign patterns which have correct minors. For that purpose we utilize a variant of the Routh-Hurwitz stability criterion to show that none of these candidate sign patterns have a stable realization (see Section 4). From this we will conclude that the minimum number of nonzero entries must be equal to $11$. \section{Preliminaries} \subsection{Digraphs} We define the \textit{digraph} of an $n\times n$ matrix $A=(a_{ij})$ to be a directed graph with vertex set $V_n=\{1,\ldots,n\}$, and for each $i,j\in V_n$, there exists an edge from vertex $i$ to vertex $j$ if and only if $a_{ij}\neq 0$. For a digraph, we define a \textit{path} as an ordered set of edges such that, for some vertices $i,j,l\in\{1,\ldots,n\}$, if the $m^{th}$ edge in the set is defined by $(i,j)$, then the $(m+1)^{th}$ edge is defined by $(j,l)$. We define the \textit{length} of a path as the number of edges in the path. If, for each pair of distinct vertices $p,q\in\{1,\ldots,n\}$, a given digraph contains a path which begins at $p$ and ends at $q$, we say that the digraph is \textit{strongly connected}. It is the case that for any $A\in M_n$, $A$ is irreducible if and only if the digraph of $A$ is strongly connected \cite{brualdi}. We define a \textit{cycle} to be a path which begins and ends at the same point, and which only intersects itself at this point. We refer to a cycle of length $1$ as a \textit{loop}. Also note that a permutation similarity which swaps the $i^{th}$ and $j^{th}$ rows/columns of $A$ is reflected in the digraph of $A$ by swapping the labels of the $i^{th}$ and $j^{th}$ vertices of the digraph. The \textit{circumference} of a digraph $G$ is defined as the length of the longest cycle present within the graph. We write this as $circ(G)$. Note that as the circumference decreases, the minimum number of edges needed to be strongly connected increases. The following theorem gives the minimum number of edges of a digraph $G$ on $n$ vertices given that $circ(G)=k$. \begin{theorem} Let $k,n$ be integers such that $2\leq k\leq n$ and $n=a(k-1)+b$ for some $a>0$ and $0\leq b< k-1$. Define $e_{n,2}=2(n-1)$ and, for $k>2$, define \[e_{n,k}=\left\{\begin{array}{ll} ka-1 & \mbox{ if } b=0, (a\geq 2)\\ ka & \mbox{ if } b=1\\ ka+b & \mbox{ if } b>1\\ \end{array}\right..\] If $G$ is a strongly connected digraph with $n$ vertices, edge set $E$, and $circ(G)=k$, then $|E|\geq e_{n,k}$. Moreover, the bound is best possible, i.e., there is a graph $G_0$ with $n$ vertices, $circ(G_0) = k$ and $e_{n,k}$ edges. \end{theorem} \noindent\textit{Proof:} Let $G$ be a strongly connected digraph with vertex set $V_n=\{1,\ldots,n\}$, edge set $E$ and $2\leq circ(G)=k$. \\ \textbf{Case 1:} Suppose $k=2$. We proceed by induction on $n$. When $n=2$, we have $(1,2),(2,1)\in E$, and hence $|E|\geq e_{2,2}=2$. Suppose $n\geq 3$ and any strongly connected graph $\bar{G}(V_{n-1},\bar{E})$ with $circ(\bar{G})=2$ satisfies $|\bar{E}|\geq e_{n-1,2}= 2(n-2)$. Assume that $|E|< e_{n,2}=2(n-1)$. Note that we can relabel the vertices so that $(n-1,n),(n,n-1)\in E$. 
Now, \[E\subseteq V_n\times V_n=\underbrace{(V_{n-1}\times V_{n-1})}_{S_1} \cup\underbrace{(V_{n-2}\times \{n\})}_{S_2} \cup \underbrace{(\{n\} \times V_{n-2})}_{S_3} \cup \underbrace{\{(n-1,n),(n,n-1),(n,n)\}}_{S_4} \] which is a disjoint union of sets. Thus \begin{equation} |E|=|E\cap S_1|+|E\cap S_2|+|E\cap S_3| +|E\cap S_4|< 2(n-1) \end{equation} Since $(n-1,n),(n,n-1)\in E$, we also have $|E\cap S_4|\geq 2$ and so \[|E\cap S_1|+|E\cap S_2|+|E\cap S_3|<2(n-2)\] Now, define the edge set $\bar{E}\subseteq V_{n-1}\times V_{n-1}$ as follows \[\bar{E}=\underbrace{(E\cap S_1)}_{T_1}\cup \underbrace{\{(j,n-1) \ | \ (j,n)\in E\cap S_2\}}_{T_2} \cup \underbrace{ \{(n-1,j) \ | \ (n,j)\in E\cap S_3\}}_{T_3}\] That is, we obtain $\bar{E}$ by removing the edges $(n-1,n),(n,n-1),(n,n)$ from $E$ and morphing vertices $n$ and $n-1$ into one vertex, labeling it as $n-1$. Thus, \[|\bar{E}| \leq |E\cap S_1|+|E\cap S_2|+|E\cap S_3|<2(n-2)\] It is easy to verify that if $i,j\in V_{n-1}$ and there is a path from $i$ to $j$ in $G$, then there is a path from $i$ to $j$ in $\bar{G}$. Thus, $\bar{G}$ is strongly connected. Also, if there is a cycle of length $k$ in $\bar{G}$, then there is a cycle of length greater than or equal to $k$ in $G$. Thus, $circ(\bar{G})=2$. This contradicts the induction hypothesis. By mathematical induction, $|E|\geq e_{n,2}$.\medskip\\ \textbf{Case 2:} Next, assume $3\leq k\leq n$ and $n=a(k-1)+b$. We will prove the theorem by induction on $a$. We start with the following base cases: (i) $b=0$, that is, $a=2$ and $n=2(k-1)$; (ii) $b=1$, that is, $a=1$ and $n=k$; and (iii) $b>1$, that is, $a=1$ and $n=k+b-1$. \begin{enumerate} \item[(i)] Let $n=2(k-1)$. That is, $a=2$ and $b=0$. Then $e_{n,k}=2k-1=n+1$. We are assuming $G$ is strongly connected and $\mbox{circ}(G)= k<n$. We can relabel the vertices so that there is a $k$-cycle formed by vertices $V_n-V_{n-k}$, consisting of $k$ edges. Additionally, there must be an outgoing edge from each vertex $j\in V_{n-k}$. This gives us $n-k$ additional distinct edges. Finally, there must be an outgoing edge from a vertex of $V_n-V_{n-k}$ going to a vertex in $V_{n-k}$. Thus $|E|\geq k+n-k+1=n+1=e_{n,k}$. \item[(ii)] Let $n=k$. That is $a=1$ and $b=1$ and $e_{n,k}=k$. It is clear that $|E|\geq k=e_{n,k}$ since there must be an outgoing (equivalently, incoming) edge for each vertex. \item[(iii)] Let $n=k+b-1$ for some $b>1$. That is $a=1$ and $e_{n,k}=k+b=n+1$. Using the same argument as for $n=2(k-1)$, we get that $|E|\geq n+1=e_{n,k}$. \end{enumerate} Assume that $a\geq 2$ when $b>0$ and $a\geq 3$ when $b=0$. Suppose further that any strongly connected graph $\bar{G}=(V_{n-k+1},\bar{E})$ with $circ(\bar{G})=k$ satisfies $|\bar{E}|\geq e_{n-k+1,k}$. Suppose $|E|< e_{n,k}$. We can relabel the vertices so that $\{n-k+1,\ldots, n\}$ form a $k$-cycle in $G$, where $k<n$. We will define the digraph $\hat{G}$ with vertex set $V_{n-k+1}$ and edge set $\hat{E}=S_1\cup S_2\cup S_3$, where \[S_1=E\cap \Big(V_{n-k+1}\times V_{n-k+1}\Big)\] \[S_2=\Big\{(j,n-k+1)\ |\ (j,s)\in E\cap \Big(V_{n-k}\times (V_n-V_{n-k+1}) \Big)\Big\}\] \[S_3=\Big\{(n-k+1,j)\ |\ (s,j)\in E\cap \Big((V_n-V_{n-k+1})\times V_{n-k}\Big) \Big\}\] That is, we remove the edges contained in the $k$-cycle and collapse vertices $n-k+1,\ldots, n$ into one vertex labeled by $n-k+1$. Then $|\hat{E}|\leq |E|-k<e_{n,k}-k=e_{n-k+1,k}$. Note that $\hat{G}$ is strongly connected and $circ(\hat{G})\leq k$. 
Note that from $\hat{G}$, we can define a strongly connected digraph $\bar{G}$ with $circ(\bar{G})=k$ by rearranging its edges, relocating and realigning the edges if necessary, without removing or introducing a new edge. This contradicts the induction hypothesis. By mathematical induction, $|E|\geq e_{n,k}$. \\ For the last assertion, consider $G_0$ to be the digraph on $V_n$ constructed as follows. For $k = 2$, let the edge set of $G_0$ be $E=\{(i,i+1),(i+1,i)\ | \ i=1,\ldots, n-1\}$. \\ For $k > 2$, construct a $k$-cycle $1\rightarrow 2 \rightarrow\cdots \rightarrow k\rightarrow 1$; if there are at least $k-1$ vertices left, construct another cycle $k\rightarrow k+1\rightarrow \cdots\rightarrow 2k-1\rightarrow k$; if there are at least $k-1$ vertices left, construct another cycle $2k-1 \rightarrow 2k \rightarrow \cdots \rightarrow 3k-2\rightarrow 2k-1$, until we have either $k-2$ vertices left (when $b=0$) or $b-1$ vertices left (when $1 \le b < k-1$). In the former case, or if $b>1$, use a vertex in the last $k$-cycle and the remaining vertices to form either a $(k-1)$-cycle or a $b$-cycle, respectively. \hfill $\square$ Suppose $G$ is a strongly connected digraph with $n$ vertices, edge set $E$, circumference $k$ and $m$ loops. Note that removing the loops does not change the strong connectivity of $G$. It follows from the preceding theorem that $|E|-m \geq e_{n,k}$. \subsection{Minors} The following lemma is from elementary algebra and it is useful for better defining the properties of the characteristic polynomial of a stable matrix: \begin{lemma}\label{deg2} Let $p(x)=x^2+cx+d$ be a quadratic polynomial with real valued coefficients $c,d$. Then $p$ has roots $\lambda_1 ,\lambda_2$ with ${\rm Re}(\lambda_1)<0$ and ${\rm Re}(\lambda_2)<0$ if and only if $c>0$ and $d>0$. \end{lemma} An $m\times m$ \textit{principal submatrix} of $A$ is a matrix $B=[b_{ij}]\in M_m$, $1\leq m\leq n$, such that $b_{ij}=a_{v_iv_j}$ for some $v_1<\ldots<v_m\in\{1,\ldots,n\}$. A \textit{principal minor} of $A$ is defined as the determinant of some principal submatrix $B=[b_{ij}]$ of $A$. We denote the $m\times m$ principal minor of $A$ indexed by $v_1<\ldots<v_m\in\{1,\ldots,n\}$ as $M(A)_{v_1,\ldots,v_m}$. For example, $$ A=\begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \quad \Longrightarrow \quad M(A)_{23}=\det\left( \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\right),\quad M(A)_{13}=\det\left( \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\right). $$ There is a direct relationship between the minors of a matrix and its eigenvalues. The sum of all $k\times k$ principal minors of a matrix $A$ is equal to the sum of all products of unique combinations of $k$ eigenvalues of $A$. That is, \begin{equation}\label{E_k} E_k=\sum_{1\leq v_1< \ldots< v_k\leq n} M(A)_{v_1,\ldots,v_k}=\sum_{1\leq u_1< \ldots< u_k\leq n} \lambda_{u_1}\cdots\lambda_{u_k}. \end{equation} Furthermore, the coefficient of $t^{n-k}$ in the characteristic polynomial $P_A(t)=\det(tI-A)$ of $A$ is equal to $(-1)^k E_k$. Due to the relationship between the minors and the eigenvalues of a matrix, we have the following lemma, which is well known: \begin{lemma}\label{lem6} If $A\in M_n(\mathbb{R})$ is stable, then the following are true: \begin{enumerate} \item For all $k=1,\ldots, n$, the sign of the sum of the $k\times k$ minors of $A$, $E_k$, is $(-1)^k$. \item The characteristic polynomial of $A$, $$P_A(t)=\det(tI-A)=\sum_{k=0}^n(-1)^kE_kt^{n-k},$$ has all positive coefficients. 
\end{enumerate} \end{lemma} Note that the above lemma gives us a necessary condition for a given matrix $A$ to be stable. This condition will be very important in our work. If a given sign pattern can be realized by a real-valued matrix $A$ which meets the condition that, for all $k=1,\ldots,n$, the sign of the sum of the $k\times k$ minors of $A$ is $(-1)^k$, then we say that this sign pattern has \textit{correct minors}. If for some $k$, the sum of $k\times k$ minors is equal to zero, then that sign pattern cannot be PS, as this would imply that either some of its eigenvalues are positive and some are negative, or that at least one of the eigenvalues is equal to zero. The condition on the coefficients of $P_A(t)$ is necessary for the stability of $A$; however, it is not sufficient. For example, if $A = {\small \begin{bmatrix} - 0.8 & - 0.81 & -1.01 \cr 1 & 0 & 0 \cr 0 & 1 & 0 \cr \end{bmatrix}}$, then $$P_A(t)=t^3+0.8t^2+0.81t+1.01=(t+1)(t-0.1+i)(t-0.1-i).$$ So $P_A(t)$ has positive coefficients, but $A$ has eigenvalues $\lambda=0.1\pm i$ which have strictly positive real parts, and so $A$ is not stable. \subsection{Digraph Cycles} There is a direct relationship between the minors of a matrix and the cycles present in its digraph. If two or more cycles do not share any vertices, then we say that they are \textit{independent}. If the digraph of a sign pattern contains a cycle made up of $k$ edges, then this implies that at least one of its $k\times k$ minors is not equal to zero. Additionally, if there exist independent cycles of length $a_1,\ldots,a_m$, then this implies that, if $\sum_{i=1}^{m}a_i\leq n$, at least one of its $(\sum_{i=1}^{m}a_i)\times(\sum_{i=1}^{m}a_i)$ minors is not equal to zero. Below are examples of a digraph with correct minors and one without: \\ $$ \begin{minipage}{0.2\textwidth}\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm,thick,main node/.style={circle, fill opacity=0.75,minimum size = 6pt, inner sep = 0pt}] \node[main node] (1) {1}; \node[main node] (2) [right of=1] {2}; \node[main node] (3) [below of=2] {3}; \node[main node] (4) [left of=3] {4}; \path (1) edge [loop above] node {} (1) edge node {} (2) (2) edge [bend left] node {} (3) (3) edge node {} (4) edge [bend left] node {} (2) (4) edge node {} (1); \end{tikzpicture}\end{minipage} \mbox{has correct minors,} \quad \begin{minipage}{0.2\textwidth}\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm,thick,main node/.style={circle, fill opacity=0.75,minimum size = 6pt, inner sep = 0pt}] \node[main node] (1) {1}; \node[main node] (2) [right of=1] {2}; \node[main node] (3) [below of=2] {3}; \node[main node] (4) [left of=3] {4}; \path (1) edge [loop above] node {} (1) edge node {} (2) (2) edge [loop above] node {} (2) edge node {} (3) (3) edge node {} (4) (4) edge node {} (1); \end{tikzpicture}\end{minipage} \mbox{does not have correct minors.} $$ Therefore, if a given digraph has independent cycles whose lengths add up to each of $1,\ldots,n$, then we can assign signs to the entries of the corresponding matrix such that it has correct minors. \section{Candidate Digraphs} In this section we construct all candidate digraphs with $7$ vertices and $10$ edges which allow for correct minors. In order to better organize this list, we classify the graphs based on their circumference (the maximum cycle length in the graph). \medskip \noindent\textbf{Case 1:} $circ(G)=7$. 
\noindent In this case there must be at least one loop (see the minimum configuration below), and either there are at least two loops or there is exactly one loop and a 2-cycle. \begin{center} \begin{minipage}{0.4\textwidth} \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[main node] (1) {}; \node[main node] (2) [above right of=1] {}; \node[main node] (7) [below right of=1] {}; \node[main node] (3) [right of=2] {}; \node[main node] (6) [right of=7] {}; \node[main node] (4) [right of=3] {}; \node[main node] (5) [right of=6] {}; \path (1) edge [loop left] node {} (1) edge[bend left] node[above] {} (2) (2) edge node[above] {} (3) (3) edge node[above] {} (4) (4) edge[bend left] node[right] {} (5) (5) edge node[below] {} (6) (6) edge node[below] {} (7) (7) edge[bend left] node[below] {} (1); \end{tikzpicture} \end{minipage} \end{center} \medskip \noindent\textbf{Case 1.1}: There are at least two loops. Then $9$ edges have been utilized. Suppose another edge is added to create a $k$-cycle where $k<7$. The possible sizes of nonzero minors are $1,2,k,k+1,k+2,7$ (possibly fewer if the $k$-cycle intersects either of the two loops). Thus, there is at least one $r$ with $2<r<7$ such that all $r\times r$ minors of the adjacency matrix are zero. Therefore $G$ is not potentially stable.\\ \textbf{Case 1.2}: There is exactly one loop and a $2$-cycle of two adjacent (numbering-wise) vertices. This utilizes $9$ edges. Suppose the remaining edge is contained in a $k$-cycle, where $2\leq k <7$. If the $2$-cycle and the loop have a vertex in common, then the possible sizes of nonzero minors are $1,2,k,k+1,k+2,7$, so we miss at least one minor size, and therefore $G$ is not potentially stable. Similarly, if the $k$-cycle has a vertex in common with either the loop or the $2$-cycle, we get a non-PS adjacency matrix. Thus, the $2$-cycle, loop and $k$-cycle must be pairwise disjoint. In this case, the possible sizes of nonzero minors are $1,2,3,k,k+1,k+2,k+3,7$. Thus, $k=4$ or $k=3$. \\ In this case we have the candidate graphs as shown in Figure \ref{fig5}.1, Figure \ref{fig5}.2, Figure \ref{fig5}.3 and Figure \ref{fig5}.4.\\ \textbf{Case 1.3}: There is exactly one loop and a $2$-cycle of two non-adjacent (numbering-wise) vertices. Suppose these two additional edges create a $k$-cycle and an $r$-cycle and nonzero minors of sizes $1,2,3,k,r,r+1,7$. Thus $(k,r)=(4,5)$.\\ In this case we have the candidate graph Figure \ref{fig5}.5. \medskip \noindent\textbf{Case 2:} $circ(G) = 6$. \noindent In this case there must be at least one loop. Either there is a loop at the vertex that does not belong to the $6$-cycle or there is none (see the two possible configurations below). Two of the three remaining edges must be utilized to make sure that the graph is strongly connected. That is, one edge must be coming from the lone vertex and one must be going to the lone vertex. 
\begin{center} \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[main node] (1) {}; \node[main node] (2) [above right of=1] {}; \node[main node] (7) [below right of=1] {}; \node[main node] (3) [right of=2] {}; \node[main node] (6) [right of=7] {}; \node[main node] (4) [right of=3] {}; \node[main node] (5) [right of=6] {}; \path (1) edge [loop left] node {} (1) (2) edge node[above] {} (3) (3) edge node[above] {} (4) (4) edge node[right] {} (5) (5) edge node[below] {} (6) (6) edge node[below] {} (7) (7) edge node[below] {} (2); \end{tikzpicture} \hspace{2cm} \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[main node] (1) {}; \node[main node] (2) [above right of=1] {}; \node[main node] (7) [below right of=1] {}; \node[main node] (3) [right of=2] {}; \node[main node] (6) [right of=7] {}; \node[main node] (4) [right of=3] {}; \node[main node] (5) [right of=6] {}; \path (2) edge[loop left] node[above] {} (2) edge node[above] {} (3) (3) edge node[above] {} (4) (4) edge node[right] {} (5) (5) edge node[below] {} (6) (6) edge node[below] {} (7) (7) edge node[below] {} (2); \end{tikzpicture} \end{center} \noindent \textbf{Case 2.1:} There is a loop at the lone vertex (say $v_1$) and another loop at another vertex. So far, we can guarantee nonzero minors of size $1,2,6,7$. For the two remaining edges, one must be outgoing from $v_1$ and one must be incoming to $v_1$. If these two edges form a $k$-cycle (which intersects a loop and the $6$-cycle), then we get nonzero minors of size $k$ and $k+1$ and nothing else. Thus $G$ will not be potentially stable. \\ \textbf{Case 2.2:} There is a loop at the lone vertex and no loop at any other vertex. Suppose the outgoing and incoming edge to the lone vertex form a $k$-cycle (which intersects the loop and the 6-cycle), with $k<7$. Then minors of size $1,k,6,7$ are nonzero. Suppose the remaining edge gives rise to another cycle of length $1<r<7$ (this means it must necessarily intersect the $6$-cycle). This may give rise to nonzero minors of size $r,r+1$ and $r+k$ (fewer if the $r$-cycle also intersects with the loop or the $k$-cycle). Thus, the $r$-cycle must not intersect with the $k$-cycle and $\{k,r,r+1,r+k\}=\{2,3,4,5\}$. There is no choice but for $k=2$ and $r=3$.\\ Thus, in this case, we have the candidate graphs as in Figure \ref{fig5}.6 and Figure \ref{fig5}.7.\\ \textbf{Case 2.3}: There is no loop at the lone vertex. Hence, there is at least one loop intersecting the $6$-cycle. Suppose the incoming and outgoing edges to the lone vertex form a $k$-cycle, where $1<k<7$. At this point, $9$ edges have been accounted for and nonzero minors of sizes $1,6,k,k+1$. Suppose the last edge gives rise to an $r$-cycle, where $r<7$, which, by assumption, must intersect the $6$-cycle. If this $r$-cycle intersects the loop or the $k$-cycle, then some minor size will be missing and thus the adjacency matrix cannot be PS. If the $r$-cycle, loop and the $k$-cycle are pairwise disjoint, then we get additional nonzero minors of size $r,r+1,r+k,r+k+1$. We want $\{2,3,4,5,7\}\subseteq\{k,k+1,r,r+1,r+k,r+k+1\}$. Thus, either $(r,k)=(2,4)$ or $(r,k)=(4,2)$.\\ Thus, in this case, we have the candidate graphs: Figure \ref{fig5}.8 and Figure \ref{fig5}.9. \medskip \noindent\textbf{Case 3:} $circ(G) = 5$. 
\noindent In this case a $5$-cycle uses $5$ edges, and another edge must form a loop. At least three out of the four remaining edges must be used to ensure strong connectedness of the graph. Either there is a loop intersecting the $5$-cycle or there is none (see the two possible configurations below). \begin{center} \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[main node] (1) {}; \node[main node] (2) [above right of=1] {}; \node[main node] (7) [below right of=1] {}; \node[main node] (3) [right of=2] {}; \node[main node] (6) [right of=7] {}; \node[main node] (4) [right of=3] {}; \node[main node] (5) [right of=6] {}; \path (1) edge [loop left] node {} (1) edge[bend left] node[above] {} (2) (2) edge node[above] {} (3) (3) edge node[above] {} (6) (6) edge node[below] {} (7) (7) edge[bend left] node[below] {} (1); \end{tikzpicture}\hspace{2.5cm} \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[main node] (1) {}; \node[main node] (2) [above right of=1] {}; \node[main node] (7) [below right of=1] {}; \node[main node] (3) [right of=2] {}; \node[main node] (6) [right of=7] {}; \node[main node] (4) [right of=3] {}; \node[main node] (5) [right of=6] {}; \path (1) edge[bend left] node[above] {} (2) (2) edge node[above] {} (3) (3) edge node[above] {} (6) (6) edge node[below] {} (7) (7) edge[bend left] node[below] {} (1) (4) edge [loop left] node {} (4); \end{tikzpicture} \end{center} \noindent \textbf{Case 3.1:} There is a loop intersecting the $5$-cycle (the one we choose at the beginning, there may be more than one $5$-cycle). Either there is an edge between the two remaining vertices or there is none.\\ \textbf{Subcase 3.1.1:} Suppose there is no edge connecting the two vertices (let's call them $v_1$ and $v_2$). Then, two of the four remaining edges should connect $v_1$ to vertices in the $5$-cycle to form a $k$-cycle, where $k\leq 5$. Similarly, the remaining two edges must connect $v_2$ to vertices in the $5$-cycle to form an $r$-cycle, where $r\leq 5$. Assuming the $k$-cycle, the $r$-cycle and the loop are pairwise disjoint, we have nonzero minors of sizes $1,5,k,r,k+1,r+1,k+r,k+r+1$. (Note that if they are not pairwise disjoint, there will be at least one minor size that will be missing.) Thus $\{2,3,4,6,7\}\subseteq \{k,r,k+1,r+1,k+r,k+r+1\}$. Thus $(k,r)=(2,4)$ or $(k,r)=(4,2)$.\\ Thus, we have the candidate graph: Figure \ref{fig5}.10.\\ \textbf{Subcase 3.1.2:} Suppose $v_1$ and $v_2$ form a $2$-cycle and $v_2$ is not adjacent to any vertex in the 5-cycle. So far, we have accounted for 8 edges and nonzero minors of sizes $1,2,3,5$. The two remaining edges must be incoming and outgoing from $v_1$ to make a strongly connected graph. Say these two remaining edges form a $k$-cycle, where $k\leq 5$. This adds nonzero minors of size $k$ and $k+1$, which is not enough to make a potentially stable adjacency matrix.\\ \textbf{Subcase 3.1.3:} Suppose $v_1$ and $v_2$ are part of a $k$-cycle, with $2<k\leq 5$. So far, we have accounted for at least $9$ edges and nonzero minors of sizes $1,k,5,k+1$, where $k\geq 3$. To get a nonzero $2\times 2$ minor, either there must be another loop or there is a $2$-cycle. Adding a loop can only guarantee at least $2$ more nonzero minor sizes. Thus, there must be a $2$-cycle in the graph. 
If the $2$-cycle is disjoint from the $5$-cycle (that is, $v_1$ and $v_2$ form the $2$-cycle), we only get nonzero minors of sizes in $\{1,2,3,5,k,k+1,7\}\neq \{1,2,3,4,5,6,7\}$. Hence the two vertices in the $2$-cycle must be part of the $5$-cycle. In this case, we get nonzero minors of sizes $1,2,3,k,k+1,k+2,k+3,5$. Thus $k=4$.\\ Thus, we have the candidate graph: Figure \ref{fig5}.11. \smallskip \noindent \textbf{Case 3.2:} There is no loop intersecting the $5$-cycle. Again, either there is an edge connecting the remaining two vertices $v_1$ and $v_2$ or there is none. \\ \textbf{Subcase 3.2.1:} There is no edge connecting the remaining two vertices $v_1$ and $v_2$. Then, two of the four remaining edges should connect $v_1$ to vertices in the $5$-cycle to form a $k$-cycle, where $k\leq 5$. Similarly, the remaining two edges must connect $v_2$ to vertices in the $5$-cycle to form an $r$-cycle, where $r\leq 5$. Assuming the $k$-cycle, the $r$-cycle and the loop are pairwise disjoint, we have nonzero minors of sizes $1,5,6,k,r,r+1,r+k$. Note that there is no choice of $2\leq k,r\leq 5$ that will give a complete set of nonzero minors. Thus, this case will not give a PS adjacency matrix.\\ \textbf{Subcase 3.2.2:} Suppose $v_1$ and $v_2$ form a $2$-cycle and one of $v_1$ or $v_2$ is not adjacent to any vertex in the 5-cycle. So far, we have accounted for 8 edges and nonzero minors of sizes $1,2,5,6,7$. The two remaining edges must be incoming and outgoing from $v_1,v_2$ to make a strongly connected graph. Say these two remaining edges form a $k$-cycle, where $k\leq 5$. This adds nonzero minors of size $k$ and possibly (if the $k$-cycle does not contain the loop) $k+1$; since sizes $3$ and $4$ are still needed, this forces $k=3$.\\ Thus, we have the following graph: Figure \ref{fig5}.12.\\ \textbf{Subcase 3.2.3:} Suppose $v_1$ and $v_2$ are part of a $k$-cycle, with $2<k\leq 5$. So far, we have accounted for at least $9$ edges and nonzero minors of sizes $1,5,6,k$, where $k\geq 3$. To get a nonzero $2\times 2$ minor, there should be another loop or a 2-cycle. Adding a loop can only guarantee at most two more sizes of nonzero minors. If there is a $2$-cycle between $v_1$ and $v_2$, we get additional nonzero minor sizes $2,7$, which is not enough for the graph to be PS. If the $2$-cycle is disjoint from the $k$-cycle, we get additional nonzero minor sizes $2,3,k+2$. This is still not enough to get a PS matrix. \medskip \noindent\textbf{Case 4:} $circ(G) = 4$. \noindent Let $v_1$, $v_2$, $v_3$, $v_4$ form a 4-cycle. For each of $v_5$, $v_6$, $v_7$, there must be incoming edges $\{5,in\},\{6,in\}, \{7,in\}$ and outgoing edges $\{5,out\},\{6,out\}, \{7,out\}$. Note that $$\{\{5,in\},\{6,in\}, \{7,in\}\}\cap \{\{5,out\},\{6,out\}, \{7,out\}\}$$ must have at least 1 element since we still have to account for the loop. 
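Before the candidates below are listed, we remark that the two graph-theoretic conditions used throughout this section, strong connectivity and a prescribed circumference, can be checked mechanically for any proposed edge set. The following minimal Python sketch (illustrative only; the helper functions and the sample edge set are ours and are not taken from the constructions in this paper) does both by brute force, which is entirely feasible for $7$ vertices.
\begin{verbatim}
# Minimal Python sketch: strong connectivity and circumference of a digraph
# given as a list of directed edges on vertices 1..n (loops allowed).
from itertools import permutations

def is_strongly_connected(n, edges):
    adj = {v: set() for v in range(1, n + 1)}
    for a, b in edges:
        adj[a].add(b)
    def reachable(s):
        seen, stack = {s}, [s]
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        return seen
    return all(len(reachable(v)) == n for v in range(1, n + 1))

def circumference(n, edges):
    E = set(edges)
    best = 1 if any(a == b for (a, b) in edges) else 0
    for k in range(2, n + 1):                      # try all cycle lengths
        for cyc in permutations(range(1, n + 1), k):
            if all((cyc[i], cyc[(i + 1) % k]) in E for i in range(k)):
                best = max(best, k)
    return best

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 1)]   # a 4-cycle plus a loop
print(is_strongly_connected(7, edges))             # False: 5, 6, 7 unreachable
print(circumference(7, edges))                     # 4
\end{verbatim}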
We can list down all possible nonequivalent strongly connected graphs with less than $9$ edges and maximum cycle length 4 as follows: \begin{center} \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[main node] (1) {}; \node[main node] (2) [below of =1] {}; \node[main node] (3) [right of=2] {}; \node[main node] (4) [right of=3] {}; \node[main node] (5) [below of=4] {}; \node[main node] (6) [left of=5] {}; \node[main node] (7) [left of=6] {}; \path (1) edge[bend left] node[above] {} (2) (2) edge[bend left] node[above] {} (1) edge node[above] {} (3) (3) edge node[above] {} (6) edge node[above] {} (4) (4) edge node[above] {} (5) (5) edge node[above] {} (3) (6) edge node[above] {} (7) (7) edge node[above] {} (2); \end{tikzpicture}\qquad \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[main node] (1) {}; \node[main node] (2) [below of =1] {}; \node[main node] (3) [right of=2] {}; \node[main node] (4) [right of=3] {}; \node[main node] (5) [below of=4] {}; \node[main node] (6) [left of=5] {}; \node[main node] (7) [left of=6] {}; \path (1) edge[bend left] node[above] {} (2) (2) edge[bend left] node[above] {} (1) edge node[above] {} (3) (3) edge node[above] {} (6) (4) edge node[above] {} (5) (5) edge node[above] {} (6) (6) edge node[above] {} (7) edge node[above] {} (4) (7) edge node[above] {} (2); \end{tikzpicture}\qquad \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[main node] (1) {}; \node[main node] (2) [below of =1] {}; \node[main node] (3) [right of=2] {}; \node[main node] (4) [right of=3] {}; \node[main node] (5) [below of=4] {}; \node[main node] (6) [left of=5] {}; \node[main node] (7) [left of=6] {}; \path (1) edge[bend left] node[above] {} (2) (2) edge[bend left] node[above] {} (1) edge node[above] {} (6) (3) edge node[above] {} (4) (4) edge node[above] {} (5) (5) edge node[above] {} (6) (6) edge node[above] {} (7) edge node[above] {} (3) (7) edge node[above] {} (2); \end{tikzpicture}\medskip\\ \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[main node] (1) {}; \node[main node] (2) [below of =1] {}; \node[main node] (3) [right of=2] {}; \node[main node] (4) [right of=3] {}; \node[main node] (5) [below of=4] {}; \node[main node] (6) [left of=5] {}; \node[main node] (7) [left of=6] {}; \path (1) edge[bend left] node[above] {} (2) (2) edge[bend left] node[above] {} (1) edge node[above] {} (7) (3) edge node[above] {} (4) edge node[above] {} (2) (4) edge node[above] {} (5) (5) edge node[above] {} (6) (6) edge node[above] {} (3) (7) edge node[above] {} (6); \end{tikzpicture}\qquad \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[main node] (1) {}; \node[main node] (2) [above right of=1] {}; \node[main node] (7) [below right of=1] {}; \node[main node] (3) [below right of=2] {}; \node[main node] (4) [above right of=3] {}; \node[main node] (6) [below right of=3] {}; \node[main node] (5) [below right of=4] {}; \path (1) edge node[above] {} (2) (2) edge node[above] {} (3) (3) edge 
node[above] {} (7) (7) edge node[right] {} (1) (3) edge node[below] {} (4) (4) edge node[below] {} (5) (5) edge node[below] {} (6) (6) edge node[below] {} (3); \end{tikzpicture} \end{center} For the top left and middle graphs, adding a loop will give an adjacency matrix that has zero determinant. For the top rightmost graph, a loop that is disjoint from the $2$-cycle and $4$-cycle must be added to get all nonzero minors. For the lower left graph, a loop must be added so that the loop, a $4$-cycle and a $2$-cycle are all disjoint. Finally, for the lower right graph, a loop and a $2$-cycle must be added as shown in the figure below to get nonzero minors.\\ Thus, we have the following candidate graphs: Figure \ref{fig5}.13, Figure \ref{fig5}.14, and Figure \ref{fig5}.15. \medskip \noindent\textbf{Case 5:} $circ(G) = 3$.\\ By our formula, $e_{7,3}=9$, so any digraph with $7$ vertices and a circumference of $3$ must contain at least $9$ edges in order to be strongly connected. Let $v_1$, $v_2$ and $v_3$ form a $3$-cycle. Since the graph needs at least $9$ edges in order to be strongly connected, it can have at most $1$ loop, giving a total of $9+1=10$ edges. Then the graph must have a $2$-cycle as well, since a single loop alone cannot produce a nonzero $2\times 2$ minor.\\ \textbf{Case 5.1:} Suppose the $2$-cycle shares an edge with the $3$-cycle ($v_1v_2v_3$). Then, between the $2$-cycle, the $3$-cycle and the loop, we have used $5$ of the $10$ available edges. So there are $5$ edges remaining with which to connect the vertices $v_4$, $v_5$, $v_6$ and $v_7$. Each of these vertices requires at least one incoming edge and one outgoing edge. Since $circ(G)=3$, it would require at least $3$ edges in order to connect any two of the remaining vertices to the original $3$-cycle. From that point, it would require at least an additional $3$ edges in order to connect the remaining two vertices. However, there are only $5$ edges available, and thus the $2$-cycle cannot share an edge with the $3$-cycle.\\ \textbf{Case 5.2:} Suppose the $2$-cycle does not share an edge with the $3$-cycle ($v_1v_2v_3$), say the $2$-cycle is ($v_4v_5$) without loss of generality. Then, between the $2$-cycle, the $3$-cycle and the loop, we have used $6$ of the $10$ available edges. So there are $4$ edges remaining with which to connect the vertices $v_6$ and $v_7$ with the cycle ($v_1v_2v_3$) and the cycle ($v_4v_5$). Since the circumference of the graph is $3$, it would require at least $3$ edges in order to connect $v_6$ and $v_7$ to either the $3$-cycle or the $2$-cycle. Then there is at most $1$ edge remaining, which is insufficient to connect the remaining separated cycles. Therefore the $2$-cycle cannot be separate from the $3$-cycle, and so there are no digraphs with $7$ vertices and a circumference of $3$ which have correct minors. \noindent\textbf{Case 6:} $circ(G) = 2$.\\ By our formula, $e_{7,2}=12$, so any digraph with $7$ vertices and a circumference of $2$ must contain at least $12$ edges in order to be strongly connected. However, we are assuming only $10$ edges, and therefore we cannot have a circumference of $2$. \noindent Summarizing the above discussion, we reach the main result in this section: \begin{proposition}\label{pro3} Suppose that $(V,E)$ is a strongly connected digraph with $7$ vertices and $10$ edges which allows correct minors. Then $(V,E)$ is equivalent to one of the digraphs in Figure \ref{fig5}. 
\end{proposition} \begin{figure} \begin{multicols}{3} \begin{enumerate} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (2) [above right of=1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (7) [below right of=1] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (6) [right of=7] {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (4) [right of=3] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (5) [right of=6] {\textcolor{white}{\textbf{5}}}; \path (1) edge [color=blue,loop left] node {} (1) edge[color=blue,bend left] node[above] {} (2) (2) edge[color=blue,bend left] node[above] {} (3) (3) edge[color=blue] node[above] {} (4) edge[color=blue,bend left] node[above] {} (2) (4) edge[color=blue,bend left] node[right] {} (5) (5) edge[color=blue] node[below] {} (6) (6) edge[color=blue] node[below] {} (7) edge[color=blue] node[right] {} (4) (7) edge[color=blue,bend left] node[below] {} (1); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (2) [above right of=1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (7) [below right of=1] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (6) [right of=7] {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (4) [right of=3] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (5) [right of=6] {\textcolor{white}{\textbf{5}}}; \path (1) edge [color=blue,loop left] node {} (1) edge[color=blue,bend left] node[above] {} (2) (2) edge[color=blue,bend left] node[above] {} (3) (3) edge[color=blue] node[above] {} (4) edge[color=blue,bend left] node[above] {} (2) (4) edge[color=blue,bend left] node[right] {} (5) (5) edge[color=blue] node[below] {} (6) (6) edge[color=blue] node[below] {} (7) (7) edge[color=blue,bend left] node[below] {} (1) edge[color=blue,bend left] node[right] {} (5); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (2) [above right of=1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (7) [below right of=1] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (6) [right of=7] {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (4) [right of=3] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (5) [right of=6] {\textcolor{white}{\textbf{5}}}; \path (1) edge [color=blue,loop left] node {} (1) edge[color=blue,bend left] node[above] {} (2) (2) edge[color=blue,bend left] node[above] {} (3) (3) edge[color=blue] node[above] {} (4) edge[color=blue,bend left] node[above] {} (2) (4) edge[color=blue,bend left] node[right] {} (5) (5) edge[color=blue] node[below] {} (6) (6) edge[color=blue] node[below] {} (7) (7) edge[color=blue,bend 
left] node[below] {} (1) edge[color=blue] node[right] {} (4); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (2) [above right of=1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (7) [below right of=1] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (6) [right of=7] {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (4) [right of=3] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (5) [right of=6] {\textcolor{white}{\textbf{5}}}; \path (1) edge [color=blue,loop left] node {} (1) edge[color=blue,bend left] node[above] {} (2) (2) edge[color=blue] node[above] {} (3) (3) edge[color=blue, bend left] node[above] {} (4) (4) edge[color=blue,bend left] node[right] {} (5) edge[color=blue,bend left] node[above] {} (3) (5) edge[color=blue] node[below] {} (6) (6) edge[color=blue] node[below] {} (7) (7) edge[color=blue,bend left] node[below] {} (1) edge[color=blue,bend left] node[right] {} (5); \end{tikzpicture}\\ \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (2) [above right of=1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (7) [below right of=1] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (6) [right of=7] {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (4) [right of=3] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (5) [right of=6] {\textcolor{white}{\textbf{5}}}; \path (1) edge [color=blue,loop left] node {} (1) edge[color=blue,bend left] node[above] {} (2) (2) edge[color=blue] node[above] {} (3) (3) edge[color=blue] node[above] {} (4) edge[color=blue,bend left] node[above] {} (7) (4) edge[color=blue,bend left] node[right] {} (5) (5) edge[color=blue] node[below] {} (6) (6) edge[color=blue] node[below] {} (7) (7) edge[color=blue,bend left] node[below] {} (1) edge[color=blue,bend left] node[below] {} (3); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (2) [above right of=1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (7) [below right of=1] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (6) [right of=7] {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (4) [right of=3] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (5) [right of=6] {\textcolor{white}{\textbf{5}}}; \path (1) edge[color=blue,loop left] node[above] {} (1) edge[color=blue] node[above] {} (2) (2) edge[color=blue] node[above] {} (3) edge[color=blue,bend right] node[above] {} (1) (3) edge[color=blue] node[above] {} (4) (4) edge[color=blue] node[right] {} (5) (5) edge[color=blue] node[below] {} (6) edge[color=blue] node[below] {} (3) (6) edge[color=blue] node[below] {} (7) 
(7) edge[color=blue] node[below] {} (2); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (2) [above right of=1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (7) [below right of=1] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (6) [right of=7] {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (4) [right of=3] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (5) [right of=6] {\textcolor{white}{\textbf{5}}}; \path (1) edge[color=blue,loop left] node[above] {} (1) edge[color=blue] node[above] {} (2) (2) edge[color=blue] node[above] {} (3) edge[color=blue,bend right] node[above] {} (1) (3) edge[color=blue] node[above] {} (4) (4) edge[color=blue] node[right] {} (5) (5) edge[color=blue] node[below] {} (6) (6) edge[color=blue] node[below] {} (7) edge[color=blue] node[below] {} (4) (7) edge[color=blue] node[below] {} (2); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (7) {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (6) [above right of=7] {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (1) [below right of=7] {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (5) [right of=6] {\textcolor{white}{\textbf{5}}}; \node[color=blue,main node] (2) [right of=1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (4) [right of=5] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \path (1) edge[color=blue] node[below] {} (2) edge[color=blue,loop left] node[below] {} (1) (2) edge[color=blue] node[below] {} (3) (3) edge[color=blue] node[above] {} (4) (4) edge[color=blue] node[right] {} (5) (5) edge[color=blue] node[below] {} (6) edge[color=blue] node[below] {} (2) (6) edge[color=blue,bend right] node[below] {} (7) edge[color=blue] node[below] {} (1) (7) edge[color=blue] node[above] {} (6); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (6) {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (1) [below of=6] {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (5) [right of=6] {\textcolor{white}{\textbf{5}}}; \node[color=blue,main node] (7) [above of=5] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (2) [right of=1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (4) [right of=5] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \path (1) edge[color=blue] node[below] {} (2) edge[color=blue,loop left] node[below] {} (1) (2) edge[color=blue, bend right] node[below] {} (3) (3) edge[color=blue] node[above] {} (4) edge[color=blue, bend right] node[below] {} (2) (4) edge[color=blue] node[right] {} (5) (5) edge[color=blue] node[below] {} (6) (6) edge[color=blue] node[below] {} (7) edge[color=blue] node[below] {} (1) (7) edge[color=blue] node[above] {} (4); \end{tikzpicture} \item \begin{tikzpicture}[->, 
>=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (2) [above right of=1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (7) [below right of=1] {\textcolor{white}{\textbf{5}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (6) [right of=7] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (4) [right of=3] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (5) [below of=7] {\textcolor{white}{\textbf{6}}}; \path (1) edge[color=blue,loop left] node {} (1) edge[color=blue,bend left] node[above] {} (2) (2) edge[color=blue] node[above] {} (3) (3) edge[color=blue] node[above] {} (6) (4) edge[color=blue,bend right] node[above] {} (2) (5) edge[color=blue,bend left] node[above] {} (7) (6) edge[color=blue] node[below] {} (7) edge[color=blue] node[below] {} (4) (7) edge[color=blue,bend left] node[below] {} (1) edge[color=blue,bend left] node[above] {} (5); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (2) [above right of=1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (5) [below right of=1] {\textcolor{white}{\textbf{5}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (4) [right of=5] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (7) [below of=4] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (6) [below of=5] {\textcolor{white}{\textbf{6}}}; \path (1) edge [color=blue,loop left] node {} (1) edge[color=blue,bend left] node[above] {} (2) (2) edge[color=blue,bend left] node[above] {} (3) (3) edge[color=blue] node[above] {} (4) edge[color=blue,bend left] node[above] {} (2) (4) edge[color=blue] node[above] {} (5) (5) edge[color=blue,bend left] node[above] {} (1) edge[color=blue] node[above] {} (6) (6) edge[color=blue] node[below] {} (7) (7) edge[color=blue] node[below] {} (4); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (2) [above right of=1] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (7) [below right of=1] {\textcolor{white}{\textbf{5}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (6) [right of=7] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (4) [right of=3] {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (5) [right of=6] {\textcolor{white}{\textbf{2}}}; \path (1) edge[color=blue,bend left] node[above] {} (2) (2) edge[color=blue] node[above] {} (3) (3) edge[color=blue] node[above] {} (6) (6) edge[color=blue] node[below] {} (7) edge[color=blue] node[below] {} (5) (7) edge[color=blue,bend left] node[below] {} (1) (4) edge [color=blue,loop left] node {} (4) edge[color=blue,bend left] node[below] {} (5) (5) edge[color=blue,bend left] node[below] {} (4) edge[color=blue] node[below] {} (3); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten 
>=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (2) [below of =1] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (4) [right of=3] {\textcolor{white}{\textbf{5}}}; \node[color=blue,main node] (5) [below of=4] {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (6) [left of=5] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (7) [left of=6] {\textcolor{white}{\textbf{1}}}; \path (1) edge[color= blue,bend left] node[above] {} (2) (2) edge[color= blue,bend left] node[above] {} (1) edge[color= blue] node[above] {} (6) (3) edge[color= blue] node[above] {} (4) (4) edge[color= blue] node[above] {} (5) (5) edge[color= blue] node[above] {} (6) (6) edge[color= blue] node[above] {} (7) edge[color= blue] node[above] {} (3) (7) edge[color= blue] node[above] {} (2) edge[color= blue,loop left] node[above] {} (7); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{5}}}; \node[color=blue,main node] (2) [below of =1] {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (3) [right of=2] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (4) [right of=3] {\textcolor{white}{\textbf{6}}}; \node[color=blue,main node] (5) [below of=4] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (6) [left of=5] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (7) [left of=6] {\textcolor{white}{\textbf{1}}}; \path (1) edge[color=blue,bend left] node[above] {} (2) (2) edge[color=blue,bend left] node[above] {} (1) edge[color=blue] node[above] {} (7) (3) edge[color=blue] node[above] {} (4) edge[color=blue] node[above] {} (2) (4) edge[color=blue] node[above] {} (5) (5) edge[color=blue] node[above] {} (6) (6) edge[color=blue] node[above] {} (3) (7) edge[color=blue] node[above] {} (6) edge[color= blue,loop left] node[above] {} (7); \end{tikzpicture} \item \begin{tikzpicture}[->, >=stealth',shorten >=1pt,auto,node distance=1.2cm,thick,main node/.style={circle,fill, fill opacity=0.75,minimum size = 6pt, inner sep = 1pt}] \node[color=blue,main node] (1) {\textcolor{white}{\textbf{4}}}; \node[color=blue,main node] (2) [above right of=1] {\textcolor{white}{\textbf{1}}}; \node[color=blue,main node] (7) [below right of=1] {\textcolor{white}{\textbf{3}}}; \node[color=blue,main node] (3) [below right of=2] {\textcolor{white}{\textbf{2}}}; \node[color=blue,main node] (4) [above right of=3] {\textcolor{white}{\textbf{5}}}; \node[color=blue,main node] (6) [below right of=3] {\textcolor{white}{\textbf{7}}}; \node[color=blue,main node] (5) [below right of=4] {\textcolor{white}{\textbf{6}}}; \path (1) edge[color=blue] node[above] {} (2) edge[color=blue,bend left] node[above] {} (7) (2) edge[color=blue] node[above] {} (3) edge[color=blue,loop above] node[above] {} (2) (3) edge[color=blue] node[above] {} (7) (7) edge[color=blue,bend left] node[right] {} (1) (3) edge[color=blue] node[below] {} (4) (4) edge[color=blue] node[below] {} (5) (5) edge[color=blue] node[below] {} (6) (6) edge[color=blue] node[below] {} (3); \end{tikzpicture} \end{enumerate} \end{multicols} \caption{\bf List of potential digraphs with $7$ vertices and $10$ edges. 
\label{fig5}} \end{figure} \section{Calculations} We now convert the graphs from Figure \ref{fig5} into properly signed matrices, and show that none of them can be realized by a stable matrix. First however, we prove the following lemma: \begin{lemma}\label{ineq} Let $A$ be a $7\times7$ real-valued matrix with the characteristic polynomial \begin{equation}\label{PA} P_A(t)=t^7+c_1 t^6+c_2 t^5+c_3 t^4+c_4 t^3+c_5 t^2+c_6 t+c_7. \end{equation} If $A$ is stable, then all of the following inequalities must hold: \begin{enumerate} \item $c_2c_4-c_6>0$; \item $c_1c_2-c_3>0$; \item $c_1c_6-c_7>0$; \item $c_2c_5-c_7>0$. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{deg2}, a matrix $A$ is stable if and only if there exist $a_1,a_2,a_3,a_4,b_1,b_2,b_3>0$ such that \begin{equation}\label{PA1} P_A(t) = (t^2+b_1t+a_1)(t^2+b_2t+a_2)(t^2+b_3t+a_3)(t+a_4). \end{equation} Comparing the coefficients of \eqref{PA} and \eqref{PA1}, we have: \begin{equation} \begin{split} c_1 =& a_4 + b_1 + b_2 + b_3, \\ c_2 =& a_1 + a_2 + a_3 + a_4(b_1 + b_2+b_3) + b_1 b_2 + b_1 b_3 + b_2 b_3, \\ c_3 =& a_1 a_4 + a_2 a_4 + a_3 a_4 + a_2 b_1 + a_3 b_1 + a_1 b_2 + a_3 b_2 + a_4 b_1 b_2\\ &+a_1 b_3 + a_2 b_3 + a_4 b_1 b_3 + a_4 b_2 b_3 + b_1 b_2 b_3,\\ c_4 =& a_1 a_2 + a_1 a_3 + a_2 a_3 + a_2 a_4 b_1 + a_3 a_4 b_1 + a_1 a_4 b_2 + a_3 a_4 b_2\\ &+a_3 b_1 b_2 + a_1 a_4 b_3 + a_2 a_4 b_3 + a_2 b_1 b_3 + a_1 b_2 b_3 + a_4 b_1 b_2 b_3,\\ c_5 =& a_1 a_2 a_4 + a_1 a_3 a_4 + a_2 a_3 a_4 + a_2 a_3 b_1 + a_1 a_3 b_2 + a_3 a_4 b_1 b_2\\ &+a_1 a_2 b_3 + a_2 a_4 b_1 b_3 + a_1 a_4 b_2 b_3,\\ c_6 =& a_1a_2a_3 + a_2a_3a_4b_1 + a_1 a_3 a_4 b_2 + a_1 a_2 a_4 b_3,\\ c_7 =& a_1a_2a_3a_4. \end{split} \end{equation} We can verify that for each case of $c_i c_j-c_{i+j}$ listed in here, $c_i c_j-c_{i+j}$ can be expressed as a sum of products of $a_i$ and $b_j$'s, hence $c_i c_j-c_{i+j}>0$ as all $a_i$ and $b_j$ are positive. \end{proof} Now we use Lemma \ref{ineq} to exclude all the $15$ digraphs (or equivalently sign patterns) in Figure \ref{fig5} to be potentially stable. \begin{enumerate} \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & -a_{32} & 0 & a_{34} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & a_{45} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & a_{56} & 0\\ 0 & 0 & 0 & -a_{64} & 0 & 0 & a_{67}\\ a_{71} & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_1=a_{11}$, $c_2=a_{23}a_{32}$, and $c_3=a_{11}a_{23}a_{32}+a_{45}a_{56}a_{64}$. So $c_1c_2-c_3=-a_{45}a_{56}a_{64}<0$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 2. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & -a_{32} & 0 & a_{34} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & a_{45} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & a_{56} & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{67}\\ -a_{71} & 0 & 0 & 0 & -a_{75} & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_1=a_{11}$, $c_2=a_{23}a_{32}$, $c_3=a_{11}a_{23}a_{32}+a_{56}a_{67}a_{75}$. So $c_1c_2-c_3=-a_{56}a_{67}a_{75}<0$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 2. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & -a_{32} & 0 & a_{34} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & a_{45} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & a_{56} & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{67}\\ \pm a_{71} & 0 & 0 & -a_{74} & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have:\\ $c_2=a_{23}a_{32}$, $c_4=a_{45}a_{56}a_{67}a_{74}$, $c_6=a_{23}a_{32}a_{45}a_{56}a_{67}a_{74}$. 
So $c_2c_4-c_6=0$. Note that while both positive and negative values of $a_{71}$ allow for correct minors, the value of $a_{71}$ does not appear in our contradiction, and thus the contradiction holds regardless of the value of $a_{71}$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 1. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & a_{34} & 0 & 0 & 0\\ 0 & 0 & -a_{43} & 0 & a_{45} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & a_{56} & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{67}\\ -a_{71} & 0 & 0 & 0 & -a_{75} & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_1=a_{11}$, $c_2=a_{34}a_{43}$, $c_3=a_{11}a_{34}a_{43}+a_{56}a_{67}a_{75}$. So $c_1c_2-c_3=-a_{56}a_{67}a_{75}<0$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 2. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & a_{34} & 0 & 0 & a_{37}\\ 0 & 0 & 0 & 0 & a_{45} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & a_{56} & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{67}\\ -a_{71} & 0 & -a_{73} & 0 & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_1=a_{11}$, $c_2=a_{37}a_{73}$, $c_3=a_{11}a_{37}a_{73}$. So $c_1c_2-c_3=0$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 2. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ -a_{21} & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & a_{34} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & a_{45} & 0 & 0\\ 0 & 0 & -a_{53} & 0 & 0 & a_{56} & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{67}\\ 0 & -a_{72} & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_1=a_{11}$, $c_6=a_{23}a_{34}a_{45}a_{56}a_{67}a_{72}$, $c_7=a_{11}a_{23}a_{34}a_{45}a_{56}a_{67}a_{72}$. So $c_1c_6-c_7=0$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 3. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ -a_{21} & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & a_{34} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & a_{45} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & a_{56} & 0\\ 0 & 0 & 0 & -a_{64} & 0 & 0 & a_{67}\\ 0 & -a_{72} & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_1=a_{11}$, $c_6=a_{23}a_{34}a_{45}a_{56}a_{67}a_{72}$, $c_7=a_{11}a_{23}a_{34}a_{45}a_{56}a_{67}a_{72}$. So $c_1c_6-c_7=0$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 3. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & a_{34} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & a_{45} & 0 & 0\\ 0 & -a_{52} & 0 & 0 & 0 & a_{56} & 0\\ -a_{61} & 0 & 0 & 0 & 0 & 0 & a_{67}\\ 0 & 0 & 0 & 0 & 0 & -a_{76} & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_2=a_{67}a_{76}$, $c_4=a_{23}a_{34}a_{45}a_{52}$, $c_6=a_{67}a_{76}a_{23}a_{34}a_{45}a_{52}+a_{12}a_{23}a_{34}a_{45}a_{56}a_{61}$, where the second term in $c_6$ comes from the $6$-cycle $1\rightarrow 2\rightarrow 3\rightarrow 4\rightarrow 5\rightarrow 6\rightarrow 1$. So $c_2c_4-c_6=-a_{12}a_{23}a_{34}a_{45}a_{56}a_{61}<0$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 1. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & -a_{32} & 0 & a_{34} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & a_{45} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & a_{56} & 0\\ \pm a_{61} & 0 & 0 & 0 & 0 & 0 & a_{67}\\ 0 & 0 & 0 & -a_{74} & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_2=a_{23}a_{32}$, $c_5=a_{11}a_{45}a_{56}a_{67}a_{74}$, $c_7=a_{23}a_{32}a_{11}a_{45}a_{56}a_{67}a_{74}$. So $c_2c_5-c_7=0$.
Note that while both positive and negative values of $a_{61}$ allow for correct minors, the value of $a_{61}$ does not appear in our contradiction, and thus the contradiction holds regardless of the value of $a_{61}$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 4. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & a_{34} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & a_{45} & 0 & a_{47}\\ \pm a_{51} & 0 & 0 & 0 & 0 & a_{56} & 0\\ 0 & 0 & 0 & 0 & -a_{65} & 0 & 0\\ 0 & -a_{72} & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_2=a_{56}a_{65}$, $c_4=a_{23}a_{34}a_{47}a_{72}$, $c_6=a_{56}a_{65}a_{23}a_{34}a_{47}a_{72}$. So $c_2c_4-c_6=0$. Note that while both positive and negative values of $a_{51}$ allow for correct minors, the value of $a_{51}$ does not appear in our contradiction, and thus the contradiction holds regardless of the value of $a_{51}$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 1. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & -a_{32} & 0 & a_{34} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & a_{45} & 0 & 0\\ \pm a_{51} & 0 & 0 & 0 & 0 & a_{56} & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{67}\\ 0 & 0 & 0 & -a_{74} & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_2=a_{23}a_{32}$, $c_4=a_{45}a_{56}a_{67}a_{74}$, $c_6=a_{23}a_{32}a_{45}a_{56}a_{67}a_{74}$. So $c_2c_4-c_6=0$. Note that while both positive and negative values of $a_{51}$ allow for correct minors, the value of $a_{51}$ does not appear in our contradiction, and thus the contradiction holds regardless of the value of $a_{51}$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 1. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ -a_{21} & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & a_{34} & 0 & 0 & 0\\ 0 & -a_{42} & 0 & 0 & a_{45} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & a_{56} & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{67}\\ 0 & 0 & -a_{73} & 0 & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_2=a_{12}a_{21}$, $c_5=a_{34}a_{45}a_{56}a_{67}a_{73}$, $c_7=a_{12}a_{21}a_{34}a_{45}a_{56}a_{67}a_{73}$. So $c_2c_5-c_7=0$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 4. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & 0 & 0 & -a_{27}\\ \pm a_{31} & 0 & 0 & a_{34} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & a_{45} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & a_{56} & 0\\ 0 & 0 & -a_{63} & 0 & 0 & 0 & 0\\ 0 & -a_{72} & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_2=a_{27}a_{72}$, $c_4=a_{34}a_{45}a_{56}a_{63}$, $c_6=a_{27}a_{72}a_{34}a_{45}a_{56}a_{63}$. So $c_2c_4-c_6=0$. Note that while both positive and negative values of $a_{31}$ allow for correct minors, the value of $a_{31}$ does not appear in our contradiction, and thus the contradiction holds regardless of the value of $a_{31}$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 1. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & a_{34} & 0 & a_{36} & 0\\ \pm a_{41} & 0 & 0 & 0 & a_{45} & 0 & 0\\ 0 & 0 & 0 & -a_{54} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{67}\\ 0 & -a_{72} & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_2=a_{45}a_{54}$, $c_5=a_{11}a_{23}a_{36}a_{67}a_{72}$, $c_7=a_{45}a_{54}a_{11}a_{23}a_{36}a_{67}a_{72}$. So $c_2c_5-c_7=0$.
Note that while both positive and negative values of $a_{41}$ allow for correct minors, the value of $a_{41}$ does not appear in our contradiction, and thus the contradiction holds regardless of the value of $a_{41}$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 4. \item {\small $\begin{bmatrix} -a_{11} & a_{12} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & a_{23} & 0 & a_{25} & 0 & 0\\ 0 & 0 & 0 & a_{34} & 0 & 0 & 0\\ \pm a_{41} & 0 & -a_{43} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & a_{56} & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{67}\\ 0 & -a_{72} & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$}. \medskip\noindent Here we have: $c_2=a_{43}a_{34}$, $c_5=a_{11}a_{25}a_{56}a_{67}a_{72}$, $c_7=a_{43}a_{34}a_{11}a_{25}a_{56}a_{67}a_{72}$. So $c_2c_5-c_7=0$. Note that while both positive and negative values of $a_{41}$ allow for correct minors, the value of $a_{41}$ does not appear in our contradiction, and thus the contradiction holds regardless of the value of $a_{41}$. Thus this sign pattern is not potentially stable by Lemma \ref{ineq} part 4. \end{enumerate}
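\medskip\noindent The coefficient computations above can be checked mechanically. The following is a minimal sketch (not part of the original argument) that verifies the first case symbolically; it assumes the Python library SymPy is available, and all entries $a_{ij}$ are declared positive, following the sign-pattern convention used above.
\begin{verbatim}
import sympy as sp

# Entries of the first sign pattern in the list above; all a_{ij} are positive.
a11, a12, a23, a32, a34, a45, a56, a64, a67, a71 = sp.symbols(
    'a11 a12 a23 a32 a34 a45 a56 a64 a67 a71', positive=True)

A = sp.Matrix([
    [-a11,  a12,    0,    0,    0,    0,    0],
    [   0,    0,  a23,    0,    0,    0,    0],
    [   0, -a32,    0,  a34,    0,    0,    0],
    [   0,    0,    0,    0,  a45,    0,    0],
    [   0,    0,    0,    0,    0,  a56,    0],
    [   0,    0,    0, -a64,    0,    0,  a67],
    [ a71,    0,    0,    0,    0,    0,    0]])

t = sp.symbols('t')
p = sp.expand((t*sp.eye(7) - A).det())                 # characteristic polynomial P_A(t)
c = [sp.expand(p.coeff(t, 7 - k)) for k in range(8)]   # c[k] is the coefficient c_k

print(sp.simplify(c[1]*c[2] - c[3]))                   # prints -a45*a56*a64, i.e. c_1c_2-c_3<0
\end{verbatim}
The same script, with the matrix replaced accordingly, can be used to check the remaining cases.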
\section{Introduction} \label{sec_intro} Polar codes build on channel polarization to efficiently achieve the capacity of symmetric channels (refer to \cite{arikan_channel_2009,arikan_rate_2009,sasoglu_polarization_2009, korada_polar_2010} for detailed presentations). Channel polarization is a method that takes two independent binary-input discrete memoryless channels $W(y|x)$ to a \textit{bad} channel and a \textit{good} channel, given by \begin{align} W(y_1^2|u_1) &= \sum_{u_2 \in \qty{0,1}}W(y_2 | u_2) W(y_1 | u_1 \oplus u_2), \label{eq:bad_channel} \\ W(y_1^2, u_1 |u_2) &= W(y_2 | u_2) W(y_1 | u_1 \oplus u_2) \label{eq:good_channel} \end{align} respectively, where $x_a^b = (x_a, x_{a+1}, \ldots, x_b)^\top$. These channels are obtained by combining two copies of $W(y|x)$ with a CNOT gate $(u_1,u_2)\rightarrow (u_1\oplus u_2, u_2)$ and then decoding successively bits $u_1$ and $u_2$. That is, output bit $u_1$ is decoded first assuming that $u_2$ is erased. Then bit $u_2$ is decoded taking into account the previously decoded value of $u_1$. Polar codes are obtained by recursing this process to obtain $2^l$ different channels from the polarization of $2^{l-1}$ pairs of channels (\fig{subfig_circ_2_1}). As the number of polarization steps $l$ goes to infinity, the fraction of channels for which the error probability approaches 0 tends to $I(W)$ and the fraction of channels for which the error probability approaches 1 tends to $1 - I(W)$, where $I(W)$ is the mutual information of the channel with uniform distribution of the inputs \cite{arikan_channel_2009}. Thus, polar codes are capacity-achieving for those channels. The above construction can be generalized by replacing the CNOT transformation by a different polarization kernel \cite{korada_polar_2010-1}. See \sec{kernels} for details. The kernel can generally take as input more than two copies of the channel $W(y|x)$, and the {\em breadth} $b$ of a kernel is defined as the number of channels it combines. An increasing breadth offers the possibility of a more efficient polarization (i.e., a lower decoding error probability), but has the drawback of an increased decoding complexity. \begin{figure}[!t] \centering \subfloat[]{ \centering \includegraphics[width=1.25in]{polar_codes_circ.pdf} \label{fig:subfig_circ_2_1}} \quad \subfloat[]{ \centering \includegraphics[width=1.83in]{circ_b_3_sub_1.pdf} \label{fig:subfig_circ_3_1}} \quad \\ \subfloat[]{ \centering \includegraphics[width=1.25in]{circ_b_2_sub_2.pdf} \label{fig:subfig_circ_2_2}} \quad \subfloat[]{ \centering \includegraphics[width=1.83in]{circ_b_3_sub_4.pdf} \label{fig:subfig_circ_3_4}} \caption{Examples of regular (depth=1) and convolutional (depth$>1$) polar code circuits. The parameters (breadth, depth, polarization steps) are \textbf{(a)} (2,1,4), \textbf{(b)} (3,1,3), \textbf{(c)} (2,2,4) and \textbf{(d)} (3,3,3).} \label{fig:fig_circuits} \end{figure} Another possible generalization of polar codes is to replace the block-structure polarization procedure by a convolutional structure. See \sec{conv} for details. Note indeed that each polarization step of a polar code consists of independent applications of the polarization kernel on distinct blocks of $b$ bits (pairs of bits in the above example with $b=2$).
Recently (\cite{ferris_branching_2014,ferris_convolutional_2017}), this idea was extended to a convolutional structure (see \fig{subfig_circ_2_2} and \fig{subfig_circ_3_4}), where each polarization step does not factor into a product of independent transformations on disjoint blocks but instead consists of $d$ layers of shifted block transformations. We refer to the number of layers $d$ as the {\em depth} of a code. An increasing depth offers the advantage of faster polarization and the drawback of an increased decoding complexity. The focus of the present work is to compare the trade-off between breadth and depth in terms of the speed at which the decoding error rate goes to zero and the decoding complexity. We focus on codes of practically relevant sizes, using Monte Carlo numerical simulations. \section{Decoding} In this section, the general successive cancellation decoding scheme is defined in terms of tensor networks. This enables a straightforward extension to convolutional polar codes. \subsection{Successive cancellation} \label{sec:SC} Define $G$ as the reversible encoding circuit acting on $N$ input bits and $N$ output bits. $K$ of these input bits take arbitrary values $u_i$ while the $N-K$ others are frozen to the value $u_i=0$. From this input $u_1^N$, the message $x_1^N = G u_1^N$ is transmitted. The channel produces the received message $y_1^N$, resulting in a composite channel \begin{align} W_G(y_1^N| u_1^N) = \prod_{i=1}^N W(y_i | (G u_1^N)_i). \label{eq:comp_channel} \end{align} This composite channel induces a correlated distribution on the bits $u_i$ and is represented graphically on \fig{subfig_comp_channel}. Successive cancellation decoding converts this composite channel into $N$ different channels given by \begin{align} W^{(i)}_G(y_1^N, u_{1}^{i-1}| u_i) = \sum_{u_{i+1}, \ldots u_{N}} W_G(y_1^N | u_1^N), \label{eq:eff_channel} \end{align} for $i = 1,2,\ldots N$. Those channels are obtained by decoding successively symbols $u_1$ through $u_N$ (i.e., from right to left on \fig{fig_succ_dec}) by summing over all the bits that are not yet decoded and fixing the value of all the bits $u_1^{i-1}$, either to their frozen value, if the corresponding original input bit was frozen, or to their previously decoded value. This effective channel is represented graphically on \fig{subfig_eff_channel}. Given $W^{(i)}_G$, $u_i$ is decoded by maximizing the likelihood of the acquired information: \begin{align} u_i = \mathop{\text{argmax}}_{\tilde u_i\in\qty{0,1}} \, W^{(i)}_G(y_1^N, u_{1}^{i-1}| \tilde u_i). \label{eq_ml_decoder} \end{align} Applying this procedure for all bits from right to left yields the so-called \textit{successive cancellation decoder}. Equation \ref{eq_ml_decoder} can be generalized straightforwardly by decoding not a single bit $u_i$ at a time but instead a $w$-bit sequence $u_i^{i+w-1}$ jointly, collectively viewed as a single symbol from a larger alphabet of size $2^w$. To this effect, the decoding width $w$ is defined as the number of bits that are decoded simultaneously. \begin{figure}[!t] \centering \subfloat[]{ \centering \includegraphics[width=0.95in]{composite_channel.pdf} \label{fig:subfig_comp_channel}} \qquad \subfloat[]{ \centering \includegraphics[width=1.69in]{effective_channel.pdf} \label{fig:subfig_eff_channel}} \caption{Schematic representation of the successive cancellation decoder. \textbf{(a)} A composite channel is obtained from an encoding circuit $G$ and $N$ copies of a channel $W$.
Contracting this tensor network for given $y_1^N$ and $u_1^N$ yields \eq{comp_channel}. \textbf{(b)} An effective channel is obtained from the composite channel by summing over all the values of bits $u_{i+1}^N$, graphically represented by the uniform tensor $e = \binom 11$, when decoding bit $u_i$. Contracting this tensor yields \eq{eff_channel} up to a normalization factor.} \label{fig:fig_succ_dec} \end{figure} \subsection{Decoding with tensor networks} \label{sec:TN} Convolutional polar codes were largely inspired by tensor network methods used in quantum many-body physics (see e.g. \cite{orus_practical_2014} and \cite{bridgeman_hand-waving_2017} for an introduction). Akin to the graphical tools used in information theory (Tanner graph, factor graph, etc.), tensor networks were introduced as compact graphical representations of probability distributions (or amplitudes in quantum mechanics) involving a large number of correlated variables. Moreover, certain computational procedures are more easily cast using these graphical representations. This is the case for the successive cancellation decoding problem described above, where the goal is to compute $W^{(i)}_G(y_1^N, u_{1}^{i-1}| u_i)$ given fixed values of $y_1^N, u_{1}^{i-1}$. While $G$ is a $\mathbb F_2^N$ linear transformation, it is sometimes convenient to view it as a linear transformation on the space of probability distributions over $N$-bit sequences, i.e., the linear space $\mathbb R^{2^N}$ whose basis vectors are labeled by all possible $N$-bit strings. On this space, $G$ acts as a permutation matrix mapping basis vector $u_1^N$ to basis vector $x_1^N = Gu_1^N$. A single bit is represented in the state $0$ by $u=\binom 10$, in the state $1$ by $u=\binom 01$, and a bit string $u_1^N$ is represented by the $2^N$ dimensional vector $u_1^N = u_1\otimes u_2\otimes \ldots \otimes u_N$. A single bit channel is a $2\times 2$ stochastic matrix and a CNOT gate is given by \begin{equation} {\rm CNOT} = \left( \begin{array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{array} \right), \label{eq:CNOT} \end{equation} because it permutes the inputs $10$ and $11$ while leaving the other inputs $00$ and $01$ unchanged. In this representation \begin{align} W^{(i)}_G&(y_1^N, u_{1}^{i-1}| u_i) = \nonumber \\ &\frac{1}{Z}[u_1 \otimes \ldots \otimes u_{i-1} \otimes u_i \otimes e^{\otimes(N-i)}]^T G W^{\otimes N} y_1^N, \label{eq:Wi} \end{align} where $e = \binom 11$ and $Z = \sum_{u_i \in \qty{0,1}} W^{(i)}_G(y_1^N, u_{1}^{i-1}| u_i)$ is a normalization factor. Ignoring normalization, this quantity can be represented graphically as a tensor network (see \fig{subfig_eff_channel}), where each element of the network is a rank-$r$ tensor, i.e., an element of $\mathbb R^{2^r}$. Specifically, a bit $u_i$ is a rank-one tensor, a channel $W$ is a rank-two tensor, and a two-bit gate is a rank-four tensor (two input bits and two output bits). The CNOT gate is obtained by reshaping \eq{CNOT} into a $(2 \times 2 \times 2 \times 2)$ tensor. In this graphical representation, a rank-$r$ tensor $A_{\mu_1,\mu_2,\ldots \mu_r}$ is represented by a degree-$r$ vertex, with one edge associated to each index $\mu_k$. An edge connecting two vertices means that the shared index is summed over \begin{equation} \includegraphics[width=6.5cm]{TNC},\label{eq:TNC} \end{equation} generalizing the notion of vector and matrix product to higher rank tensors. Tensors can be assembled into a network where edges represent input-output relations just like in an ordinary logical circuit representation.
Evaluating \eq{Wi} then amounts to summing over edge values. This computational task, named {\em tensor contraction}, generally scales exponentially with the tree-width of the tensor network \cite{arad_quantum_2010}. The graphical calculus becomes valuable when using circuit identities that simplify the tensor network. Specifically, these identities encode two simple facts illustrated on \fig{tn_simp}: a permutation $G$ acting on the uniform distribution returns the uniform distribution $ G e^{\otimes t} = e^{\otimes t}$, and a permutation acting on a basis vector returns another basis vector $ Gx_1^N = y_1^N$. Once these circuit identities are applied to the evaluation of \eq{Wi} in the specific case of polar codes, it was shown in \cite{ferris_branching_2014,ferris_convolutional_2017} that the resulting tensor network is a tree, so it can be efficiently evaluated. Convolutional polar codes were introduced based on the observation that \eq{Wi} produces a tensor network of constant tree-width despite not being a tree (see \fig{causal_width}), an observation first made in the context of quantum many-body physics \cite{evenbly_class_2014}, so they can also be decoded efficiently. \begin{figure}[!t] \centering \subfloat[]{ \centering \includegraphics[width=1.57in]{tn_simp_e.pdf} \label{fig:tn_simp_e}} \quad \subfloat[]{ \centering \includegraphics[width=1.57in]{tn_simp_xy.pdf} \label{fig:tn_simp_xy}} \caption{Circuit identities. \textbf{(a)} Any permutation acting on the uniform distribution returns the uniform distribution. \textbf{(b)} Any contraction of a permutation and basis vector $x_1^t$ gives another basis vector $y_1^t$.} \label{fig:tn_simp} \end{figure} \section{Polar code generalizations} \label{sec:generalizations} In this section, two possible generalizations of polar codes are described and their decoding complexity is analyzed. \subsection{Breadth} \label{sec:kernels} Channel polarization can be achieved using various kernels. In fact, as long as a kernel is not a permutation matrix on $\mathbb F_2^b$, it achieves a non-trivial polarization transform \cite{korada_polar_2010-1}. The CNOT gate is one such example that acts on two bits. However, a general kernel of breadth $b$ can act on $b$ bits (see \fig{subfig_circ_3_1} for an illustration with $b=3$). An increasing breadth can produce faster polarization, i.e., a decoding error probability which decreases faster with the number of polarization steps. Indeed, in the asymptotic regime, Arikan \cite{arikan_channel_2009} showed that provided the code rate is below the symmetric channel capacity and that the locations of the frozen bits are chosen optimally, the asymptotic decoding error probability of the polar code under successive cancellation decoding is $\mathbb P_e \in \mathcal O\qty(2^{-N^{1/2}})$. A different error scaling exponent $\mathbb P_e \in \mathcal O\qty(2^{-N^{\beta}})$ can be achieved from a broader kernel, but breadth 16 is required to asymptotically surpass $\beta=\frac 12$ \cite{korada_polar_2010-1}. Such a broad polarization kernel has the drawback of a substantially increased decoding complexity. Arikan \cite{arikan_channel_2009} showed that the decoding complexity of polar codes is $\mathcal O\qty(N \log_2 N)$.
From a tensor network perspective, this complexity can be understood \cite{ferris_convolutional_2017} by counting the number of elementary contractions required to evaluate \eq{Wi} and by noting that the tensor networks corresponding to \eq{Wi} for $u_i$ and for $u_{i+1}$ differ only on a fraction $1/\log_2 N$ of locations, so most intermediate calculations can be recycled and incur no additional complexity. As discussed previously, a breadth-$b$ polarization kernel can also be represented as a $2^b\times 2^b$ permutation matrix that acts on $\mathbb R^{2^b}$. Applying such a matrix to a $b$-bit probability distribution has complexity $2^b$, and this dominates the complexity of each elementary tensor operation of the successive cancellation decoding algorithm. On the other hand, the total number of bits after $l$ polarization steps with a breadth-$b$ polarization kernel is $N = b^l$, so the overall decoding complexity in this setting is $\mathcal O\qty(2^b N \log_b N)$. \subsection{Depth} \label{sec:conv} The previous section described a natural generalization of polar codes which uses a broader polarization kernel. A further generalization, first explored in \cite{ferris_branching_2014,ferris_convolutional_2017}, is to use a polarization step whose circuit is composed of $b$-local gates and has depth $d>1$ (see \fig{subfig_circ_2_2}), which results in a convolutional transformation. A $\text{CP}_{b,d}$ code, that is, a convolutional polar code with kernel breadth $b$ and circuit depth $d$, is defined similarly to a polar code with a kernel of size $b$, where each polarization step is replaced by a stack of $d$ polarization layers, each shifted relative to the previous layer. \fig{subfig_circ_2_2} and \fig{subfig_circ_3_4} illustrate two realizations of convolutional polar codes. To analyze the decoding complexity, it is useful to introduce the concept of a causal cone. Given a circuit and a $w$-bit input sequence $u_i^{i+w-1}$, the associated causal cone is defined as the set of gates together with the set of edges of this circuit whose bit value depends on the value of $u_i^{i+w-1}$. Figure \ref{fig:causal_width} illustrates the causal cone of the sequence $u_{11}^{13}$ for the code $\text{CP}_{2,2}$. Given a convolutional code's breadth $b$ and depth $d$, define $m(d,b,w)$ to be the maximum number of gates in the causal cone of any $w$-bit input sequence of a single polarization step. Because a single convolutional step consists of $d$ layers, define $m_s(d,b,w)$ as the number of those gates in the causal cone which are in the $s$-th layer (counting from the top) of the convolution. For the first layer, we have $m_1(d,b,w) = \lceil\frac{w-1}b\rceil +1$. This number can at most increase by one for each layer, i.e., $m_{s+1}(d,b,w) \leq m_{s}(d,b,w)+1$, leading to a total number of gates in the causal cone of a single polarization step \begin{align} m(d,b,w) &= \sum_{s=1}^d m_s(d,b,w) \leq dm_1(d,b,w)+\frac{d(d-1)}2 \nonumber \\ &= d\left\lceil\frac{w-1}b\right\rceil +\frac{d(d+1)}2. \label{eq:max_gates} \end{align} Similarly, define the optimal decoding width $w^*(b,d)$ as the smallest value of $w$ for which the causal cone of any $w$-bit sequence after one step of polarization contains at most $bw$ output bits. Figure \ref{fig:causal_width} illustrates that $w^* = 3$ for a $\text{CP}_{2,2}$ code since any 3 consecutive input bits affect at most 6 consecutive bits after one polarization step.
Choosing a decoding width $w^*(b,d)$ thus leads to a recursive decoding procedure which is identical at all polarization steps. Since the bottom layer contains $m_d(d,b,w) \leq \lceil\frac{w-1}b\rceil +d$ gates, each acting on $b$ bits, we see that there are at most $b\lceil\frac{w-1}b\rceil +db\leq w+db$ output bits in the causal cone of a single polarization step. The optimal decoding width $w^*$ is chosen such that this number does not exceed $bw^*$, thus \begin{equation} w^*(b,d) \leq \frac b{b-1}d. \label{eq:max_w} \end{equation} Using this optimal value in \eq{max_gates} bounds the number of rank-$b$ tensors that are contracted at each polarization layer, and each contraction has complexity $2^b$. Here again, only a fraction $1/\log_b N$ of these contractions differ at each step of successive cancellation decoding, leading to an overall decoding complexity \begin{align} C_{b,d}(N) = 2^b \frac{m(b,d, w^*)}{w^*} N \log_b N\in \mathcal O(2^b d N \log_b N). \label{eq:complexity} \end{align} Ref. \cite{ferris_convolutional_2017} provides analytical arguments that the resulting convolutional polar codes have a larger asymptotic error exponent $\beta>\frac 12$, and presents numerical results showing clear performance improvement over standard polar codes at finite code lengths. These advantages come at the cost of a small constant increase in decoding complexity. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{causal_width.pdf} \caption{Graphical representation of the causal cone of $u_{11}^{13}$ in the $\text{CP}_{2,2}$ code. Only the gates in the shaded region receive inputs that depend on the sequence $u_{11}^{13}$. Similarly, the edges contained in the shaded region represent bits at intermediate polarization steps whose value depends on the sequence $u_{11}^{13}$. This shows that decoding a $\text{CP}_{2,2}$ code amounts to contracting a constant tree-width graph. The optimal width is $w^* = 3$ and at most $m(2,2,w^*) = 5$ gates are involved per polarization step.} \label{fig:causal_width} \end{figure} \section{Simulation results} \label{sec:results} Numerical simulations were performed to analyze the performance of codes with breadth and depth up to 4. The breadth-2 kernel used was the CNOT, while the breadth-3 and breadth-4 kernels were \begin{align*} &G_3 = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ \end{pmatrix}, &&G_4 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}, \end{align*} where these are given as representations over $\mathbb F_2^b$. It can easily be verified that these transformations are not permutations, so they can in principle be used to polarize \cite{korada_polar_2010-1}. Also, we choose a convolutional structure where each layer of gates is identical but shifted by one bit to the right (from top to bottom), cf. \fig{subfig_circ_3_4}. Many other kernels and convolutional structures have been simulated, but those gave the best empirical results. The encoding circuit $G$ is used to define the code, but the complete definition of a polar code must also specify the set of frozen bits $\mathcal F$, i.e., the set of bits that are fixed to $u_i=0$ at the input of the encoding circuit $G$. In general, for a given encoding circuit $G$, the set $\mathcal F$ will depend on the channel and is chosen to minimize the error probability under successive cancellation decoding. Here, a simplified channel selection procedure, which uses an error detection criterion described in the next section, was used.
All the simulations presented focus on the binary symmetric memoryless channel. \subsection{Error detection} \label{sec:ED} Considering an error detection setting enables an important simplification in which the channel selection and code simulation can be performed simultaneously without sampling. In this setting, it is considered that a transmission error $x_1^N \rightarrow y_1^N = x_1^N + \vb e$ is not detected if there exists a non-frozen bit $u_i$, $i\in \mathcal F^c$, which is flipped while none of the frozen bits to its right $u_j$, $j<i$, $j\in \mathcal F$ have been flipped. In other words, an error is considered not detected if its first error location (starting from the right) occurs on a non-frozen bit. Note that this does not correspond to the usual definition of an undetectable error, which would be conventionally defined as an error which affects no frozen locations. By considering only frozen bits to the right of a given location, the notion used is tailored to the context of a sequential decoder. Empirically, it was observed that this simple notion is a good proxy to compare the performance of different coding schemes under more common settings. Denote $\mathbb P_U(i)$ the probability that the symbol $u_i$ is the first such undetected error. Then, given a frozen bit set $\mathcal F$, the probability of an undetected error is $\mathbb P_U = \sum_{i\in \mathcal F^c} \mathbb P_U(i)$. This can be evaluated efficiently using the representation of the encoding matrix over $\mathbb R^{2^N}$ as above. For $\vb e \in \mathbb F_2^N$, denote $\mathbb P(\vb e)$ the probability of a bit-flip pattern $\vb e$, viewed as a vector on $\mathbb R^{2^N}$. At the output of the symmetric channel with error probability $p$, $\mathbb P^T = (1-p,p)^{\otimes N}$. Then \begin{equation} \mathbb P_U(i) = (1-p,p)^{\otimes N} G \binom 10^{\otimes i-1} \otimes \binom 01\otimes e^{\otimes(N-i)}, \label{eq:detection} \end{equation} where here again $e = \binom 11$. In terms of tensor networks, this corresponds to the evaluation of the network of \fig{subfig_eff_channel} with $u_i = 1$ and $u_j=0$ for all $j<i$. Thus, this can be accomplished with complexity given by \eq{complexity}. Because the evaluation of \eq{detection} is independent of the set of frozen bits, it can be evaluated for all positions $i$, selecting the frozen locations as the $N-K$ locations $i$ with the largest value of $\mathbb P_U(i)$. Then, the total undetected error probability is the sum of the $\mathbb P_U(i)$ over the remaining locations. This is equivalently the sum of the $K$ smallest values of $\mathbb P_U(i)$. \begin{figure*}[!th] \centering \subfloat[]{ \centering \includegraphics[width=2.25in]{results_decoding.pdf} \label{fig:subfig_erasure_all}} \subfloat[]{ \centering \includegraphics[width=2.25in]{results_erasure.pdf} \label{fig:subfig_erasure_complexity}} \subfloat[]{ \centering \includegraphics[width=2.25in]{results_flip.pdf} \label{fig:subfig_flip_complexity}} \quad \caption{Numerical simulation results. \textbf{(a)} Undetected error probability under successive cancellation decoding for polar codes ($d=1$) and convolutional polar codes ($d>1$) for various kernel breadths $b$, plotted as a function of code size $N=b^l$ by varying the number of polarization steps $l$. The channel is BSC($1/4$) and the encoding rate is $1/3$. \textbf{(b)} Same as (a) but plotted as a function of their decoding complexity, cf. \eq{complexity}.
The number of polarization steps $l$ is chosen in such a way that all codes are roughly of equal size $N(b,l)=b^l\approx 10^3$. The dots connected by a line all have the same kernel breadth $b$ but show a progression of depth $d=1,2,3,4$, with $d=1$ appearing on the left and corresponding to regular polar codes. \textbf{(c)} The bit error rate for a BSC($1/20$) with a $1/3$ encoding rate, plotted as a function of the decoding complexity. The depth is specified similarly to (b) by the connected dots. The number of polarization steps is chosen to have roughly $N \approx 250$ bits.} \label{fig:results} \end{figure*} The results are shown on \fig{subfig_erasure_all} for various combinations of kernel breadths $b$ and convolutional depth $d$. The code rate was $\frac 13$, meaning that the plotted quantity is the sum of the $N/3$ smallest values of $\mathbb P_U(i)$. \fig{subfig_erasure_complexity} presents a subset of the same data with parameters $b$ and $l$ resulting in codes of roughly equal size $N = b^l\approx 10^3$. This implies that codes with larger breadth use fewer polarization steps. The undetected error probability $\mathbb P_U$ is then plotted against the decoding complexity, computed from \eq{complexity}. Notice that increasing the depth is a very efficient way of suppressing errors with a modest complexity increase. In contrast, increasing the breadth actually deteriorates the performance of these finite-size codes and increases the decoding complexity. \subsection{Error correction} \label{sec:EC} For the symmetric channel, the frozen bits were chosen using the error detection procedure described in the previous section. This is not optimal, but it is sufficient for the sake of comparing different code constructions. Then, standard Monte Carlo simulations were done by transmitting the all-zero codeword, sampling errors, using successive cancellation decoding, and comparing the decoded message. The results are presented in \fig{subfig_flip_complexity}. The conclusions drawn from the error detection simulations all carry over to this more practically relevant setting. \section{Conclusion} \label{sec:conclusion} We numerically explored a generalization of the polar code family based on a convolutional polarization kernel given by a finite-depth local circuit. On practically relevant code sizes, it was found that these convolutional kernels offer a very interesting error-suppression {\em vs} decoding complexity trade-off compared to previously proposed polar code generalizations using broad kernels. Empirically, no incentive was found to consider increasing both the breadth and the depth: an increasing depth alone offers a greater noise suppression at a comparable complexity increase. It will be interesting to see what further gains can be achieved, for instance, from list decoding of convolutional polar codes. \section*{Acknowledgment} This work was supported by Canada's NSERC and Québec's FRQNT. Computations were done using Compute Canada and Calcul Québec clusters. \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro} First-order methods for minimizing smooth convex functions are a cornerstone of large-scale optimization and machine learning. Given the size and heterogeneity of the data in these applications, there is a particular interest in designing iterative methods that, at each iteration, only optimize over a subset of the decision variables~\cite{wright2015coordinate}. This paper focuses on two classes of methods that constitute important instantiations of this idea. The first class is that of {\it block-coordinate descent methods}, i.e., methods that partition the set of variables into $n \geq 2$ blocks and perform a gradient descent step on a single block at every iteration, while leaving the remaining variable blocks fixed. A paradigmatic example of this approach is the randomized Kaczmarz algorithm of~\cite{SV} for linear systems and its generalization~\cite{nesterov2012efficiency}. The second class is that of {\it alternating minimization methods}, i.e., algorithms that partition the variable set into only $n=2$ blocks and alternate between {\it exactly optimizing} one block or the other at each iteration (see, e.g.,~\cite{beck2015convergence-AM} and references therein). Besides the computational advantage in only having to update a subset of variables at each iteration, methods in these two classes are also able to exploit better the structure of the problem, which, for instance, may be computationally expensive only in a small number of variables. To formalize this statement, assume that the set of variables is partitioned into $n \leq N$ mutually disjoint blocks, where the $i^{\mathrm{th}}$ block of variable $\mathbf{x}$ is denoted by $\mathbf{x}^{i}$, and the gradient corresponding to the $i^{\mathrm{th}}$ block is denoted by $\nabla_i f(\mathbf{x})$. Each block $i$ will be associated with a smoothness parameter $L_i,$ I.e., $\forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^N$: \begin{equation}\label{eq:smoothness-grad} \|\nabla_i f(\mathbf{x} + I_N^i\mathbf{y}) - \nabla_i f(\mathbf{x})\|_* \leq L_i \|\mathbf{y}^i\|, \end{equation} where $I_N^i$ is a diagonal matrix whose diagonal entries equal one for coordinates from block $i$, and are zero otherwise. In this setting, the convergence time of standard randomized block-coordinate descent methods, such as those in~\cite{nesterov2012efficiency}, scales as $O\left(\frac{\sum_i L_i}{\epsilon}\right)$, where $\epsilon$ is the desired additive error. By contrast, when $n=2,$ the convergence time of the alternating minimization method~\cite{beck2015convergence-AM} scales as $O\left(\frac{L_{\min}}{\epsilon}\right),$ where $L_{\min}$ is the minimum smoothness parameter of the two blocks. This means that one of the two blocks can have arbitrarily poor smoothness (including $\infty$), as long it is easy to optimize over it. Some important examples with a nonsmooth block (with smoothness parameter equal to infinity) can be found in~\cite{beck2015convergence-AM}. Additional examples of problems for which exact optimization over the least smooth block can be performed efficiently are provided in Appendix~\ref{app:efficient-iterations}. In this paper, we address the following open question, which was implicitly raised by~\cite{beck2013convergence-BCD}: can we design algorithms that combine the features of randomized block-coordinate descent and alternating minimization? 
In particular, assuming we can perform exact optimization on block $n$, can we construct a block-coordinate descent algorithm whose running time scales with $O(\sum_{i=1}^{n-1} L_i),$ i.e., independently of the smoothness $L_n$ of the $n^{\mathrm{th}}$ block? This would generalize both existing block-coordinate descent methods, by allowing one block to be optimized exactly, and existing alternating minimization methods, by allowing $n$ to be larger than $2$ and requiring exact optimization only on a single block. We answer these questions in the affirmative by presenting a novel algorithm: alternating randomized block coordinate descent (\ref{eq:AR-BCD}). The algorithm alternates between an exact optimization over a fixed, possibly non-smooth block, and a gradient descent or exact optimization over a randomly selected block among the remaining blocks. For two blocks, the method reduces to the standard alternating minimization, while when the non-smooth block is empty (not optimized over), we get randomized block coordinate descent (RCDM) from~\cite{nesterov2012efficiency}. Our second contribution is \ref{eq:AAR-BCD}, an accelerated version of \ref{eq:AR-BCD}, which achieves the accelerated rate of $\frac{1}{k^2}$ without incurring any dependence on the smoothness of block $n$. Furthermore, when the non-smooth block is empty, \ref{eq:AAR-BCD} recovers the fastest known convergence bounds for block-coordinate descent~\cite{qu2016coordinate,allen2016even, nesterov2012efficiency, lin2014accelerated,nesterov-Stich2017efficiency}. Another conceptual contribution is our extension of the approximate duality gap technique of~\cite{thegaptechnique}, which leads to a general and more streamlined analysis. Finally, to illustrate the results, we perform a preliminary experimental evaluation of our methods against existing block-coordinate algorithms and discuss how their performance depends on the smoothness and size of the blocks. \paragraph{Related Work} Alternating minimization and cyclic block coordinate descent are old and fundamental algorithms~\cite{ortega1970iterative} whose convergence (to a stationary point) has been studied even in the non-convex setting, in which they were shown to converge asymptotically under the additional assumptions that the blocks are optimized exactly and their minimizers are unique~\cite{bertsekas1999nonlinear}. However, even in the non-smooth convex case, methods that perform exact minimization over a fixed set of blocks may converge arbitrarily slowly. This has lead scholars to focus on the case of smooth convex minimization, for which nonasymptotic convergence rates were obtained recently in~\cite{beck2013convergence-BCD,beck2015convergence-AM,sun2015improved,saha2013nonasymptotic}. However, prior to our work, convergence bounds that are independent of the largest smoothness parameter were only known for the setting of two blocks. Randomized coordinate descent methods, in which steps over coordinate blocks are taken in a non-cyclic randomized order (i.e., in each iteration one block is sampled with replacement) were originally analyzed in~\cite{nesterov2012efficiency}. The same paper~\cite{nesterov2012efficiency} also provided an accelerated version of these methods. The results of~\cite{nesterov2012efficiency} were subsequently improved and generalized to various other settings (such as, e.g., composite minimization) in \cite{lee2013efficient,allen2016even,nesterov-Stich2017efficiency,richtarik2014iteration,fercoq2015accelerated,lin2014accelerated}. 
The analysis of the different block coordinate descent methods under various sampling probabilities (that, unlike in our setting, are non-zero over all the blocks) was unified in \cite{qu2016coordinate} and extended to a more general class of steps within each block in \cite{ GowerR15, qu2016sdna}. Our results should be carefully compared to a number of proximal block-coordinate methods that rely on different assumptions~\cite{tseng2009coordinate,richtarik2014iteration,lin2014accelerated,fercoq2015accelerated}. In this setting, the function $f$ is assumed to have the structure $f_0(\mathbf{x})+ \Psi(\mathbf{x}),$ where $f_0$ is smooth, the non-smooth convex function $\Psi$ is separable over the blocks, i.e., $\Psi(\mathbf{x}) = \sum_{i=1}^n \Psi_i(\mathbf{x}_i)$, and we can efficiently compute the proximal operator of each $\Psi_i$. This strong assumption allows these methods to make use of the standard proximal optimization framework. By contrast, in our paper, the convex objective can be taken to have an arbitrary form, where the non-smoothness of a block need not be separable, though the function is assumed to be differentiable. \section{Preliminaries}\label{sec:prelims} We assume that we are given oracle access to the gradients of a continuously differentiable convex function $f: \mathbb{R}^N \rightarrow \mathbb{R}$, where computing gradients over only a subset of coordinates is computationally much cheaper than computing the full gradient. We are interested in minimizing $f(\cdot)$ over $\mathbb{R}^N$, and we denote $\mathbf{x}_* = \argmin_{\mathbf{x} \in \mathbb{R}^N}f(\mathbf{x})$. We let $\|\cdot\|$ denote an arbitrary (but fixed) norm, and $\|\cdot\|_*$ denote its dual norm, defined in the standard way: $\|\mathbf{z}\|_* = \sup_{\mathbf{x} \in \mathbb{R}^N: \|\mathbf{x}\|=1}\innp{\mathbf{z}, \mathbf{x}}$.\footnote{Note that the analysis extends in a straightforward way to the case where each block is associated with a different norm (see, e.g.,~\cite{nesterov2012efficiency}); for simplicity of presentation, we take the same norm over all blocks.} Let $I_N$ be the identity matrix of size $N$, $I_N^{i}$ be a diagonal matrix whose diagonal elements $j$ are equal to one if variable $j$ is in the $i^{\mathrm{th}}$ block, and zero otherwise. Notice that $I_N = \mathop{\textstyle\sum}_{i=1}^n I_N^i$. Let $S_{i}(\mathbf{x}) = \{\mathbf{y} \in \mathbb{R}^N : (I_N - I_N^i)\mathbf{y} = (I_N - I_N^i)\mathbf{x}\}$, that is, $S_i$ contains all the points from $\mathbb{R}^N$ whose coordinates differ from those of $\mathbf{x}$ only over block $i$. We denote the smoothness parameter of block $i$ by $L_i$, as defined in Equation~\eqref{eq:smoothness-grad}. Equivalently, $\forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^N$: \begin{equation}\label{eq:smoothness-2nd-order} f(\mathbf{x} + I_N^i \mathbf{y}) \leq f(\mathbf{x}) + \innp{\nabla_i f(\mathbf{x}), \mathbf{y}^i} + \frac{L_i}{2}\|\mathbf{y}^i\|^2. \end{equation} The gradient step over block $i$ is then defined as: \begin{equation}\label{eq:grad-step-i} \begin{aligned} &T_i(\mathbf{x})\\ &\hspace{.2cm}= \argmin_{\mathbf{y} \in {S}_i(\mathbf{x})} \Big\{ \innp{\nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x}} + \frac{L_i}{2}\|\mathbf{y} - \mathbf{x}\|^2 \Big\}. \end{aligned} \end{equation} By standard arguments (see, e.g., Exercise 3.27 in \cite{boyd2004convex}): \begin{equation}\label{eq:gradient-progress} f(T_i(\mathbf{x}))-f(\mathbf{x}) \leq -\frac{1}{2L_i}\|\nabla_i f(\mathbf{x})\|_*^2. 
\end{equation} Without loss of generality, we will assume that the $n^{\mathrm{th}}$ block has the largest smoothness parameter and is possibly non-smooth (i.e., it can be $L_n = \infty$). The standing assumption is that exact minimization over the $n^{\mathrm{th}}$ block is ``easy'', meaning that it is computationally inexpensive and possibly solvable in closed form; for some important examples that have this property, see Appendix~\ref{app:efficient-iterations}. Observe that when block $n$ contains a small number of variables, it is often computationally inexpensive to use second-order optimization methods, such as, e.g., interior point method. We assume that $f(\cdot)$ is strongly convex with parameter $\mu \geq 0$, where it could be $\mu = 0$ (in which case $f(\cdot)$ is not strongly convex). Namely, $\forall \mathbf{x}, \mathbf{y}$: \begin{equation}\label{eq:strong-convexity} f(\mathbf{y}) \geq f(\mathbf{x}) + \innp{\nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x}} + \frac{\mu}{2}\|\mathbf{y} - \mathbf{x}\|^2. \end{equation} When $\mu > 0$, we take $\|\cdot\| = \|\cdot\|_2$, which is customary for smooth and strongly convex minimization~\cite{Bube2014}. Throughout the paper, whenever we take unconditional expectation, it is with respect to all randomness in the algorithm \subsection{Alternating Minimization} In (standard) alternating minimization (AM), there are only two blocks of coordinates, i.e., $n = 2$. The algorithm is defined as follows. \noindent\vsepfbox{\begin{minipage}{.47\textwidth} \begin{equation}\label{eq:alt-min}\tag{AM} \begin{gathered} \mathbf{\hat{x}}_{k} = \argmin_{\mathbf{x} \in S_1(\mathbf{x}_{k-1})} f(\mathbf{x}),\\ \mathbf{x}_{k} = \argmin_{\mathbf{x} \in S_2(\mathbf{\hat{x}}_k)} f(\mathbf{x}),\\ \mathbf{x}_1 \in \mathbb{R}^N \text{ is an arbitrary initial point.} \end{gathered} \end{equation} \end{minipage}} We note that for the standard analysis of alternating minimization~\cite{beck2015convergence-AM}, the exact minimization step over the smoother block can be replaced by a gradient step (Equation~(\ref{eq:grad-step-i})), while still leading to convergence that is only dependent on the smaller smoothness parameter. \subsection{Randomized Block Coordinate (Gradient) Descent} The simplest version of randomized block coordinate (gradient) descent (RCDM) can be stated as~\cite{nesterov2012efficiency}: \noindent\vsepfbox{\begin{minipage}{.47\textwidth} \begin{equation}\label{eq:standard-BCD}\tag{RCDM} \begin{gathered} \text{Select } i_k\in \{1,\dots,n\} \text{ w.p. }p_{i_k} > 0,\\ \mathbf{x}_{k} = T_{i_k}(\mathbf{x}_{k-1}),\\ \mathbf{x}_1\in \mathbb{R}^N \text{ is an arbitrary initial point,} \end{gathered} \end{equation} \end{minipage}} where $\sum_{i=1}^n p_i = 1$. A standard choice of the probability distribution is $p_i \sim L_i$, leading to the convergence rate that depends on the sum of block smoothness parameters. \section{AR-BCD}\label{sec:standard-algo} The basic version of alternating randomized block coordinate descent (AR-BCD) is a direct generalization of (\ref{eq:alt-min}) and (\ref{eq:standard-BCD}): when $n=2$, it is equivalent to (\ref{eq:alt-min}), while when the size of the $n^{\mathrm{th}}$ block is zero, it reduces to (\ref{eq:standard-BCD}). The method is stated as follows: \noindent\vsepfbox{ \begin{minipage}{.47\textwidth} \begin{equation}\label{eq:AR-BCD}\tag{AR-BCD} \begin{gathered} \text{Select } i_k\in \{1,\dots,n-1\} \text{ w.p. 
}p_{i_k} > 0,\\ \mathbf{\hat{x}}_k = T_{i_k}(\mathbf{x}_{k-1}),\\ \mathbf{x}_{k} = \argmin_{\mathbf{x} \in S_n(\mathbf{\hat{x}}_k)}f(\mathbf{x}),\\ \mathbf{x}_1\in \mathbb{R}^N \text{ is an arbitrary initial point,} \end{gathered} \end{equation} \end{minipage}} where $\sum_{i=1}^{n-1} p_i = 1$. We note that nothing will change in the analysis if the step $\mathbf{\hat{x}}_{k} = T_{i_k}(\mathbf{x}_{k-1})$ is replaced by $\mathbf{\hat{x}}_{k} = \argmin_{\mathbf{x} \in S_{i_k}(\mathbf{x}_{k-1})}f(\mathbf{x})$, since $\min_{\mathbf{x} \in S_{i_k}(\mathbf{x}_{k-1})}f(\mathbf{x}) \leq f(T_{i_k}(\mathbf{x}_{k-1}))$. In the rest of the section, we show that (\ref{eq:AR-BCD}) leads to a convergence bound that interpolates between the convergence bounds of (\ref{eq:alt-min}) and (\ref{eq:standard-BCD}): it depends on the sum of the smoothness parameters of the first $n-1$ blocks, while the dependence on the remaining problem parameters is the same for all these methods. \subsection{Approximate Duality Gap} To analyze (\ref{eq:AR-BCD}), we extend the approximate duality gap technique~\cite{thegaptechnique} to the setting of randomized block coordinate descent methods. The approximate duality gap $G_k$ is defined as the difference of an upper bound $U_k$ and a lower bound $L_k$ to the minimum function value $f(\mathbf{x}_*)$. For (\ref{eq:AR-BCD}), we choose the upper bound to simply be $U_k = f(\mathbf{x}_{k+1})$. The generic construction of the lower bound is as follows. Let $\mathbf{x}_1, \mathbf{x}_2,..., \mathbf{x}_k$ be any sequence of points from $\mathbb{R}^N$ (in fact we will choose them to be exactly the sequence constructed by (\ref{eq:AR-BCD})). Then, by (strong) convexity of $f(\cdot)$, $f(\mathbf{x}_*)\geq f(\mathbf{x}_j) + \innp{\nabla f(\mathbf{x}_j), \mathbf{x}_* - \mathbf{x}_j} + \frac{\mu}{2}\|\mathbf{x}_* - \mathbf{x}_j\|^2$, $\forall j \in \{1, \dots, k\}$. In particular, if $a_j > 0$ is a sequence of (deterministic, independent of $i_j$) positive real numbers and $A_k = \sum_{j=1}^k a_j$, then: \begin{align} f(\mathbf{x}_*) \geq & \frac{\mathop{\textstyle\sum}_{j=1}^k a_j f(\mathbf{x}_j) + \mathop{\textstyle\sum}_{j=1}^k a_j \innp{\nabla f(\mathbf{x}_j), \mathbf{x}_* - \mathbf{x}_j}}{A_k}\notag\\ & + \frac{\frac{\mu}{2}\sum_{j=1}^k a_j\|\mathbf{x}_* - \mathbf{x}_j\|^2}{A_k}\stackrel{\mathrm{\scriptscriptstyle def}}{=} L_k. \label{eq:lb-non-acc} \end{align} \subsection{Convergence Analysis} The main idea in the analysis is to show that $\mathbb{E}[A_k G_k - A_{k-1}G_{k-1}] \leq E_k$, for some deterministic $E_k$. Then, using linearity of expectation, $\mathbb{E}[f(\mathbf{x}_{k+1})] - f(\mathbf{x}_*)\leq \mathbb{E}[G_k] \leq \frac{\mathbb{E}[A_1 G_1]}{A_k} + \frac{\sum_{j=2}^k E_j}{A_k}$. The bound in expectation can then be turned into a bound in probability, using well-known concentration bounds. The main observation that allows us not to pay for the non-smooth block is: \begin{observation}\label{obs:grad-ns-zero} For $\mathbf{x}_k$'s constructed by (\ref{eq:AR-BCD}), $\nabla_n f(\mathbf{x}_k) = \mathbf{0}$, $\forall k$, where $\mathbf{0}$ is the vector of all zeros. \end{observation} This observation is essentially what allows us to sample $i_k$ only from the first $n-1$ blocks, and holds due to the step $\mathbf{x}_{k} = \argmin_{\mathbf{x} \in S_n(\mathbf{\hat{x}}_k)}f(\mathbf{x})$ from (\ref{eq:AR-BCD}). 
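To make the iteration concrete, below is a minimal numerical sketch of (\ref{eq:AR-BCD}); it is not part of the paper's analysis or experiments, and it assumes a toy strongly convex quadratic $f(\mathbf{x}) = \frac{1}{2}\mathbf{x}^\top Q\mathbf{x} - \mathbf{b}^\top\mathbf{x}$, the Euclidean norm, three blocks, and the Python library NumPy. The last block plays the role of the $n^{\mathrm{th}}$ block and is handled by exact minimization (a linear solve), while the remaining blocks take gradient steps $T_i$ with $p_i \propto L_i$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy objective f(x) = 0.5 x^T Q x - b^T x with Q positive definite (illustrative choice).
N = 12
M = rng.standard_normal((N, N))
Q = M @ M.T + np.eye(N)
b = rng.standard_normal(N)

blocks = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]   # last block = block n
L = [np.linalg.norm(Q[np.ix_(B, B)], 2) for B in blocks[:-1]]   # block smoothness L_i
p = np.array(L) / sum(L)                                        # sampling probabilities p_i ~ L_i
Bn = blocks[-1]
rest = np.concatenate(blocks[:-1])

x = np.zeros(N)
for k in range(200):
    i = rng.choice(len(blocks) - 1, p=p)       # select i_k among the first n-1 blocks
    B = blocks[i]
    x[B] -= (Q @ x - b)[B] / L[i]              # gradient step T_i on block i
    # exact minimization over block n: solve Q_nn x_n = b_n - Q_{n,rest} x_rest
    x[Bn] = np.linalg.solve(Q[np.ix_(Bn, Bn)], b[Bn] - Q[np.ix_(Bn, rest)] @ x[rest])

x_star = np.linalg.solve(Q, b)
f = lambda z: 0.5 * z @ Q @ z - b @ z
print(f(x) - f(x_star))                        # optimality gap; should be close to zero
\end{verbatim}
For a quadratic, the exact minimization over block $n$ reduces to the linear solve shown above, and it is exactly this step that Observation~\ref{obs:grad-ns-zero} relies on: after it, $\nabla_n f(\mathbf{x}_k) = \mathbf{0}$.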
Denote $R_{\mathbf{x}_*^i} = \max_{\mathbf{x}\in \mathbb{R}^N}\{\|I_N^i(\mathbf{x}_* - \mathbf{x})\|^2: f(\mathbf{x}) \leq f(\mathbf{x}_1)\}$, and let us bound the initial gap $A_1G_1$. \begin{proposition}\label{prop:ARBCD-initial-gap} $\mathbb{E}[A_1G_1] \leq E_1$, where $E_1 = a_1\mathop{\textstyle\sum}_{i=1}^{n-1} \left(\frac{L_i}{2p_i}-\frac{\mu}{2}\right)R_{\mathbf{x}_*^i}$. \end{proposition} \begin{proof} By linearity of expectation, $\mathbb{E}[A_1G_1] = \mathbb{E}[A_1U_1] - \mathbb{E}[A_1L_1].$ The initial lower bound is deterministic, and, by $\nabla_n f(\mathbf{x}_1) = \mathbf{0}$ and duality of norms, is bounded as: \begin{equation*} \begin{aligned} \mathbb{E}[A_1L_1]\geq & a_1f(\mathbf{x}_1) - a_1 \mathop{\textstyle\sum}_{i=1}^{n-1} \|\nabla_i f(\mathbf{x}_1)\|_*\|\mathbf{x}_*^i - \mathbf{x}_1^i\|\\ & + a_1\frac{\mu}{2}\|\mathbf{x}_* - \mathbf{x}_1\|^2. \end{aligned} \end{equation*} Using (\ref{eq:gradient-progress}), if $i_2 = i$, then: \begin{equation*} U_1 = f(\mathbf{x}_2) \leq f(\mathbf{\hat{x}}_2) \leq f(\mathbf{x}_1) - \frac{1}{2L_i}\|\nabla_i f(\mathbf{x}_1)\|_*^2. \end{equation*} Since block $i$ is selected with probability $p_i$ and $A_1 = a_1$: \begin{align*} \mathbb{E}[A_1 U_1] \leq & a_1 f(\mathbf{x}_1) - \mathop{\textstyle\sum}_{i=1}^{n-1} \frac{a_1p_i}{2L_i}\|\nabla_i f(\mathbf{x}_1)\|_*^2. \end{align*} Since the inequality $2ab - a^2 \leq b^2$ holds $\forall a, b$, we have: \begin{equation*} \begin{aligned} & a_1 \|\nabla_i f(\mathbf{x}_1)\|_*\|\mathbf{x}_*^i - \mathbf{x}_1^i\| - \frac{a_1p_i}{2L_i}\|\nabla_i f(\mathbf{x}_1)\|_*^2\\ &\hspace{1cm}\leq \frac{a_1L_i}{2p_i}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2, \; \forall i \in \{1,\dots, n-1\} \end{aligned} \end{equation*} Hence, when $\mu = 0$, $\mathbb{E}[A_1 G_1] \leq \sum_{i=1}^{n-1}\frac{a_1L_i}{2p_i}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2$. When $\mu > 0,$ since in that case we are assuming $\|\cdot\| = \|\cdot\|_2$ (Section~\ref{sec:prelims}), $\|\mathbf{x}_* - \mathbf{x}_1\|^2 \geq \sum_{i=1}^{n-1}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2$, leading to $\mathbb{E}[A_1 G_1] \leq a_1\sum_{i=1}^{n-1}\left(\frac{L_i}{2p_i}- \frac{\mu}{2}\right)\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2$. \end{proof} We now show how to bound the error in the decrease of the scaled gap $A_kG_k$. \begin{lemma}\label{lemma:ARBCD-gap-decrease} $\mathbb{E}[A_kG_k - A_{k-1}G_{k-1}] \leq E_k$, where $E_k = a_k\sum_{i=1}^{n-1}\left(\frac{{a_k}L_i}{2A_kp_i}-\frac{\mu}{2}\right)R_{\mathbf{x}_*^i}$. \end{lemma} \begin{proof} Let $\mathcal{F}_k$ denote the natural filtration up to iteration $k$. By linearity of expectation and $A_{k}L_k - A_{k-1}L_{k-1}$ being measurable w.r.t. $\mathcal{F}_k$, \begin{align*} &\mathbb{E}[A_kG_k - A_{k-1}G_{k-1}|\mathcal{F}_{k}]\\ &\hspace{.2cm}= \mathbb{E}[A_kU_k - A_{k-1}U_{k-1}|\mathcal{F}_{k}] - (A_k L_k - A_{k-1}L_{k-1}). \end{align*} With probability $p_i$ and as $f(\mathbf{x}_{k+1}) \leq f(\mathbf{\hat{x}}_{k+1})$, the change in the upper bound is: \begin{align} A_k U_k - A_{k-1}U_{k-1} \leq & A_k f(\mathbf{\hat{x}}_{k+1}) - A_{k-1}f(\mathbf{x}_k)\notag\\ \leq & a_k f(\mathbf{x}_k) - \frac{A_k}{2L_i}\|\nabla_i f(\mathbf{x}_k)\|_*^2\notag, \end{align} where the second line follows from $\mathbf{\hat{x}}_{k+1} = T_{i_k}(\mathbf{x}_k)$ and Equation (\ref{eq:gradient-progress}). Hence: \begin{align*} &\mathbb{E}[A_k U_k - A_{k-1}U_{k-1}|\mathcal{F}_{k}]\\ &\hspace{1cm}\leq a_k f(\mathbf{x}_k) - A_k\mathop{\textstyle\sum}_{i=1}^{n-1}\frac{p_i}{2L_i}\|\nabla_i f(\mathbf{x}_k)\|_*^2.
\end{align*}
On the other hand, using the duality of norms, the change in the lower bound is:
\begin{align*}
&A_kL_k - A_{k-1}L_{k-1}\\
&\hspace{1cm}\geq a_k f(\mathbf{x}_k) - a_k \mathop{\textstyle\sum}_{i=1}^{n-1} \|\nabla_i f(\mathbf{x}_k)\|_*\|\mathbf{x}_*^i - \mathbf{x}_k^i\|\\
&\hspace{1.4cm}+ a_k\frac{\mu}{2}\|\mathbf{x}_* - \mathbf{x}_k\|^2\\
&\hspace{1cm}\geq a_k f(\mathbf{x}_k) - a_k \mathop{\textstyle\sum}_{i=1}^{n-1} \|\nabla_i f(\mathbf{x}_k)\|_* \sqrt{R_{\mathbf{x}_*^i}}\\
&\hspace{1.4cm}+ a_k\frac{\mu}{2}\|\mathbf{x}_* - \mathbf{x}_k\|^2.
\end{align*}
By the same argument as in the proof of Proposition~\ref{prop:ARBCD-initial-gap}, it follows that: $\mathbb{E}[A_k G_k - A_{k-1}G_{k-1}|\mathcal{F}_k] \leq a_k\sum_{i=1}^{n-1}\left(\frac{L_i{a_k}}{2A_kp_i}-\frac{\mu}{2}\right)R_{\mathbf{x}_*^i} = E_k$. Taking expectations on both sides, as $E_k$ is deterministic, the proof follows.
\end{proof}
We are now ready to prove the convergence bound for (\ref{eq:AR-BCD}), as follows.
\begin{theorem}\label{thm:ARBCD-convergence}
Let $\mathbf{x}_k$ evolve according to (\ref{eq:AR-BCD}). Then, $\forall k \geq 1$:
\begin{enumerate}
\item If $\mu = 0:$ $ \mathbb{E}[f(\mathbf{x}_{k+1})] - f(\mathbf{x}_*) \leq \frac{2\mathop{\textstyle\sum}_{i=1}^{n-1}\frac{L_i}{p_i}R_{\mathbf{x}_*^i}}{k+3}. $ In particular, for $p_i = \frac{L_i}{\sum_{i' = 1}^{n-1} L_{i'}},$ $1\leq i \leq n-1$:
$$ \mathbb{E}[f(\mathbf{x}_{k+1})] - f(\mathbf{x}_*) \leq \frac{2(\sum_{i'=1}^{n-1}L_{i'}) \mathop{\textstyle\sum}_{i=1}^{n-1}R_{\mathbf{x}_*^i}}{k+3}. $$
Similarly, for $p_i = \frac{1}{n-1}$, $1\leq i\leq n-1:$
$$ \mathbb{E}[f(\mathbf{x}_{k+1})] - f(\mathbf{x}_*) \leq \frac{2(n-1)\mathop{\textstyle\sum}_{i=1}^{n-1}{L_i}R_{\mathbf{x}_*^i}}{k+3}. $$
\item If $\mu > 0$, $p_i = \frac{L_i}{\sum_{i' = 1}^{n-1}L_{i'}}$ and $\|\cdot\| = \|\cdot\|_2:$
\begin{align*}
&\mathbb{E}[f(\mathbf{x}_{k+1})] - f(\mathbf{x}_*)\\
&\hspace{1cm}\leq \Big(1 - \frac{\mu}{\sum_{i'=1}^{n-1}L_{i'}}\Big)^k\\
& \hspace{1.5cm}\cdot \frac{(\sum_{i'=1}^{n-1}L_{i'})\|(I_N - I_N^n)(\mathbf{x}_* - \mathbf{x}_1)\|^2}{2}.
\end{align*}
\end{enumerate}
\end{theorem}
\begin{proof}
From Proposition~\ref{prop:ARBCD-initial-gap} and Lemma~\ref{lemma:ARBCD-gap-decrease}, by linearity of expectation and the definition of $G_k$:
\begin{equation}
\mathbb{E}[f(\mathbf{x}_{k+1})] - f(\mathbf{x}_*) \leq \mathbb{E}[G_k] \leq \frac{\sum_{j=1}^k E_j}{A_k},
\end{equation}
where $E_j = \frac{{a_j}^2}{A_j}\sum_{i=1}^{n-1}\frac{L_i}{2p_i}R_{\mathbf{x}_*^i}$. Notice that the algorithm does not depend on the sequence $\{a_j\}$ and thus we can choose it arbitrarily. Suppose that $\mu = 0$. Let $a_j = \frac{j+1}{2}$. Then $\frac{{a_j}^2}{A_j} = \frac{(j+1)^2}{j(j+3)}\leq 1$, and thus:
$$ \frac{\sum_{j=1}^k E_j}{A_k} \leq \frac{2\mathop{\textstyle\sum}_{i=1}^{n-1}\frac{L_i}{p_i}R_{\mathbf{x}_*^i}}{k+3}, $$
which proves the first part of the theorem, up to concrete choices of $p_i$'s, which follow by simple computations. For the second part of the theorem, as $\mu > 0$, we are assuming that $\|\cdot\| = \|\cdot\|_2$, as discussed in Section~\ref{sec:prelims}. From Lemma~\ref{lemma:ARBCD-gap-decrease}, $E_j = a_j \sum_{i=1}^{n-1} \left(\frac{a_j L_i}{2A_jp_i} - \frac{\mu}{2}\right)R_{\mathbf{x}_*^i}$, $\forall j \geq 2$. As $p_i = \frac{L_i}{\sum_{i'=1}^{n-1}L_{i'}}$, if we take $\frac{a_j}{A_j} = \frac{\mu}{\sum_{i'=1}^{n-1}L_{i'}}$, it follows that $E_j = 0$, $\forall j \geq 2$. Let $a_1 = A_1 = 1$ and $\frac{a_j}{A_j} = \frac{\mu}{\sum_{i'=1}^{n-1}L_{i'}}$ for $j \geq 2$.
Then: $\mathbb{E}[f(\mathbf{x}_{k+1})] - f(\mathbf{x}_*) \leq \mathbb{E}[G_k] \leq \frac{\mathbb{E}[A_1 G_1]}{A_k}.$ As $\frac{A_1}{A_k} = \frac{A_1}{A_2}\cdot \frac{A_2}{A_3}\cdot \dots \cdot \frac{A_{k-1}}{A_k}$ and $\frac{A_{j-1}}{A_j} = 1 - \frac{a_j}{A_j}$: $\mathbb{E}[f(\mathbf{x}_{k+1})] - f(\mathbf{x}_*) \leq \Big(1 - \frac{\mu}{\sum_{i'=1}^{n-1}L_{i'}}\Big)^{k-1}\mathbb{E}[G_1].$ It remains to observe that, from Proposition~\ref{prop:ARBCD-initial-gap}, $\mathbb{E}[G_1] \leq \big(1 - \frac{\mu}{\sum_{i'=1}^{n-1}L_{i'}}\big)\frac{(\sum_{i'=1}^{n-1}L_{i'})\|(I_N - I_N^n)(\mathbf{x}_* - \mathbf{x}_1)\|^2}{2}$.
\end{proof}
We note that when $n=2$, the asymptotic convergence of~(\ref{eq:AR-BCD}) coincides with the convergence of alternating minimization~\cite{beck2015convergence-AM}. When the $n^{\mathrm{th}}$ block is empty (i.e., when all blocks are sampled with non-zero probability and there is no exact minimization over a least-smooth block), we obtain the convergence bound of the standard randomized coordinate descent method~\cite{nesterov2012efficiency}.
\section{Accelerated AR-BCD}\label{sec:acc-algo}
In this section, we show how to accelerate (\ref{eq:AR-BCD}) when $f(\cdot)$ is smooth. We believe it is possible to obtain similar results in the smooth and strongly convex case, which we defer to a future version of the paper. Denote:
\begin{gather}
\Delta_k = I_N^{i_k} \nabla f(\mathbf{x}_k)/p_{i_k},\label{eq:Delta_k-def}\notag\\
\mathbf{v}_k = \argmin_{\mathbf{u}}\Big\{\mathop{\textstyle\sum}_{j=1}^k a_j \innp{\Delta_j, \mathbf{u} - \mathbf{x}_j} \notag\\
\hspace{3cm} + \mathop{\textstyle\sum}_{i=1}^n \frac{\sigma_i}{2}\|\mathbf{u}^i - \mathbf{x}_1^i\|^2\Big\},\label{eq:arg-min-lb}
\end{gather}
where $\sigma_i > 0$, $\forall i$, will be specified later. Accelerated AR-BCD (AAR-BCD) is defined as follows:
\vsepfbox{
\begin{minipage}{.47\textwidth}
\begin{equation}\label{eq:AAR-BCD}\tag{AAR-BCD}
\begin{gathered}
\text{Select }i_k \text{ from } \{1,\dots, n-1\} \text{ w.p. } p_{i_k},\\
\mathbf{\hat{x}}_k = \frac{A_{k-1}}{A_k}\mathbf{y}_{k-1} + \frac{a_k}{A_k}\mathbf{v}_{k-1},\\
\mathbf{x}_k = \argmin_{\mathbf{x} \in S_n(\mathbf{\hat{x}}_k)}f(\mathbf{x}),\\
\mathbf{y}_k = \mathbf{x}_k + \frac{a_k}{p_{i_k}A_k}I_N^{i_k}(\mathbf{v}_{k}-\mathbf{v}_{k-1}),\\
\mathbf{x}_1 \text{ is an arbitrary initial point,}
\end{gathered}
\end{equation}
\end{minipage}}
where $\sum_{i=1}^{n-1}p_i = 1$, $p_i > 0$, $\forall i \in \{1,\dots, n-1\}$, and $\mathbf{v}_k$ is defined by (\ref{eq:arg-min-lb}). To seed the algorithm, we further assume that $\mathbf{y}_1 = \mathbf{x}_1 + I_N^{i_1}\frac{1}{{p_{i_1}}}(\mathbf{v}_1 - \mathbf{x}_1)$.
\begin{remark}\label{remark:iteration-complexity}
Iteration complexity of (\ref{eq:AAR-BCD}) is dominated by the computation of $\mathbf{\hat{x}}_k,$ which requires updating an entire vector. This type of an update is not unusual for accelerated block coordinate descent methods, and in fact appears in all such methods we are aware of~\cite{nesterov2012efficiency,lee2013efficient,lin2014accelerated,fercoq2015accelerated,allen2016even}. In most cases of practical interest, however, it is possible to implement this step efficiently (using that $\mathbf{v}_k$ changes only over block $i_k$ in iteration $k$). More details are provided in Appendix~\ref{app:efficient-iterations}.
\end{remark}
To analyze the convergence of (\ref{eq:AAR-BCD}), we will need to construct a more sophisticated duality gap than the one used in the previous section; before turning to that analysis, we give a small implementation sketch of the (\ref{eq:AAR-BCD}) iteration.
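The sketch below, again for a quadratic objective and with all identifiers hypothetical, illustrates one way the (\ref{eq:AAR-BCD}) iteration could be implemented. It uses the fact that, for the squared-norm terms in (\ref{eq:arg-min-lb}), the minimizer has the closed form $\mathbf{v}_k^i = \mathbf{x}_1^i - \frac{1}{\sigma_i}\sum_{j=1}^k a_j \Delta_j^i$, so $\mathbf{v}_k$ can be maintained incrementally, and it chooses $a_k$ so that $\frac{{a_k}^2}{A_k}$ stays constant, as in the analysis below.
\begin{lstlisting}[language=Python, caption={Illustrative sketch of the AAR-BCD iteration for a quadratic objective (hypothetical code).}]
import numpy as np

def aar_bcd_quadratic(A, b, blocks, probs, sigmas, num_iters, rng=None):
    """Sketch of AAR-BCD for f(x) = 0.5*x'Ax - b'x.

    blocks[:-1] are sampled with probabilities probs; blocks[-1] is
    minimized exactly; sigmas are the weights from Eq. (arg-min-lb).
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(b)
    Bn = blocks[-1]
    rest = np.setdiff1d(np.arange(N), Bn)
    L = [np.linalg.eigvalsh(A[np.ix_(B, B)]).max() for B in blocks[:-1]]

    def exact_min_last_block(z):
        out = z.copy()
        rhs = b[Bn] - A[np.ix_(Bn, rest)] @ z[rest]
        out[Bn] = np.linalg.solve(A[np.ix_(Bn, Bn)], rhs)
        return out

    x1 = exact_min_last_block(np.zeros(N))     # initial point with grad_n f(x_1) = 0
    # v_k^i = x_1^i - (1/sigma_i) * sum_j a_j Delta_j^i, maintained via the running sum s.
    y, v, s = x1.copy(), x1.copy(), np.zeros(N)
    c = min(sig * p**2 / l for sig, p, l in zip(sigmas, probs, L))  # a_k^2 / A_k = c
    A_prev = 0.0
    for _ in range(num_iters):
        a = 0.5 * (c + np.sqrt(c**2 + 4.0 * c * A_prev))  # solves a^2 = c * (A_prev + a)
        A_cur = A_prev + a
        i = rng.choice(len(blocks) - 1, p=probs)
        Bi = blocks[i]
        x_hat = (A_prev * y + a * v) / A_cur               # extrapolation step
        x = exact_min_last_block(x_hat)                    # exact minimization over block n
        grad_i = A[Bi] @ x - b[Bi]                         # block gradient at x_k
        v_old_Bi = v[Bi].copy()
        s[Bi] += a * grad_i / probs[i]                     # accumulate a_k * Delta_k
        v[Bi] = x1[Bi] - s[Bi] / sigmas[i]
        y = x.copy()
        y[Bi] = x[Bi] + a / (probs[i] * A_cur) * (v[Bi] - v_old_Bi)
        A_prev = A_cur
    return y
\end{lstlisting}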
\subsection{Approximate Duality Gap}
\begin{figure*}[ht!]
\begin{equation}\label{eq:rand-lb}
\Lambda_k = \frac{\sum_{j=1}^k a_j f(\mathbf{x}_j) + \min_{\mathbf{u} \in \mathbb{R}^N}\left\{\sum_{j=1}^k a_j\innp{\Delta_j, \mathbf{u} - \mathbf{x}_j} + \mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{u}^i - \mathbf{x}_1^i\|^2\right\}- \mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2}{A_k}.
\end{equation}
\vspace{-15pt}
\end{figure*}
We define the upper bound to be $U_k = f(\mathbf{y}_k)$. The constructed lower bound $L_k$ from the previous subsection is not directly useful for the analysis of (\ref{eq:AAR-BCD}). Instead, we will construct a random variable $\Lambda_k$, which in expectation is upper bounded by $f(\mathbf{x}_*)$. The general idea, as in the previous subsection, is to show that some notion of approximate duality gap decreases in expectation. Towards constructing $\Lambda_k,$ we first prove the following technical proposition, whose proof is in Appendix~\ref{app:omitted-proofs}.
\begin{proposition}\label{prop:expect-Delta_k}
Let $\mathbf{x}_k$ be as in (\ref{eq:AAR-BCD}). Then:
$$ \mathbb{E}[\mathop{\textstyle\sum}_{j=1}^k a_j \innp{\Delta_j, \mathbf{x}_* - \mathbf{x}_j}] = \mathbb{E}[\mathop{\textstyle\sum}_{j=1}^k a_j \innp{\nabla f(\mathbf{x}_j), \mathbf{x}_* - \mathbf{x}_j}]. $$
\end{proposition}
Define the randomized lower bound as in Eq.~(\ref{eq:rand-lb}), and observe that (\ref{eq:arg-min-lb}) defines $\mathbf{v}_k$ as the argument of the minimum from $\Lambda_k$. The crucial property of $\Lambda_k$ is that it lower bounds $f(\mathbf{x}_*)$ in expectation, as shown in the following lemma.
\begin{lemma}\label{lemma:Lambda_k-is-lb}
Let $\mathbf{x}_k$ be as in (\ref{eq:AAR-BCD}). Then $ f(\mathbf{x}_*) \geq \mathbb{E}[\Lambda_k]. $
\end{lemma}
\begin{proof}
By convexity of $f(\cdot),$ for any sequence $\{\tilde{\mathbf{x}}_j\}$ from $\mathbb{R}^N$, $f(\mathbf{x}_*)\geq \frac{\sum_{j=1}^k a_j (f(\tilde{\mathbf{x}}_j) + \innp{\nabla f(\tilde{\mathbf{x}}_j), \mathbf{x}_* - \tilde{\mathbf{x}}_j})}{A_k}$. Since the statement holds for any sequence $\{\tilde{\mathbf{x}}_j\},$ it also holds if $\{\tilde{\mathbf{x}}_j\}$ is selected according to some probability distribution. In particular, for $\{\tilde{\mathbf{x}}_j\} = \{\mathbf{x}_j\}$:
\begin{align*}
f(\mathbf{x}_*)\geq &\mathbb{E}\Big[\frac{\sum_{j=1}^k a_j (f(\mathbf{x}_j) + \innp{\nabla f(\mathbf{x}_j), \mathbf{x}_* - \mathbf{x}_j})}{A_k}\Big].
\end{align*}
By linearity of expectation and Proposition~\ref{prop:expect-Delta_k}:
\begin{equation}\label{eq:lb-in-exp}
\begin{aligned}
f(\mathbf{x}_*)\geq \mathbb{E}\Big[\frac{\sum_{j=1}^k a_j (f(\mathbf{x}_j) + \innp{\Delta_j, \mathbf{x}_* - \mathbf{x}_j})}{A_k} \Big].
\end{aligned}
\end{equation}
Adding and subtracting (deterministic) $\mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2$ to/from (\ref{eq:lb-in-exp}) and using that:
\begin{equation*}
\begin{aligned}
&\mathop{\textstyle\sum}_{j=1}^k a_j \innp{\Delta_j, \mathbf{x}_* - \mathbf{x}_j} + \mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2\\
&\hspace{.5cm}\geq \min_{\mathbf{u}}\Big\{ \mathop{\textstyle\sum}_{j=1}^k a_j \innp{\Delta_j, \mathbf{u} - \mathbf{x}_j}+\mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{u}^i - \mathbf{x}_1^i\|^2 \Big\}\\
&\hspace{.5cm}= \min_{\mathbf{u}} m_k(\mathbf{u}),
\end{aligned}
\end{equation*}
where $m_k(\mathbf{u}) = \mathop{\textstyle\sum}_{j=1}^k a_j \innp{\Delta_j, \mathbf{u} - \mathbf{x}_j}+ \mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{u}^i - \mathbf{x}_1^i\|^2$, it follows that:
\begin{align*}
&f(\mathbf{x}_*) \geq \mathbb{E}\Big[\frac{\sum_{j=1}^k a_j f(\mathbf{x}_j) - \mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2}{A_k}\\
&\hspace{1.7cm} + \frac{\min_{\mathbf{u} \in \mathbb{R}^N} m_k(\mathbf{u})}{A_k}\Big],
\end{align*}
which is equal to $\mathbb{E}[\Lambda_k],$ and completes the proof.
\end{proof}
As before, define the approximate gap as $\Gamma_k = U_k - \Lambda_k$. Then, we can bound the initial gap as follows.
\begin{proposition}\label{prop:AAR-BCD-init-gap}
If $a_1 = \frac{{a_1}^2}{A_1} \leq \frac{\sigma_i{p_i}^2}{L_i}$, $\forall i\in\{1,\dots, n-1\}$, then $\mathbb{E}[A_1\Gamma_1] \leq \mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2$.
\end{proposition}
\begin{proof}
As $a_1 = A_1$ and $\mathbf{y}_1$ differs from $\mathbf{x}_1$ only over block $i = i_1$, by smoothness of $f(\cdot)$:
\begin{align*}
&A_1 U_1 = A_1f(\mathbf{y}_1)\\
&\leq a_1 f(\mathbf{x}_1) + a_1\innp{\nabla_i f(\mathbf{x}_1), \mathbf{y}_1^i - \mathbf{x}_1^i} + \frac{a_1 L_i}{2}\|\mathbf{y}_1^i - \mathbf{x}_1^i\|^2.
\end{align*}
On the other hand, the initial lower bound is:
\begin{align*}
A_1\Lambda_1 =& a_1 (f(\mathbf{x}_1) + \innp{\Delta_1, \mathbf{v}_1 - \mathbf{x}_1}) \\
&+ \mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{v}_1^i - \mathbf{x}_1^i\|^2 - \mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2.
\end{align*}
Recall that $\mathbf{y}_1^i = \mathbf{x}_1^i + \frac{1}{p_i}(\mathbf{v}_1^i - \mathbf{x}_1^i)$. Using $A_1\Gamma_1 = A_1U_1 - A_1\Lambda_1$ and the bounds on $U_1,\, \Lambda_1$ from the above: $ A_1\Gamma_1 \leq \mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2, $ as $a_1 \leq {p_i}^2\frac{\sigma_i}{L_i}$, and, thus, $\mathbb{E}[A_1\Gamma_1] \leq \mathop{\textstyle\sum}_{i=1}^{n-1} \frac{\sigma_i}{2}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2$.
\end{proof}
The next part of the proof is to show that $A_k\Gamma_k$ is a supermartingale. The proof is provided in Appendix~\ref{app:omitted-proofs}.
\begin{lemma}\label{lemma:AkGammak-supermartingale}
If $\frac{{a_k}^2}{A_k}\leq \frac{{p_i}^2\sigma_i}{L_i}$, $\forall i \in\{1, \dots,n-1\}$, then $\mathbb{E}[A_k\Gamma_k|\mathcal{F}_{k-1}] \leq A_{k-1}\Gamma_{k-1}$.
\end{lemma}
Finally, we bound the convergence of~(\ref{eq:AAR-BCD}).
\begin{theorem}\label{thm:AAR-BCD-convergence}
Let $\mathbf{x}_k,\, \mathbf{y}_k$ evolve according to (\ref{eq:AAR-BCD}), for $\frac{{a_k}^2}{A_k} = \min_{1\leq i\leq n-1}\frac{\sigma_i{p_i}^2}{L_i} = \mathrm{const}$. Then, $\forall k \geq 1$:
$$ \mathbb{E}[f(\mathbf{y}_k)] - f(\mathbf{x}_*) \leq \frac{\sum_{i=1}^{n-1} \sigma_i \|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2}{2A_k}. $$
In particular, if $p_i = \frac{\sqrt{L_i}}{\sum_{i'=1}^{n-1}\sqrt{L_{i'}}}$, $\sigma_i = (\sum_{i'=1}^{n-1}\sqrt{L_{i'}})^2$, and $a_1 = 1$, then:
$$ \mathbb{E}[f(\mathbf{y}_k)] - f(\mathbf{x}_*) \leq \frac{2(\sum_{i'=1}^{n-1}\sqrt{L_{i'}})^2 \sum_{i=1}^{n-1}\|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2}{k(k+3)}. $$
Alternatively, if $p_i = \frac{1}{n-1}$, $\sigma_i = L_i$, and $a_1 = \frac{1}{(n-1)^2}$:
$$ \mathbb{E}[f(\mathbf{y}_k)] - f(\mathbf{x}_*) \leq \frac{2(n-1)^2\sum_{i=1}^{n-1}L_i \|\mathbf{x}_*^i - \mathbf{x}_1^i\|^2}{k(k+3)}. $$
\end{theorem}
\begin{proof}
The first part of the proof follows immediately by applying Proposition~\ref{prop:AAR-BCD-init-gap} and Lemma~\ref{lemma:AkGammak-supermartingale}. The second part follows by plugging in the particular choice of parameters and observing that $a_j$ grows faster than $\frac{j+1}{2}$ in the former, and faster than $\frac{j+1}{2(n-1)^2}$ in the latter case.
\end{proof}
Finally, we make a few remarks regarding Theorem~\ref{thm:AAR-BCD-convergence}. In the setting without a non-smooth block (when the $n^{\mathrm{th}}$ block is empty), (\ref{eq:AAR-BCD}) with sampling probabilities $p_i \sim \sqrt{L_i}$ has the same convergence bound as the NU\_ACDM algorithm~\cite{allen2016even} and the ALPHA algorithm for smooth minimization~\cite{qu2016coordinate}. Further, when the sampling probabilities are uniform, (\ref{eq:AAR-BCD}) converges at the same rate as the ACDM algorithm \cite{nesterov2012efficiency} and the APCG algorithm applied to non-composite functions \cite{lin2014accelerated}.
\section{Numerical Experiments}\label{sec:experiments}
\begin{figure*}[ht!]
\centering \hspace{.3cm} \subfigure[$N/n = 5$]{\includegraphics[width=0.22\textwidth]{smoothness_bf_5.png}\label{fig:smooth-5-blog}}\hspace{.3cm} \subfigure[$N/n = 10$]{\includegraphics[width=0.22\textwidth]{smoothness_bf_10.png}\label{fig:smooth-10-blog}}\hspace{.3cm} \subfigure[$N/n = 20$]{\includegraphics[width=0.22\textwidth]{smoothness_bf_20.png}\label{fig:smooth-20-blog}}\hspace{.3cm} \subfigure[$N/n = 40$]{\includegraphics[width=0.22\textwidth]{smoothness_bf_40.png}\label{fig:smooth-40-blog}}\hspace{-.1cm} \subfigure[$N/n = 5$]{\includegraphics[width=0.24\textwidth]{BCD_grad_med_5_lip.png}\label{fig:lip-5-blog}} \subfigure[$N/n=10$]{\includegraphics[width=0.24\textwidth]{BCD_grad_med_10_lip.png}\label{fig:lip-10-blog}} \subfigure[$N/n=20$]{\includegraphics[width=0.24\textwidth]{BCD_grad_med_20_lip.png}\label{fig:lip-med-20}} \subfigure[$N/n=40$]{\includegraphics[width=0.24\textwidth]{BCD_grad_med_40_lip.png}\label{fig:lip-med-40}} \subfigure[$N/n = 5$]{\includegraphics[width=0.24\textwidth]{Acc_BCD_grad_med_5_lip-min.png}\label{fig:acc-lip-5-blog}} \subfigure[$N/n=10$]{\includegraphics[width=0.24\textwidth]{Acc_BCD_grad_med_10_lip-min.png}\label{fig:acc-lip-10-blog}} \subfigure[$N/n=20$]{\includegraphics[width=0.24\textwidth]{Acc_BCD_grad_med_20_lip-min.png}\label{fig:acc-lip-med-20}} \subfigure[$N/n=40$]{\includegraphics[width=0.24\textwidth]{Acc_BCD_grad_med_40_lip-min.png}\label{fig:acc-lip-med-40}} \caption{Comparison of different block coordinate descent methods: \protect\subref{fig:smooth-5-blog}-\protect\subref{fig:smooth-40-blog} distribution of smoothness parameters over blocks, \protect\subref{fig:lip-5-blog}-\protect\subref{fig:lip-med-40} comparison of non-accelerated methods, and \protect\subref{fig:acc-lip-5-blog}-\protect\subref{fig:acc-lip-med-40} comparison of accelerated methods. Block sizes $N/n$ increase going left to right.} \label{fig:AR-BCD} \end{figure*} To illustrate the results, we solve the least squares problem on the BlogFeedback Data Set~\cite{buza2014feedback} obtained from UCI Machine Learning Repository~\cite{Lichman:2013}. The data set contains 280 attributes and 52,396 data points. The attributes correspond to various metrics of crawled blog posts. The data is labeled, and the labels correspond to the number of comments that were posted within 24 hours from a fixed basetime. The goal of a regression method is to predict the number of comments that a blog post receives. What makes linear regression with least squares on this dataset particularly suitable to our setting is that the smoothness parameters of individual coordinates in the least squares problem take values from a large interval, even when the data matrix $\mathbf{A}$ is scaled by its maximum absolute value (the values are between 0 and $\sim$354).\footnote{We did not compare \ref{eq:AR-BCD} and \ref{eq:AAR-BCD} to other methods on problems with a non-smooth block ($L_n = \infty$), as no other methods have any known theoretical guarantees in such a setting.} The minimum eigenvalue of $\mathbf{A}^T\mathbf{A}$ is zero (i.e., $\mathbf{A}^T\mathbf{A}$ is not a full-rank matrix), and thus the problem is not strongly convex. We partition the data into blocks as follows. We first sort the coordinates by their individual smoothness parameters. Then, we group the first $N/n$ coordinates (from the sorted list of coordinates) into the first block, the second $N/n$ coordinates into the second block, and so on. 
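The partitioning just described can be sketched as follows. For the least squares objective $\frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_2^2$, the smoothness parameter of coordinate $j$ is the squared norm of the $j^{\mathrm{th}}$ column of $\mathbf{A}$, and the smoothness parameter of a block is the largest eigenvalue of the corresponding diagonal block of $\mathbf{A}^\top\mathbf{A}$; the snippet below is only an illustration of the grouping, not the exact code used in our experiments.
\begin{lstlisting}[language=Python, caption={Illustrative grouping of coordinates into blocks by smoothness (hypothetical code).}]
import numpy as np

def partition_blocks_by_smoothness(A_mat, block_size):
    """Sort coordinates of 0.5*||A_mat x - y||^2 by per-coordinate smoothness
    and group them into consecutive blocks of size block_size."""
    L_coord = np.sum(A_mat**2, axis=0)   # smoothness of coordinate j: ||A_mat[:, j]||^2
    order = np.argsort(L_coord)          # coordinates sorted by smoothness
    blocks = [order[i:i + block_size] for i in range(0, len(order), block_size)]
    # Block smoothness: largest eigenvalue of the diagonal block of A_mat^T A_mat.
    L_block = [np.linalg.eigvalsh(A_mat[:, B].T @ A_mat[:, B]).max() for B in blocks]
    return blocks, L_block
\end{lstlisting}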
The chosen block sizes $N/n$ are 5, 10, 20, 40, corresponding to $n=\{56, 28, 14, 7\}$ coordinate blocks, respectively. The distribution of the smoothness parameters over blocks, for all chosen block sizes, is shown in Fig.~\ref{fig:smooth-5-blog}-\ref{fig:smooth-40-blog}. Observe that as the block size increases (going from left to right in Fig.~\ref{fig:smooth-5-blog}-\ref{fig:smooth-40-blog}), the discrepancy between the two largest smoothness parameters increases. In all the comparisons between the different methods, we define an epoch to be equal to $n$ iterations (this would correspond to a single iteration of a full-gradient method). The graphs plot the optimality gap of the methods over epochs, where the optimal objective value $f^*$ is estimated via a higher precision method and denoted by $\hat{f}^*$. All the results are shown for 50 method repetitions, with bold lines representing the median\footnote{We choose to show the median as opposed to the mean, as it is well-known that in the presence of outliers the median is a robust estimator of the true mean~\cite{hampel2011robust}.} optimality gap over those 50 runs. The norm used in all the experiments is $\ell_2$, i.e., $\|\cdot\| = \|\cdot\|_2$. \paragraph{Non-accelerated methods} We first compare \ref{eq:AR-BCD} with a gradient step to \ref{eq:standard-BCD}~\cite{nesterov2012efficiency} and standard cyclic BCD -- C-BCD (see, e.g.,~\cite{beck2013convergence-BCD}). To make the comparison fair, as \ref{eq:AR-BCD} makes two steps per iteration, we slow it down by a factor of two compared to the other methods (i.e., we count one iteration of \ref{eq:AR-BCD} as two). In the comparison, we consider two cases for RCDM and C-BCD: (i) the case in which these two algorithms perform gradient steps on the first $n-1$ blocks and exact minimization on the $n^{\mathrm{th}}$ block (denoted by RCDM and C-BCD in the figure), and (ii) the case in which the algorithms perform gradient steps on all blocks (denoted by RCDM-G and C-BCD-G in the figure). The sampling probabilities for RCDM and AR-BCD are proportional to the block smoothness parameters. The permutation for C-BCD is random, but fixed in each method run. Fig.~\ref{fig:lip-5-blog}-\ref{fig:lip-med-40} shows the comparison of the described non-accelerated algorithms, for block sizes $N/n \in\{5, 10, 20, 40\}$. The first observation to make is that adding exact minimization over the least smooth block speeds up the convergence of both C-BCD and RCDM, suggesting that the existing analysis of these two methods is not tight. Second, \ref{eq:AR-BCD} generally converges to a lower optimality gap. While RCDM makes a large initial progress, it stagnates afterwards due to the highly non-uniform sampling probabilities, whereas \ref{eq:AR-BCD} keeps making progress. \paragraph{Accelerated methods} Finally, we compare \ref{eq:AAR-BCD} to NU\_ACDM~\cite{allen2016even}, APCG~\cite{lin2014accelerated}, and accelerated C-BCD (ABCGD) from~\cite{beck2013convergence-BCD}. As \ref{eq:AAR-BCD} makes three steps per iteration (as opposed to two steps normally taken by other methods), we slow it down by a factor 1.5 (i.e., we count one iteration of \ref{eq:AAR-BCD} as 1.5). We chose the sampling probabilities of NU\_ACDM and \ref{eq:AAR-BCD} to be proportional to $\sqrt{L_i}$, while the sampling probabilities for APCG are uniform\footnote{The theoretical results for APCG were only presented for uniform sampling~\cite{lin2014accelerated}.}. 
As before, each full run of ABCGD is performed on a random but fixed permutation of the blocks. The results are shown in Fig.~\ref{fig:acc-lip-5-blog}-\ref{fig:acc-lip-med-40}. Compared to APCG (and ABCGD), NU\_ACDM and \ref{eq:AAR-BCD} converge much faster, which is expected, as the distribution of the smoothness parameters is highly non-uniform and the methods with non-uniform sampling are theoretically faster by a factor of the order $\sqrt{n}$~\cite{allen2016even}. As the block size is increased (going left to right), the discrepancy between the smoothness parameters of the least smooth block and the remaining blocks increases, and, as expected, \ref{eq:AAR-BCD} exhibits more dramatic improvements compared to the other methods.
\section{Conclusion}\label{sec:conclusion}
We presented a novel block coordinate descent algorithm \ref{eq:AR-BCD} and its accelerated version for smooth minimization \ref{eq:AAR-BCD}. Our work answers the open question of~\cite{beck2013convergence-BCD} whether the convergence of block coordinate descent methods intrinsically depends on the largest smoothness parameter over all the blocks by showing that such a dependence is not necessary, as long as exact minimization over the least smooth block is possible. Before our work, such a result only existed for the setting of two blocks, using the alternating minimization method.
There are several research directions that merit further investigation. For example, we observed empirically that exact optimization over the non-smooth block improves the performance of RCDM and C-BCD, which is not justified by the existing analytical bounds. We expect that in both of these methods the dependence on the least smooth block can be removed, possibly at the cost of a worse dependence on the number of blocks. Further, \ref{eq:AR-BCD} and \ref{eq:AAR-BCD} are mainly useful when the discrepancy between the largest block smoothness parameter and the remaining smoothness parameters is large, while under a uniform distribution of the smoothness parameters they can be slower than other methods by a factor of 1.5--2. It is an interesting question whether there are modifications to \ref{eq:AR-BCD} and \ref{eq:AAR-BCD} that would make them uniformly better than the alternatives.
\section{Acknowledgements}
Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing. It was partially supported by NSF grant \#CCF-1718342 and by the DIMACS/Simons Collaboration on Bridging Continuous and Discrete Optimization through NSF grant \#CCF-1740425.
\clearpage
\section{Background} \label{sec:background} \vspace{0in} Adblocking browsers (e.g. Brave) and adblocking extensions (e.g. AdBlock Plus and uBlock Origin) use crowdsourced filter lists to block ads and trackers. Examples of popular crowdsourced filter lists include EasyList \cite{easylist_web} to block ads, EasyPrivacy \cite{EasyPrivacy} to block trackers, and Anti-Adblock Killer \cite{Anti-Adblock_Killer_Forum} to block anti-adblockers. Filter lists contain HTTP and HTML signatures to block ads and trackers. The crowdsourced, signature-based filter list approach to block ads and trackers has significant benefits. Crowdsourced filter lists are updated by volunteers at an impressive pace \cite{easylistgithubcommits,easylistublockgithubcommits} to cover new ads and trackers as they are encountered and reported by users \cite{abpforumsupport,easylist_forum}. Further, the signatures used by these lists are relatively easy for contributors to understand, broadening the number of possible contributors. However, this crowdsourced, signature-based filter list approach entails significant, and possibly intractable, downsides that suggest the need for new adblocking approaches going forward. The remainder of this section details some of the weaknesses in the current filter list approach, while the following section describes our proposed solution. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{workflow_updated.pdf} \vspace{-0in} \caption{Our proposed approach for classification of \textsf{AD} and \textsf{NON-AD} URLs. We combine information from HTML, HTTP, and JavaScript layers and represent connections among and across these layers in form of a graph. We then extract features from the graph and train a machine learning model to classify \textsf{AD} and \textsf{NON-AD} URLs.} \label{figure:classifier_flow} \vspace{-0in} \end{figure*} \point{Bloat} A growing problem with crowdsourced, signature-based filter lists is that they bloat over time, becoming more difficult to maintain. This is partially a result of the non-expert, non-methodical way of adding signatures to the list. Rules are added far more frequently than rules are removed, resulting in a long tail of stale, low-utility rules. During a crawl of the Alexa top-1K websites, where we visited the home page and three randomly selected child pages, we observed that only 884 of the 32,218 (less than 3\%) HTTP rules in EasyList trigger. Additionally, because these increasingly bloated filter lists are manually maintained, they are slow to catch up when websites introduce changes. Iqbal et al. \cite{Iqbal17AntiABIMC} reported that filter lists sometimes take several months to add rules for blocking new anti-adblockers. \point{Accuracy} Crowdsourced filter lists also suffer in terms of accuracy (both precision and recall). Precision suffers from overly broad rules. This is evidenced by the ever-growing number of exception rules in these filter lists, which are added to undo incorrectly blocked resources from other overly broad rules. As one notable example, 263 exception rules in EasyList exist only to allow resources blocked by other rules \textit{for a single domain}. EasyList contains many such examples of exception rules catering for other overly broad rules. Recall suffers mainly due to the lack of sufficient feedback for less popular websites. Englehardt and Narayanan \cite{Englehardt16MillionSiteMeasurementCCS} reported that filter lists are less effective at blocking obscure trackers. 
\point{Evasion} Crowdsourced filter lists can be easily evaded by publishers and advertisers. Simple randomization of HTTP and HTML information can successfully evade filter lists used by adblockers \cite{Wang16FSCWebRanz}. Some publishers have already begun using these techniques. For example, Facebook recently manipulated HTML element identifiers that were being used by EasyList to get their ads past adblockers \cite{facebookadblocking,facebook_html_rand,facebook_newmanipulation}. Domain generation algorithm (DGA) is another technique that allows publishers and advertisers to evade filter lists \cite{Zaifeng2018DGAunblock,dga_blog}. Third-party services such as Instart Logic provide ad ``unblocking'' services~\cite{instartlogic} that leverage similar HTTP and HTML obfuscation techniques to evade filter lists. In summary, filter lists used by adblockers are inefficient, unreliable, and susceptible to obfuscation by financially motivated adversaries. We believe that adblocking is a valuable security and privacy-enhancing technology that needs to move beyond crowdsourced filter lists to stay viable as the arms race between advertisers and adblockers continues to escalate. \section{Conclusion} \label{sec:conclusions} We presented a graph-based machine learning approach, called \textsc{AdGraph}\xspace, to automatically and effectively block ads and trackers on the web. The key insight behind \textsc{AdGraph}\xspace is to leverage information obtained from multiple layers of the web stack: HTML, HTTP, and JavaScript. With these three ingredients brought together, we showed that we can train supervised machine learning models to automatically block ads and trackers. We found that \textsc{AdGraph}\xspace replicates the behavior of popular crowdsourced filter lists with an 97.7\%\xspace accuracy. In addition, \textsc{AdGraph}\xspace is able to detect a significant number of ads and tracker which are missed by popular crowdsourced filter lists. In summary, \textsc{AdGraph}\xspace represents a significant advancement over unreliable crowdsourced filter lists that are used by state-of-the-art adblocking browsers and extensions. More importantly, as adblockers pose a growing threat to the ad-driven ``free''web, we expect more and more financially motivated publishers and advertisers to employ adversarial obfuscation techniques to evade adblockers. Unfortunately, crowdsourced filter lists used by state-of-the-art adblockers can be easily evaded using simple obfuscation techniques. Therefore, \textsc{AdGraph}\xspace's resistance to adversarial obfuscation attempts by publishers and advertisers represents an important technical advancement in the rapidly escalating adblocking arms race. \subsection{False Positive Analysis} When \textsc{AdGraph}\xspace detects \textsf{NON-AD}\xspace nodes (labeled by crowdsourced filter lists) as \textsf{AD}\xspace nodes, the disagreement is recorded as a false positive. Due to the previously discussed shortcomings of crowdsourced filter lists as ground truth, we manually evaluate a sample of these false positives. Our analysis reveals that in most cases \textsc{AdGraph}\xspace makes the right evaluation, and that the filter lists are incorrect---we refer to these cases as ``false, false-positives.'' Overall, we find that 65.4\% of \textsc{AdGraph}\xspace's false positives are in fact ``false, false-positives.'' Therefore, \textsc{AdGraph}\xspace's precision reported in Section \ref{sec: setup} is a lower bound on actual precision. 
Next, we discuss our methodology to analyze \textsc{AdGraph}\xspace's false positives with a few illustrative examples. For ~22,062\xspace cases, \textsc{AdGraph}\xspace has a disagreement with the ground truth label by crowdsourced filter lists. To analyze these disagreements, we group 22,062\xspace URLs into 3,950\xspace clusters according to their base domain because resources from the same base domain are likely to provide similar functionality. For example, \texttt{evil.com/script1.js} and \texttt{evil.com/script2.js} are more likely to have a similar functionality as compared to \texttt{nice.com/script1.js}. We then manually analyze 1,400\xspace of these~3,950\xspace groups, which comprise of 11,714 unique URLs. Due to a large number of these URLs, we only sample one URL out of each cluster for manual analysis. \subsubsection{Methodology} We manually analyze the disagreements between \textsc{AdGraph}\xspace and the filter lists as follows. \begin{enumerate} \item If the URL contains keywords associated with advertising (e.g. an ad exchange) or tracking (e.g. analytics), we consider \textsc{AdGraph}\xspace's labeling correct. \item If we find that the URL is present in less popular regional filter lists \cite{easylist_variants} or it is mentioned on adblocking forums \cite{abpforumsupport}, we consider \textsc{AdGraph}\xspace's labeling correct. \item If we find ad and tracking related keyword in JavaScript served by URLs, we consider \textsc{AdGraph}\xspace's labeling correct. \item Otherwise, we consider \textsc{AdGraph}\xspace's labeling incorrect. \end{enumerate} \begin{table}[!b] \vspace{-0in} \centering \renewcommand{\tabcolsep}{2pt} \begin{tabular}{l*{4}{c}r} \toprule \bf Functional & \bf Advertising/Tracking & \bf N/A & \bf Unknown\bigstrut[t]\\ \midrule 427 (30.5\%) & 915 (65.4\%) & 23 (1.6\%) & 35 (2.5\%) \\ \bottomrule \end{tabular} \caption{Breakdown of false positive analysis} \label{table:fp_breakdown} \vspace{-0in} \end{table} \subsubsection{Results} Table~\ref{table:fp_breakdown} shows the breakdown of our manual analysis. `N/A' refers to URLs that are inaccessible due to server downtime. `Unknown' refers to URLs that are difficult to analyze due to code obfuscation. `Functional` refers to URLs that are not associated with ads or trackers. Interestingly, 65.4\% of the sampled false positive cases are confirmed to be ``false, false-positives.'' Thus, \textsc{AdGraph}\xspace is able to automatically leverage structural information through machine learning to identify many ads and trackers that are otherwise missed by crowdsourced filter lists. \subsubsection{Case Studies} Below we discuss a few interesting examples of ``false, false-positives.'' We will discuss \textsc{AdGraph}\xspace's actual false positives later in Section \ref{sec: breakage}. \point{yimg.com} We first discuss a case of conversion tracking script in Code \ref{lst:yimg} from \url{yimg.com} that is detected by \textsc{AdGraph}\xspace but missed by crowdsourced filter lists. Besides code analysis, yimg also confirms this tracking module in its official documentation~\cite{yimg}. We find 42 different websites in our sample that load this script. Given the popularity of this unobfuscated tracking script, it is surprising that none of the filter lists block it. This case study further highlights the benefit of \textsc{AdGraph}\xspace's automated detection of ads and trackers. 
\begin{figure}[htbp] \begin{lstlisting}[style=htmlcssjs,caption=yimg.com ads conversion tracker loader script., label={lst:yimg}] function e(e, a) { if (e.google_conversion_language = window.yahoo_conversion_language, e.google_conversion_color = window.yahoo_conversion_color, e.google_conversion_label = window.yahoo_conversion_label, e.google_conversion_value = window.yahoo_conversion_value, e.google_conversion_domain = a, e.google_remarketing_only = !1, "function" == typeof window.google_trackConversion) window.google_trackConversion(e); else { var i = o(a, "conversion_async.js"); n(i, function() { "function" == typeof window.google_trackConversion && window.google_trackConversion(e) }) } } \end{lstlisting} \end{figure} \point{digitru.st} Next, we discuss a script in Code~\ref{lst:digitrust} that claims to provide ``anonymous'' tracking service to publishers \cite{digitrust}. Beyond tracking, we also identify anti-adblocking functionality in the script. We find 10 different websites in our sample that load this script. However, it is not blocked by any of the filter lists. \begin{figure}[htbp] \begin{lstlisting}[style=htmlcssjs,caption=digitru.st tracking script., label={lst:digitrust}] ad blocker: { detection: !1, blockContent: !1, userMessage: "Did you know advertising pays for this brilliant content? Please disable your ad blocker, then press the Reload button below ... and thank you for your visit!", popupFontColor: "#5F615D", popupBackgroundColor: "#FFFFFF", logoSrc: null, logoText: null, pictureSrc: null }, \end{lstlisting} \end{figure} \point{glbimg.com} Next, we discuss a tracking script in Code~\ref{lst:fl_fn} from \url{glbimg.com}. The script sends the recorded page views to the server. We find 5 different websites in our sample that load this script. \begin{figure}[h] \begin{lstlisting}[style=htmlcssjs,caption=glbimg.com tracking script., label={lst:fl_fn}] function b(k) { k = a.call(this, k) || this; void 0 === b.instance && (b.instance = k, f.noAutoStartTracker || k.client.sendPageView(k.makeParams())); return b.instance } \end{lstlisting} \end{figure} \point{intellicast.com} Next, we discuss an ad loading script in Code \ref{lst:intellicast} from \url{intellicast.com}. The script serves as an ``entry point'' to load ads on a web page. We visually compare the web pages with and without the script, and confirm that we are able to block ads that are loaded by the script. The visual comparison can be seen in Figure \ref{figure:intellicast}. \begin{figure}[h] \begin{lstlisting}[style=htmlcssjs,caption=intellicast.com ads loader script., label={lst:intellicast}] createClass(MoneyTreeBase, [{ key: 'getSlots', value: function getSlots() { return new Promise$1(function (resolve) { return gptCmd(function () { return resolve(window.googletag.pubads().getSlots() || []); }); }); } } \end{lstlisting} \end{figure} \begin{figure}[htbp] \vspace{-0in} \centering \hfill \subfigure[Before blocking]{\vtop{\vskip 0pt \hbox{\includegraphics[width=4cm]{intellicast_before.png}}}} \hfill \subfigure[After blocking]{\vtop{\vskip 0pt \hbox{\includegraphics[width=4cm]{intellicast_after.png}}}} \hfill \caption{Effect of blocking ``false, false-positive'' on \url{intellicast.com}.} \label{figure:intellicast} \vspace{-0in} \end{figure} \subsection{Breakage Analysis} \label{sec: breakage} We now analyze the impact of \textsc{AdGraph}\xspace's actual false positives on site breakage. 
To this end, we manually open websites and observe any breakage caused by the blocking of \textsc{AdGraph}\xspace's actual false positives. Specifically, we identify the breakage by analyzing whether the false positives remove resources that are critical for the site's functionality. We manually analyze 528 websites from 427 clusters, which are classified as `Functional' false positives in Table~\ref{table:fp_breakdown}. Note that we could not analyze 183 websites for breakage because of the use of dynamic URLs. Interestingly, we find that \textsc{AdGraph}\xspace's actual false positives do not result in any visible breakage for more than 74\% of the websites. Therefore, we conclude that a majority of \textsc{AdGraph}\xspace's actual false positives in fact do not harm user experience. Next, we discuss our methodology to analyze the breakage caused by \textsc{AdGraph}\xspace's actual false positives. \subsubsection{Methodology} We manually analyze the site breakage as follows. \begin{enumerate} \item We first generate a custom filter list to block \textsc{AdGraph}\xspace's actual false positives. \item We then open the same website on two browser instances, one with adblocker and the other without adblocker. \item We compare the two loaded web pages, perform~3--5 clicks in different regions and scroll up/down~3 times to trigger any potential breakage. \end{enumerate} We infer breakage by looking for visual inconsistencies across the two browser instances. We specify three levels for site breakage: none, minor, and major. These three levels respectively correspond to no visible inconsistencies, minor visible inconsistencies (e.g. few images disappear), and page malfunction (e.g. garbled layout). \subsubsection{Results} Table~\ref{table:ba_breakdown} shows the breakdown of our manual analysis of site breakage. Interestingly, we do not observe any breakage for 74.7\% of the websites. We do observe minor breakage on 19.1\% of the websites and major breakage on 6.1\% of the websites. We argue this is a reasonably small number, showing limited impact of \textsc{AdGraph}\xspace's actual false positives on user experience. In future, we plan to further reduce such breakage by incorporating additional features. \begin{table}[htbp] \vspace{-0in} \centering \begin{tabular}{l*{3}{c}r} \toprule \bf Not visible & \bf Visible: minor & \bf Visible: major\\ \midrule 247 (74.7\%) & 66 (19.1\%) & 21 (6.1\%) \\ \bottomrule \end{tabular} \caption{Breakdown of breakage analysis results.} \label{table:ba_breakdown} \vspace{-0in} \end{table} \subsubsection{Case Studies} Below we discuss a few interesting examples of \textsc{AdGraph}\xspace's actual false positives. We discuss cases for different breakage levels and provide some insights into their causes and impact on user experience. \point{urbandictionary.com} We first discuss \textsc{AdGraph}\xspace's actual false positive on \url{urbandictionary.com} that does not cause any breakage. \url{urbandictionary.com} loads a JavaScript library named \texttt{twemoji} to support emojis \cite{twemoji}. Interestingly, removing this library does not cause any visible breakage on the website. \point{darty.com} \textsc{AdGraph}\xspace blocks a functional script on \url{darty.com} without causing any visible breakage. Our manual analysis shows that it is a helper module to manage browser cookies for shopping cart functionality. We do not find any functional breakage in multiple test runs. 
\point{fool.com} \textsc{AdGraph}\xspace blocks a script called \texttt{FontAwesome} on \url{fool.com} while causing minor breakage. Our manual analysis shows that it helps with display of vector icons and social logos \cite{fontawesome}. As shown in Figure \ref{figure:fool}, blocking the script only changes icons used on the web page while not affecting any major functionality. \begin{figure}[htbp] \vspace{-0in} \centering \hfill \subfigure[Before blocking]{\includegraphics[width=4cm]{fool_button_before.png}} \hfill \subfigure[After blocking]{\includegraphics[width=4cm]{fool_button_after.png}} \hfill \caption{Minor breakage on \url{fool.com}.} \label{figure:fool} \vspace{-0in} \end{figure} \point{meteored.mx} \textsc{AdGraph}\xspace blocks an \texttt{iframe} served by \url{meteored.mx} on \url{noticiaaldia.com} causing minor breakage. We note that it is a weather widget as shown in Figure~\ref{figure:fp_example}. Blocking the \texttt{iframe} only causes the weather widget to disappear. \begin{figure}[htbp] \centering \includegraphics[width=1\columnwidth]{weather_widget.png} \caption{Weather widget iframe from meteored.mx.} \label{figure:fp_example} \end{figure} \point{game8.jp} \textsc{AdGraph}\xspace blocks a script called \texttt{Bootstrap} on \url{game8.jp} causing major breakage. As shown in Figure \ref{figure:game8}, blocking \texttt{Bootstrap} garbles the web page and major portion of the web page goes missing. Fortunately, such a major breakage happens for only 6.1\% of \textsc{AdGraph}\xspace's actual false positives. \begin{figure}[htbp] \vspace{-0in} \centering \hfill \subfigure[Before blocking]{\vtop{ \vskip0pt \hbox{\includegraphics[width=4cm]{fig_ba_ss_normal.png}}}} \hfill \subfigure[After blocking]{\vtop{ \vskip0pt \hbox{\includegraphics[width=4cm]{fig_ba_ss_broken.png}}}} \hfill \caption{Major breakage on \url{game8.jp}.} \label{figure:game8} \vspace{-0in} \end{figure} \subsection{Robustness to Obfuscation} Next, we analyze the robustness of \textsc{AdGraph}\xspace against adversarial obfuscation by publishers and advertisers to get their ads past adblockers. We focus our attention on practical obfuscation techniques that have been recently developed and deployed to bypass filter lists used by adblockers. Wang et al. \cite{Wang16FSCWebRanz} recently proposed to randomize HTTP URLs and HTML element information to this end. Similar obfuscation techniques are being used by ad ``unblocking'' services \cite{adunblock,instartlogic,Zaifeng2018DGAunblock} to bypass filter lists used by adblockers. We implement these obfuscation attacks on Alexa top-10K websites and evaluate the robustness of \textsc{AdGraph}\xspace against them. \subsubsection{HTML Element Obfuscation Attacks} First, we implement an adversarial publisher who can manipulate attributes of HTML elements. HTML element hiding rules in filter lists typically use element \texttt{id} and \texttt{class} attributes. Thus, an adversarial publisher can simply randomize \texttt{id} and \texttt{class} attributes at the server side to bypass these HTML element hiding rules. It is noteworthy, however, that modifications of \texttt{id} and \texttt{class} attributes need to be constrained so as to not impact the appearance and functionality of the website. For example, a script that interacts with an HTML element based on its attribute will not be able to interact if HTML element attributes are randomized. 
To address this issue with obfuscation, Wang et al.~\cite{Wang16FSCWebRanz} demonstrated a workable solution that requires overriding the relevant JavaScript APIs and keeping a map of original attribute values. To evaluate the robustness of \textsc{AdGraph}\xspace against such an adversary who can manipulate attributes of HTML elements, we randomize HTML element \texttt{id} and \texttt{class} attributes of all HTML elements in our constructed graph. As shown in Figure \ref{figure:obfuscation_experiment}, we find that \textsc{AdGraph}\xspace's precision and recall is not impacted at all by randomization of HTML element \texttt{id} and \texttt{class} attributes. \textsc{AdGraph}\xspace is robust to HTML element obfuscation attacks because it mainly relies on structural properties of the constructed graph and not on HTML element attributes. \subsubsection{HTTP URL Obfuscation Attacks} Second, we implement an adversarial publisher who can manipulate HTTP URLs. HTTP URL blocking rules in filter lists typically use domain and query string information. Thus, an adversarial publisher can simply randomize domain and query string information to bypass these HTTP URL blocking rules. While modifications to query string can generally be easily done without impacting the site's appearance and functionality, modifications to domain names of URLs are constrained. For example, a third-party advertiser cannot arbitrarily modify its domain to that of the first-party publisher or other publishers. As another example, due to the non-trivial overheads of domain registration and maintenance, a third-party advertiser can only dynamically select its domain from a pool of a few domains. To evaluate the robustness of \textsc{AdGraph}\xspace against such an adversary, as discussed next, we manipulate information in HTTP URLs in three ways: (1) modify query string, (2) modify domain name, and (3) modify both query string and domain name. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{obfuscation_experiment_diff_complete-eps-converted-to.pdf} \caption{Change in precision and recall of \textsc{AdGraph}\xspace against different obfuscation attacks.} \label{figure:obfuscation_experiment} \end{figure} \point{Query string randomization} We modify query string of a URL by randomly changing its parameter names, parameter values, and the number of parameters. For each URL in our constructed graph, we conduct a randomly selected combination of these modifications. Recall from Section \ref{sec:feature_extraction} that we do use query string information to extract features. For example, we look for specific ad-related keywords. An adversary who randomizes query strings would be able to successfully manipulate ad-related keywords. Figure \ref{figure:obfuscation_experiment} shows that an adversary employing query string randomization has a small impact on precision and recall. Specifically, precision and recall of \textsc{AdGraph}\xspace decrease by~{0.01} and~{0.02}, respectively. \point{Domain name randomization} We modify domain name of a URL by adding a random sub-domain (if not already there), randomly changing an existing sub-domain, and randomly changing the base domain. For each URL in our constructed graph, we conduct a randomly selected combination of these modifications. Note that we only add or modify sub-domains for first-party URLs because changing its base domain would make them third-party URLs. Similarly, we do not change the base domain of a third-party URL to that of the first-party. 
Recall from Section \ref{sec:feature_extraction} that we do use domain information to extract features. For example, we use sub-domain information of HTTP URLs. An adversary who randomizes domain information would be able to successfully manipulate these domain features. Figure \ref{figure:obfuscation_experiment} shows that an adversary employing domain name randomization has a modest impact on precision and recall. Specifically, precision and recall of \textsc{AdGraph}\xspace decrease by~{0.06} and~{0.17}, respectively. \point{Randomization of both query string and domain name} We also jointly modify both query string and domain name for each URL. Figure \ref{figure:obfuscation_experiment} shows that joint randomization of query string and domain name has the most impact on precision and recall of \textsc{AdGraph}\xspace. Specifically, precision and recall of \textsc{AdGraph}\xspace decrease by~{0.07} and~{0.19}, respectively. Overall, we conclude that \textsc{AdGraph}\xspace is fairly robust against adversaries who have the capability of manipulating HTML and HTTP information on web pages. \textsc{AdGraph}\xspace achieves robustness because it does not rely on any singular source of information (e.g., patterns in query string or blacklist of domains). The graph representation of interactions across HTTP, HTML, and JavaScript layers of the web stack means that an adversary would have to manipulate information in all three layers together to bypass \textsc{AdGraph}\xspace. \begin{comment} \subsection{Deployment} Next, we briefly discuss two different deployment strategies for \textsc{AdGraph}\xspace. \point{In-Browser deployment} First, \textsc{AdGraph}\xspace can be deployed in-browser for on-the-fly adblocking. The key challenge for in-browser deployment is efficiency. We current rely on an instrumented version of Chromium for graph construction which has around 3-5\% performance overhead \cite{LiNDSS18JSgraph}. Graph featurization and machine learning classification\footnote{Note that machine learning model training can be done offline.} modules would further add performance overhead. For in-browser deployment of \textsc{AdGraph}\xspace, we would need to optimize graph construction, featurization, and classification modules. \point{Offline deployment} Second, \textsc{AdGraph}\xspace can be deployed to automatically generate filter lists offline. Specifically, as \textsc{AdGraph}\xspace would detect \textsf{AD} URLs on popular websites, we can generalize the URLs to generate HTTP and HTML filtering rules similar to those in popular crowdsourced filter lists. The main drawback of offline deployment is that filter rules can be easily evaded by simple obfuscation of HTTP and HTML information. \end{comment} \begin{comment} We can train a separate model for each individual website. The detected ad URLs with each model can be used to block ads the next time we visit that website. However, this setup will not work for a website that we have not seen before and for dynamic URLs on previously seen websites. To take care of unseen websites we can crawl a large number of websites offline and extract advertising and tracking URLs detected by our classifier. Since publishers use third party advertisers and trackers, by processing a large number of websites offline, we will visit majority of third party advertisers and trackers URLs. We then need to generalize the specific URLs to generalized filtering rules. 
One method for generalization can be automatic generation of regular expressions for detected ad URLs. The process will start by taking a union of ad URLs detected across all websites. The matching will only be on domain name, we will keep variable query strings. After taking the union, we will have distinct domain names and indistinct query strings. For indistinct query strings we need a way to automatically generate regular expressions (preferably some existing tool to generate regex or state machines). Once we have the regular expression to match, we can match it on unseen websites and dynamic URLs on previously seen websites. We can further generic filters based on some clustering (e.g. clustering on graph structure, category of content) \end{comment} \section{Introduction} \label{sec: introduction} \vspace{0in} \point{Background} Adblocking deployment has been steadily increasing over the last several years. According to PageFair, adblockers are used on more than 600 million devices globally as of December 2016~\cite{pagefair_report,pagefair_desktop,pagefair_mobile}. There are several reasons that have led to many users installing adblockers like Adblock Plus and uBlock Origin. First, many websites show flashy and intrusive ads that degrade user experience~\cite{BetterAdsStandardsStudy2016,BASCBA2017}. Second, online advertising has been repeatedly abused by hackers to serve malware (so-called \emph{malvertising}) \cite{sood11malvertising,li2012malvertising,xing15malvertisingwww} and more recently \textit{cryptojacking}, where attackers hijack computers to mine cryptocurrencies \cite{eskandari18cryptojacking}. Third, online behavioral advertising has incentivized a nefarious ecosystem of online trackers and data brokers that infringes on user privacy \cite{anthes15databrokerscacm,Englehardt16MillionSiteMeasurementCCS}. Most adblockers not only block ads, but also block associated malware and trackers. Thus, in addition to instantly improving user experience, adblocking is a valuable security and privacy enhancing technology that protects users against these threats. \point{Motivation} Perhaps unsurprisingly, online publishers and advertisers have started to retaliate against adblockers. First, many publishers deploy anti-adblockers, which detect adblockers and force users to disable their adblockers. This arms race between adblockers and anti-adblockers has been extensively studied \cite{Nithyanand16ArmsRaceFOCI,Mughees17AntiABPETS}. Researchers have proposed approaches to detect anti-adblocking scripts \cite{Iqbal17AntiABIMC,Zhu18AntiABNDSS,grant17futureadblocking}, which can then be blocked by adblockers. Second, some advertisers have started to manipulate the delivery of their ads to bypass filer lists used by adblockers. For example, Facebook recently obfuscated signatures of ad elements that were used by filter lists to block ads. Adblockers, in response, quickly identified new signatures to block Facebook ads. This prompted a few back and forth actions, with Facebook changing their website to remove ad signatures, and adblockers responding with new signatures \cite{facebook_html_rand}. \point{Limitations of Filter Lists} \label{point:filter-list-limitations} While adblockers are able to block Facebook ads (for now), Facebook's whack-a-mole strategy points to two fundamental limitations of adblockers. First, adblockers use manually curated filter lists to block ads and trackers based on informally crowdsourced feedback from the adblocking community. 
This manual process of filter list maintenance is inherently slow and error-prone. When new websites are created, or existing websites make changes, it takes adblocking community some time to catch up by updating the filter lists \cite{abpforumsupport}. This is similar to other areas of system security, such as updating anti-virus signatures \cite{bilge12zeroday,reactive16tr}. Second, rules defined in these filter lists are fairly simple HTTP and HTML signatures that are easy to defeat for financially motivated publishers and advertisers. Researchers have shown that randomization techniques, where publishers constantly mutate their web content (e.g., URLs, element ID, style class name), can easily defeat signatures used by adblockers \cite{Wang16FSCWebRanz}. Thus, publishers can easily, and continuously, engage in the back and forth with adblockers and bypass their filtering rules. It would be prohibitively challenging for the adblocking community to keep up in such an adversarial environment at scale. In fact, a few third-party ad ``unblocking'' services claim to serve unblockable ads using the aforementioned obfuscation techniques \cite{adunblock,instartlogic,Zaifeng2018DGAunblock}. \point{Proposed Approach} In this paper, we aim to address these challenges by developing an automatic and effective adblocking approach. We propose \textsc{AdGraph}\xspace, which alleviates the need for manual filter list curation by using machine learning to automatically identify effective (both accurate and robust) patterns in the page load process to block ads and trackers. Our key idea is to construct a multi-layer graph representation that incorporates fine-grained HTTP, HTML, and JavaScript information during page load. Combining information across the different layers of the web stack allows us to capture tell-tale signs of ads and trackers. We extract a variety of structural (degree and connectivity) and content (domain and keyword) features from the constructed graph to train a supervised machine learning model to detect ads and trackers. \point{Technical Challenges} In order to achieve automatic and effective adblocking, \textsc{AdGraph}\xspace needs to handle two main technical challenges. First, we need to capture fine-grained interactions between DOM elements for tracing the relationships between ads/trackers and the rest of the page content. Specifically, we need to record changes to the DOM, especially in relation to JavaScript code loading and execution. To this end, we use an instrumented version of Chromium web browser to extract HTML, HTTP, and JavaScript information during page load in an efficient manner \cite{LiNDSS18JSgraph}. Second, we need a reliable ground truth for ads and trackers to train an accurate machine learning model. As discussed earlier, the adblocking community relies on filter lists, which are manually curated based on informal crowdsourced feedback. While the collective amount of work put in by volunteers for maintaining these filter lists is impressive \cite{easylistgithubcommits,easylistublockgithubcommits}, filter lists routinely suffer from errors, which include both false negatives: missed ads \emph{and} false positives: site breakage \cite{abpforumsupport}. To this end, we extract a variety of structural and content features to train a supervised machine learning classification algorithm called Random Forest, which enables us to avoid over-fitting on noisy ground truth \cite{zhang2012ensambleml}. 
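As a rough illustration of this last step (and not the exact pipeline used in \textsc{AdGraph}\xspace), once every node of the graph is represented by a numeric feature vector, an off-the-shelf random forest can be trained against the (noisy) filter-list labels, for example with scikit-learn; the file names and feature layout below are placeholders.
\begin{lstlisting}[language=Python, caption={Illustrative random forest training on node features (hypothetical code).}]
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: one row per node, with structural and content features such as
#    [in_degree, out_degree, descendant_count, is_third_party, url_length, ...]
# y: 1 for AD, 0 for NON-AD, taken from crowdsourced filter lists.
X = np.load("node_features.npy")   # placeholder file names
y = np.load("node_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
\end{lstlisting}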
\point{Contributions} We summarize our key contributions as follows. \begin{enumerate}[leftmargin=1.5\parindent] \item We leverage information across multiple layers of the web stack---HTML, HTTP, and JavaScript---to build a fine-grained graph representation. We then extract a variety of structural and content features from this graph representation so that we can use off-the-shelf supervised machine learning algorithms to automatically and effectively block ads and trackers. \item We employ \textsc{AdGraph}\xspace on Alexa top-10K websites. We show that \textsc{AdGraph}\xspace successfully replicates the behavior of existing crowdsourced filter lists. We evaluate the contribution of different feature sets that capture HTTP, HTML, and JavaScript information. We show that jointly using these feature sets helps in achieving 97.7\%\xspace accuracy, 83.0\%\xspace precision, and 81.9\%\xspace recall. The accuracy of \textsc{AdGraph}\xspace also compares favorably to prior machine learning based approaches \cite{Bhagavatula14MLforFilterListsAISec}, which reported an accuracy between 57.5\% and 81.8\%\xspace. \item Our manual analysis reveals that more than 65\% of \textsc{AdGraph}\xspace's reported false positives are in fact false negatives in the crowdsourced filter lists. We further show that about 74\% of the remaining false positives are harmless in that they do not cause any visible site breakage. \item We evaluate the effectiveness of \textsc{AdGraph}\xspace in a variety of adversarial scenarios. Specifically, we test \textsc{AdGraph}\xspace against an adversary who can obfuscate HTTP URLs (domains and query strings) and HTML elements (ID and class based CSS selectors)~\cite{Wang16FSCWebRanz}. While crowdsourced filter lists are completely bypassed by such adversarial obfuscation, we show that \textsc{AdGraph}\xspace is fairly robust against obfuscation attempts---\textsc{AdGraph}\xspace's precision and recall decrease by at most 7\% and 19\% for the most aggressive obfuscation, respectively. \end{enumerate} \vspace{-.05in} \point{Paper organization} The rest of the paper is organized as follows. Section \ref{sec:background} provides some background on adblockers while highlighting some of the key challenges. Section \ref{sec:methodology} describes our approach to graph construction and featurization, in detail. Section \ref{sec:eval} presents experimental evaluation. Section \ref{sec:related} summarizes related work before we conclude in Section \ref{sec:conclusions}. \section{Proposed Approach: \textsc{AdGraph}\xspace} \label{sec:methodology} \vspace{0in} After providing an overview of \textsc{AdGraph}\xspace's architecture, we discuss the construction of multi-layer graph representation that incorporates fine-grained HTTP, HTML, and JavaScript information during page load. We then discuss \textsc{AdGraph}\xspace's featurization approach to train a machine learning model for detecting ads and trackers. \subsection{Overview} Figure \ref{figure:classifier_flow} shows an architectural overview of \textsc{AdGraph}\xspace---a graph-based machine learning approach for detecting ads and trackers on a given web page. \textsc{AdGraph}\xspace uses an instrumented version of Chromium web browser to extract HTML, HTTP, and JavaScript information during page load. The HTML, HTTP, and JavaScript \textit{layers} of the web stack are processed and converted into a graph structure. This graph structure captures both relationships \emph{within} the same layer (e.g. 
parent-child HTML elements) and \emph{across} layers (e.g. which JavaScript code units created which HTML elements) of the web stack. We compute a variety of structural (degree and connectivity) and content (domain and keyword) features, and use these features to classify nodes as either \textsf{AD} or \textsf{NON-AD} using a supervised machine learning model. \subsection{HTML, HTTP, and JavaScript Layers} We use an instrumented version of Chromium web browser to build a graph of the relationships between HTML, HTTP, and JavaScript actions related to a web page \cite{LiNDSS18JSgraph}. \point{HTML} For HTML layer extraction, we inject and execute a script in the website and traverse the complete DOM tree to extract all HTML elements. We start from the root of the DOM tree and traverse it in the breadth-first order. At each level of the DOM tree, we record \texttt{outerHTML} (the serialized HTML fragment, which includes its complete contents) and \texttt{baseURI} (to link elements to where they are loaded from) attributes of all HTML elements. \point{JavaScript} For JavaScript layer extraction, we leverage the logs obtained from the instrumented version of Chromium. Specifically, the instrumented Chromium browser records all interactions between JavaScript snippets and HTML elements. The instrumentation extends Chromium's \textsc{DevTools} to log the addition of new HTML elements, modification to existing HTML elements, and event listeners attached to HTML elements via JavaScript. From these logs, we extract JavaScript snippets, URLs that load the JavaScript snippets, and interactions between JavaScript snippets and HTML elements. \point{HTTP} For HTTP layer extraction, we rely on information already obtained during extraction of HTML and JavaScript layers. More specifically, we use \texttt{baseURI} attribute extracted from the HTML layer and HTTP URLs extracted from the JavaScript layer to construct HTTP layer. We further enrich the HTTP layer by adding HTTP URLs from \texttt{href} and \texttt{src} tags of HTML elements. \begin{figure}[tb] \begin{lstlisting}[style=htmlcssjs,caption=A sample HTML page., label={lst:sample_html}] <html> <head> <script src="thirdparty.com/script1.js" type="text/javascript"></script> <script src="thirdparty1.com/script2.js" type="text/javascript"></script> <script type="text/javascript"> var iframe=document.createElement('iframe'); var html='<img src="adnetwork.com/ads.gif">'; iframe.src='adnetwork.com'; document.body.appendChild(iframe); </script> <link rel="stylesheet" href="../style1.css"> </head> <body> <div class = "class1" id = "id1"> <img id="image1" src="example.com/img.gif" height="10" width="10"> ... </div> <div class = "class1" id = "id2"> <iframe id = "iframe1" src = "adnetwork.com"> ... </iframe> ... <div class = "class2" id = "id3"> <button type="button" onclick="func()"></button> ... <ul> <li>List item.</li> ... </ul> </div> </div> ... </body> </html> \end{lstlisting} \end{figure} \vspace{-0in} \vspace{-0in} \subsection{Graph Construction} We next combine information from HTML, HTTP, and JavaScript layers. To this end, we represent information from different layers in the form of a graph structure. As we elaborate below, each HTML element, HTTP URL, and JavaScript snippet is represented as a node, and relationships among these nodes are represented as edges. By way of illustration, Figure \ref{figure:graph_proto} visualizes the graph for a toy example in Code \ref{lst:sample_html}. 
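For illustration, the following is a minimal sketch of how such a multi-layer graph could be assembled with the Python \texttt{networkx} library for the toy page in Code \ref{lst:sample_html}; the node identifiers and the \texttt{layer}/\texttt{kind} attributes are illustrative assumptions and do not correspond to the node numbering used in Figure \ref{figure:graph_proto}.
\begin{lstlisting}[language=Python]
# Minimal sketch: a typed, multi-layer graph for the toy page.
# Node names and attribute keys ("layer", "kind") are illustrative.
import networkx as nx

G = nx.DiGraph()

# HTTP layer: URLs observed during the page load.
G.add_node("http://example.com/", layer="HTTP", kind="source_url")
G.add_node("http://thirdparty.com/script1.js", layer="HTTP", kind="script_url")
G.add_node("http://adnetwork.com", layer="HTTP", kind="iframe_url")

# HTML layer: elements of the DOM tree.
G.add_node("html", layer="HTML", kind="misc_element")
G.add_node("head", layer="HTML", kind="misc_element")
G.add_node("iframe1", layer="HTML", kind="iframe_element")

# JavaScript layer: inline and referenced snippets.
G.add_node("script1.js#code", layer="JS", kind="reference_snippet")
G.add_node("inline#1", layer="JS", kind="inline_snippet")

# Edges within and across layers.
G.add_edge("http://example.com/", "html", kind="loads_element")         # HTTP -> HTML
G.add_edge("html", "head", kind="parent_child")                         # HTML -> HTML
G.add_edge("head", "http://thirdparty.com/script1.js", kind="includes_script")
G.add_edge("http://thirdparty.com/script1.js", "script1.js#code", kind="loads_script")
G.add_edge("head", "inline#1", kind="includes_script")
G.add_edge("inline#1", "iframe1", kind="script_creates_element")        # JS -> HTML
G.add_edge("iframe1", "http://adnetwork.com", kind="element_requests")  # HTML -> HTTP

print(G.number_of_nodes(), G.number_of_edges())
\end{lstlisting}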
\subsubsection{Graph Nodes} \label{sec:graph_nodes} We categorize nodes in the graph according to their layer information. We have three types of nodes: HTML element, HTTP URL, and JavaScript snippet. For the example graph in Figure~\ref{figure:graph_proto}, HTML nodes are represented by \tikz\draw[MyYellow,fill=MyYellow] (0,0) circle (1.0ex);, HTTP nodes are represented by \tikz\draw[MyGreen,fill=MyGreen] (0,0) circle (1.0ex);, and JavaScript nodes are represented by \tikz\draw[MyBlue,fill=MyBlue] (0,0) circle (1.0ex);. We further categorize HTML, HTTP, and JavaScript nodes to capture more fine-grained information about them. \point{HTML element nodes} HTML element nodes are further categorized as HTML \texttt{iframe} element, HTML image element, HTML style element, and HTML miscellaneous element based on their tag name information. HTML \texttt{iframe} element nodes are HTML element nodes with \texttt{iframe} tag name. HTML image element nodes are HTML element nodes with \texttt{img} tag name. HTML style element nodes are HTML element nodes with \texttt{style} tag name. HTML miscellaneous element nodes are HTML element nodes with tag name other than \texttt{iframe}, \texttt{img}, or \texttt{style}. \point{HTTP URL nodes} HTTP URL nodes are further categorized according to their edges with HTML element nodes and JavaScript snippet nodes. We categorize HTTP URL nodes into HTTP script URL, HTTP source URL, HTTP \texttt{iframe} URL, and HTTP element URL nodes. HTTP script URL nodes are HTTP URL nodes that load JavaScript snippets. HTTP source URL nodes are HTTP URL nodes that load all categories of HTML elements other than \texttt{iframe} HTML elements. HTTP \texttt{iframe} URL nodes are HTTP URL nodes that load \texttt{iframe} HTML elements. HTTP element URL nodes are HTTP URL nodes that are present as \texttt{src} of HTML elements. \point{JavaScript snippet nodes} JavaScript snippet nodes are further categorized according to their scope: inline or referenced. JavaScript inline snippet nodes are JavaScript snippet nodes that are present as inline scripts in the DOM. JavaScript reference snippet nodes are JavaScript snippet nodes that are loaded as reference in the DOM. \begin{figure}[tb] \vspace{-0in} \vspace{-0in} \centering \includegraphics[width=1\columnwidth]{html_graph_updated_1.pdf} \vspace{-0in} \vspace{-0in} \vspace{-0in} \caption{The three-layered graph for Listing~\ref{lst:sample_html}. % The graph shows HTML, HTTP, and JavaScript layers, along with edges, both within and across these layers.} \label{figure:graph_proto} \vspace{-0in} \vspace{-0in} \end{figure} \subsubsection{Graph Edges} We represent relationships among nodes within each layer and across different layers as edges. We categorize edges based on the relationships between pairs of nodes into seven broad categories, which further have subcategories. Figure \ref{figure:graph_proto} shows the categories of edges in the example graph using different colors. \point{Edges from HTTP URL nodes to HTML element nodes} Edges from HTTP URL nodes to HTML element nodes represent the loading of an HTML element from an HTTP URL. These edges capture information about the origin of HTML elements. These edges are further categorized into edges from HTTP \texttt{iframe} URL nodes to HTML \texttt{iframe} element nodes and edges from HTTP source URL nodes to all other categories of HTML element nodes. 
These edges are represented by \tikz\draw[line width=0.50mm, MyBlack] (0,4) -- (0.5,4); in Figure \ref{figure:graph_proto}, which has two such edges: one from node 1 to node 2 and the other from node 9 to node 10. The edge from node 1 to node 2 represents the initial page load and the edge from node 9 to node 10 represents the loading of an \texttt{iframe}. Node 2 is the base HTML element and it corresponds to line 1 of Code \ref{lst:sample_html}. Node 10 is an HTML \texttt{iframe} element node with id \texttt{iframe1} and it corresponds to line 20 of Code \ref{lst:sample_html}. \point{Edges from HTTP script URL nodes to JavaScript reference snippet nodes} Edges from HTTP script URL nodes to JavaScript reference snippet nodes represent the loading of a JavaScript snippet from an HTTP URL. These edges capture information about the origin of JavaScript snippets. These edges are represented by \tikz\draw[line width=0.50mm, MyTeal] (0,4) -- (0.5,4); in Figure \ref{figure:graph_proto}. The edge from node 5 to node 7 in Figure \ref{figure:graph_proto} represents the loading of a third-party script. Node 5 is an HTTP script URL node and node 7 is a JavaScript reference snippet node. Both node 5 and node 7 correspond to the \texttt{script} item on line 3 of Code \ref{lst:sample_html}. \point{Edges from HTML element nodes to HTTP element URL nodes} Edges from HTML element nodes to HTTP element URL nodes represent an HTML element that has an HTTP URL as its source. These edges capture information about the content loaded inside an HTML element. These edges are further categorized into edges from HTML image element nodes to HTTP element URL nodes, edges from HTML style element nodes to HTTP element URL nodes, and edges from HTML miscellaneous element nodes to HTTP element URL nodes. These edges are represented by \tikz\draw[line width=0.50mm, MyPurple] (0,4) -- (0.5,4); in Figure \ref{figure:graph_proto}. The edge from node 12 to node 13 in Figure \ref{figure:graph_proto} represents the loading of an image. Node 12 is an HTML image element node and node 13 is an HTTP element URL node. Both node 12 and node 13 correspond to the \texttt{img} item with id \texttt{image1} on line 16 of Code \ref{lst:sample_html}. \point{Edges from HTML element nodes to HTTP script URL and JavaScript inline snippet nodes} Edges from HTML element nodes to HTTP script URL and JavaScript inline snippet nodes capture the occurrence of JavaScript snippets in the DOM tree. These edges are represented by \tikz\draw[line width=0.50mm, MyIndigo] (0,4) -- (0.5,4); in Figure \ref{figure:graph_proto}. We have two such edges in Figure~\ref{figure:graph_proto}: one from node~4 to node~5 and the other from node~4 to node~6. The edge from node~4 to node~5 represents the occurrence of a script with a third-party reference and the edge from node~4 to node~6 represents the occurrence of an inline script. Node~4 is an HTML element node and it corresponds to the \texttt{head} HTML item on line~2 of Code~\ref{lst:sample_html}. Node~5 is an HTTP script URL node and it corresponds to the \texttt{script} item on line~3 of Code~\ref{lst:sample_html}. Node~6 is a JavaScript inline snippet node and it corresponds to the inline \texttt{script} item on line~5 of Code~\ref{lst:sample_html}. \point{Edges from HTML element nodes to HTTP \texttt{iframe} URL nodes} Edges from HTML element nodes to HTTP \texttt{iframe} URL nodes represent the loading of an HTML \texttt{iframe} element from an HTTP URL.
These edges capture information about the origin of HTML \texttt{iframe} elements and are represented by \tikz\draw[line width=0.50mm, darkgray] (0,4) -- (0.5,4); in Figure~\ref{figure:graph_proto}. The edge from node~9 to node~10 in Figure~\ref{figure:graph_proto} represents the loading of an \texttt{iframe}. Node~9 is an HTML element node and it corresponds to the \texttt{div} HTML item with id \texttt{id2} on line~19 of Code~\ref{lst:sample_html}. Node~10 is an HTTP \texttt{iframe} URL node and it corresponds to the \texttt{iframe} item with id \texttt{iframe1} on line~20 of Code~\ref{lst:sample_html}. \point{Edges from HTML element nodes to HTML element nodes} Edges from HTML element nodes to HTML element nodes capture the hierarchy of HTML elements in the DOM tree. These edges capture the parent-child relationship between parent and child HTML elements. These edges are represented by \tikz\draw[line width=0.50mm, MyGray] (0,4) -- (0.5,4); in Figure~\ref{figure:graph_proto}. A majority of edges in Figure~\ref{figure:graph_proto} are edges from HTML element nodes to HTML element nodes. One such edge from node~3 to node~8 in Figure~\ref{figure:graph_proto} represents the parent-child relationship between two HTML elements. Node~3 is an HTML element node and it corresponds to the \texttt{body} HTML item on line~14 of Code~\ref{lst:sample_html}. Node~8 is an HTML element node and it corresponds to the \texttt{div} HTML item with id \texttt{id2} on line~19 of Code~\ref{lst:sample_html}. \point{Edges from JavaScript source snippet nodes to HTML element nodes} Edges from JavaScript source snippet nodes to HTML element nodes represent the interaction between a script and an HTML element. These edges capture the addition of new HTML elements, modifications to existing HTML elements, and event listeners attached to HTML elements. There can be multiple edges from one JavaScript source snippet node to an HTML element node. These edges are represented by \tikz\draw[line width=0.50mm, MyOrange] (0,4) -- (0.5,4); in Figure~\ref{figure:graph_proto}. The edge from node~6 to node~11 in Figure~\ref{figure:graph_proto} represents the addition of an HTML \texttt{iframe} element with a nested HTML image element. Node~6 is a JavaScript inline snippet node and it corresponds to the inline \texttt{script} item on line~5 of Code~\ref{lst:sample_html}. Node~11 is an HTML \texttt{iframe} element node with id \texttt{iframe1} and it corresponds to line~20 of Code~\ref{lst:sample_html}. \subsection{Feature Extraction} \label{sec:feature_extraction} After constructing the graph, we are set to extract features from it to train a machine learning model for classifying ads and trackers. We extract different structural (degree and connectivity) and content (domain and keyword) features for nodes in the graph. Degree features include attributes such as in-degree, out-degree, number of descendants, and number of node additions/modifications. Connectivity features include a variety of centrality metrics. Domain features capture first versus third party domain/sub-domain information. Keyword features capture the presence of certain ad-related keywords in the query string. Below, we explain each of these features in detail. \subsubsection{Degree Features} Degree features provide information about the number of edges incident on a node. Below we explain specific degree metrics that we extract as features.
\begin{itemize}[leftmargin=1\parindent] \item \textbf{In-Degree:} The in-degree of a node is defined as the number of inward edges incident on the node. % We separately compute in-degree for different edge types based on the type of node they originate from. % We also use aggregate node in-degree. \item \textbf{Out-Degree:} The out-degree of a node is defined as the number of outward edges incident on the node. % We separately compute out-degree for different edge types based on the type of their destination node. % We also use aggregate node out-degree. \item \textbf{Descendants:} The number of descendants of a node is defined as the number of nodes reachable from it. \item \textbf{Addition of nodes:} For a node, we count the number of nodes added by it. % This feature specifically captures the addition of new DOM nodes by a script. \item \textbf{Modification of node attributes:} For a node, we count the number of node attribute modifications made by it. % This feature specifically captures the attribute modifications of DOM nodes by a script. % We consider modifications to existing attributes, addition of new attributes, and removal of existing attributes. \item \textbf{Event listener attachment:} For a node, we count the number of event listeners attached by it. % This feature specifically captures the event listener attachments to DOM nodes by a script. \end{itemize} \subsubsection{Connectivity Features} Connectivity features provide information about the relative importance of a node in the graph. Below we explain specific connectivity metrics that we extract as features. \begin{itemize}[leftmargin=1\parindent] \item \textbf{Katz centrality:} The Katz centrality of a node is a measure of its relative importance. % It is influenced by the degree of the node, the degrees of its neighboring nodes, and the degrees of nodes reachable from its neighboring nodes. \item \textbf{Closeness centrality:} The closeness centrality of a node is defined as the average length of the shortest paths to all nodes reachable from it. \item \textbf{Mean degree connectivity:} The mean degree connectivity of a node is defined as the average of the degrees of all its neighboring nodes. \item \textbf{Eccentricity:} The eccentricity of a node is defined as the maximum distance from that node to any other node in the graph. \end{itemize} \subsubsection{Domain Features} Domain features provide information about the domain-specific properties of a node's associated URLs. Below we explain specific domain properties that we extract as features. \begin{itemize}[leftmargin=1\parindent] \item \textbf{Domain party:} For a node, the domain party feature describes whether the domain of the node URL is first-party or third-party. \item \textbf{Sub-domain:} For a node, the sub-domain feature describes whether the domain of the node URL is a sub-domain of the first-party domain. \item \textbf{Base domain in query string:} For a node, the base domain in query string feature describes whether the node URL has the base domain as a parameter in its query string. \item \textbf{Same base domain and request domain:} For a node, the same base domain and request domain feature describes whether the domain of the node URL is the same as the base domain. \item \textbf{Node category:} For a node, the node category feature describes the HTTP node type of the node URL as described in Section~\ref{sec:graph_nodes}. \end{itemize} \subsubsection{Keyword Features} Keyword features provide information about the use of certain keywords in URLs.
Below we explain specific keyword patterns that we extract as features. \begin{itemize}[leftmargin=1\parindent] \item \textbf{Ad keywords:} For a node, we capture the number of ad-related keywords present in the node URL. % We use keywords such as \texttt{`advertise'}, \texttt{`banner'}, and \texttt{`advert'} because they tend to frequently appear in advertising URLs. % We also capture the number of ad-related keywords that are followed by a special character (e.g., \texttt{`;'} and \texttt{`='}) in the node URL. % This helps us exclude scenarios where ad-related keywords are part of non ad-related text in the URL. \item \textbf{Query string parameters:} For a node, we count the number of semicolon separated query string parameters in the node URL. % We also capture whether query string parameters are followed by a \texttt{`?'} and they are separated by a \texttt{`\&'}. \item \textbf{Ad dimension information in query string:} For a node, we check whether the query string has ad dimensions. % We define a pattern of~2--4 numeric digits followed by the character \texttt{x} and then again followed by~2--4 numeric digits as the presence of ad size in a query string parameter. % We also capture the presence of screen dimension in a query string parameter. % We look for keywords such as \texttt{screenheight}, \texttt{screenwidth}, and \texttt{screendensity}. \end{itemize} \begin{figure}[tb] \centering \vspace{-0in} \includegraphics[width=1\columnwidth]{bbc_sub_graph_2.pdf} \vspace{-0in} \caption{Zoomed-in version of the graph constructed for \url{www.bbc.com}.} \label{figure:examplegraph_sub} \vspace{-0in} \end{figure} \subsection{Feature Analysis} Next we analyze a few of the features that we use to train our machine learning classifier. Consider the graph shown in Figure~\ref{figure:examplegraph_sub} as a reference example during our feature analysis. While nodes in Figure~\ref{figure:examplegraph_sub} follow the color scheme explained in Section \ref{sec:graph_nodes}, ad/tracker nodes (\textsf{AD}) are represented by \tikz\draw[MyMaroon,fill=MyMaroon] (0,0) circle (1.0ex); and non ad/tracker nodes (\textsf{NON-AD}) are represented by \tikz\draw[MyGreen,fill=MyGreen] (0,0) circle (1.0ex);. We explain the ground truth labeling of \textsf{AD} and \textsf{NON-AD} nodes further in the next section. Figure~\ref{figure:closeness} plots the cumulative distribution function (CDF) of closeness centrality. We note that \textsf{AD} nodes tend to have higher closeness centrality values as compared to \textsf{NON-AD} nodes. It means that \textsf{AD} nodes are more well connected than \textsf{NON-AD} nodes. In Figure~\ref{figure:examplegraph_sub}, \textsf{AD} nodes are generally connected to multiple HTML element nodes and JavaScript snippet nodes as compared to \textsf{NON-AD} nodes, which mostly appear as leaf nodes. These additional connections enable extra paths for \textsf{AD} nodes, making them more central than \textsf{NON-AD} nodes. Figure~\ref{figure:eccentricity} plots the CDF of eccentricity of a node. As shown in Figure~\ref{figure:examplegraph_sub}, \textsf{AD} nodes are mostly accompanied by JavaScript snippet nodes that represent analytics scripts. Thus, \textsf{AD} nodes have more paths to reach other nodes in the graph. Analytics scripts usually appear at the start of DOM tree and \textsf{AD} nodes connected to them will have shorter paths (low eccentricity) to other nodes compared to \textsf{NON-AD} nodes that appear as leaf nodes. 
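The structural and keyword features examined in this subsection can be computed directly from the constructed graph. The following is a minimal sketch, assuming a \texttt{networkx}-style directed graph as above; the keyword list and the ad-dimension pattern are illustrative assumptions, not the exact patterns used by \textsc{AdGraph}\xspace.
\begin{lstlisting}[language=Python]
# Sketch of a few structural and keyword features for a node of graph G
# (a networkx.DiGraph). Keyword list and ad-dimension pattern are illustrative.
import re
import networkx as nx

AD_KEYWORDS = ["advertise", "advert", "banner"]   # illustrative keyword list
AD_DIMENSION = re.compile(r"\d{2,4}x\d{2,4}")     # e.g. "728x90"

def structural_features(G, node):
    # Eccentricity assumes the underlying undirected graph is connected.
    undirected = G.to_undirected()
    return {
        "in_degree": G.in_degree(node),
        "out_degree": G.out_degree(node),
        "descendants": len(nx.descendants(G, node)),
        "closeness": nx.closeness_centrality(G, node),
        "eccentricity": nx.eccentricity(undirected, v=node),
    }

def keyword_features(url):
    query = url.split("?", 1)[1] if "?" in url else ""
    return {
        "ad_keyword_count": sum(url.count(k) for k in AD_KEYWORDS),
        "has_ad_dimensions": bool(AD_DIMENSION.search(query)),
        "query_param_count": query.count("&") + 1 if query else 0,
    }
\end{lstlisting}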
Figure~\ref{figure:descendants} plots the CDF of the number of descendants of a node. The number of descendants of a node provides a clear distinction between \textsf{AD} and \textsf{NON-AD} nodes. As we note in Figure~\ref{figure:examplegraph_sub}, most \textsf{NON-AD} nodes appear as leaf nodes (image URLs, anchor URLs) while most \textsf{AD} nodes appear as non-leaf nodes. This behavior is captured by the number of descendants of a node. Figure~\ref{figure:domain_party} plots the distribution of first-party versus third-party URLs for \textsf{AD} and \textsf{NON-AD} nodes. As expected, most ads and trackers are loaded from third-party URLs. \begin{figure}[tb] \centering \subfigure[closeness centrality]{ \includegraphics[width=0.45\columnwidth]{closeness_centrality_complete-eps-converted-to.pdf} \label{figure:closeness} } \subfigure[eccentricity]{ \includegraphics[width=0.45\columnwidth]{eccentricity_complete-eps-converted-to.pdf} \label{figure:eccentricity} } \subfigure[number of descendants]{ \includegraphics[width=0.45\columnwidth]{descendants_complete-eps-converted-to.pdf} \label{figure:descendants} } \subfigure[domain party]{ \includegraphics[width=0.45\columnwidth]{domain_party_complete-eps-converted-to.pdf} \label{figure:domain_party} } \caption{Conditional feature distributions.} \label{figure:feature_distributions} \end{figure} \subsection{Supervised Classification} We use random forest~\cite{Breiman01RandomForest}, a well-known ensemble supervised learning algorithm for classification. Random forest combines decisions from multiple decision trees, each of which is constructed using a different bootstrap sample of the data, by choosing the mode of the predicted class distribution. Each node of a decision tree is split using the best feature among a randomly selected subset of features. This feature selection mechanism is known to provide robustness against over-fitting. We configure random forest as an ensemble of~10 decision trees with each decision tree trained using $\lfloor \log M+1 \rfloor$ features, where $M$ is the total number of features. \section{Related Work} \label{sec:related} \vspace{0in} In this section, we review prior work that uses machine learning to replace or complement crowdsourced filter lists for blocking ads, trackers, and anti-adblockers. \point{HTTP-based approaches} Researchers have previously attempted to identify patterns in HTTP requests using supervised machine learning approaches to aid with filter list maintenance. Bhagavatula et al.~\cite{Bhagavatula14MLforFilterListsAISec} used keywords and query string information in HTTP requests to train supervised machine learning models for classifying ads. Gugelmann et al.~\cite{Gugelmann15ComplementBlacklistPETS} used HTTP request header attributes, such as the number of HTTP requests and the size of the payload, to identify tracking domains. Yu et al.~\cite{Yu16TrackingTheTrackersWWW} also analyzed HTTP requests to detect privacy leaks by trackers in URL query string parameters. While these approaches are somewhat automated, they are not robust to evasion because they rely on simple HTTP-based features that adversaries (publishers or advertisers) can easily evade by manipulating domain or query string information. \point{HTML-based approaches} Researchers have also attempted to identify HTML DOM patterns using computer vision and machine learning techniques to detect ads and anti-adblockers.
For example, Storey et al.~\cite{grant17futureadblocking} proposed a perceptual adblocking approach to detect ads based on their visual properties. Their key insight is that ads need to be distinguishable from organic content as per government regulations~\cite{ftcadrule} and industry self-regulations~\cite{adchoices}. Mughees et al.~\cite{Mughees17AntiABPETS} analyzed patterns in DOM changes to train a machine learning model for detecting anti-adblockers. These approaches are also not robust to evasion due to their reliance on simple HTML-based features. Adversarial publishers and advertisers can easily manipulate HTML element attributes to defeat these approaches. \point{JavaScript-based approaches} Since publishers and advertisers extensively rely on JavaScript to implement advertising and tracking functionalities, researchers have also tried to use JavaScript code analysis for detecting ads and trackers. To this end, one thread of research leverages static analysis of JavaScript code using machine learning techniques. Ikram et al.~\cite{Ikram17SeamlessTrackingPETS} conducted n-gram analysis of JavaScript dependency graphs using one-class support vector machine for detecting trackers. Iqbal et al.~\cite{Iqbal17AntiABIMC} constructed abstract syntax trees of JavaScript code which were then mapped to features for training machine learning models to detect anti-ad blockers. While these static analysis approaches achieve good accuracy, they are not robust against even simple JavaScript code obfuscation techniques~\cite{Tzermias11dynamicstatic,Xu12obfuscation,Xu13jstill}. Researchers have resorted to dynamic analysis techniques to overcome some of these challenges. Wu et al.~\cite{Wu16MLTrackingESORICS} extracted JavaScript API invocations through dynamic analysis and trained a machine learning model to detect trackers. Storey et al.~\cite{grant17futureadblocking} proposed to intercept and modify API calls by JavaScript to bypass anti-adblockers. Zhu et al.~\cite{Zhu18AntiABNDSS} conducted differential JavaScript execution analysis to identify branch statements such as if/else that are triggered by anti-adblocking scripts. While dynamic analysis is more resilient to obfuscation than static analysis, it is possible for websites to conceal and obfuscate JavaScript APIs to reduce the effectiveness of dynamic analysis. \point{Multi-layer approaches} To improve the accuracy and robustness of ad blocking, researchers have looked at integrating multiple information types (e.g., HTML, HTTP, JavaScript). Bau et al.~\cite{Bau13PromisingTrackerW2SP} proposed to leverage both HTML and HTTP information with machine learning to automatically capture relationships among web content to robustly detect evasive trackers. More specifically, the authors constructed the DOM tree, labeled each node with its domain, and then extracted a wide range of graph properties (e.g., depth, degree) for each domain. Kaizer and Gupta~\cite{Kaizer16JSTrackingIWSPA} utilized both JavaScript and HTTP information to train a machine learning model for detecting trackers. More specifically, the authors used JavaScript navigation and screen properties such as \texttt{appName} and \texttt{plugins} and HTTP attributes, such as cookies and URL length to detect tracking URLs. Our proposed approach \textsc{AdGraph}\xspace significantly advances this line of research. 
To the best of our knowledge, \textsc{AdGraph}\xspace represents the first attempt to comprehensively capture interactions between HTML, HTTP, and JavaScript on a web page to detect ads and trackers. As our evaluations have shown, leveraging all of the available information at multiple layers helps \textsc{AdGraph}\xspace in accurately and robustly detecting ads and trackers. \section{Evaluation} \label{sec:eval} \vspace{0in} We first evaluate \textsc{AdGraph}\xspace by measuring how closely its classification matches popular crowdsourced filter lists in blocking ads and trackers. \textsc{AdGraph}\xspace replicates the behavior of crowdsourced filter lists with 97.7\%\xspace accuracy, 83.0\%\xspace precision, and 81.9\%\xspace recall. We next manually analyze disagreements between classifications by \textsc{AdGraph}\xspace and crowdsourced filter lists. We find that more than 65\% of \textsc{AdGraph}\xspace's false positives are in fact false negatives of crowdsourced filter lists. We also evaluate the effectiveness of \textsc{AdGraph}\xspace against adversarial obfuscation that attempts to bypass crowdsourced filter lists. We show that \textsc{AdGraph}\xspace is fairly robust against obfuscation attempts: precision decreases by at most 7\% and recall by at most 19\% for the most aggressive obfuscation. Overall, our experimental evaluation shows that \textsc{AdGraph}\xspace's graph-based machine learning approach outperforms manually curated, crowdsourced filter lists in terms of both accuracy and robustness. \subsection{Experimental Setup} \label{sec: setup} \point{Ground Truth} We use an instrumented version of the Chromium web browser \cite{LiNDSS18JSgraph} with Selenium WebDriver to automatically crawl the home pages of Alexa top-10K websites. We are able to construct graphs incorporating information across HTML, HTTP and JavaScript layers of the web stack for 7,699\xspace websites.\footnote{Note that we are unable to crawl 1,275 websites due to server-side errors (e.g., HTTP error codes 404 and 500) and 1,026 websites due to issues in the instrumented browser, which is a research prototype. Our manual inspection showed nothing specific about the nature or functionality of these sites that would bias our evaluation.} We need a web-scale ``ground truth'' to evaluate the accuracy of \textsc{AdGraph}\xspace in blocking ads and trackers. We use the union of 9 popular crowdsourced filter lists\footnote{We use EasyList \cite{easylist_web}, EasyPrivacy \cite{EasyPrivacy}, Anti-Adblock Killer \cite{killer}, Warning Removal List \cite{warning}, Blockzilla \cite{blockzilla}, Fanboy Annoyances List \cite{fanboy_annoynace}, Fanboy Social List \cite{fanboy_social}, Peter Lowe's list \cite{peter_lowe_list}, and Squid Blacklist \cite{squid_blacklist}.} despite their well-known shortcomings, for two reasons. First, the popularity of these crowdsourced filter lists suggests that they are reasonably accurate even though they are imperfect. Second, a better alternative---building a web-scale, expert-labeled ground truth---would require funding and labor at a scale not available to the research community. Thus, despite their shortcomings, we make the methodological choice to treat these crowdsourced filter lists as ground truth for labeling ads and trackers. Using these crowdsourced filter lists, we label HTTP URL nodes as \textsf{AD}\xspace or \textsf{NON-AD}\xspace.
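A minimal sketch of this labeling step is shown below; it assumes the third-party \texttt{adblockparser} Python package and a tiny illustrative subset of EasyList-style rules rather than the full union of the nine lists.
\begin{lstlisting}[language=Python]
# Sketch of ground-truth labeling of HTTP URL nodes against filter lists.
# Assumes the `adblockparser` package; the two rules are illustrative only.
from adblockparser import AdblockRules

raw_rules = [
    "||doubleclick.net^",   # EasyList-style domain-anchor rule (illustrative)
    "/adframe.",            # EasyList-style substring rule (illustrative)
]
rules = AdblockRules(raw_rules)

def label_http_node(url, is_third_party):
    options = {"third-party": is_third_party, "script": url.endswith(".js")}
    return "AD" if rules.should_block(url, options) else "NON-AD"

print(label_http_node("http://ad.doubleclick.net/adj/banner", True))  # AD
print(label_http_node("http://example.com/img.gif", False))           # NON-AD
\end{lstlisting}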
While we do not label non-HTTP nodes, we do use HTML and JavaScript layers to enhance the constructed graph by capturing fine-grained information flows across HTTP, HTML, and JavaScript layers. Overall, our ground truth labeled data set has 131,917\xspace \textsf{AD}\xspace and 1,906,763\xspace \textsf{NON-AD}\xspace nodes. \point{Cross-Validation} We train a machine learning model to detect \textsf{AD}\xspace and \textsf{NON-AD}\xspace nodes. Specifically, we train a single model on the available labeled data so that it generalizes to unseen websites. We train and test \textsc{AdGraph}\xspace using stratified~10-fold cross-validation. We use~9 folds for training \textsc{AdGraph}\xspace's machine learning model and the leftover fold for testing it on unseen websites. We repeat this process~10 times using different training and test folds. \point{Accuracy Results} \label{point:accuracy-results} We measure the accuracy of \textsc{AdGraph}\xspace in terms of true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP). \begin{itemize}[leftmargin=1\parindent] \item \textbf{TP}: An ad or tracker is correctly labeled \textsf{AD}\xspace. \item \textbf{FN}: An ad or tracker is incorrectly labeled \textsf{NON-AD}\xspace. \item \textbf{TN}: A non-ad or non-tracker is correctly labeled \textsf{NON-AD}\xspace. \item \textbf{FP}: A non-ad or non-tracker is incorrectly labeled \textsf{AD}\xspace. \end{itemize} \begin{figure}[!t] \centering \includegraphics[width=.85\columnwidth]{roc_curve_complete-eps-converted-to.pdf} \caption{ROC curve of \textsc{AdGraph}\xspace for detecting ads and trackers.} \label{figure:roc_curve} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=\textwidth]{results_aggregate_complete-eps-converted-to.pdf} \caption{Precision and recall using different feature categories.} \label{figure:results_aggregate} \end{figure*} Overall, \textsc{AdGraph}\xspace achieves 97.7\%\xspace accuracy, with 83.0\%\xspace precision and 81.9\%\xspace recall. Figure \ref{figure:roc_curve} shows the Receiver Operating Characteristic (ROC) curve for \textsc{AdGraph}\xspace. The curve depicts the trade-off between true positive rate and false positive rate as the discrimination threshold of our classifier is varied. Using this formulation, we can also quantify the accuracy of \textsc{AdGraph}\xspace by using the area under the ROC curve (AUC). High AUC values reflect a high true positive rate and a low false positive rate. \textsc{AdGraph}\xspace's true positive rate quickly converges to near 1.0 for fairly small false positive rates with AUC = 0.98. \point{Ablation Results} In order to maximize the accuracy of \textsc{AdGraph}\xspace, we combine four categories of features (degree, connectivity, domain, and keyword) defined in Section \ref{sec:feature_extraction}. We now evaluate the relative importance of different feature categories. To this end, we compare the precision and recall of all possible combinations of different feature categories. Specifically, we first use each feature category individually, then in combinations of two and three, and finally use all of them together. Figure \ref{figure:results_aggregate} reports the precision and recall of different feature category combinations. When used individually, none of the feature categories provides sufficient accuracy to be used stand-alone, with the exception of connectivity features.
We note that keyword features (also used in prior work \cite{Bhagavatula14MLforFilterListsAISec}) have the highest precision but the lowest recall compared to other feature categories. As we start combining features, we find that precision and recall improve with the addition of each feature category, and that both are maximized by combining all four feature categories.
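For reference, the classifier configuration and evaluation protocol described in this section can be sketched as follows, assuming scikit-learn; the synthetic data stands in for the extracted feature matrix, and \texttt{max\_features="log2"} only approximates the $\lfloor \log M+1 \rfloor$ setting.
\begin{lstlisting}[language=Python]
# Sketch of the classifier and evaluation protocol: a 10-tree random forest
# evaluated with stratified 10-fold cross-validation. The synthetic data is a
# stand-in for the real feature matrix and AD/NON-AD labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=2000, n_features=30,
                           weights=[0.94, 0.06], random_state=0)

clf = RandomForestClassifier(n_estimators=10, max_features="log2")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for metric in ("accuracy", "precision", "recall"):
    scores = cross_val_score(clf, X, y, cv=cv, scoring=metric)
    print(metric, round(scores.mean(), 3))
\end{lstlisting}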
\section{Introduction} Digital transformation has been described in various industries, such as travel \cite{Fonzone16}, tourism \cite{Minghetti2010}, medical \cite{Agarwal2013}, insurance \cite{Singh2015}, consumer \cite{Maenpaa2015}, high-tech \cite{SAP2014}, energy \cite{Hagenmeyer2016, Fang2012}, public sector \cite{West2012}, and education \cite{Kahkipuro2015}. It appears, however, that there is a paucity of published in-depth, systematic analyses that examine airport ground operations, although this area has relevant potential for more efficient and sustainable processes (e.g., through reduced fuel consumption \cite{Singh15}) and an improved customer experience \cite{Wattanacharoensil16}. The objective of this paper is to fill this gap by providing a resource that contains a comprehensive review of major research directions, methods and applications focused on the digital transformation of airport ground operations. Our findings are organized as follows: First, we construct a working definition of the term digital transformation. Second, we provide an overview of an airport ground operations value chain. Next, we present a comprehensive collection of references that consider the digital transformation of airport ground operations, classifying these papers according to their respective business processes within the value chain. We discuss a real business scenario of integrated invoicing at Swissport International Ltd. We conclude this paper by highlighting key observations and offering recommendations for future research. \section{A working definition of digital transformation} \label{sec:digitaltrans} Digital transformation, also referred to as digitalization \cite{GartnerITglossary}, has become a very popular term in academia and industry, yet it lacks a clear definition. In our research, we attempted to draw a clear boundary between digital transformation and simple application of information and communication technology (ICT). For this purpose, we developed a working definition and used this to select papers for our review. New digital technologies are seen as a key driver of digital transformation, but there is no consensus about the impact of digital transformation on business. Some researchers argue that digital transformation uses technology to improve the performance of enterprises \cite{Westerman2011} and to enable major business improvements \cite{Capgemini2015}. Other papers suggest that digital transformation facilitates changes in business models, provides new revenue opportunities \cite{GartnerITglossary}, and creates new digital businesses \cite{McDonald2012}. For this paper, we decided to use a definition that reflects the general tenor of the current debate. We defined digital transformation as \textit{the use of new digital technologies, such as cloud, mobile, big data, social media and connectivity technologies, to improve customer experience, streamline operations or create new business models}. We did not review papers that are related to, though not strictly part of, digital transformation unless they contained at least one direct application of new digital technologies -- either to improve customer experience, streamline operations, or to create new business models.
Returning to the definition, let us now look at each of four main elements that we considered: \textbf{Usage of new digital technologies}: Current industry publications agree that major technologies accelerating digital transformation are cloud, mobile, big data, social media, and connectivity, especially smart sensors and internet of things \cite{Microsoft2015, Westerman2011}. In our research, we relied on the following definitions of these technologies: Cloud computing provides shared computer processing resources on demand \cite{Hassan2011, Dinh2011}. Mobile technology involves using mobile devices to access mobile applications, data, and to communicate \cite{Microsoft2015}. Big data are high-volume, high-velocity and high-variety data available in structured and unstructured form that require new ways of information processing for enhanced analyses and decision making \cite{Laney2001}. Social media cover technologies that facilitate social interactions and are enabled by Internet or mobile device, such as wikis, blogs, social networks, and web conferencing \cite{Microsoft2015}. Smart sensors are inexpensive wireless sensors of small size with an on-board microprocessor \cite{Spencer2004}. \textbf{Improving customer experience}: Customer service has changed and shifted towards digital self-service, such as self-service banking (funds transfer, account history, bill payments), self-service gasoline stations, self-service scanning and checkout lanes at grocery stores, electronic voting, and self-service check-in kiosks at airports \cite{Castro2010}. Self-service technology offers a broad set of benefits to consumers: it is available outside of regular business hours, saves time, protects privacy, and other benefits. Businesses invest in self-service technology to reduce costs and free up workers from routine transactions \cite{Castro2010}. Despite these various benefits, the current literature has raised concerns about technology-based self-service: self-service simply shifts the work to the customer, reduces employment opportunities, eliminates both customer choice and human contact \cite{Reinders2008}, and is often inaccessible for elderly and disabled consumers \cite{Petrie2014}. In this review, we provided some examples of self-service technologies in airport ground operations. We have paid particular attention to concerns associated with using these technologies. \textbf{Streamlining operations}: Companies historically have used automation to make processes more efficient and reliable, such as in the areas of enterprise resource planning \cite{Shehab2004, Samaranayake2009}, manufacturing, research and development. Presently, process automation is often used in context of digital transformation. Industry publications report that companies drive digital transformation by automating internal processes \cite{Westerman2011, Brown2012}. Is this just another case of \textit{d\'ej\`a vu}? Earlier papers claim that process automation leads to fundamental changes in the business \cite{Venkatraman1994}. More recent publications argue that digital transformation has impacts beyond simply the automation of existing tasks and activities: it provides new digital services and creates new business processes. Unlike automation (modernization) in the past decades, which has focused on technology application and management, current digital transformation addresses the effects and implications of technological change \cite{Lynch2000}. 
In this review, we included scientific papers that propose how to embrace digital transformation, as well as industry publications showing how companies respond to digital transformation by automating internal processes. We did not consider papers reporting simple application of ICT in process automation, such as automating workflows, going paperless or simply adding a digital representation of written information (digitizing information), or reducing human factors. \textbf{New business models}: Companies apply new technologies not only to their core business, but use them to find new profit pools \cite{Brown2012, Bower1995}. In this context, digital transformation is often discussed together with digital disruption. Digital disruption is a technological innovation that exerts a negative impact on existing industries and creates new business opportunities \cite{Bower1995}. The literature reports several examples of how digital disruption has changed media \cite{Gilbert2012, Karimi2015}, financial \cite{Ondrus2005, Dermine2016}, and consumer \cite{Rigby2011} industries in the past decades. \section{Value chain of airport ground operations} \label{sec:ValueChain} Airport ground operations, also referred to as ground handling, cover those services required by an airline between landing and take-off of the aircraft, such as marshaling of aircraft, (un)loading, refueling, cleaning, catering, baggage handling, passenger handling, cargo handling, aircraft maintenance, and aviation security services. In Figure \ref{fig:airport-procedures}, the role of the ground handling agent is shown within the scope of normal airport procedures for passenger service. The services are grouped by landside (prior to clearing security) and airside (after clearing security). The value contribution of ground handling agents within the overall aviation value chain can be summarized as preparing the aircraft during its ground time for the next flight. \begin{figure*}[hbtp] \centering \includegraphics[width=\linewidth]{airport-procedures.jpg} \caption{Key airport procedures for passenger transport \cite{Kovynyov2016}} \label{fig:airport-procedures} \end{figure*} In Figure \ref{fig:valuechain}, we present a value chain \cite{Porter1985, Porter2001} that illustrates the key processes. These processes provide service to the customer. The value chain can be divided into five core processes for airport ground handling operations. Core business processes relate directly to the creation and delivery of ground handling services and reflect the essential functions of any ground handling agent: \begin{itemize} \item Passenger handling consists of the arrival (passenger de-boarding, transfer, baggage delivery, lost \& found services, arrival lounge) and departure processes (ticketing and reservation, check-in, waiting area, lounge, boarding). \item Aircraft preparation involves parking, (un)loading, load control, fueling, pushback and towing, deicing and others. \item Baggage handling covers, for instance, baggage drop-off, x-raying, sorting, load planning, loading and transportation. \item Cargo handling consists of such processes as (un)loading, customs clearance, x-raying, storage and others. \end{itemize} Supporting processes facilitate execution of core processes: \begin{itemize} \item Planning \& scheduling process involves subprocesses like demand planning, shift planning, rostering (i.e. scheduling periods of duty and assignment of employees to particular shifts), daily personnel disposition and task scheduling.
\item Human resources (HR) management and training deals with employee recruitment, training demand planning, training preparation and delivery, quality assurance, payroll and dismissal. \item Commercial processes cover negotiations of prices, tariffs and service level agreements (SLA), customer relationship management and contract entry. \item Financial processes typically involve subprocesses such as service data capturing, invoicing, financial accounting (accounts payable, accounts receivable, balance sheet, asset accounting), financial statements, management accounting, and treasury. \item Management processes include target setting, monitoring, reporting of key performance indicators (KPI), incentives, and leadership. \item Procurement process focuses on buying ground support equipment (GSE) and consulting services. \item IT process consists of solution delivery, service operations and support, and controlling. \end{itemize} \begin{figure*} \footnotesize \begin{tikzpicture}[table/.style={ matrix of nodes, row sep=-\pgflinewidth, column sep=-\pgflinewidth, nodes={rectangle,text width=1.6cm,align=center}, text depth=1.25ex, text height=2.5ex, nodes in empty cells}] \matrix (mat) [table] { || & || & || & || & || & \\ || & || & || & || & || & \\ || & || & || & || & || & || \\ || & || & || & || & || & || \\ || & || & || & || & || & || \\ || & || & || & || & || & || \\ || & || & || & || & || & \\ || & || & || & || & || & \\ }; \foreach \row in {2,3,4} \draw[black] (mat-\row-1.north west) -- (mat-\row-6.north east); \draw[black] (mat-5-1.north west) -- (mat-5-6.north east); \foreach \row in {1} \draw[black] (mat-\row-1.north west) -- (mat-\row-5.north east); \foreach \row in {8} \draw[black] (mat-\row-1.south west) -- (mat-\row-5.south east); \foreach \col in {1,2,3,4,5} \draw[black] (mat-5-\col.north west) -- (mat-8-\col.south west); \foreach \col in {1} \draw[black] (mat-1-\col.north west) -- (mat-5-\col.south west); \node at (mat-1-3) {Planning \& Scheduling}; \node at (mat-2-3) {HR \& Training}; \node at (mat-3-3) {Commercial Process}; \node at (mat-4-3) {Financial, Management, Procurement, IT Processes}; \node at ([yshift=-10pt]mat-6-1) {\parbox[t]{2cm}{\centering Passenger Arrival}}; \node at ([yshift=-10pt]mat-6-2) {\parbox[t]{2cm}{\centering Aircraft Preparation}}; \node at ([yshift=-10pt]mat-6-3) {\parbox[t]{2cm}{\centering Baggage Handling}}; \node at ([yshift=-10pt]mat-6-4) {\parbox[t]{2cm}{\centering Cargo Handling}}; \node at ([yshift=-10pt]mat-6-5) {\parbox[t]{2cm}{\centering Passenger Departure}}; \node[rotate = 90] at ([xshift=-52pt]mat-3-1.north) {Supporting Processes}; \node at ([yshift=-19pt,xshift=-0.5cm]mat-8-3.south) {Core Processes}; \fill[white] (mat-1-5.north east) -- (mat-5-6.north east) -- (mat-1-6.north east) -- cycle; \fill[white] (mat-8-5.north east) -- (mat-5-6.north east) -- (mat-8-6.north east) -- cycle; \shade[top color=white,bottom color=white,middle color=white,draw=black] (mat-1-5.north) -- (mat-5-6.north) -- (mat-8-5.south) -- (mat-8-5.south east) -- (mat-5-6.north east) -- (mat-8-5.south east) -- (mat-5-6.north east) -- (mat-1-5.north east) -- cycle; \begin{scope}[decoration={markings,mark=at position .5 with \node[transform shape] {Margin};}] \path[postaction={decorate}] ( $ (mat-1-5.north)!0.5!(mat-1-5.north east) $ ) -- ( $ (mat-5-6.north)!0.5!(mat-5-6.north east) $ ); \path[postaction={decorate}] ( $ (mat-5-6.north)!0.5!(mat-5-6.north east) $ ) -- ( $ (mat-8-5.south)!0.5!(mat-8-5.south east) $ ); \end{scope} 
\draw[decorate,decoration={brace,mirror,raise=6pt}] (mat-1-1.north west) -- (mat-5-1.north west); \draw[decorate,decoration={brace,mirror,raise=6pt}] (mat-8-1.south west) -- (mat-8-5.south); \end{tikzpicture} \caption{Value chain of airport ground operations} \label{fig:valuechain} \end{figure*} \section{Applications} \label{sec:Applications} The papers for this section were selected based on our working definition of digital transformation, which we described previously. We categorized each paper according to the process of the ground handler's value chain involved (see Table \ref{tab:summary-core} for core processes and Table \ref{tab:summary-support} for supporting processes). For the sake of simplicity, passenger arrival and passenger departure are referred to as passenger handling. We carried out the search for papers using a wide range of electronic libraries across the world, electronic journal collections, general web searches, communication with authors and interviews with industry experts (airlines, airports, ground handling companies). Based on our interviews, we identified further examples of digital transformation in airport ground operations; however, we were unable to locate papers relevant to these scenarios. We have included these examples in Table \ref{tab:summary-core} and Table \ref{tab:summary-support}, and introduced selected scenarios in detail in this section. \begin{table*}[htp] \caption{Key areas of digital transformation of airport ground operations (core processes)} \footnotesize \begin{center} \renewcommand{\arraystretch}{0.9} \begin{tabular}{ p{2cm}p{3cm}cccp{5cm} } \hline Business process & Business scenario & CUS & OPS & BIZ & Authors \\ \hline Passenger handling & Indoor navigation & * & & * & Darvishy et al. \cite{Darvishy2008}, Fallah et al. \cite{Fallah2013}, Hat el al. \cite{Han2014}, Odijk and Kleijer \cite{Odijk2008}, Radaha et al. \cite{Radaha2015}\\ & Digital processing of irregularity vouchers & * & * & & McCollough et al. \cite{McCollough2000}\\ & Self-service check-in kiosks & * & * & & Abdelaziz et al. \cite{Abdelaziz2010}, Castillo-Manzano et al. \cite{Castillo-Manzano2013}, Chang and Yang \cite{Chang2008}, Howes \cite{Howes2006}, Ku and Chen \cite{Ku2013}, Liljander et al.\cite{Liljander2006}, Wittmer \cite{Wittmer2011a}\\ & Self-boarding & * & * & & Jaffer and Timbrell \cite{Jaffer2014} \\ & Smart wheelchairs & * & * & & Berkvens et al.\cite{Berkvens2012}, Morales et al. \cite{Morales2014}, Romero et al. \cite{Romero2009} \\ & Smart wearables & * & * & & n.a. \\ & Biometric services & * & * & & Oostveen et al. \cite{Oostveen2014}, Palmer and Hurrey \cite{Palmer2012}, Sumner \cite{Sumner2007}, Sasse \cite{Sasse2002}, Scherer and Ceschi \cite{Scherer2000} \\ \hline Baggage handling & RFID baggage tags & * & * & & Berrada and Salih-alj \cite{Berrada2015}, Bite \cite{Bite2010}, Datta et al. \cite{Datta2016}, DeVries \cite{DeVries2008}, Mishra and Mishra \cite{Mishra2010}, Zhang et al. \cite{Zhang2008}\\ & Automated baggage drop-off & * & * & & Jaffer and Timbrell \cite{Jaffer2014}, Wittmer \cite{Wittmer2011a}\\ & Digital bag tags & * & * & & n.a. \\ & Self-tagging & * & * & & n.a. \\ & Lost luggage kiosks & * & * & & n.a. \\ & Real-time luggage tracking & * & * & & Sennou et al.\cite{Sennou2013} \\ & Advanced analytics & * & * & & n.a. \\ \hline Lounge services & Lounge access gates & * & * & & n.a. \\ & New lounge ticket types & * & & * & n.a. 
\\ \hline \end{tabular} \end{center} * Business scenario addresses: CUS = improving customer experience, OPS = streamlining operations, BIZ = creating new business models. \label{tab:summary-core} \end{table*} \begin{table*}[htp] \caption{Key areas of digital transformation of airport ground operations (supporting processes)} \footnotesize \begin{center} \renewcommand{\arraystretch}{0.9} \begin{tabular}{ p{2cm}p{3cm}cccp{5cm} } \hline \rowcolor[gray]{.9} Business process & Business scenario & CUS & OPS & BIZ & Authors \\ \hline Planning \& Scheduling & Automated centralized planning & & * & & Herbers \cite{Herbers2005}, Herbers and Kutschka \cite{Herbers2016}, Ernst et al. \cite{Ernst2004}, Ip et al. \cite{Ip2010}, Ernst et al. \cite{Ernst2004a}, Dowling et al. \cite{Dowling1997}, Mason and Ryan \cite{Mason1998}, Stolletz and Zamorano \cite{Stolletz2014}, Brusco et al. \cite{Brusco1995}, Stolletz \cite{Stolletz2010}, Chu \cite{Chu2007}, Dorndorf \cite{Dorndorf2006}, Keller and Kruse \cite{Keller2002} \\ & De-peaking & & * & & Kisseleff and Luethi \cite{Kisseleff2008}, Luethi and Nash \cite{Luethi2009} \\ & Shift trading & & * & * & n.a. \\ \hline GSE management & Automated GSE scheduling and routing & & * & & Padron et al. \cite{Padron2016}, Norin et al. \cite{Norin2012}, Kuhn and Loth \cite{Kuhn2009} \\ \hline HR \& Training & Digital employee profiles & & * & & n.a. \\ & Web-based training & & * & & n.a. \\ & Integrated employee lifecycle management & & * & & n.a. \\ & Mobile apps and social-media for corporate communications & & * & & n.a. \\ \hline Financial process & Integrated invoicing & & * & & Kovynyov et al. \cite{Kovynyov2010, Kovynyov2016} \\ \hline Management process & Integrated KPI reporting & & * & & Schmidberger et al. \cite{Schmidberger2009} \\ \hline IT process & Digital workplace and virtualization & & * & & n.a. \\ \hline \end{tabular} \end{center} * Business scenario addresses: CUS = improving customer experience, OPS = streamlining operations, BIZ = creating new business models. \label{tab:summary-support} \end{table*} \subsection{Passenger Handling} \textbf{Self-service check-in kiosks} are computer terminals for passenger check-in which remove the need for ground staff (Fig. \ref{fig:cuss}). Previously, airlines had only dedicated kiosks for their own passengers; today, kiosks share check-in applications for multiple airlines \cite{Howes2006}. Shared kiosks bring several benefits: airports can better utilize the limited space in airport terminals; airlines eliminate capital spending, since they buy these services directly from a ground handling agent; ground handling agents can cut operating costs by engaging fewer ground staff for passenger check-in; and passengers can save time, as they can use any kiosk instead of having to search for dedicated kiosks. Ground handling agents install self-service check-in facilities and promote them by engaging floor-walkers to assist passengers on the landside of check-in areas \cite{Wittmer2011a}. \begin{figure}[hbt] \centering \includegraphics[width=0.9\columnwidth]{cuss.jpg} \caption{Self-service check-in at Zurich airport (Photo courtesy Swissport)} \label{fig:cuss} \end{figure} \textbf{Self-boarding}: Quick boarding gates allow passengers to self-scan the boarding pass at the gate. After the boarding pass has been verified, the gates are released and the passenger can proceed to the aircraft. In this arrangement, ground staff are not involved in passenger boarding and can focus on supervisory tasks or special cases.
This helps reduce the need for ground staff, and ground handling agents are able to further reduce operating costs. Current literature argues that self-boarding has a positive impact on customer satisfaction, because it significantly reduces processing times at the gate \cite{Jaffer2014}. Quick boarding gates also record various operational data which can offer more insight into customer behavior or the efficiency of the boarding process, such as processing times by flight and gate, passenger groups using boarding gates, and passenger distribution over time.

\textbf{Indoor navigation:} Many recently published papers study indoor navigation in airport terminals \cite{Han2014}, indoor navigation for passengers with reduced mobility \cite{Darvishy2008}, for transit passengers \cite{Radaha2015}, and location-based services at airports \cite{Odijk2008}. An overview of the development of this technology can be traced through earlier review papers \cite{Fallah2013}. Currently, ground handling agents provide special assistance services for elderly passengers, those with reduced mobility, and unaccompanied minors. The impact of indoor navigation on this aspect of the business has not yet been described in the literature. Indoor navigation technologies have the potential to disrupt the market for airport assistance services, if passengers adopt this technology and instead use a mobile device to navigate, rather than booking special assistance services.

\textbf{Smart wheelchairs}: Recent papers report some applications of autonomous, self-driving GSE vehicles \cite{Morris2015}. Further applications of autonomous driving technologies are reported in automobile, truck, public transportation, industrial and military services \cite{Bishop2000}. So far, we have not found any papers on self-driving vehicles within airport terminals. However, we believe that providing self-driving vehicles within the airport terminal can open up new possibilities in the special assistance of passengers with reduced mobility. Currently, ground staff drive passengers using golf-carts or in wheelchairs through the airport terminal. We believe that ground handling agents can reduce the number of ground staff employed by using self-driving, electrically powered wheelchairs or golf-carts with manual and visual control systems \cite{Romero2009}.

\textbf{Digital processing of irregularity vouchers:} Irregularity vouchers are free cash vouchers for meals, hotel accommodation, airport transfers, and bag replacement provided to passengers by an airline (or ground handling agent on behalf of the airline) in case of a flight delay. The vouchers are usually personalized. The vouchers are redeemed from the airline and are for either pre-arranged services or a maximum indicated value. Usually, airlines have a preexisting agreement with specific hotels regarding accommodation and rates. For meals, vouchers are in many cases presented to multiple vendors and usually include a maximum redemption value. Some merchants who accept such vouchers in bulk redeem them from the airline using electronic data processing. Businesses that do not receive many vouchers or do not use an electronic system usually physically mail the vouchers to the airline or ground handling agent in order to receive payment. We believe that digital processing of irregularity vouchers can significantly reduce processing costs for service providers, airlines and ground handling agents.
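
To make the idea more concrete, the following sketch (in Python, with purely hypothetical record fields; it does not describe any deployed system, since no such system is reported in the literature) illustrates how a digitized irregularity voucher could be represented and how a merchant claim could be validated and capped at the maximum indicated value:

\begin{verbatim}
from dataclasses import dataclass
from datetime import date

# Hypothetical digital irregularity voucher record; all field names are
# illustrative only and not taken from any real airline or handler system.
@dataclass
class IrregularityVoucher:
    voucher_id: str
    passenger_name: str   # vouchers are usually personalized
    flight_number: str
    category: str         # e.g. "meal", "hotel", "transfer", "bag replacement"
    max_value: float      # maximum redemption value in local currency
    valid_until: date
    redeemed: bool = False

def redeem(voucher: IrregularityVoucher, merchant_claim: float,
           on_date: date) -> float:
    """Validate a merchant claim and return the payable amount."""
    if voucher.redeemed:
        raise ValueError("voucher already redeemed")
    if on_date > voucher.valid_until:
        raise ValueError("voucher expired")
    voucher.redeemed = True
    # Claims are capped at the maximum indicated value.
    return min(merchant_claim, voucher.max_value)

# Example: a meal voucher capped at 15.00 is claimed for 18.40,
# so only 15.00 is payable.
v = IrregularityVoucher("V-001", "J. Doe", "LX123", "meal",
                        15.00, date(2018, 12, 31))
print(redeem(v, 18.40, date(2018, 6, 1)))
\end{verbatim}
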
Irregularity vouchers have been extensively studied in the literature relating to customer satisfaction after service failure and recovery \cite{McCollough2000}. However, so far, we have not found any publications describing digital processing of irregularity vouchers.

\textbf{Smart wearables}: Smart watches, miniature wrist-mounted computers with time-keeping functionality and an array of sensors \cite{Rawassizadeh2014}, are widely used by passengers at airports. With a smart watch, passengers can receive alerts on gate changes or flight delays and scan a boarding pass at security or at the gate. As the smart watch is permanently worn on the wrist, ground handling agents can use this channel to broadcast real-time information to passengers, in addition to traditional visual display monitors, voice announcements and messages sent to other mobile devices. We have found some evidence of pilot projects with smart watches. For example, a major U.S. airline announced a trial of smart glasses and smart watches, using the products to greet passengers by name, provide real-time travel information and start the check-in process before the passenger has even reached the front door of the terminal. Nonetheless, we have not found any papers that describe the usage of smart watches in passenger handling. We propose the following future research directions: indoor navigation at airports using smart watches for visually impaired passengers, sharing health data for cross-border disease control, and emergency notifications on smart watches.

\textbf{Biometric services}: Biometrics is the automated identification of persons using physiological characteristics (face, fingerprints, hand geometry, handwriting, iris, retina, vein, voice) \cite{Heracleous2006}. Current research reports several applications of biometric technology at airports, such as airport security, biometric travel documents, airport access control \cite{Sumner2007}, biometrics in baggage claim \cite{Scherer2000}, airport immigration systems \cite{Sasse2002, Oostveen2014, Palmer2012}, and seamless travel \cite{Heracleous2006}.

\subsection{Baggage Handling}
Recent innovations in the area of baggage handling are mainly driven by the following trends: airlines boost productivity by reducing ground times (the timeframe between landing and take-off), the seat capacity of new aircraft increases, and timely delivery of baggage has become part of the service level agreements between ground handling agents and airlines. Clearly, these trends introduce new requirements to baggage handling systems and related processes in terms of correctness and speed of baggage processing. Consider now some examples of how ground handling agents have responded.

\textbf{RFID baggage tags:} Radio frequency identification (RFID) technology enables identification from a distance and, unlike bar-code technology, does not require a line of sight \cite{Want2006}. The benefits of RFID baggage tags have been extensively discussed in the current literature: RFID tags can improve baggage tracing \cite{Zhang2008}, baggage routing during air transit \cite{Datta2016}, and reduce the amount of misrouted luggage \cite{DeVries2008}. Furthermore, RFID tags can incorporate additional data such as manufacturer and product type, and even measure environmental factors such as temperature \cite{Hunt2006}. RFID systems can identify many different tags located in the same general area without human assistance.
Embedded in barcode labels, RFID tags could eliminate the need for manual inspections and routing by ground handling agents. At the moment, RFID tags are inserted into paper and then attached as paper labels to the baggage (Fig. \ref{fig:rfid-bag-tag}).

\begin{figure}[hbt]
\centering
\includegraphics[width=0.5\columnwidth]{RFID_bag_tag.jpg}
\caption{RFID tag incorporated into a bag tag (Photo Vanderlande)}
\label{fig:rfid-bag-tag}
\end{figure}

\textbf{Self-tagging} is one of the latest ideas for bag tagging. Passengers tag their own bags, print luggage tags at home and track their bags on smartphones. In this context, digital bag tags are gaining more importance.

\textbf{Digital bag tags} are the digital alternative to conventional paper-based baggage tags. Bags receive a permanent bag tag that displays a digital bar-code. Airlines or ground handling agents are able to change this bar-code remotely if the flight plan has changed or a passenger has been re-routed. Combined with a tracking device stored inside the bag, passengers are able to track their luggage on a smartphone in real time.

\textbf{Automated baggage drop-off:} Combined with home-printed or electronic bag tags, passengers can hand over their self-tagged bags at fully automated machines without any interaction with ground staff or airline employees. Several airports have already installed automated bag-drop machines \cite{Jaffer2014}.

\textbf{Lost luggage kiosks} are self-service computer terminals for reporting lost baggage. They are connected to the global database for lost luggage and help passengers to report delayed or missing bags upon arrival (see Figure \ref{fig:lost-luggage-kiosk}). In order to report lost luggage, a passenger scans the boarding pass, describes the missing item and enters contact details for delivery once the luggage has been found. Passengers can obtain the latest information on the report by accessing a website and entering their report number.

\begin{figure}[hbt]
\centering
\includegraphics[width=0.9\columnwidth]{lost-luggage-kiosk.jpg}
\caption{Lost luggage kiosks at Geneva Airport (Photo courtesy Swissport International Ltd.)}
\label{fig:lost-luggage-kiosk}
\end{figure}

\textbf{Advanced analytics in baggage handling}: Performance-related contracts between ground handling agents and airlines require reliable data on the speed and efficiency of baggage handling systems, such as in-time delivery of baggage (first bag, last bag), processing times, and correctness of baggage processing. In addition, ground handling agents collect and maintain data pockets on operational baggage flow, and develop simulations and models to understand past behavior and performance of baggage handling systems.

\subsection{Lounge Services}
\textbf{Lounge access gates:} Most passengers use status-based access or class-of-travel access rather than membership. Status passengers can swipe their card in order to access a lounge. Passengers with business class or first class tickets are required to show the boarding pass to enter the lounge. Some cardmembers may receive access privileges to the lounges. In order to access the lounge, cardmembers must present their card. Furthermore, some passengers are allowed to bring guests. Ground handling agents need to know such regulations for several airlines and lounge clubs. In addition, lounge personnel need to be able to apply these rules immediately when a particular passenger is attempting to enter a lounge. For this purpose, ground handling agents have installed automated access gates at lounge entrances.
The access gate is connected to a back-end application with the rule sheet. As soon as a passenger has swiped the card or scanned the boarding pass, the application applies the rule sheet and decides whether the passenger may access the lounge.

\textbf{New lounge ticket types}: Until recently, ground handling agents have been selling single entries to airlines or directly to passengers. Today, ground handling agents have begun introducing new types of products, such as family tickets and lounge access vouchers offered by tour operators. Family tickets can be booked in advance and offer group access to the lounge. Family tickets are sold for a fixed price irrespective of the status or booking class of the passengers. Furthermore, ground handling agents have begun cooperating with charter airlines and tour operators in order to include lounge access in the offered travel packages. New types of products place new and possibly greater demands on the ground handling agent's IT systems: lounge access gates need to be coordinated with family tickets and tour operator tickets, invoicing systems need to be able to price and bill the tickets appropriately, and data analytics facilities need to be able to analyze customer behavior and provide insights into the new product types.

\subsection{Staff Planning and Scheduling}
Since labor costs in airport ground operations account for 60 to 80 percent of the total cost, automation of staff planning and scheduling is a top priority for every ground handling agent \cite{Herbers2014a}. The development of research on rostering and task scheduling of airport ground staff can be traced through earlier review papers \cite{Herbers2005, Herbers2016, Ernst2004, Ip2010, Ernst2004a}.

\textbf{Automated centralized planning} is the most common approach reported in the literature. This approach is based on prior demand modeling and stepwise reduction of the planning horizon. Usually, the planning process begins six months in advance, using the flight schedule forecast for the next season (winter or summer). The aggregate flight schedule forecast is obtained from the production management system, and is likely to be just a distribution of flights by month, airline and destination. The aggregate flight schedule forecast is used for holiday planning and strategic workforce planning. Two months in advance, the aggregate forecast is transformed into a detailed flight schedule forecast. Based on the detailed forecast, the shift plan is compiled. The shift plan is the result of balancing the demand quantified in the flight schedule forecast against the available workforce. The shift plan includes the distribution of different shifts per day throughout the month and the associated skills required for performing these shifts. Shift planning is usually performed using a dedicated planning application. Next, the shift plan is transformed into the rostering plan. The rostering plan assigns employees to particular shifts and is prepared at least two weeks before its application \cite{Dowling1997, Mason1998, Stolletz2014, Brusco1995, Stolletz2010, Chu2007}. The rostering plan is usually completed using the time and attendance system. A few hours before operations, the rostering plan is combined with the task schedule \cite{Dorndorf2006}. The task schedule assigns particular tasks to the shifts. Usually, the task schedule is created using a dedicated real-time application with broadcasting capability to the employees' hand-helds.
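
To illustrate the demand-driven logic at the heart of this planning cascade, the following toy sketch (in Python, with invented figures and a simple greedy covering rule rather than the optimization models used in the cited planning systems) aggregates a small flight schedule into an hourly demand curve and covers it with fixed shift templates:

\begin{verbatim}
from collections import defaultdict

# Hypothetical input: departing flights with scheduled hour and staff demand.
flights = [
    {"flight": "LX100", "hour": 6,  "agents_needed": 3},
    {"flight": "LX200", "hour": 7,  "agents_needed": 2},
    {"flight": "LX300", "hour": 7,  "agents_needed": 4},
    {"flight": "LX400", "hour": 14, "agents_needed": 3},
]

# Step 1: aggregate the flight schedule forecast into an hourly demand curve.
demand = defaultdict(int)
for f in flights:
    demand[f["hour"]] += f["agents_needed"]

# Step 2: cover the demand with fixed shift templates (start hour, length 8h),
# using a greedy heuristic (real systems solve this as an optimization model).
shift_templates = [0, 6, 14]              # hypothetical shift start hours
shift_plan = {s: 0 for s in shift_templates}
for hour, needed in sorted(demand.items()):
    covering = [s for s in shift_templates if s <= hour < s + 8]
    covered = sum(shift_plan[s] for s in covering)
    if needed > covered and covering:
        # add the missing staff to the latest shift covering this hour
        shift_plan[covering[-1]] += needed - covered

print(dict(demand))   # {6: 3, 7: 6, 14: 3}
print(shift_plan)     # {0: 0, 6: 6, 14: 3}
\end{verbatim}
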
\textbf{De-peaking} is a strategy whereby air traffic is re-scheduled between peak and off-peak times so as to distribute traffic more evenly throughout the day. Conventional planning approaches do not include de-peaking by default. Ground handling agents operating at hub airports have incorporated additional de-peaking strategies into their planning procedures \cite{Kisseleff2008, Luethi2009}.

\textbf{Shift trading} is an on-line service to exchange work shifts. As soon as a new shift plan is announced, employees can request shift trades. Shift trading brings many benefits to ground handling agents: it helps eliminate planning conflicts along the way; it provides a collaborative environment to employees and gives them freedom of choice (both are highly appreciated by employees, which increases employee satisfaction); and employee self-organization is more cost effective for the company. We observed two organizational models for shift trading -- either it is incorporated into the time and attendance recording system, or it emerges from employee self-organization, via a satellite website or closed groups in social networks. Employee-led shift trading has the potential to create new business models.

\subsection{GSE Management}
Ground support equipment (GSE) is usually found on the ramp (i.e. the servicing area of the airport, also referred to as the apron). GSE includes, for instance, refuellers, container loaders, belt loaders, transporters, water trucks, ground power units, air starter units, lavatory service vehicles, catering vehicles, passenger boarding stairs, passenger buses, pushback tractors, de-icing vehicles, container dollies, cargo pallets, and others.

\textbf{Automated GSE scheduling} is one of the most common applications in this area. It minimizes the total number of GSE vehicles required to handle flights and, consequently, reduces the ground handling agent's capital expenditures. Recent literature describes advanced scheduling methods for GSE vehicles \cite{Padron2016} and de-icing trucks \cite{Norin2012}. Advanced scheduling and routing algorithms minimize the overall apron traffic and reduce the ground handling agent's fuel costs \cite{Kuhn2009}. Ground handling agents use GPS localization and tracking of GSE vehicles to collect data on actual usage, and use these data for scheduling and forecasting.

\subsection{HR \& Training}
The data on digital transformation in the human resources (HR) \& training processes in airport ground operations are scarce. Based on our interviews with industry experts, we were able to identify some business scenarios that illustrate how digital transformation has changed the HR \& training processes:

\textbf{Digital employee profiles}: Ground handling agents have a large number of employees and need to access employee information quickly. Digital employee profiles create a centralized overview of employees by integrating multiple sources of employee-related information: personal (age, gender, address, civil status), financial (compensation, including rewards and benefits), qualifications (certifications, trainings, skills, performance reviews), and workforce data (productivity, absences, holidays, overtime balances).

\textbf{Web-based trainings}: Ground handling agents have started using web-based training instead of classroom training. Web-based training involves using browser-based learning programs available on the corporate intranet. Such trainings can be accessed as desktop applications or from a mobile device (e.g. tablet or smartphone).
Web-based training includes single and recurrent standard training programs (e.g. code of conduct, health and safety), management training (e.g. anti-corruption and fair competition guidelines), and technical training for operational staff (e.g. dangerous goods, security regulations, customer service). Most web-based training programs are oriented towards knowledge transfer (e.g. new regulations, rules, procedures) and do not require interaction with trainers. Web-based training frees up in-house trainers from routine tasks and reduces training costs. Furthermore, web-based training can be easily translated into other languages or enhanced using local information.

\textbf{Mobile apps and social media for corporate communications}: Ground handling agents need to be able to distribute the latest news, reports and corporate announcements to a large number of employees in a timely fashion. Operational staff, in particular, need to be promptly informed about security deficiencies, bomb threats, aircraft damage, irregularities in airport operations, and other considerations. Consequently, many ground handling agents have installed visual displays in lunch rooms and offices, developed corporate mobile apps or started engaging with employees via social media networks.

\textbf{Integrated employee lifecycle management} considers all steps an employee follows during their time within a ground handling company. This includes recruitment, on-boarding, goal setting and performance reviews, personal development, talent management, succession planning, and departure (retirement, dismissal, leave). Ground handling agents manage large numbers of employees and must be cost-sensitive. HR organizations need to be highly efficient in executing their tasks. For example, they might improve the cost-effectiveness of recruitment advertising by tapping into social networks to engage with potential candidates or by publishing job openings via internal job markets, ensure that job applications are quick to submit and easy to review, respond to candidates promptly, and rely on cost-effective selection methods.

\begin{table}[hbt]
\caption{Key performance management ratios}
\footnotesize
\begin{center}
\renewcommand{\arraystretch}{0.9}
\begin{tabularx}{\columnwidth}{p{.3\columnwidth}p{.6\columnwidth}}
\hline
Operational ratios & Total hours worked per departing flight \\
& Total minutes worked per departing passenger \\ \hline
Financial ratios & Direct cost per flight departure \\
& Labor cost per flight departure \\
& Total revenues per flight departure \\ \hline
HR ratios & Overhead ratio (overhead : total FTEs*) \\
& Absence ratio (absence hours : total work hours) \\
& Staff turnover rate (entries \& leaves : average number of employees) \\ \hline
Safety \& quality ratios & Employee injuries per 100 flights \\
& On-time performance (delay minutes per 100 flights) \\
& Aircraft damage ratio (severe aircraft damages per 100 flights) \\ \hline
\end{tabularx}
\end{center}
* FTE = full time equivalent
Source: our analysis and Schmidberger et al. \cite{Schmidberger2009}
\label{tab:key-ratios}
\end{table}

\subsection{Management Process}
\textbf{Integrated KPI reporting}: Table \ref{tab:key-ratios} shows a summary of ratios used by ground handling agents to measure and track corporate performance. These ratios are designed to address key areas of concern, which include operations, finance, human resources, safety and quality.
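
As a purely illustrative sketch of how two of these ratios could be computed from operational records (all field names and figures are invented for the example; production systems pull these figures from the source systems described below):

\begin{verbatim}
# Illustrative computation of two ratios from Table \ref{tab:key-ratios}
# using hypothetical operational records.
shifts = [
    {"employee": "E1", "hours_worked": 8.0},
    {"employee": "E2", "hours_worked": 6.5},
    {"employee": "E3", "hours_worked": 7.5},
]
flights = [
    {"flight": "LX100", "departed": True, "delay_minutes": 0},
    {"flight": "LX200", "departed": True, "delay_minutes": 12},
]

departing_flights = sum(1 for f in flights if f["departed"])
hours_per_departure = sum(s["hours_worked"] for s in shifts) / departing_flights
delay_minutes_per_100 = 100 * sum(f["delay_minutes"] for f in flights) / len(flights)

print(f"Total hours worked per departing flight: {hours_per_departure:.2f}")  # 11.00
print(f"Delay minutes per 100 flights: {delay_minutes_per_100:.1f}")          # 600.0
\end{verbatim}
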
The calculation of these ratios requires an intelligent combination of information from multiple sources, such as the latest worked hours from the time and attendance system (for operational ratios), as well as the numbers of handled flights and passengers maintained in the production management system. Similarly, financial ratios require data from the payroll system, the enterprise resource planning (ERP) system and the production management system. All ratios should be calculated automatically and provided to the management at least on a daily basis. Consequently, ground handling agents invest in consolidation, standardization and active management of data pockets providing this kind of information. Advanced analytics and big data technologies create the possibility to develop further insight into the business. In our research, we were not able to find any evidence that ground handling agents are using these technologies in the context of such business scenarios.

\subsection{Integrated Invoicing: a Case Study}
\label{sec:UseCase}
We provide a real business scenario showing how digital transformation has changed financial and commercial processes in airport ground operations. The ground handling agent Swissport International Ltd. undertook a program to radically automate and standardize the invoicing process and codify customer contracts in the invoicing system \cite{Kovynyov2010, Kovynyov2016}. The insights presented here are based on the results of this program. The program executed by Swissport is one example of digital transformation for the following reasons:
\begin{itemize}
\item new digital technologies (e.g. smart sensors, embedded devices, mobile apps) were used to establish a connection between data capturing facilities and the invoicing system;
\item big data technologies and advanced analytics were employed to ensure data quality;
\item the financial department and technology staff collaborated to digitize customer contracts and enter all customer-related prices, tariffs and incentives into the invoicing system. These measures drove changes in business processes and systems, enabling automated processing of invoices;
\item process automation halved manual workloads and significantly reduced processing times during billing cycles;
\item new technology eased data sharing with business partners and facilitated e-billing (i.e. digital transmission of invoices to customers);
\item customers saw substantial improvements through new invoice formats and greater consistency in the way the services were presented on the invoice, leading to improved customer satisfaction and retention.
\end{itemize}

\begin{landscape}
\begin{figure}[hbtp]
\footnotesize
\begin{tikzpicture}[
thick,
>=stealth,
database/.style={
cylinder,
cylinder uses custom fill,
cylinder body fill=gray!20,
cylinder end fill=gray!50,
shape border rotate=90,
minimum width=1.6cm,
minimum height=1cm,
aspect=0.25,
draw=gray},
widebox/.style={
rectangle,
minimum width=1.6cm,
minimum height=.7cm,
draw=gray,
fill=gray!25},
tapebox/.style={
double copy shadow,fill=gray!20,draw=gray,
tape,
minimum width=1.6cm,
minimum height=1cm,
draw
}
]
\node[rectangle, draw, minimum width=3cm, minimum height=.6cm] at (0,1.3) {\textbf{Data capturing}};
\node[rectangle, draw, minimum width=4cm, minimum height=.6cm] at (3.5,1.3) {\textbf{Data consolidation}};
\node[rectangle, draw, minimum width=9cm, minimum height=.6cm] at (10,1.3) {\textbf{Issue of invoice}};
\node[database] (n0) at (0,0) {AODB};
\node[align=center] (n1) at (0,-1.5) {\includegraphics[scale=.026]{sensor.png} \\ Sensors};
\node[widebox] (n2) at (0,-2.7) {Mobile};
\node[tapebox] (n3) at (0,-4) {Scans};
\node[widebox] (n4) at (0,-5.3) {Manual};
\node[cloud, draw=gray, aspect=2, fill=gray!25] (n5) at (0,-6.6) {Cloud};
\node[cylinder, cylinder uses custom fill, cylinder end fill=gray!50,cylinder body fill=gray!20, minimum width=0.8cm, draw=gray] (n6) at (2.5,0) {Flights};
\node[rectangle, draw=gray, fill=gray!20, minimum width=5cm, minimum height=0.8cm, rotate=90] (n7) at (2.5,-4) {Services};
\node[database, minimum height=5cm, align=center] (n8) at (4.5,-4) {Data \\ mart};
\node[tapebox, align=center] (n18) at (4.5,-7.5) {Analyses, \\ Stats};
\node[rectangle, draw, anchor=north, minimum height=7cm, minimum width=8.5cm, label=above: ERP system] (n9) at (10,0.4) {};
\node[double copy shadow, fill=gray!20,draw=gray, rectangle, align=center, minimum height=5cm] (n10) at (6.8,-4) {Import \\tables};
\node[double copy shadow, fill=gray!20,draw=gray, rectangle, align=center, minimum height=1.5cm] (n11) at (11,-0.8) {Customer \\ contracts};
\node[rectangle, draw=gray, align=left, fill=gray!20] (n12) at (11,-4) {\textbf{Sales order:} \\ customer \\ material \\quantity \\ price};
\node[double copy shadow, fill=gray!20,draw=gray, rectangle, align=center, minimum height=1.5cm] (n13) at (8.8,-0.8) {Mapping \\ tables};
\node[circle, minimum size=1.5mm,inner sep=0pt,outer sep=0pt, fill=black, draw] (n15) at (8.8,-4) {};
\node[rectangle, draw=gray, fill=gray!20, align=center] (n16) at (13.3,-2.5) {Billing \\ doc};
\node[tape, draw=gray, fill=gray!20, minimum height=1cm] (n17) at (13.3,-4) {Invoice};
\node[align=center] (n19) at (13.3, -1) {General \\ ledger};
\path [->, thick]
(n0) edge node [above left] {} (n6)
(n1.base east) edge node [above left] {} (n7)
(n2.east) edge node [above left] {} (n7)
(n3.east) edge node [above left] {} (n7)
(n4.east) edge node [above left] {} (n7)
(n5.puff 9) edge node [above left] {} (n7)
(n7) edge node [above left] {} (n8)
(n8) edge node [above left] {} (n10)
(n13) edge node [fill=white, align=center] {customer, \\material} (n15)
(n11) edge node [fill=white] {price} (n12)
(n15) edge node [above left] {} (n12)
(n12) edge node [above left] {} (n16.south)
(n12) edge node [above left] {} (n17.west)
(n8.south) edge node [above left] {} (n18.north)
(n16) edge node [above left] {} (n19)
;
\draw[->] (n6.east) -| (n8.north);
\draw (n10.east) -- node [fill=white] {qty} (n15.west);
\end{tikzpicture}
\caption{High-level architecture of the integrated invoicing}
\label{fig:rims-architecture}
\end{figure}
\end{landscape}

Figure \ref{fig:rims-architecture} shows the high-level
architecture of integrated invoicing aligned to key steps of the invoicing process (i.e. data capturing, data consolidation, invoice issue). In this section, we describe each of these procedural steps and discuss their solutions in detail.

Digital transformation of the invoicing process may have its origin in automation of data capture. A ground handling agent usually provides services at different places: on the ramp (the servicing area of the airport), inside the airport terminal, and in airport lounges. Clearly, the company may face challenges in collecting, consolidating and reporting all relevant data about performed services. Consequently, Swissport began automation of data capture. Related digitalization initiatives addressed particular service groups one by one, such as ramp, lounge and de-icing services. After the majority of digitalization initiatives were implemented, the company had developed the capacity to automatically collect and store information regarding customer services.

\begin{figure}[hbt]
\centering
\includegraphics[width=0.5\columnwidth]{handhelds.jpg}
\caption{Handhelds used to record ramp services (Photo courtesy of Swissport)}
\label{fig:handhelds}
\end{figure}

The following are some examples of how ground handling agents collect the flight and service data:
\begin{itemize}
\item flight data are imported from the airport operational database (AODB);
\item ramp services (e.g. loading equipment) are tracked by employees using hand-held devices (Fig. \ref{fig:handhelds});
\item ground support equipment (e.g. passenger boarding steps) employed during aircraft turnaround is tracked by means of GPS tracking and localization sensors;
\item the usage of de-icing fluids and hot water is captured by sensors installed on de-icing trucks;
\item ground power units record the time required for aircraft batteries to recharge, using integrated sensors;
\item lounge entries are recorded using control gates when a passenger swipes the card or scans the ticket;
\item lost and found services are recorded using mobile apps and dedicated desktop applications;
\item other infrequent or irregular services are captured on paper receipts (e.g. meal vouchers, taxi vouchers, handling work orders for private aviation flights, handling of VIP flights, special check-in counter reservations).
\end{itemize}

Once all the data on these services have been collected, they need to be consolidated and verified. In order to achieve this, the ground handling agent created a data-mart, which functions as a central repository for all flight and service data. The data-mart is designed to connect to multiple data sources at regular intervals, consolidate service data and, in the last step, link services to the corresponding flight. The service data can then be verified automatically using data-mining methods \cite{Ramkumar2013, Mikut2011}. Finally, flight and service data are uploaded to the enterprise resource planning (ERP) system. The ERP system uses mapping tables to recognize the customer and the product or service (i.e. material) sold to the customer. Based on this information, the system automatically creates the sales order containing all individual items sold to the customer. All price data are included in the ERP system. The ERP system uses the price data to determine customer prices for the purchased items. Ultimately, the sales orders are billed and the invoices are transferred to the customer.
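
The following sketch (in Python, with hypothetical keys and prices; it is not Swissport's actual data model) illustrates the mapping step just described: a consolidated service record from the data-mart is translated into a priced sales-order line using mapping tables and the digitized contract prices:

\begin{verbatim}
# Consolidated record from the data mart: a service linked to a flight.
service = {"flight": "LX100", "carrier": "LX",
           "service_code": "PAX_STEPS", "quantity": 1}

# Mapping tables in the ERP system: recognize customer and material.
customer_map = {"LX": "CUST-4711"}
material_map = {"PAX_STEPS": "MAT-PASSENGER-STEPS"}

# Digitized customer contract: prices per (customer, material) pair.
price_list = {("CUST-4711", "MAT-PASSENGER-STEPS"): 85.00}

customer = customer_map[service["carrier"]]
material = material_map[service["service_code"]]
price = price_list[(customer, material)]

# Sales-order line carrying customer, material, quantity and price,
# ready to be billed and posted to the general ledger.
sales_order_line = {
    "customer": customer,
    "material": material,
    "quantity": service["quantity"],
    "price": price,
    "amount": price * service["quantity"],
}
print(sales_order_line)
\end{verbatim}
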
The ERP system integrates invoicing as well as accounting capabilities, which is an advantage over other billing and invoicing solutions that are separate from the accounting systems. Once the invoice has been created and transferred to the customer, the system automatically creates corresponding positions on the customer account in the general ledger.

Overall, this case study presents a best practice approach for implementing data intensive ERP systems. However, some contributions are novel and relate specifically to the field of airport ground operations:
\begin{itemize}
\item Key automated data flows between the ground handling agent and its external partners are introduced and discussed. In addition, key automated data flows inside the ground handling agent's organization are outlined and the corresponding implications for the solution architecture are stated;
\item This case study explains why digital transformation projects in airport ground operations require high data quality and short data processing times. In addition, it shows how heterogeneous data collection procedures in airport ground operations have affected the overall design of the new invoicing solution;
\item This case study describes how specific circumstances in airport ground operations can be translated into a standardized industry best practice solution without compromising on quality and performance. New procedures are lean, simple and can be easily understood by anyone from outside the aviation industry;
\item The business scenario reported in this case study describes a digitalization initiative that has been successfully implemented without any interruptions in the regular operations of the ground handling agent. Immediately after its introduction, the new invoicing solution started to generate significant benefits in terms of costs, processing times and data quality, which is not a given in the highly competitive environment of airport ground operations.
\end{itemize}

\section{Discussion}
\label{sec:Discussion}
The scenarios of digital transformation in airport ground operations and related technologies discussed in the previous section differ significantly in their maturity. We therefore assessed these technologies and applications with respect to their impact and the major improvements they bring. In this section, we share the key results of this assessment. We rated all technologies and applications from the previous section according to their usefulness and maturity using a unified framework with clear evaluation criteria. On top of that, we developed hints and ideas for future research directions for the rated technologies.

We selected the technology readiness levels (TRLs) as a qualified measurement system to assess the maturity level of the related technologies. TRLs were initially introduced by the National Aeronautics and Space Administration (NASA) to evaluate new technologies developed for space missions \cite{Mankins95} and, later on, generalized for all kinds of technology assessments \cite{Horizon2020}. TRLs are widely used in academia and industry, e.g. mandatory technology assessments for Horizon 2020 \cite{Horizon2020} (an 80 billion Euro research program funded by the European Union), evaluation of cyber-physical systems \cite{Leitao16}, and monitoring and evaluation of the system development process \cite{Magnaye10}. TRLs are based on a scale from 1 to 9. TRL 1 is the lowest level and TRL 9 is the highest.
When a technology is at TRL 1, preliminary results of scientific research are available and have been successfully translated into plans for future research and development. In the end, after a technology has been "mission proven", it achieves TRL 9 (cf. Table~\ref{tab:trl}).

\begin{table}[hbt]
\caption{Overview of technology readiness levels \cite{Mankins95}}
\footnotesize
\begin{center}
\renewcommand{\arraystretch}{0.9}
\begin{tabularx}{\columnwidth}{lp{.75\columnwidth}}
\hline
TRL 1 & Basic principles observed and reported \\ \hline
TRL 2 & Technology concept and/or application formulated \\ \hline
TRL 3 & Analytical and experimental critical function and characteristic proof of concept\\ \hline
TRL 4 & Component or subsystem validated in laboratory environment \\ \hline
TRL 5 & Component, subsystem or system validated in relevant environment \\ \hline
TRL 6 & Component, subsystem, system or prototype demonstrated in relevant end-to-end environment \\ \hline
TRL 7 & System prototype demonstrated in an operational environment \\ \hline
TRL 8 & System completed and "mission qualified" through test and demonstrated in an operational environment \\ \hline
TRL 9 & System "mission proven" through successful mission operations \\ \hline
\end{tabularx}
\end{center}
\label{tab:trl}
\end{table}

As previously mentioned, we have evaluated the technologies discussed in the previous section against the parameters for each TRL and then assigned a TRL rating based on reported applications, relevant research papers and our interviews with industry experts. We summarized our key results in Tables~\ref{tab:trl_cp} (core processes) and~\ref{tab:trl_sp} (supporting processes). The applications reported in these tables are grouped by their position within the value chain of an airport ground handling agent (Figure~\ref{fig:valuechain}).
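
As a coarse, purely illustrative summary of this assignment logic (a sketch of our own rule of thumb; the actual ratings in Tables~\ref{tab:trl_cp} and~\ref{tab:trl_sp} weigh papers, industry reports and expert interviews case by case):

\begin{verbatim}
# Simplified rule of thumb mapping reported evidence to a TRL rating.
# The evidence labels are our own coarse categories, not NASA terminology.
def assign_trl(evidence: set) -> int:
    if "broad_commercial_use" in evidence:
        return 9
    if "successful_pilot" in evidence:
        return 8
    if "prototype_in_operations" in evidence:
        return 7
    if "prototype_demonstrated" in evidence:
        return 6
    if "concept_formulated" in evidence:
        return 2
    return 1

print(assign_trl({"broad_commercial_use"}))     # e.g. self-service kiosks -> 9
print(assign_trl({"prototype_in_operations"}))  # e.g. shift trading -> 7
print(assign_trl({"concept_formulated"}))       # e.g. smart wheelchairs -> 2
\end{verbatim}
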
\begin{table}[hbt]
\caption{TRL ratings for technologies relating to core processes of a ground handling agent}
\footnotesize
\begin{center}
\renewcommand{\arraystretch}{0.9}
\begin{tabularx}{\columnwidth}{p{.25\columnwidth}p{.5\columnwidth}p{.12\columnwidth}}
\hline
Business process & Technology or business scenario & TRL \newline rating \\ \hline
Passenger \newline handling & Indoor navigation & 8 \\
& Digital processing of irregularity vouchers & 2 \\
& Self-service check-in kiosks & 9 \\
& Self-boarding & 9 \\
& Smart wheelchairs & 2 \\
& Smart wearables & 6 \\
& Biometric services & 8 \\ \hline
Baggage \newline handling & RFID baggage tags & 8 \\
& Automated baggage drop-off & 9 \\
& Digital bag tags & 8 \\
& Self-tagging & 8 \\
& Lost luggage kiosks & 9 \\
& Real-time luggage tracking & 2 \\
& Advanced analytics in baggage handling & 7 \\ \hline
Lounge services & Lounge access gates & 9 \\
& New lounge ticket types & 2 \\ \hline
\end{tabularx}
\end{center}
\label{tab:trl_cp}
\end{table}

\begin{table}[hbt]
\caption{TRL ratings for technologies relating to supporting processes of a ground handling agent}
\footnotesize
\begin{center}
\renewcommand{\arraystretch}{0.9}
\begin{tabularx}{\columnwidth}{p{.25\columnwidth}p{.5\columnwidth}p{.12\columnwidth}}
\hline
Business process & Technology or business scenario & TRL \newline rating \\ \hline
Planning \& \newline Scheduling & Automated centralized planning & 9 \\
& De-peaking & 9 \\
& Shift trading & 7 \\ \hline
GSE \newline management & Automated GSE scheduling and routing & 6 \\ \hline
HR \& \newline Training & Digital employee profiles & 6 \\
& Web-based training & 9 \\
& Integrated employee lifecycle management & 2 \\
& Mobile apps and social-media for corporate communications & 9 \\ \hline
Financial process & Integrated invoicing & 9 \\ \hline
Management process & Integrated KPI reporting & 7 \\ \hline
IT process & Digital workplace and virtualization & 7 \\ \hline
\end{tabularx}
\end{center}
\label{tab:trl_sp}
\end{table}

A considerable number of technologies have already achieved the highest TRL, e.g. self-service check-in kiosks, self-boarding, and automated baggage drop-off. A wide range of scientific papers and industry reports is available documenting broad commercial usage of these technologies (see the previous section for examples). Such technologies are "mission proven" and have been successfully operated in actual missions. Another large set of technologies is currently at TRL 8, e.g. biometric services, digital bag tags, and baggage self-tagging. These technologies have been tested and "mission qualified". We were able to find evidence of first commercial usage of these technologies, e.g. in terms of successful pilot implementations.

We rated the technologies with system prototypes working in an operational environment on full-scale realistic problems at TRL 7. Consider, for instance, shift trading. Some ground handling agents are using shift-trading systems with partial functionality. General engineering feasibility is fully demonstrated. Limited documentation is available. Furthermore, products covering partial functionality are available for commercial usage. In order to achieve the next TRL, system prototypes covering all key functionality need to be developed and integrated with existing operational software and hardware systems. These prototypes need to demonstrate full operational feasibility. For instance, several shift-trading systems are not fully integrated with time and attendance systems.
The trades are put into the time and attendance systems manually by the back office staff or uploaded asynchronously to the system as last-step adjustments. Consider now the usage of smart wearables in airport ground operations. Smart wearables represent a group of technologies, e.g. smart watches and smart glasses. We rated these technologies at TRL 7 as well. We were able to find scientific papers and industry publications reporting well-functioning prototypes in an operational environment, e.g. the pilot usage of smart glasses by ground handling staff at the lounge entry. Nonetheless, we were not able to find any evidence of a broad commercial usage of smart wearables in airport ground handling operations.

TRL 6 technologies have a fully functioning prototype or a representational model, but they have not yet been tested in an operational environment on full-scale realistic problems. Consider, for instance, automated GSE scheduling and routing. Component and subsystem prototypes have been successfully demonstrated in an operational environment; they are partially integrated with existing hardware and software systems. Products covering partial functionality for commercial usage are available, e.g. tracking of GSE vehicles and routing of de-icing trucks. In order to achieve the next TRL, end-to-end software prototypes need to be developed and connected to existing systems conforming to the target operating environment. New prototypes need to be tested in the relevant environment and provide evidence of meeting the expected performance. Scaling requirements need to be defined.

On top of that, we were able to identify a set of technologies at a very early stage of development. For instance, smart wheelchairs, real-time luggage tracking and digital processing of irregularity vouchers are currently at TRL 2. Practical applications of these technologies have been successfully identified, but they are speculative. No experimental proof or detailed analysis is available to support the conjecture. Both analytical and laboratory studies are required to see if the technology is viable and ready to proceed through the further development process. For instance, a proof-of-concept model for digital processing of vouchers needs to be constructed in order to achieve TRL 3.

Overall, ground handling agents employ a broad set of high TRL technologies (TRL 6 and higher). Many of these technologies deliver cost reductions and efficiency improvements to ground handling agents and are considered to be state-of-the-art technologies. Additionally, ground handling agents are seeking further differentiation and invest in new technologies (represented by low TRL technologies) in order to outpace competitors in terms of process efficiency and new revenue streams. In addition to the necessary implementation in industry (including testing and improving new technologies, linking information flows etc.), some scientific challenges have to be solved. We have listed below some proposed research directions:
\begin{enumerate}
\item \textbf{Quantitative criteria for the evaluation of technologies}: Ground handling agents usually employ a number of quantitative criteria in order to evaluate technologies and make investment decisions relating to the use of new technologies. For instance, low TRL technologies are evaluated primarily against a number of technical criteria, e.g. minimal and maximal position error in indoor navigation. High TRL technologies are usually assessed in terms of economic prospects, e.g.
expected return on investment, expected cost and staff reductions, or new revenue streams based on new digital products or services. Unfortunately, we were not able to obtain reliable data on such technology assessments. Our interviews with industry experts indicate that ground handling agents are reluctant to publish such data because they consider them part of their competitive advantage. We propose to develop generic evaluation criteria and procedures which can be used by ground handling agents to assess technologies and build reliable business cases.
\item \textbf{Incorporate uncertainty into forecasting and planning models}: Several tasks and activities in airport ground handling are affected by uncertain and random events, e.g. flight delays, utilized capacity in flights, weather conditions and technical incidents. In contrast to other industries, most forecasting and planning models used in airport ground operations do not consider this kind of uncertainty explicitly (see, in contrast, forecasting of renewable generation and customer load in power systems \cite{GonzalezOrdiano18}, optionally using Big Data frameworks such as Apache Spark \cite{GonzalezOrdiano18a}). By incorporating uncertainty into planning and forecasting models, we expect a significant increase in the robustness and reliability of the developed plans.
\item \textbf{Multi-objective optimization}: The majority of underlying optimization problems in planning and scheduling are formulated as optimization problems under technological or financial restrictions. The objective function typically does not include factors such as customer satisfaction, employee satisfaction or ecological footprint. We believe that such optimization criteria can be incorporated into the existing optimization models at a low cost and without a significant increase in recurring costs. On top of that, several optimization problems are formulated in terms of local optimization, e.g. reducing the number of check-in staff, and do not consider optimization across the entire value chain. Encouragingly, we were able to find some rare evidence of globally formulated optimization problems, e.g. considering the trade-off between minimum staffing requirements and quality of service.
\item \textbf{Use of collected data to generate new business models}: The reported applications relating to the usage of enterprise data are focused primarily on increasing process efficiency. We were able to find only a limited number of scenarios where ground handling agents were using available data to a greater extent or to generate new business models. For instance, globally operating ground handling agents have a large amount of data on flight delays and can use them to optimize flight schedule forecasts by exchanging these data across their local stations. On top of that, collected data may be a powerful resource for new service offerings to the airlines, e.g. passenger preferences for different airlines on the same routes, or estimating the potential of promising direct routes that avoid flight changes. Such tasks require prototype projects by ground handling agents together with research institutions or consulting companies with a digitalization profile.
\end{enumerate}

\section{Conclusions}
\label{sec:Conclusions}
We found that the majority of digitalization initiatives in airport ground operations focus on operational improvements.
In the core processes of the airport ground operations value chain, digitalization is mainly driven by cost pressures. The majority of customer-centric innovations contribute further cost reductions to ground handling agents, as well as benefits to the customers through time savings, improved service quality and transparency. Several passenger-related services have shifted towards digital self-service. Recent publications discuss several different digitalization scenarios in passenger and baggage handling; however, some topics remain underrepresented, such as lounge service digitalization.

In the supporting processes of the airport ground operations value chain, ground handling agents are trying to achieve further cost reductions by improving planning and scheduling procedures. We were able to find several papers on relevant methods and models, but very few discussed concrete solutions for implementation. Other supporting processes of the value chain, such as HR, IT, management and financial processes, were only infrequently discussed. Nonetheless, we identified sound digitalization scenarios relating to these processes, such as digital employee profiles, web-based trainings, mobile apps and social media for corporate communications, integrated KPI reporting, and integrated invoicing.

Ground handling agents rarely use new digital technologies to create new business models and disrupt markets; however, we identified various scenarios where digitalization has the potential to create new business models, such as with recent approaches to shift-trading, lounge ticketing and indoor navigation. Ground handling agents are engaging with business partners in order to create new revenue streams, such as baggage pickup and delivery at home, cooperation with taxis, duty-free shops in arrival lounges, and lounge tickets sold by charter airlines and tour operators.

With these types of changes, the capacity to collect and analyze data has increased in importance. Ground handling agents are investing to consolidate, standardize and actively manage data pockets, in order to provide information on passenger flow, efficiency of baggage-handling systems, timeliness of flights, speed and effectiveness of passenger handling, and others. Ground handling agents aim to become more responsive and efficient by using this kind of information in operational and tactical decisions.

Based on our findings, we believe that the following topics may be important future research directions in the area of airport ground operation digitalization: new cooperation modes across various business partners at airports, seamless travel, new digital business models, digital security solutions, multi-objective optimization, advanced forecasting and planning models with incorporated factors of uncertainty, general evaluation criteria for technology assessment in airport ground handling, and data collection and advanced analytics for operational decision making.

\begin{backmatter}
\section*{Declarations}
\section*{Acknowledgments}
The authors acknowledge careful English language editing by Rebecca Klady.
\section*{Competing interests}
The authors declare that they have no competing interests.
\section*{Funding}
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
\bibliographystyle{bmc-mathphys}